Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
10,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see the neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for Python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory during computation. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, and so on. Each batch contains labels and images from one of the following classes:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
Understanding a dataset is a necessary step toward making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the ID for a batch (1-5). The sample_id is the ID for an image and label pair in that batch.
Ask yourself: "What are all the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Thinking about questions like these will help you preprocess the data and end up with better predictions.
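As a quick illustration (not part of the project template), those questions can also be answered directly from a raw batch file; this sketch assumes the standard CIFAR-10 Python pickle layout with 'data' and 'labels' keys:
import pickle
import numpy as np
with open('cifar-10-batches-py/data_batch_1', 'rb') as f:
    batch = pickle.load(f, encoding='latin1')
features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
labels = batch['labels']
print('possible labels:', sorted(set(labels)))                     # 0..9
print('pixel value range:', features.min(), '-', features.max())   # 0 - 255
print('first 10 labels:', labels[:10])                             # not sorted, so the order is random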
Step5: Implement Preprocess Functions
Normalization
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as one-hot encoded Numpy arrays. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
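One hedged reading of that hint, using scikit-learn's LabelBinarizer so the label-to-vector mapping is fit once and saved outside the function (an alternative to the np.eye lookup used in the code below):
from sklearn import preprocessing
label_binarizer = preprocessing.LabelBinarizer()
label_binarizer.fit(range(10))   # fixed mapping for labels 0..9, stored outside the function
def one_hot_encode_alt(x):
    # returns an array of shape (len(x), 10); identical encodings on every call
    return label_binarizer.transform(x)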
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give better feedback and use our unit tests to detect simple mistakes before you submit the project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut for this part of the project. In the next couple of problems, you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the layers in the "Convolution and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's layer abstractions, so it's easy to pick up.
However, if you want to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you can use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:
Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape with batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes with batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for the dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
Apply a convolution to x_tensor using weight and conv_strides.
We recommend you use the suggested padding, but you're welcome to use any other padding.
Add the bias
Add a nonlinear activation to the convolution
Apply max pooling using pool_ksize and pool_strides
We recommend you use the suggested padding, but you're welcome to use any other padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be of shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this layer.
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
x for image input
y for labels
keep_prob for the dropout keep probability
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
Step37: Show Stats
Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate the validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
64
128
256
...
Set keep_probability to the probability of keeping a node when using dropout
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch first. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see the neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for Python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 50
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory during computation. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, and so on. Each batch contains labels and images from one of the following classes:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
Understanding a dataset is a necessary step toward making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the ID for a batch (1-5). The sample_id is the ID for an image and label pair in that batch.
Ask yourself: "What are all the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Thinking about questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return (x / 255)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalization
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should be the same shape as x.
End of explanation
import numpy as np
from sklearn import preprocessing
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return np.eye(10)[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as one-hot encoded Numpy arrays. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, *image_shape), name = "x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, n_classes), name = "y")  # float32 so the one-hot labels can feed softmax_cross_entropy_with_logits directly
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = None, name = "keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give better feedback and use our unit tests to detect simple mistakes before you submit the project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut for this part of the project. In the next couple of problems, you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the layers in the "Convolution and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's layer abstractions, so it's easy to pick up.
However, if you want to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you can use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:
Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape with batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes with batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for the dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_channel = int(x_tensor.shape[3])
output_channel = conv_num_outputs
weight_shape = (*conv_ksize, input_channel, output_channel)  # (filter_h, filter_w, in_channels, out_channels)
weight = tf.Variable(tf.random_normal(weight_shape, stddev = 0.1))  # weights
bias = tf.Variable(tf.zeros(output_channel))  # bias term
l_active = tf.nn.conv2d(x_tensor, weight, (1, *conv_strides, 1), 'SAME')
l_active = tf.nn.bias_add(l_active,bias)
#active_layers = tf.nn.relu(tf.add(tf.matmul(features,label),bias)) #ReLu
mx_layer = tf.nn.relu(l_active)
return tf.nn.max_pool(mx_layer, (1, *pool_ksize, 1), (1, *pool_strides, 1), 'VALID')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
Apply a convolution to x_tensor using weight and conv_strides.
We recommend you use the suggested padding, but you're welcome to use any other padding.
Add the bias
Add a nonlinear activation to the convolution
Apply max pooling using pool_ksize and pool_strides
We recommend you use the suggested padding, but you're welcome to use any other padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
from functools import reduce
from operator import mul
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
_, *image_size = x_tensor.get_shape().as_list()
#print(*image_size)
return tf.reshape(x_tensor, (-1, reduce(mul, image_size)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be of shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
num_input = x_tensor.get_shape().as_list()[1]
weight_shape = (num_input, num_outputs)
#print(weight_shape)
weight = tf.Variable(tf.truncated_normal(weight_shape, stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
activation = tf.nn.bias_add(tf.matmul(x_tensor, weight), bias)
return tf.nn.relu(activation)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
num_input = x_tensor.get_shape().as_list()[1] #not 0
weight_shape = (num_input, num_outputs)
weight = tf.Variable(tf.truncated_normal(weight_shape, stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.bias_add(tf.matmul(x_tensor,weight),bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this layer.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(x, 128, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
# x has shape (batch, 8, 8, 128)
x = conv2d_maxpool(x, 256, (3, 3), (1, 1), (2, 2), (2, 2))
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 512)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return output(x, 10)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
x for image input
y for labels
keep_prob for the dropout keep probability
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
global valid_features, valid_labels
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
loss = session.run( cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
prt = 'Loss: {:.4f} Accuracy: {:.4f}'
print(prt.format(loss, validation_accuracy, prec=3))
Explanation: Show Stats
Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate the validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 200
batch_size = 128
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
64
128
256
...
Set keep_probability to the probability of keeping a node when using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch first. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
10,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atmospherically Corrected Earth Engine Time Series
Overview
This notebook creates atmospherically corrected time series of satellite imagery using Google Earth Engine and the 6S emulator.
Supported missions
Sentinel2
Landsat8
Landsat7
Landsat5
Landsat4
Output
Average, cloud-free pixel values
Cloud masking
Uses standard cloud masks, i.e. FMASK for Landsat and the ESA QA60 band for Sentinel-2. There is no guarantee they will find all clouds; a discussion of more advanced and/or alternative cloud-masking strategies is available here
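For reference, a minimal sketch of the Sentinel-2 QA60 mask mentioned above (bit 10 marks opaque clouds, bit 11 marks cirrus); the masking actually used by this notebook lives in the atmcorr helpers, so treat this only as an illustration:
import ee
def mask_s2_clouds(image):
    qa = image.select('QA60')
    cloud_bit = 1 << 10   # opaque clouds
    cirrus_bit = 1 << 11  # cirrus
    mask = qa.bitwiseAnd(cloud_bit).eq(0).And(qa.bitwiseAnd(cirrus_bit).eq(0))
    return image.updateMask(mask)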
Initialize
Step1: User Input
Step2: All time series
This function extracts cloud-free time series for each mission, atmospherically corrects them and joins them together
Step3: Data post-processing
Resample into daily intervals using linear interpolation and calculate hue-saturation-value from RGB.
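The postProcessing helper is imported from the repository, so its internals are not shown here; a rough sketch of the idea (daily resample with linear interpolation, then HSV from the RGB columns) might look like the following, assuming a pandas DataFrame with 'red', 'green' and 'blue' columns already scaled to 0-1:
import colorsys
import pandas as pd
def post_process_sketch(df):
    daily = df.resample('D').mean().interpolate(method='linear')
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in zip(daily['red'], daily['green'], daily['blue'])]
    daily['hue'], daily['sat'], daily['val'] = zip(*hsv)
    return daily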
Step4: Hue Stretch
We visualize hue by taking a HSV color triplet and 'stretching' the saturation and value (i.e. setting them to 1), then converting the new 'stretched' HSV color triplet back into RGB for display on the screen.
Step5: bringing it all together...
make a pretty graph to help us do some science. | Python Code:
# standard modules
import os
import sys
import ee
import colorsys
from IPython.display import display, Image
%matplotlib inline
ee.Initialize()
# custom modules
# base_dir = os.path.dirname(os.getcwd())
# sys.path.append(os.path.join(base_dir,'atmcorr'))
from atmcorr.timeSeries import timeSeries
from atmcorr.postProcessing import postProcessing
from atmcorr.plots import plotTimeSeries
Explanation: Atmospherically Corrected Earth Engine Time Series
Overview
This notebook creates atmospherically corrected time series of satellite imagery using Google Earth Engine and the 6S emulator.
Supported missions
Sentinel2
Landsat8
Landsat7
Landsat5
Landsat4
Output
Average, cloud-free pixel values
Cloud masking
Uses standard cloud masks, i.e. FMASK for Landsat and the ESA QA60 band for Sentinel-2. There is no guarantee they will find all clouds; a discussion of more advanced and/or alternative cloud-masking strategies is available here
Initialize
End of explanation
target = 'forest'
geom = ee.Geometry.Rectangle(-82.10941, 37.33251, -82.08195, 37.34698)
# start and end of time series
startDate = '1990-01-01'# YYYY-MM-DD
stopDate = '2017-01-01'# YYYY-MM-DD
# satellite missions
missions = ['Sentinel2', 'Landsat8', 'Landsat7', 'Landsat5', 'Landsat4']
Explanation: User Input
End of explanation
allTimeSeries = timeSeries(target, geom, startDate, stopDate, missions)
Explanation: All time series
This function extracts cloud-free time series for each mission, atmospherically corrects them and joins them together
End of explanation
DF = postProcessing(allTimeSeries, startDate, stopDate)
Explanation: Data post-processing
Resample into daily intervals using linear interpolation and calculate hue-saturation-value from RGB.
End of explanation
hue_stretch = [colorsys.hsv_to_rgb(hue,1,1) for hue in DF['hue']]
Explanation: Hue Stretch
We visualize hue by taking a HSV color triplet and 'stretching' the saturation and value (i.e. setting them to 1), then converting the new 'stretched' HSV color triplet back into RGB for display on the screen.
End of explanation
plotTimeSeries(DF, hue_stretch, startDate, stopDate)
Explanation: bringing it all together...
make a pretty graph to help us do some science.
End of explanation |
10,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Data
Step1: I downloaded the Zillow codes dataset
Step2: API Reference
Step3: Percent of homes increasing in value
Step4: Using Prophet for time series forecasting
Step5: Creating a chart using Altair
Step6: Layered Vega-lite chart created using Altair
Step7: Geopy
Step8: Getting the geocoder locations from addresses
Step9: Folium plots | Python Code:
import quandl
quandl.ApiConfig.api_key = '############'
Explanation: Import Data:
Explore the data.
Pick a starting point and create visualizations that might help understand the data better.
Come back and explore other parts of the data and create more visualizations and models.
Quandl is a great place to start exploring datasets and has the Zillow Research Datasets(and many other datasets) that can be merged to create the customized dataset that might provide solutions to a specific problem.
It also has an easy to use api. I started with the Zillow data because it was the first real estate dataset on the Quandl site and it contains a lot of metrics that look interesting.
End of explanation
zillow_codes = pd.read_csv('input/ZILLOW-datasets-codes.csv',header=None)
zillow_codes.columns = ['codes','description']
Explanation: I downloaded the Zillow codes dataset: https://www.quandl.com/data/ZILLOW-Zillow-Real-Estate-Research/usage/export
This was useful while exploring area specific codes and descriptions in Zillow Research Dataset which contains 1,318,489 datasets. One can use regular expressions among other tools during EDA.
End of explanation
def cleanup_desc(df):
'''Function cleans up description column of Zillow codes dataframe.'''
df.description.values[:] = (df.loc[:,'description']).str.split(':').apply(lambda x: x[1].split('- San Francisco, CA')).apply(lambda x:x[0])
return df
def get_df(df,col='code',exp='/M'):
'''
Function takes in the zillow_codes dataframe and ouputs
a dataframe filtered by specified column and expression:
Inputs:
col: 'code' or 'description'
exp: string Reference: https://blog.quandl.com/api-for-housing-data
Ouputs:
pd.DataFrame
'''
indices = [i for i,val in enumerate(df[col].str.findall(exp)) if val != []]
print('Number of data points: {}'.format(len(indices)))
return df.iloc[indices,:]
def print_random_row(df):
randint = np.random.randint(df.shape[0])
print(df.codes.iloc[randint])
print(df.description.iloc[randint])
print_random_row(zillow_codes)
#Zip with Regex:
zip_df = get_df(zillow_codes,col='codes',exp=r'/Z94[0-9][0-9][0-9]_')
print_random_row(zip_df)
#Metro Code: '/M'
metro_df = get_df(zillow_codes,col='codes',exp='/M12')
print_random_row(metro_df)
Explanation: API Reference:
https://blog.quandl.com/api-for-housing-data
End of explanation
#Getting neighborhood level information: '/N'
#Getting metro level information: '/M'
neighborhoods = get_df(zillow_codes,col='codes',exp='/N')
zips = get_df(zillow_codes,col='codes',exp='/Z')
zips_chicago = get_df(zips,col='description',exp='Chicago, IL')
# neighborhoods_sfo = get_df(neighborhoods,col='description',exp=' San Francisco, CA')
neighborhoods_chicago = get_df(neighborhoods,col='description',exp='Chicago, IL')
print_random_row(neighborhoods_chicago)
#mspf = Median Sale Price per Square Foot
# mspf_neighborhoods_sfo = get_df(neighborhoods_sfo,col='codes',exp='_MSPFAH')
#prr = Price to rent ratio
# prr_neighborhoods_sfo = get_df(neighborhoods_sfo,col='codes',exp='_PRRAH')
#phi = Percent of Homes increasing in values - all homes
phiv_neighborhoods_chicago = get_df(neighborhoods_chicago,col='codes',exp='_PHIVAH')
phiv_zips_chicago = get_df(zips_chicago,col='codes',exp='_PHIVAH')
print_random_row(phiv_zips_chicago)
# Cleaning up descriptions:
neighborhood_names = phiv_neighborhoods_chicago.description.apply(lambda x: x.replace('Zillow Home Value Index (Neighborhood): Percent Of Homes Increasing In Values - All Homes - ',''))
zips_names = phiv_zips_chicago.description.apply(lambda x:x.replace('Zillow Home Value Index (Zip): Percent Of Homes Increasing In Values - All Homes - ',''))
zips_names[:1]
neighborhood_names[:1]
def get_quandl_data(df,names,filter_val=246):
quandl_get = [quandl.get(code) for i,code in enumerate(df['codes'])]
#Cleaned up DF and DF columns
cleaned_up = pd.concat([val for i,val in enumerate(quandl_get) if val.shape[0]>=filter_val],axis=1)
cleaned_names = [names.iloc[i] for i,val in enumerate(quandl_get) if val.shape[0]>=filter_val]
cleaned_up.columns = cleaned_names
#Some time series have fewer than 246 data points, ignoring these points for the moment.
#Saving the indices and time series in a separate anomaly dict with {name:ts}
anomaly_dict = {names.iloc[i]:val for i,val in enumerate(quandl_get) if val.shape[0]<filter_val}
return quandl_get,anomaly_dict,cleaned_up
# = get_quandl_data(phiv_neighborhoods_chicago)
phiv_quandl_get_list,phiv_anomaly_dict,phiv_chicago_neighborhoods_df = get_quandl_data(phiv_neighborhoods_chicago,neighborhood_names)
phiv_chicago_neighborhoods_df.sample(2)
phiv_quandl_get_list,phiv_anomaly_dict,phiv_chicago_zips_df = get_quandl_data(phiv_zips_chicago,zips_names)
phiv_chicago_zips_df.shape
phiv_chicago_zips_df.sample(10)
phiv_chicago_neighborhoods_df['Logan Square, Chicago, IL'].plot()
phiv_chicago_neighborhoods_df.to_csv('input/phiv_chicago_neighborhoods_df.csv')
phiv_chicago_zips_df.to_csv('input/phiv_zips_df.csv')
# phiv_chicago_neighborhoods = pd.read_csv('input/phiv_chicago_neighborhoods_df.csv')
phiv_chicago_fil_neigh_df = pd.read_csv('input/phiv_chicago_fil_neigh_df.csv')
phiv_chicago_fil_neigh_df.set_index('Date',inplace=True)
# phiv_chicago_neighborhoods_df.shape
gc.collect()
Explanation: Percent of homes increasing in value:
Percent of homes increasing in value is a metric that may be useful while making the decision to buy or rent.
Using the code is "/Z" for getting data by zipcode.
Using the description to filter the data to Chicago zipcodes.
Make API call to Quandl and get the percent of homes increasing values for all homes by neighborhood in Chicago.(http://www.realestatedecoded.com/zillow-percent-homes-increasing-value/)
End of explanation
from fbprophet import Prophet
data = phiv_chicago_zips_df['60647, Chicago, IL']
m = Prophet(mcmc_samples=200,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4,seasonality_prior_scale=1)
data = phiv_chicago_neighborhoods_df['Logan Square, Chicago, IL']
m = Prophet(mcmc_samples=200,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4)
# data = np.log(data)
data = pd.DataFrame(data).reset_index()
data.columns=['ds','y']
# data = data[data['ds'].dt.year>2009]
data.sample(10)
# m.fit(data)
params = dict(mcmc_samples=200,interval_width=0.98,weekly_seasonality=False,changepoint_prior_scale=0.5)
def prophet_forecast(data,params,periods=4,freq='BMS'):
m = Prophet(**params)
data = pd.DataFrame(data).reset_index()
data.columns=['ds','y']
# data = data[data['ds'].dt.year>2008]
# print(data.sample(10))
m.fit(data)
future = m.make_future_dataframe(periods=4,freq=freq)#'M')
# print(type(future))
forecast = m.predict(future)
return m,forecast
data = phiv_chicago_zips_df['60645, Chicago, IL']
m,forecast = prophet_forecast(data,params)
# forecast = m.predict()
m.plot(forecast)
# data
Explanation: Using Prophet for time series forecasting:
"Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers." -
https://facebookincubator.github.io/prophet/
End of explanation
def area_chart_create(fcst,cols,trend_name='PHIC(%)',title='60645'):
#Process data:
fcst = fcst[cols]
# fcst.loc[:,'ymin']=fcst[cols[2]]+fcst[cols[3]]#+fcst['trend']
# fcst.loc[:,'ymax']=fcst[cols[2]]+fcst[cols[4]]#+fcst['trend']
chart = alt.Chart(fcst).mark_area().encode(
x = alt.X(fcst.columns[0]+':T',title=title,
axis=alt.Axis(
ticks=20,
axisWidth=0.0,
format='%Y',
labelAngle=0.0,
),
scale=alt.Scale(
nice='month',
),
timeUnit='yearmonth',
),
y= alt.Y(fcst.columns[3]+':Q',title=trend_name),
y2=fcst.columns[4]+':Q')
return chart.configure_cell(height=200.0,width=700.0,)
# cols = ['ds','trend']+['yearly','yearly_lower','yearly_upper']
cols = ['ds','trend']+['yhat','yhat_lower','yhat_upper']
yhat_uncertainity = area_chart_create(forecast,cols=cols)
yhat_uncertainity
def trend_chart_create(fcst,trend_name='PHIC(%)trend'):
chart = alt.Chart(fcst).mark_line().encode(
color= alt.Color(value='#000'),
x = alt.X(fcst.columns[0]+':T',title='Logan Sq',
axis=alt.Axis(ticks=10,
axisWidth=0.0,
format='%Y',
labelAngle=0.0,
),
scale=alt.Scale(
nice='month',
),
timeUnit='yearmonth',
),
y=alt.Y(fcst.columns[2]+':Q',title=trend_name)
)
return chart.configure_cell(height=200.0,width=700.0,)
trend = trend_chart_create(forecast)
trend
Explanation: Creating a chart using Altair:
This provides a pythonic interface to Vega-lite and makes it easy to create plots: https://altair-viz.github.io/
End of explanation
layers = [yhat_uncertainity,trend]
lchart = alt.LayeredChart(forecast,layers = layers)
cols = ['ds','trend']+['yhat','yhat_lower','yhat_upper']
def create_unc_chart(fcst,cols=cols,tsname='% of Homes increasing in value',title='Logan Sq'):
'''
Create Chart showing the trends and uncertainity in forecasts.
'''
yhat_uncertainity = area_chart_create(fcst,cols=cols,trend_name=tsname,title=title)
trend = trend_chart_create(fcst,trend_name=tsname)
layers = [yhat_uncertainity,trend]
unc_chart = alt.LayeredChart(fcst,layers=layers)
return unc_chart
unc_chart = create_unc_chart(forecast,title='60645')
unc_chart
Explanation: Layered Vega-lite chart created using Altair:
End of explanation
from geopy.geocoders import Nominatim
geolocator = Nominatim()
#Example:
location = geolocator.geocode("60647")
location
Explanation: Geopy:
Geopy is great for geocoding and geolocation:
It provides a geocoder wrapper class for the OpenStreetMap Nominatim class.
It is convenient and can be used for getting latitude and longitude information from addresses.
https://github.com/geopy/geopy
End of explanation
from time import sleep
def get_lat_lon(location):
lat = location.latitude
lon = location.longitude
return lat,lon
def get_locations_list(address_list,geolocator,wait_time=np.arange(10,20,5)):
'''
Function returns the geocoded locations of addresses in address_list.
Input:
address_list : Python list
Output:
locations: Python list containing geocoded location objects.
'''
locations = []
for i,addr in enumerate(address_list):
# print(addr)
sleep(5)
loc = geolocator.geocode(addr)
lat = loc.latitude
lon = loc.longitude
locations.append((addr,lat,lon))
# print(lat,lon)
sleep(1)
return locations
zip_list = phiv_chicago_zips_df.columns.tolist()
zip_locations= get_locations_list(zip_list,geolocator)
zip_locations[:2]
zips_lat_lon = pd.DataFrame(zip_locations)
zips_lat_lon.columns=['zip','lat','lon']
zips_lat_lon.sample(2)
Explanation: Getting the geocoder locations from addresses:
End of explanation
import folium
from folium import plugins
params = dict(mcmc_samples=20,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4)
map_name='CHI_zips_cluster_phiv_forecast.html'
map_osm = folium.Map(location=[41.8755546,-87.6244212],zoom_start=10)
marker_cluster = plugins.MarkerCluster().add_to(map_osm)
for name,row in zips_lat_lon[:].iterrows():
address = row['zip']
if pd.isnull(address):
continue
data = phiv_chicago_zips_df[address]
m,forecast = prophet_forecast(data,params)
unc_chart = create_unc_chart(forecast,title=address)
unc_chart = unc_chart.to_json()
popchart = folium.VegaLite(unc_chart)
popup = folium.Popup(max_width=800).add_child(popchart)
lat = row['lat']
lon = row['lon']
folium.Marker(location=(lat,lon),popup=popup).add_to(marker_cluster)
map_osm.save(map_name)
Explanation: Folium plots:
Using OpenStreetMap (https://www.openstreetmap.org) to create a map with popup forecasts on zip code markers.
Folium makes it easy to plot data on interactive maps.
It provides an interface to the Leaflet.js library.
Open the saved .html file in a browser to view the interactive map.
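Alternatively, the saved map can be embedded directly in the notebook; a small sketch using an IFrame (the file name matches the one saved above):
from IPython.display import IFrame
IFrame('CHI_zips_cluster_phiv_forecast.html', width=900, height=500)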
End of explanation |
10,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keyboard shortcuts
In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.
First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.
By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.
Exercise
Step1: Help with commands
If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.
Creating new cells
One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.
Exercise
Step2: Line numbers
A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.
Exercise
Step3: Saving the notebook
Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy!
The Command Palette
You can easily access the command palette by pressing Shift + Control/Command + P.
Note | Python Code:
# mode practice
Explanation: Keyboard shortcuts
In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.
First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.
By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.
Exercise: Click on this cell, then press Enter + Shift to get to the next cell. Switch between edit and command mode a few times.
End of explanation
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
Explanation: Help with commands
If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.
Creating new cells
One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.
Exercise: Create a cell above this cell using the keyboard command.
Exercise: Create a cell below this cell using the keyboard command.
Switching between Markdown and code
With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to cell, press Y. To switch from code to Markdown, press M.
Exercise: Switch the cell below between Markdown and code cells.
End of explanation
# DELETE ME
Explanation: Line numbers
A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.
Exercise: Turn line numbers on and off in the above code cell.
Deleting cells
Deleting cells is done by pressing D twice in a row, so D, D. This is to prevent accidental deletions; you have to press the button twice!
Exercise: Delete the cell below.
End of explanation
# Move this cell down
# below this cell
Explanation: Saving the notebook
Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy!
The Command Palette
You can easily access the command palette by pressing Shift + Control/Command + P.
Note: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.
This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands.
Exercise: Use the command palette to move the cell below down one position.
End of explanation |
10,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objective
Showing the effect of fading on the bit error rate (BER)
Method
Step1: Parameters
Step2: Simulation
Step3: Plotting | Python Code:
# importing
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
#plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(30, 12) )
Explanation: Content and Objective
Showing the effect of fading on the bit error rate (BER)
Method: BPSK is transmitted over AWGN and fading channels; several trials are simulated to estimate the error probability
End of explanation
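For reference, the closed-form curves used by ber_bpsk_theo and ber_fading_theo below are the standard BPSK results: the AWGN case uses the Q-function and the Rayleigh-fading case is approximated by its high-SNR limit,
$$P_b^{\text{AWGN}} = Q\!\left(\sqrt{2 E_b/N_0}\right), \qquad P_b^{\text{Rayleigh}} = \frac{1}{2}\left(1 - \sqrt{\frac{\bar{\gamma}}{1+\bar{\gamma}}}\right) \approx \frac{1}{4\,\bar{\gamma}}, \quad \bar{\gamma} = E_b/N_0 .$$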
# max. numbers of errors and/or symbols
max_errors = int( 1e2 )
max_syms = int( 1e5 )
# Eb/N0
EbN0_db_min = 0
EbN0_db_max = 30
EbN0_db_step = 3
# initialize Eb/N0 array
EbN0_db_range = np.arange( EbN0_db_min, EbN0_db_max, EbN0_db_step )
EbN0_range = 10**( EbN0_db_range / 10 )
# constellation points
constellation = [-1, 1]
Explanation: Parameters
End of explanation
###
# initialize BER arrays
# theoretical ber for bpsk as on slides
ber_bpsk = np.zeros_like( EbN0_db_range, dtype=float )
ber_bpsk_theo = 1 - stats.norm.cdf( np.sqrt( 2 * EbN0_range ) )
# ber in fading channel
ber_fading = np.zeros_like( EbN0_db_range, dtype=float )
ber_fading_theo = 1 / ( 4 * 10**(EbN0_db_range / 10 ) )
# ber when applying channel inversion
ber_inverted = np.zeros_like( EbN0_db_range, dtype=float )
###
# loop for snr
for ind_snr, val_snr in enumerate( EbN0_range ):
# initialize error counter
num_errors_bpsk = 0
num_errors_fading = 0
num_errors_inverted = 0
num_syms = 0
# get noise variance
sigma2 = 1. / ( val_snr )
# loop for errors
while ( num_errors_bpsk < max_errors and num_syms < max_syms ):
# generate data and modulate by look-up
d = np.random.randint( 0, 2)
s = constellation[ d ]
# generate noise
noise = np.sqrt( sigma2 / 2 ) * ( np.random.randn() + 1j * np.random.randn() )
###
# apply different channels
# bpsk without fading
r_bpsk = s + noise
# bpsk with slow flat fading
h = 1/np.sqrt(2) * ( np.random.randn() + 1j * np.random.randn() )
r_flat = h * s + noise
### receiver
# matched filter and inverting channel
# mf and channel inversion
y_mf = np.conjugate( h / np.abs(h) )* r_flat
y_inv = r_flat / h
# demodulate symbols
d_est_bpsk = int( np.real( r_bpsk ) > 0 )
d_est_flat = int( np.real( y_mf ) > 0 )
d_est_inv = int( np.real( y_inv ) > 0 )
###
# count errors
num_errors_bpsk += int( d_est_bpsk != d )
num_errors_fading += int( d_est_flat != d )
num_errors_inverted += int( d_est_inv != d )
# increase counter for symbols
num_syms += 1
# ber by relative amount of errors
ber_bpsk[ ind_snr ] = num_errors_bpsk / num_syms
ber_fading[ ind_snr ] = num_errors_fading / ( num_syms * 1.0 )
ber_inverted[ ind_snr ] = num_errors_inverted / ( num_syms * 1.0 )
# show progress if you like to
#print('Eb/N0 planned (dB) = {:2.1f}\n'.format( 10*np.log10(val_snr) ) )
Explanation: Simulation
End of explanation
# plot bpsk results using identical colors for theory and simulation
ax_sim = plt.plot( EbN0_db_range, ber_bpsk, marker = 'o', mew=4, ms=18, markeredgecolor = 'none', linestyle='None', label='AWGN, sim.' )
color_sim = ax_sim[0].get_color()
plt.plot(EbN0_db_range, ber_bpsk_theo, linewidth = 2.0, color = color_sim, label='AWGN, theo.')
# plot slow flat results using identical colors for theory and simulation
ax_sim = plt.plot( EbN0_db_range, ber_fading , marker = 'D', mew=4, ms=18, markeredgecolor = 'none', linestyle='None', label = 'Fading, sim.' )
color_sim = ax_sim[0].get_color()
plt.plot(EbN0_db_range, ber_fading_theo, linewidth = 2.0, color = color_sim, label='Fading, theo.')
# plot ber when using channel inversion
ax_sim = plt.plot( EbN0_db_range, ber_inverted , marker = 'v', mew=4, ms=18, markeredgecolor = 'none', linestyle='None', label = 'Fading, inv., sim.' )
plt.yscale('log')
plt.grid(True)
plt.legend(loc='lower left')
plt.ylim( (1e-7, 1) )
plt.xlabel('$E_b/N_0$ (dB)')
plt.ylabel('BER')
Explanation: Plotting
End of explanation |
10,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyEarthScience
Step1: Generate x- and y-values.
Step2: Draw data, set title and axis labels.
Step3: Show the plot in this notebook. | Python Code:
import numpy as np
import Ngl, Nio
Explanation: PyEarthScience: Python examples for Earth Scientists
XY-plots
Using PyNGL
Line plot with
- marker
- different colors
- legend
- title
- x-axis label
- y-axis label
End of explanation
x2 = np.arange(100)
data = np.arange(1,40,5)
linear = np.arange(100)
square = [v * v for v in np.arange(0,10,0.1)]
#-- retrieve maximum size of plotting data
maxdim = max(len(data),len(linear),len(square))
#-- create 2D arrays to hold 1D arrays above
y = -999.*np.ones((3,maxdim),'f') #-- assign y array containing missing values
y[0,0:(len(data))] = data
y[1,0:(len(linear))] = linear
y[2,0:(len(square))] = square
Explanation: Generate x- and y-values.
End of explanation
#-- open a workstation
wkres = Ngl.Resources() #-- generate an res object for workstation
wks = Ngl.open_wks("png","plot_xy_simple_PyNGL",wkres)
#-- set resources
res = Ngl.Resources() #-- generate an res object for plot
res.tiMainString = "Title string" #-- set x-axis label
res.tiXAxisString = "x-axis label" #-- set x-axis label
res.tiYAxisString = "y-axis label" #-- set y-axis label
res.vpWidthF = 0.9 #-- viewport width
res.vpHeightF = 0.6 #-- viewport height
res.caXMissingV = -999. #-- indicate missing value
res.caYMissingV = -999. #-- indicate missing value
#-- marker and line settings
res.xyLineColors = ["blue","green","red"] #-- set line colors
res.xyLineThicknessF = 3.0 #-- define line thickness
res.xyDashPatterns = [0,0,2] #-- ( none, solid, cross )
res.xyMarkLineModes = ["Markers","Lines","Markers"] #-- marker mode for each line
res.xyMarkers = [16,0,2] #-- marker type of each line
res.xyMarkerSizeF = 0.01 #-- default is 0.01
res.xyMarkerColors = ["blue","green","red"] #-- set marker colors
#-- legend settings
res.xyExplicitLegendLabels = [" data"," linear"," square"] #-- set explicit legend labels
res.pmLegendDisplayMode = "Always" #-- turn on the drawing
res.pmLegendOrthogonalPosF = -1.13 #-- move the legend upwards
res.pmLegendParallelPosF = 0.15 #-- move the legend to the right
res.pmLegendWidthF = 0.2 #-- change width
res.pmLegendHeightF = 0.10 #-- change height
res.lgBoxMinorExtentF = 0.16 #-- legend lines shorter
#-- draw the plot
plot = Ngl.xy(wks,x2,y,res)
#-- the end
Ngl.delete_wks(wks) #-- this need to be done to close the graphics output file
Ngl.end()
Explanation: Draw data, set title and axis labels.
End of explanation
from IPython.display import Image
Image(filename='plot_xy_simple_PyNGL.png')
Explanation: Show the plot in this notebook.
End of explanation |
10,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
class Item(object):
def __init__(self, key, value):
# TODO: Implement me
pass
class HashTable(object):
def __init__(self, size):
# TODO: Implement me
pass
def hash_function(self, key):
# TODO: Implement me
pass
def set(self, key, value):
# TODO: Implement me
pass
def get(self, key):
# TODO: Implement me
pass
def remove(self, key):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a hash table with set, get, and remove methods.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
For simplicity, are the keys integers only?
Yes
For collision resolution, can we use chaining?
Yes
Do we have to worry about load factors?
No
Test Cases
get on an empty hash table index
set on an empty hash table index
set on a non empty hash table index
set on a key that already exists
remove on a key with an entry
remove on a key without an entry
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_hash_map.py
from nose.tools import assert_equal
class TestHashMap(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
assert_equal(hash_table.get(0), None)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
assert_equal(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
assert_equal(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), None)
print("Test: remove on a key that doesn't exist")
hash_table.remove(-1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
10,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 2
Step4: (a)
Let $f(x)=x^7$. Evaluate the derivative matrix and the derivatives at quadrature points using Gauss-Lobatto-Legendre quadrature with $Q=7,\ 8,\ 9$.
Step6: (b)
The same problem with exercise 2a, except that now the interval on which we apply quadrature is $x \in [2,\ 10]$. Use chain rule to evaluate the derivative at mapped quadrature points.
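As a brief reminder of the relations the code relies on: the Lagrange derivative matrix is $D_{ij} = h_j'(\xi_i)$, so $\left.\frac{du}{d\xi}\right|_{\xi_i} \approx \sum_j D_{ij}\, u(\xi_j)$, and for the affine map from the standard element the chain rule gives
$$x(\xi) = \frac{b-a}{2}(\xi + 1) + a, \qquad \frac{du}{dx} = \frac{d\xi}{dx}\frac{du}{d\xi} = \frac{2}{b-a}\frac{du}{d\xi}, \quad (a,b) = (2,\ 10),$$
so the same derivative matrix from part (a) is reused on the mapped quadrature points.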
Step11: (c)
Use the differentiation techniques to numerically integrate $-\int_{0}^{\pi/2}\frac{d}{dx}\cos{(x)}dx$. Plot the error with respect to number of quadrature points, $Q$. Use $2 \le Q \le 8$. | Python Code:
import numpy
import re
from matplotlib import pyplot
from IPython.display import Latex, Math, display
% matplotlib inline
import os, sys
sys.path.append(os.path.split(os.path.split(os.getcwd())[0])[0])
import utils.quadrature as quad
import utils.poly as poly
Explanation: Exercise 2
End of explanation
def f_ex2a(x):
f = x^7
return x**7
def df_ex2a(x):
df/dx = 7 * (x**6)
return 7 * (x**6)
def ex2a(Qi, f, df):
a wrapper for generating solutions
numpy.set_printoptions(
formatter={'float': "{:5.2e}".format}, linewidth=120)
qd = quad.GaussLobattoJacobi(Qi)
p = poly.LagrangeBasis(qd.nodes)
D = p.derivative(qd.nodes)
ans = D.dot(f(qd.nodes))
exact = df(qd.nodes)
err = numpy.abs(ans - exact)
def generateLatex(A):
return "{\\scriptsize \\left[\\begin{array}{r}" + \
re.sub(r"[ ]+", "&", re.sub(r"\n[ ]*", "\\\\\\\\",
re.sub(r"\][ ]*", "", re.sub(r"\[[ ]*", "", str(A))))) + \
"\\end{array}\\right]}"
display(Latex("For " + "$Q={0}$".format(Qi) + ":"))
display(Latex("$\qquad$The derivative matrix is: "))
display(Math("\\qquad" + generateLatex(D)))
display(Latex("$\\qquad$The derivatives at quadrature points are: "))
display(Math("\\qquad" + generateLatex(ans)))
display(Latex("$\\qquad$The exact derivatives at quadrature points are: "))
display(Math("\\qquad" + generateLatex(exact)))
display(Latex("$\\qquad$The absolute errors are: "))
display(Math("\\qquad" + generateLatex(err)))
for Qi in [7, 8, 9]:
ex2a(Qi, f_ex2a, df_ex2a)
Explanation: (a)
Let $f(x)=x^7$. Evaluate the derivative matrix and the derivatives at quadrature points using Gauss-Lobatto-Legendre quadrature with $Q=7,\ 8,\ 9$.
End of explanation
def ex2b(Qi, f, df):
a wrapper for generating solutions
numpy.set_printoptions(
formatter={'float': "{:5.2e}".format}, linewidth=120)
x = lambda xi: (xi + 1) * (10 - 2) / 2. + 2
dxi_dx = 2. / (10 - 2)
qd = quad.GaussLobattoJacobi(Qi)
p = poly.LagrangeBasis(qd.nodes)
D = p.derivative(qd.nodes)
ans = D.dot(f(x(qd.nodes))) * dxi_dx
exact = df(x(qd.nodes))
err = numpy.abs(ans - exact)
def generateLatex(A):
return "{\\scriptsize \\left[\\begin{array}{r}" + \
re.sub(r"[ ]+", "&", re.sub(r"\n[ ]*", "\\\\\\\\",
re.sub(r"\][ ]*", "", re.sub(r"\[[ ]*", "", str(A))))) + \
"\\end{array}\\right]}"
display(Latex("For " + "$Q={0}$".format(Qi) + ":"))
display(Latex("$\\qquad$The derivatives at quadrature points are: "))
display(Math("\\qquad" + generateLatex(ans)))
display(Latex("$\\qquad$The exact derivatives at quadrature points are: "))
display(Math("\\qquad" + generateLatex(exact)))
display(Latex("$\\qquad$The absolute errors are: "))
display(Math("\\qquad" + generateLatex(err)))
for Qi in [7, 8, 9]:
ex2b(Qi, f_ex2a, df_ex2a)
Explanation: (b)
The same problem with exercise 2a, except that now the interval on which we apply quadrature is $x \in [2,\ 10]$. Use chain rule to evaluate the derivative at mapped quadrature points.
End of explanation
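As a small self-contained illustration of the chain rule used in ex2b (assuming nothing beyond NumPy), the affine map from the reference interval [-1, 1] to [2, 10] has the constant Jacobian dxi/dx = 2/(b - a), so a derivative computed in the reference coordinate only needs a constant rescaling:

```python
import numpy as np

a, b = 2.0, 10.0
x_of_xi = lambda xi: (xi + 1.0) * (b - a) / 2.0 + a   # maps [-1, 1] -> [a, b]
dxi_dx = 2.0 / (b - a)                                # constant for an affine map

# check d/dx (x**7) at one mapped point, using a centred difference in xi
xi0, h = 0.3, 1e-6
g = lambda xi: x_of_xi(xi) ** 7                       # f composed with the map
df_dx = (g(xi0 + h) - g(xi0 - h)) / (2.0 * h) * dxi_dx
print(df_dx, 7.0 * x_of_xi(xi0) ** 6)                 # the two values agree closely
```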
def f_ex2c(x):
    '''integrand of exercise 1c'''
return - numpy.cos(x)
def ex2c(Qi, f):
    '''a wrapper for generating solutions'''
x = lambda xi: (xi + 1) * (numpy.pi / 2. - 0.) / 2. + 0.
dxi_dx = 2. / (numpy.pi / 2. - 0.)
dx_dxi = (numpy.pi / 2. - 0.) / 2.
qd = quad.GaussLobattoJacobi(Qi)
p = poly.LagrangeBasis(qd.nodes)
d = p.derivative(qd.nodes).dot(f(x(qd.nodes))) * dxi_dx
ans = numpy.sum(d * qd.weights * dx_dxi)
err = numpy.abs(ans - 1.)
print("The numerical solution is: " +
"{0}; the absolute error is: {1}".format(ans, err))
return err
Q = numpy.arange(2, 9)
err = numpy.zeros_like(Q, dtype=numpy.float64)
for i, Qi in enumerate(range(2, 9)):
err[i] = ex2c(Qi, f_ex2c)
pyplot.semilogy(Q, err, 'k.-', lw=2, markersize=15)
pyplot.title("Absolute error of numerical integration of " +
r"$-\int_{0}^{\pi/2}\frac{d}{dx}\cos{(x)} dx$" +
"\n with Gauss-Lobatto-Legendre quadrature", y=1.08)
pyplot.xlabel(r"$Q$" + ", number of quadrature points")
pyplot.ylabel("absolute error")
pyplot.grid();
def df_ex2c(x):
    '''derivative of f'''
return numpy.sin(x)
def ex2c_mod(Qi, f, df):
    '''a wrapper for generating solutions'''
x = lambda xi: (xi + 1) * (numpy.pi / 2. - 0.) / 2. + 0.
dxi_dx = 2. / (numpy.pi / 2. - 0.)
dx_dxi = (numpy.pi / 2. - 0.) / 2.
qd = quad.GaussLobattoJacobi(Qi)
p = poly.LagrangeBasis(qd.nodes)
d = p.derivative(qd.nodes).dot(f(x(qd.nodes))) * dxi_dx - df(x(qd.nodes))
d *= d
err = numpy.sqrt(numpy.sum(d * qd.weights * dx_dxi) / (numpy.pi / 2. - 0.))
print("The H1-norm is: {0}: ".format(err))
return err
for i, Qi in enumerate(range(2, 9)):
err[i] = ex2c_mod(Qi, f_ex2c, df_ex2c)
pyplot.semilogy(Q, err, 'k.-', lw=2, markersize=15)
pyplot.title("H1-norm of numerical integration of " +
r"$-\int_{0}^{\pi/2}\frac{d}{dx}\cos{(x)} dx$" +
"\n with Gauss-Lobatto-Legendre quadrature", y=1.08)
pyplot.xlabel(r"$Q$" + ", number of quadrature points")
pyplot.ylabel("H1-norm")
pyplot.grid();
Explanation: (c)
Use the differentiation techniques to numerically integrate $-\int_{0}^{\pi/2}\frac{d}{dx}\cos{(x)}dx$. Plot the error with respect to number of quadrature points, $Q$. Use $2 \le Q \le 8$.
End of explanation |
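The reference value 1 used for the error above comes from the fundamental theorem of calculus: -d/dx cos(x) = sin(x), and its integral over [0, pi/2] is cos(0) - cos(pi/2) = 1. A quick independent check (illustration only):

```python
import numpy as np
# -d/dx cos(x) = sin(x); a crude trapezoid estimate and the exact endpoint
# difference both reproduce the reference value 1 used above
xs = np.linspace(0.0, np.pi / 2.0, 1001)
crude = np.trapz(np.sin(xs), xs)
print(crude, np.cos(0.0) - np.cos(np.pi / 2.0))
```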
10,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
easysnmp
Step1: The type of .value is always a Python string but the string returned in .snmp_type can be used to convert to the correct Python type.
* INTEGER32
* INTEGER
* UNSIGNED32
* GAUGE
* IPADDR
* OCTETSTR
* TICKS
* OPAQUE
* OBJECTID
* NETADDR
* COUNTER64
* NULL
* BITS
* UINTEGER
Step2: easysnmptable
Step3: A shortcoming of easysnmptable is that it does not return the snmp_type for columns. A possible workaround is to use easysnmp to fetch a single column for a random index. This should probably be memoized or cached.
Step4: Such a table could be built by "walking" a device. | Python Code:
import easysnmp
session = easysnmp.Session(hostname='localhost', community='public', version=2,
timeout=1, retries=1, use_sprint_value=True)
# IMPORTANT: use_sprint_value=True for proper formatting of values
location = session.get('sysLocation.0')
location.oid, location.oid_index, location.snmp_type, location.value
iftable = session.walk('IF-MIB::ifTable')
for item in iftable:
print(item.oid, item.oid_index, item.snmp_type, item.value, type(item.value))
Explanation: easysnmp
End of explanation
macaddress = session.get('IF-MIB::ifPhysAddress.2')
macaddress
ifindex = session.get('IF-MIB::ifIndex.2')
ifindex
Explanation: The type of .value is always a Python string but the string returned in .snmp_type can be used to convert to the correct Python type.
* INTEGER32
* INTEGER
* UNSIGNED32
* GAUGE
* IPADDR
* OCTETSTR
* TICKS
* OPAQUE
* OBJECTID
* NETADDR
* COUNTER64
* NULL
* BITS
* UINTEGER
End of explanation
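A small helper such as the following (a sketch, not part of easysnmp; the grouping of types is an assumption and only covers the names listed above) shows one way to turn the string in .value into a native Python value based on .snmp_type:

```python
# integer-like SNMP types from the list above; everything else stays a string
INTEGER_TYPES = {'INTEGER', 'INTEGER32', 'UNSIGNED32', 'GAUGE',
                 'TICKS', 'COUNTER64', 'UINTEGER'}

def to_python(variable):
    """Best-effort conversion of an easysnmp variable's .value string."""
    if variable.snmp_type in INTEGER_TYPES:
        try:
            return int(variable.value)
        except ValueError:
            # sprint-formatted values may carry extra text; keep the string
            return variable.value
    return variable.value

# example on the walk result fetched above
print([(item.oid, item.snmp_type, to_python(item)) for item in iftable[:5]])
```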
import easysnmptable
session = easysnmptable.Session(hostname='localhost', community='public', version=2,
timeout=1, retries=1, use_sprint_value=True)
# IMPORTANT: use_sprint_value=True for proper formatting of values
iftable = session.gettable('IF-MIB::ifTable')
iftable.indices
iftable.cols
iftable
import pprint
for index,row in iftable.rows.items():
pprint.pprint(index)
pprint.pprint(row)
Explanation: easysnmptable
End of explanation
random_index = iftable.indices.pop()
iftable.indices.add(random_index)
random_index
column2type = {column: session.get('{}.{}'.format(column, random_index)).snmp_type for column in iftable.cols}
column2type
Explanation: A shortcoming of easysnmptable is that it does not return the snmp_type for columns. A possible workaround is to use easysnmp to fetch a single column for a random index. This should probably be memoized or cached.
End of explanation
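One possible way to follow the "memoized or cached" suggestion (a sketch, not part of easysnmptable; it reuses the session and random_index objects defined in the cells above) is to wrap the lookup in functools.lru_cache so each column is queried at most once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def column_snmp_type(column, index):
    """Return the snmp_type for `column`, querying the device at most once."""
    return session.get('{}.{}'.format(column, index)).snmp_type

column2type = {column: column_snmp_type(column, random_index) for column in iftable.cols}
column2type
```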
column2type = {item.oid: item.snmp_type for item in session.walk('IF-MIB::ifTable')}
column2type
Explanation: Such a table could be built by "walking" a device.
End of explanation |
10,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The goal of this Artificial Neural Network (ANN) 101 session is twofold
Step1: Get the data
Step2: Build the artificial neural-network
Step3: Train the artificial neural-network model
Step4: Evaluate the model
Step5: Predict new output data | Python Code:
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import torch
import torch.nn as nn
import torch.optim as optim
# the code is compatible with Tensflow v1.4.0
print("Pytorch version:", torch.__version__)
# To check whether your code will use a GPU or not, run the following lines.
# You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
if torch.cuda.is_available():
print('GPU support (%s)' % torch.cuda.get_device_name(0))
else:
print('no GPU support')
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
Explanation: Introduction
The goal of this Artificial Neural Network (ANN) 101 session is twofold:
To build an ANN model that will be able to predict y value according to x value.
In other words, we want our ANN model to perform a regression analysis.
To observe three important KPIs when dealing with ANNs:
The size of the network (called trainable_params in our code)
The duration of the training step (called training_duration in our code)
The efficiency of the ANN model (called evaluated_loss in our code)
The data used here are exceptionally simple:
X represents the interesting feature (i.e. will serve as input X for our ANN).
Here, each x sample is a one-dimension single scalar value.
Y represents the target (i.e. will serve as the expected output Y of our ANN).
Here, each y sample is also a one-dimension single scalar value.
Note that in real life:
You will never have such God-given, clean, noise-free and simple data.
You will have more samples, i.e. bigger data (better for statistically meaningful results).
You may have more dimensions in your feature and/or target (e.g. space data, temporal data...).
You may also have multiple features and even multiple targets.
Hence your ANN model will be more complex than the one studied here.
Work to be done:
For exercices A to E, the only lines of code that need to be added or modified are in the create_model() Python function.
Exercice A
Run the whole code, Jupyter cell by Jupyter cell, without modifying any line of code.
Write down the values for:
trainable_params:
training_duration:
evaluated_loss:
In the last Jupyter cell, what is the relationship between the predicted x samples and y samples? Try to explain it based on the ANN model.
Exercice B
Add a first hidden layer called "hidden_layer_1" containing 8 units in the model of the ANN.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How better is it with regard to Exercice A?
Worse? Not better? Better? Strongly better?
Exercice C
Modify the hidden layer called "hidden_layer_1" so that it contains 128 units instead of 8.
Restart and execute everything again.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How better is it with regard to Exercice B?
Worse? Not better? Better? Strongly better?
Exercice D
Add a second hidden layer called "hidden_layer_2" containing 32 units in the model of the ANN.
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How better is it with regard to Exercice C?
Worse? Not better? Better? Strongly better?
Exercice E
Add a third hidden layer called "hidden_layer_3" containing 4 units in the model of the ANN.
Restart and execute everything again.
Look at the graph in the last Jupyter cell. Is it better?
Write down the obtained values for:
trainable_params:
training_duration:
evaluated_loss:
How better is it with regard to Exercice D?
Worse? Not better? Better? Strongly better?
Exercice F
If you still have time, you can also play with the training epochs parameter, the number of training samples (or just exchange the training datasets with the test datasets), the type of runtime hardware (GPU or TPU), and so on...
Python Code
Import the tools
End of explanation
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library for generating random numbers
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dataset
#nb_samples=50
# Get some data for training the future neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the future neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered -= X.mean(axis=0)
# X_normalized /= X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_new_X_Y to have an idea of what data is returned
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Get the data
End of explanation
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
from collections import OrderedDict
def create_model():
# This returns a tensor
model = nn.Sequential(OrderedDict([
('hidden_layer_1', nn.Linear(1,128)), ('hidden_layer_1_act', nn.ReLU()),
('hidden_layer_2', nn.Linear(128,32)), ('hidden_layer_2_act', nn.ReLU()),
('hidden_layer_3', nn.Linear(32,4)), ('hidden_layer_3_act', nn.ReLU()),
('output_layer', nn.Linear(4,1))
]))
# NO COMPILATION AS IN TENSORFLOW
#model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
# loss='mean_squared_error',
# metrics=['mean_absolute_error', 'mean_squared_error'])
return model
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
print(ann_model)
print('params:', sum(p.numel() for p in ann_model.parameters()))
print('trainable_params:', sum(p.numel() for p in ann_model.parameters() if p.requires_grad))
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
Explanation: Build the artificial neural-network
End of explanation
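As a cross-check of the trainable_params figure printed above, here is a small hand calculation (added for illustration, not part of the original notebook): a fully connected Linear(in, out) layer contributes out * (in + 1) parameters, the m + 1 per unit mentioned in the reminder.

```python
# hand count of the parameters in the 1 -> 128 -> 32 -> 4 -> 1 architecture
layer_sizes = [(1, 128), (128, 32), (32, 4), (4, 1)]
total = 0
for n_in, n_out in layer_sizes:
    count = n_out * (n_in + 1)          # weights + biases
    total += count
    print('Linear(%d, %d): %d parameters' % (n_in, n_out, count))
print('total:', total)                   # should equal trainable_params above
```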
# Object for storing training results (similar to Tensorflow object)
class Results:
history = {
'train_loss': [],
'valid_loss': []
}
# No Pytorch model.fit() function as it is the case in Tensorflow
# but we can implement it by ourselves.
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
def fit(ann_model, X, Y, verbose=False,
batch_size=1, epochs=500, validation_split=0.2):
n_samples = X.shape[0]
n_samples_test = n_samples - int(n_samples * validation_split)
X = torch.from_numpy(X).unsqueeze(1).float()
Y = torch.from_numpy(Y).unsqueeze(1).float()
X_train = X[0:n_samples_test]
Y_train = Y[0:n_samples_test]
X_valid = X[n_samples_test:]
Y_valid = Y[n_samples_test:]
loss_fn = nn.MSELoss()
optimizer = optim.RMSprop(ann_model.parameters(), lr=0.01)
results = Results()
for epoch in range(0, epochs):
Ŷ_train = ann_model(X_train)
train_loss = loss_fn(Ŷ_train, Y_train)
Ŷ_valid = ann_model(X_valid)
valid_loss = loss_fn(Ŷ_valid, Y_valid)
optimizer.zero_grad()
train_loss.backward()
optimizer.step()
results.history['train_loss'].append(float(train_loss))
results.history['valid_loss'].append(float(valid_loss))
if verbose:
if epoch % 1000 == 0:
print('epoch:%d, train_loss:%.3f, valid_loss:%.3f' \
% (epoch, float(train_loss), float(valid_loss)))
return results
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
t0 = time.time()
results = fit(ann_model, X_train, Y_train, verbose=True,
batch_size=1, epochs=10000, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
plt.plot(results.history['train_loss'], label = 'train_loss')
plt.plot(results.history['valid_loss'], label = 'validation_loss')
plt.legend()
plt.show()
# If you can write a file locally (i.e. if Google Drive is available in the Colab environment)
# then you can save your model in a file for future reuse.
# Only uncomment the following line if you can write a file
#torch.save(ann_model.state_dict(), 'ann_101.pt')
Explanation: Train the artificial neural-network model
End of explanation
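If you uncommented the torch.save line above, the weights can later be restored into a freshly built model of the same architecture. A sketch, assuming the file ann_101.pt was actually written:

```python
# restored = create_model()
# restored.load_state_dict(torch.load('ann_101.pt'))
# restored.eval()   # switch to inference mode before predicting
```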
# No Pytorch model.evaluate() function as it is the case in Tensorflow
# but we can implement it by ourselves.
def evaluate(ann_model, X_, Y_, verbose=False):
X = torch.from_numpy(X_).unsqueeze(1).float()
Y = torch.from_numpy(Y_).unsqueeze(1).float()
Ŷ = ann_model(X)
# let's calculate the mean square error
# (could also be calculated with sklearn.metrics.mean_squared_error()
# or we could also calculate other errors like in 5% ok
mean_squared_error = torch.sum((Ŷ - Y) ** 2)/Y.shape[0]
if verbose:
print("mean_squared_error:%.3f" % mean_squared_error)
return mean_squared_error
test_loss = evaluate(ann_model, X_test, Y_test, verbose=True)
Explanation: Evaluate the model
End of explanation
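The hand-written mean squared error can be cross-checked against PyTorch's built-in loss module, using only objects already defined in this notebook (optional verification):

```python
with torch.no_grad():
    X_t = torch.from_numpy(X_test).unsqueeze(1).float()
    Y_t = torch.from_numpy(Y_test).unsqueeze(1).float()
    # nn.MSELoss with the default 'mean' reduction matches evaluate() above
    print('nn.MSELoss cross-check: %.3f' % float(nn.MSELoss()(ann_model(X_t), Y_t)))
```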
X_new_values = torch.Tensor([0., 0.2, 0.4, 0.6, 0.8, 1.0]).unsqueeze(1).float()
Y_predicted_values = ann_model(X_new_values).detach().numpy()
Y_predicted_values
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# test data in blue color (currently disabled)
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in red color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Predict new output data
End of explanation |
10,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recursion (Recursive Program)
Recursion is a very important way of thinking in programming. It is a function that calls itself. Typical examples are the Factorial and Fibonacci functions.
5 out of 12 problems in the ICPC 2016 Yangon regional were related to recursion.
Step1: Any question ???
Why so complex ???
Is it really necessary ???
They can be easier and faster !!!
Step2: So why is recursion important?
Sometimes it is difficult to calculate from the starting point
Recursion sometimes shows a drastic effect
Example
Step3: Java sample code of Merge Sort
```
import java.util.Scanner;
//import java.util.Arrays ;
// import java.InputMismatchException;
// alds_5b, Aizu Online Judge accept only class name | Python Code:
def factorial(n):
'''
n: integer (n>=1)
returns n! (1*2*3*..*n)
'''
if n==1:
return 1
else:
return n * factorial(n-1) # n * (n-1)!
print('4!=', factorial(4))
print('10!=', factorial(10))
def fibonacchi(n):
'''
n: integer
return Fibonacchi number of n
'''
if n < 2:
return 1
else:
return fibonacchi(n-1) + fibonacchi(n-2)
for i in range(10):
print('Fibonnachi(', i, ')=' ,fibonacchi(i), sep='')
Explanation: Recursion (Recursive Program)
Recursion is a very important way of thinking in programming. It is a function that calls itself. Typical examples are the Factorial and Fibonacci functions.
5 out of 12 problems in the ICPC 2016 Yangon regional were related to recursion.
End of explanation
def factorial_s(n):
'''
n: integer
returns n! (1*2*3*..*n)
'''
ret = 1
for i in range(1, n+1): # from 1 to n
ret *= i
return ret
print('4!=', factorial_s(4))
print('10!=', factorial_s(10))
def fib2(n):
'''
n: integer
return Fibonacchi number of n
'''
if n < 2:
return 1
n2 = n1 = 1
for i in range(2, n+1):
ret = n1 + n2
n2 = n1
n1 = ret
return ret
for i in range(10):
print('Fibonnachi_simple(', i, ')=' ,fib2(i), sep='')
Explanation: Any question ???
Why so complex ???
Is it really necessary ???
They can be easier and faster !!!
End of explanation
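To back the "easier and faster" claim with numbers, here is a quick timing comparison of the recursive and iterative versions defined above (illustrative only; absolute times depend on the machine):

```python
import timeit
# the recursive version recomputes the same subproblems exponentially often
print('recursive fibonacchi(25):', timeit.timeit(lambda: fibonacchi(25), number=10))
print('iterative fib2(25)      :', timeit.timeit(lambda: fib2(25), number=10))
```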
from IPython.display import Image, display_png
display_png(Image("Merge-Sort-Tutorial.png"))
import sys
def merge(list1, list2):
print('Merge:', list1, list2, end= '->', file=sys.stderr)
merged_list = list() # Create empty list
while (list1 and list2): # While there are elements in both list
if list2[0] < list1[0]:
merged_list.append(list2.pop(0))
else:
merged_list.append(list1.pop(0))
if list1: merged_list.extend(list1) # append if list1 is not empty
if list2: merged_list.extend(list2)
print(merged_list, file=sys.stderr)
return merged_list
def mergeSort(list_x):
print('Merge Sort:', list_x, file=sys.stderr)
if len(list_x) > 1:
mid_index = len(list_x) // 2
list1 = mergeSort(list_x[:mid_index]) # not include mid_index
list2 = mergeSort(list_x[mid_index:]) # include mid_index
list_x = merge(list1, list2)
return list_x
import random
num_elem = random.randrange(10) + 10
r_list = [random.randrange(100) for i in range(num_elem)]
#r_list = list()
#for i in range(num_elem):
# r_list.append(random.randrange(100))
print('Initial List:', r_list)
print('Sorted List:', mergeSort(r_list))
Explanation: So why is recursion important?
Sometimes it is difficult to calculate from the starting point
Recursion sometimes shows a drastic effect
Example: Merge Sort https://www.geeksforgeeks.org/merge-sort/
End of explanation
def f(n):
'''
n: integer
return sum of each digit
'''
digits = list(str(n))
return (sum(map(int, digits)))
import sys
def g(n):
print('g()', n, file=sys.stderr, end = ': ')
    while n > 9:  # keep reducing until a single digit remains
n = f(n)
print(n, file=sys.stderr, end = ' ')
print(file=sys.stderr)
return n
import random
for i in range(5):
n = random.randrange(1000, 2000000000)
print(n, g(n))
print(1234567892, g(1234567892))
Explanation: Java sample code of Merge Sort
```
import java.util.Scanner;
//import java.util.Arrays ;
// import java.InputMismatchException;
// alds_5b, Aizu Online Judge accept only class name: Main
class Main {
public static long merge_count = 0 ;
static void Merge(int [] A, int low, int mid, int high) {
int l_size = mid-low ;
int h_size = high-mid ;
int []L = new int[l_size] ;
int []H = new int[h_size] ;
//System.out.printf("Merge: %d %d %d\n", low, mid, high) ;
for (int i=0; i<l_size; i++) {
L[i] = A[low+i] ;
}
for (int i=0; i<h_size; i++) {
H[i] = A[mid+i] ;
}
int a_pos=low, l_pos=0, h_pos=0 ;
while (l_size != 0 && h_size != 0) {
if (L[l_pos] <= H[h_pos]) {
A[a_pos] = L[l_pos] ;
l_size -= 1 ;
l_pos += 1 ;
}
else {
A[a_pos] = H[h_pos] ;
h_size -= 1 ;
h_pos += 1 ;
}
a_pos += 1 ;
merge_count++ ;
}
while (l_size != 0) {
A[a_pos] = L[l_pos] ;
l_size -= 1 ;
l_pos += 1 ;
a_pos += 1 ;
merge_count++ ;
}
while (h_size != 0) {
A[a_pos] = H[h_pos] ;
h_size -= 1 ;
h_pos += 1 ;
a_pos += 1 ;
merge_count++ ;
}
}
static void Merge_sort(int [] A, int low, int high) {
//System.out.printf("Merge_sort: %d %d\n", low, high) ;
if (low + 1 < high) { // if 2 or more elements in the List
int mid = (low + high) / 2 ;
Merge_sort(A, low, mid) ;
Merge_sort(A, mid, high) ;
Merge(A, low, mid, high) ;
}
}
public static void main(String args[]) {
Scanner scanner = new Scanner(System.in) ;
String s = scanner.nextLine() ;
int elem_count ;
elem_count = Integer.parseInt(s) ;
int A[] = new int[elem_count] ;
for (int i=0; i<elem_count; i++) {
A[i] = scanner.nextInt() ;
}
scanner.close() ;
Merge_sort(A, 0, elem_count) ;
for (int i=0; i<elem_count; i++) {
String end_mark = " " ;
if (i==elem_count-1) end_mark = "\n" ;
System.out.print(A[i] + end_mark) ;
}
System.out.println(merge_count) ;
}
}
```
Caution!
Beware of endless loops (think about the exit condition first)
Beware of stack overflow
Consider
whether it is possible to calculate iteratively from the start instead
whether intermediate results can be reused to improve performance (see the sketch below)
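For example, memoizing the recursive Fibonacci function reuses those intermediate results and removes the exponential blow-up (a sketch using only the standard library):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # each value is computed once and then served from the cache
    if n < 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)
```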
UVa 11332
https://uva.onlinejudge.org/index.php?option=onlinejudge&page=show_problem&problem=2307
Problem Description
For a positive integer n, let f(n) denote the
sum of the digits of n when represented in base 10.
It is easy to see that the sequence of numbers
n, f(n), f(f(n)), f(f(f(n))), . . . eventually
becomes a single digit number that repeats forever.
Let this single digit be denoted g(n).
For example, consider n = 1234567892.
Then:
f(n) = 1+2+3+4+5+6+7+8+9+2 = 47
f(f(n)) = 4 + 7 = 11
f(f(f(n))) = 1 + 1 = 2
Therefore, g(1234567892) = 2.
Input
Each line of input contains a single positive integer
n at most 2,000,000,000. Input is terminated
by n = 0 which should not be processed.
Output
For each such integer, you are to output a single
line containing g(n).
Sample Input
2
11
47
1234567892
0
Sample Output
2
2
2
2
End of explanation |
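For the UVa problem quoted above, the repeated digit sum also has a well-known closed form (the digital root), which is handy for sanity-checking g(n); this note is an addition, not part of the original material:

```python
def g_closed_form(n):
    """Digital root of a positive integer: 1 + (n - 1) % 9."""
    return 1 + (n - 1) % 9

print(g_closed_form(1234567892))   # 2, matching the worked example above
```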
10,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <div class="align-center" style="display
Step7: <p style="text-align
Step8: <div class="align-center" style="display
Step9: <p style="text-align
Step10: <p style="text-align
Step11: <p style="text-align
Step12: <p style="text-align
Step13: <p style="text-align
Step14: <span style="text-align
Step15: <p style="text-align
Step16: <p style="text-align
Step17: <p style="text-align
Step18: <p style="text-align
Step19: <div class="align-center" style="display
Step20: <span style="text-align
Step21: <p style="text-align
Step22: <p style="text-align
Step23: <p style="text-align
Step24: <p style="text-align
Step25: <p style="text-align
Step27: <div class="align-center" style="display
Step29: <span style="align | Python Code:
items = ['banana', 'apple', 'carrot']
stock = [2, 3, 4]
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<span style="text-align: right; direction: rtl; float: right;">מילונים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ברשימה הבאה, כל תבליט מייצג אוסף של נתונים:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>בחנות של האדון קשטן יש 2 בננות, 3 תפוחים ו־4 גזרים.</li>
<li>מספר הזהות של ג'ני הוא 086753092, של קווין 133713370, של איינשטיין 071091797 ושל מנחם 111111118.</li>
<li>לקווין מהסעיף הקודם יש צוללות בצבע אדום וכחול. הצוללות של ג'ני מהסעיף הקודם בצבע שחור וירוק. הצוללת שלי צהובה.</li>
<li>המחיר של פאי בחנות של קשטן הוא 3.141 ש"ח. המחיר של אווז מחמד בחנות של קשטן הוא 9.0053 ש"ח.</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נסו למצוא מאפיינים משותפים לאוספים שהופיעו מעלה.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר לחלק כל אחד מהאוספים שכתבנו למעלה ל־2 קבוצות ערכים.<br>
הראשונה – הנושאים של האוסף. עבור החנות של קשטן, לדוגמה, הפריט שאנחנו מחזיקים בחנות.<br>
השנייה – הפריטים שהם <em>נתון כלשהו</em> בנוגע לפריט הראשון: המלאי של אותו פריט, לדוגמה.<br>
</p>
<figure>
<img src="images/dictionary_groups.svg?n=5" style="max-width:100%; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה מופיעים 4 עיגולים בימין ו־4 עיגולים בשמאל. העיגולים בימין, בעלי הכותרת 'נושא', מצביעים על העיגולים בשמאל שכותרתם 'נתון לגבי הנושא'. כל עיגול בימין מצביע לעיגול בשמאל. 'פריט בחנות' מצביע ל'מלאי של הפריט', 'מספר תעודת זהות' מצביע ל'השם שמשויך למספר', 'בן אדם' מצביע ל'צבעי הצוללות שבבעלותו' ו'פריט בחנות' (עיגול נוסף באותו שם כמו העיגול הראשון) מצביע ל'מחיר הפריט'."/>
<figcaption style="text-align: center; direction: rtl;">חלוקת האוספים ל־2 קבוצות של ערכים.</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
פריטים מהקבוצה הראשונה לעולם לא יחזרו על עצמם – אין היגיון בכך ש"תפוח ירוק" יופיע פעמיים ברשימת המלאי בחנות, ולא ייתכן מצב של שני מספרי זהות זהים.<br>
הפריטים מהקבוצה השנייה, לעומת זאת, יכולים לחזור על עצמם – הגיוני שתהיה אותה כמות של בננות ותפוחים בחנות, או שיהיו אנשים בעלי מספרי זהות שונים שנקראים "משה כהן".
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נבחן לעומק את המאפיינים המשותפים בדוגמאות שלעיל.
</p>
<table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
<caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">המשותף לאוספים</caption>
<thead>
<tr>
<th>אוסף</th>
<th>הערך הקושר (קבוצה ראשונה)</th>
<th>הערך המתאים לו (קבוצה שנייה)</th>
<th>הסבר</th>
</tr>
</thead>
<tbody>
<tr>
<td>מוצרים והמלאי שלהם בחנות</td>
<td>המוצר שנמכר בחנות</td>
<td>המלאי מאותו מוצר</td>
<td>יכולים להיות בחנות 5 תפוזים ו־5 תפוחים, אבל אין משמעות לחנות שיש בה 5 תפוחים וגם 3 תפוחים.</td>
</tr>
<tr>
<td>מספרי הזהות של אזרחים</td>
<td>תעודת הזהות</td>
<td>השם של בעל מספר הזהות</td>
<td>יכולים להיות הרבה אזרחים העונים לשם משה לוי, ולכל אחד מהם יהיה מספר זהות שונה. לא ייתכן שמספר זהות מסוים ישויך ליותר מאדם אחד.</td>
</tr>
<tr>
<td>בעלות על צוללות צבעוניות</td>
<td>בעל הצוללות</td>
<td>צבע הצוללות</td>
<td>יכול להיות שגם לקווין וגם לג'ני יש צוללות בצבעים זהים. ג'ני, קווין ואני הם אנשים ספציפיים, שאין יותר מ־1 מהם בעולם (עד שנמציא דרך לשבט אנשים).</td>
</tr>
<tr>
<td>מוצרים ומחיריהם</td>
<td>שם המוצר</td>
<td>מחיר המוצר</td>
<td>לכל מוצר מחיר נקוב. עבור שני מוצרים שונים בחנות יכול להיות מחיר זהה.</td>
</tr>
</tbody>
</table>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מיפוי ערכים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כמו שראינו בדוגמאות, מצב נפוץ במיוחד הוא הצורך לאחסן <em>מיפוי בין ערכים</em>.<br>
נחשוב על המיפוי בחנות של קשטן, שבה הוא סופר את המלאי עבור כל מוצר.<br>
נוכל לייצג את מלאי המוצרים בחנות של קשטן באמצעות הידע שכבר יש לנו. נשתמש בקוד הבא:
</p>
End of explanation
def get_stock(item_name, items, stock):
item_index = items.index(item_name)
how_many = stock[item_index]
return how_many
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עבור כל תא ברשימת <var>items</var>, שמרנו במקום התואם ברשימת <var>stock</var> את הכמות שנמצאת ממנו בחנות.<br>
יש 4 גזרים, 3 תפוחים ו־2 בננות על המדף בחנות של אדון קשטן.<br>
שליפה של כמות המלאי עבור מוצר כלשהו בחנות תתבצע בצורה הבאה:
</p>
End of explanation
print(get_stock('apple', items, stock))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בשורה הראשונה בגוף הפונקציה מצאנו את מיקום המוצר שאנחנו מחפשים במלאי. נניח, "תפוח" מוחזק במקום 1 ברשימה.<br>
בשורה השנייה פנינו לרשימה השנייה, זו שמאחסנת את המלאי עבור כל מוצר, ומצאנו את המלאי שנמצא באותו מיקום.<br>
כמות היחידות של מוצר מאוחסנת במספר תא מסוים, התואם למספר התא ברשימה של שמות המוצרים. זו הסיבה לכך שהרעיון עובד.<br>
</p>
End of explanation
items = [('banana', 2), ('apple', 3), ('carrot', 4)]
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
צורה נוספת למימוש אותו רעיון תהיה שמירה של זוגות סדורים בתוך רשימה של tuple־ים:
</p>
End of explanation
def get_stock(item_name_to_find, items_with_stock):
for item_to_stock in items_with_stock:
item_name = item_to_stock[0]
stock = item_to_stock[1]
if item_name == item_name_to_find:
return stock
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ברשימה הזו הרעיון נראה מובן יותר. בואו נממש דרך לחלץ איבר מסוים מתוך הרשימה:
</p>
End of explanation
get_stock('apple', items)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עבור כל tuple ברשימה, בדקנו אם שם הפריט שהוא מכיל תואם לשם הפריט שחיפשנו.<br>
אם כן, החזרנו את הכמות של אותו פריט במלאי.<br>
שימוש בפונקציה הזו נראה כך:
</p>
End of explanation
ages = {'Yam': 27, 'Methuselah': 969, 'Baby Groot': 3}
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו ב־unpacking שלמדנו במחברת הקודמת כדי לפשט את לולאת ה־<code>for</code> בקוד של <code>get_stock</code>.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שני קטעי הקוד שנתנו כדוגמה פישטו את המצב יתר על המידה, והם אינם מתייחסים למצב שבו הפריט חסר במלאי.<br>
הרחיבו את הפונקציות <code>get_stock</code> כך שיחזירו 0 אם הפריט חסר במלאי.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הגדרה</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מה זה מילון?</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מילון הוא סוג ערך בפייתון.<br>
תכליתו היא ליצור קשר בין סדרה של נתונים שנקראת <dfn>מפתחות</dfn>, לבין סדרה אחרת של נתונים שנקראת <dfn>ערכים</dfn>.<br>
לכל מפתח יש ערך שעליו הוא מצביע.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ישנן דוגמאות אפשריות רבות לקשרים כאלו:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>קשר בין ערים בעולם לבין מספר האנשים שחיים בהן.</li>
<li>קשר בין ברקוד של מוצרים בחנות לבין מספר הפריטים במלאי מכל מוצר.</li>
<li>קשר בין מילים לבין רשימת הפירושים שלהן במילון אבן־שושן.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לערך המצביע נקרא <dfn>מפתח</dfn> (<dfn>key</dfn>). זה האיבר מבין זוג האיברים שעל פיו נשמע הגיוני יותר לעשות חיפוש:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>העיר שאנחנו רוצים לדעת את מספר התושבים בה.</li>
<li>הברקוד שאנחנו רוצים לדעת כמה פריטים ממנו קיימים במלאי.</li>
<li>המילה שאת הפירושים שלה אנחנו רוצים למצוא.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לערך השני מבין שני הערכים בזוג, נקרא... ובכן, <dfn>ערך</dfn> (<dfn>value</dfn>). זה הנתון שנרצה למצוא לפי המפתח:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>מספר התושבים בעיר.</li>
<li>מספר הפריטים הקיימים במלאי עבור ברקוד מסוים.</li>
<li>הפירושים של המילה במילון.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אם כך, מילון הוא בסך הכול אוסף של זוגות שכאלו: מפתחות וערכים.</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הבסיס</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת מילון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ניצור מילון חדש:</p>
End of explanation
age_of_my_elephants = {}
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
במילון הזה ישנם שלושה ערכים: הגיל של ים, של מתושלח ושל בייבי־גרוט.<br>
המפתחות במילון הזה הם <em>Yam</em> (הערך הקשור למפתח הזה הוא 27), <em>Methuselah</em> (עם הערך 969) ו־<em>Baby Groot</em> (אליו הוצמד הערך 3).
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יצרנו את המילון כך:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>פתחנו סוגריים מסולסלים.</li>
<li>יצרנו זוגות של מפתחות וערכים, מופרדים בפסיק:
<ol>
<li>המפתח.</li>
<li>הפרדה בנקודתיים.</li>
<li>הערך.</li>
</ol>
</li>
<li>סגרנו סוגריים מסולסלים.</li>
</ol>
<figure>
<img src="images/dictionary.svg?v=2" style="max-width:100%; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה מופיעים 3 ריבועים בימין ו־3 ריבועים בשמאל. הריבועים בימין, שמתויגים כ'מפתח', מצביעים על הריבועים בשמאל שמתויגים כ'ערך'. כל ריבוע בימין מצביע לריבוע בשמאל. בריבוע הימני העליון כתוב Yam, והוא מצביע על ריבוע בו כתוב 27. כך גם עבור ריבוע שבו כתוב Methuselah ומצביע לריבוע בו כתוב 969, וריבוע בו כתוב Baby Groot ומצביע לריבוע בו כתוב 3."/>
<figcaption style="text-align: center; direction: rtl;">המחשה למילון שבו 3 מפתחות ו־3 ערכים.</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר ליצור מילון ריק בעזרת פתיחה וסגירה של סוגריים מסולסלים:</p>
End of explanation
names = ['Yam', 'Mathuselah', 'Baby Groot']
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
צרו מילון עבור המלאי בחנות של אדון קשטן.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">אחזור ערך</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ניזכר כיצד מאחזרים ערך מתוך רשימה:
</p>
End of explanation
names[2]
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כדי לחלץ את הערך שנמצא <em>במקום 2</em> ברשימה <var>names</var>, נכתוב:
</p>
End of explanation
items = {'banana': 2, 'apple': 3, 'carrot': 4}
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כאן הכול מוכר.<br>
ניקח את המילון שמייצג את המלאי בחנות של אדון קשטן:
</p>
End of explanation
items['banana']
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כדי לחלץ את ערך המלאי שנמצא <em>במקום שבו המפתח הוא 'banana'</em>, נרשום את הביטוי הבא:
</p>
End of explanation
items['melon'] = 1
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כיוון שבמילון המפתח הוא זה שמצביע על הערך ולא להפך, אפשר לאחזר ערך לפי מפתח, אבל אי־אפשר לאחזר מפתח לפי ערך.</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/tip.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
ביום־יום, השתמשו במילה "בְּמָקוֹם" (b'e-ma-qom) כתחליף למילים סוגריים מרובעים.<br>
לדוגמה: עבור שורת הקוד האחרונה, אימרו <em><q>items במקום banana</q></em>.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הוספה ועדכון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אפשר להוסיף מפתח וערך למילון, באמצעות השמת הערך אל המילון במקום של המפתח.<br>
ניקח כדוגמה מקרה שבו יש לנו במלאי מלון אחד.<br>
המפתח הוא <em>melon</em> והערך הוא <em>1</em>, ולכן נשתמש בהשמה הבאה:
</p>
End of explanation
items['melon'] = items['melon'] + 4
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אם הגיעו עוד 4 מלונים לחנות של אדון קשטן, נוכל לעדכן את מלאי המלונים באמצעות השמה למקום הנכון במילון:
</p>
End of explanation
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
for something in favorite_animals:
print(something)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">כללי המשחק</span>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>לא יכולים להיות 2 מפתחות זהים במילון.</li>
<li>המפתחות במילון חייבים להיות immutables.</li>
<li>אנחנו נתייחס למילון כאל מבנה ללא סדר מסוים (אין "איבר ראשון" או "איבר אחרון").</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/deeper.svg?a=1" style="height: 50px !important;" alt="העמקה">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
בגרסאות האחרונות של פייתון הפך מילון להיות מבנה סדור, שבו סדר האיברים הוא סדר ההכנסה שלהם למילון.<br>
למרות זאת, רק במצבים נדירים נצטרך להתייחס לסדר שבו האיברים מסודרים במילון, ובשלב זה נעדיף שלא להתייחס לתכונה הזו.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">סיכום ביניים</span>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>מילון הוא מבנה שבנוי זוגות־זוגות: יש ערכים ומפתחות, ולכל מפתח יש ערך אחד שעליו הוא מצביע.</li>
<li>נתייחס למילון כאל מבנה ללא סדר מסוים. אין "איבר ראשון" או "איבר אחרון".</li>
<li>בניגוד לרשימה, כאן ה"מקום" שאליו אנחנו פונים כדי לאחזר ערך הוא המפתח, ולא מספר שמייצג את המקום הסידורי של התא.</li>
<li>בעזרת מפתח אפשר להגיע לערך המוצמד אליו, אבל לא להפך.</li>
</ul>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/tip.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
חדי העין שמו לב שאנחנו מצליחים להוסיף ערכים למילון, ולשנות בו ערכים קיימים.<br>
מהתכונה הזו אנחנו למדים שמילון הוא mutable.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מעבר על מילון</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">לולאת for</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כיוון שמילון הוא iterable, דרך מקובלת לעבור עליו היא באמצעות לולאת <code>for</code>.<br>
ננסה להשתמש בלולאת <code>for</code> על מילון, ונראה מה התוצאות:
</p>
End of explanation
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('favorite_animals items:')
for key in favorite_animals:
value = favorite_animals[key]
print(f"{key:10} -----> {value}.") # תרגיל קטן: זהו את הטריק שגורם לזה להיראות טוב כל כך בהדפסה
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה שקיבלנו רק את המפתחות, בלי הערכים.<br>
נסיק מכאן שמילון הוא אמנם iterable, אך בכל איטרציה הוא מחזיר לנו רק את המפתח, בלי הערך.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אנחנו כבר יודעים איך מחלצים את הערך של מפתח מסוים.<br>
נוכל להשתמש בידע הזה כדי לקבל בכל חזרור גם את המפתח, וגם את הערך:
</p>
End of explanation
print(list(favorite_animals.items()))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל הפתרון הזה לא נראה אלגנטי במיוחד, ונראה שנוכל למצוא אחד טוב יותר.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעזרתנו נחלצת הפעולה <code>items</code>, השייכת לערכים מסוג מילון.<br>
הפעולה הזו מחזירה זוגות איברים, כאשר בכל זוג האיבר הראשון הוא המפתח והאיבר השני הוא הערך.
</p>
End of explanation
print('favorite_animals items:')
for key, value in favorite_animals.items():
print(f"{key:10} -----> {value}.")
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מאחר שמדובר באיברים שבאים בזוגות, נוכל להשתמש בפירוק איברים כפי שלמדנו בשיעור על לולאות <code>for</code>:
</p>
End of explanation
print('favorite_animals items:')
for character, animal in favorite_animals.items():
print(f"{character:10} -----> {animal}.")
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בלולאה שמופיעה למעלה ניצלנו את העובדה שהפעולה <code>items</code> מחזירה לנו איברים בזוגות: מפתח וערך.<br>
בכל חזרור, אנחנו מכניסים למשתנה <var>key</var> את האיבר הראשון בזוג, ולמשתנה <var>value</var> את האיבר השני בזוג.<br>
נוכל להיות אפילו אלגנטיים יותר ולתת למשתנים הללו שמות ראויים:
</p>
End of explanation
empty_dict = {}
empty_dict['DannyDin']
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה שמקבלת מילון ומדפיסה עבור כל מפתח את האורך של הערך המוצמד אליו.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מפתחות שלא קיימים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הבעיה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מילונים הם טיפוסים קצת רגישים. הם לא אוהבים כשמזכירים להם מה אין בהם.<br>
אם ננסה לפנות למילון ולבקש ממנו מפתח שאין לו, נקבל הודעת שגיאה.<br>
בפעמים הראשונות שתתעסקו עם מילונים, יש סיכוי לא מבוטל שתקבלו <code>KeyError</code> שנראה כך:<br>
</p>
End of explanation
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('Achiles' in loved_animals)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;"><code>in</code> במילונים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יש כמה דרכים לפתור בעיה זו.<br>
דרך אפשרית אחת היא לבדוק שהמפתח קיים לפני שאנחנו ניגשים אליו:</p>
End of explanation
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
if 'Achiles' in loved_animals:
value = loved_animals['Achiles']
else:
value = 'Pony'
print(value)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כאן השתמשנו באופרטור <code>in</code> כדי לבדוק אם מפתח מסוים נמצא במילון.<br>
נוכל גם לבקש את הערך לאחר שבדקנו שהוא קיים:
</p>
End of explanation
def get_value(dictionary, key, default_value):
if key in dictionary:
return dictionary[key]
else:
return default_value
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד שלמעלה, השתמשנו באופרטור ההשוואה <code>in</code> כדי לבדוק אם מפתח מסוים ("אכילס") קיים בתוך המילון שיצרנו בשורה הראשונה.<br>
אם הוא נמצא שם, חילצנו את הערך שמוצמד לאותו מפתח (ל"אכילס"). אם לא, המצאנו ערך משלנו – "פוני".<br>
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
מעבר על מילון יחזיר בכל חזרור מפתח מהמילון, ללא הערך הקשור אליו.<br>
מסיבה זו, אופרטור ההשוואה <code>in</code> יבדוק רק אם קיים <em>מפתח</em> מסוים במילון, ולא יבדוק אם ערך שכזה קיים.
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה שמקבלת שלושה פרמטרים: מילון, מפתח וערך ברירת מחדל.<br>
הפונקציה תחפש את המפתח במילון, ואם הוא קיים תחזיר את הערך שלו.<br>
אם המפתח לא קיים במילון, הפונקציה תחזיר את ערך ברירת המחדל.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ננסה לכתוב את הרעיון בקוד שלמעלה כפונקציה כללית:
</p>
End of explanation
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + get_value(loved_animals, 'Mad hatter', 'Pony'))
print("Queen of hearts: " + get_value(loved_animals, 'Queen of hearts', 'Pony'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
הפונקציה שלמעלה מקבלת מילון, מפתח וערך ברירת מחדל.<br>
אם היא מוצאת את המפתח במילון, היא מחזירה את הערך של אותו מפתח.<br>
אם היא לא מוצאת את המפתח במילון, היא מחזירה את ערך ברירת המחדל שנקבע.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נבדוק שהפונקציה עובדת:
</p>
End of explanation
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + loved_animals.get('Mad hatter', 'Pony'))
print("Queen of hearts: " + loved_animals.get('Queen of hearts', 'Pony'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ובכן, זו פונקציה כייפית. כמה נוח היה לו היא הייתה פעולה של מילון.</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הפעולה <code>get</code> במילונים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מי היה מאמין, יש פעולה כזו במילונים! ננסה להפעיל אותה על המילון שלנו.<br>
שימו לב לצורת הקריאה לפעולה, ששונה מהקריאה לפונקציה שכתבנו למעלה – שם המשתנה של המילון בא לפני שם הפעולה. מדובר בפעולה, ולא בפונקציה:
</p>
End of explanation
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print(loved_animals.get('Mad hatter'))
print(loved_animals.get('Queen of hearts'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
טריק קסום אחרון שנראה הוא שהפעולה <code>get</code> סלחנית ממש, ומתפקדת גם אם לא נותנים לה ערך ברירת מחדל.<br>
אם תספקו רק את שם המפתח שממנו תרצו לאחזר ערך, היא תחפש אותו ותחזיר את הערך שלו, אם הוא קיים.<br>
אם המפתח לא קיים ולא סופק ערך ברירת מחדל, היא תחזיר את הערך <code>None</code>:
</p>
End of explanation
decryption_key = {
'O': 'A', 'D': 'B', 'F': 'C', 'I': 'D', 'H': 'E',
'G': 'F', 'L': 'G', 'C': 'H', 'K': 'I', 'Q': 'J',
'B': 'K', 'J': 'L', 'Z': 'M', 'V': 'N', 'S': 'O',
'R': 'P', 'M': 'Q', 'X': 'R', 'E': 'S', 'P': 'T',
'A': 'U', 'Y': 'V', 'W': 'W', 'T': 'X', 'U': 'Y',
'N': 'Z',
}
SONG = """
sc, kg pchxh'e svh pckvl k covl svps
pcop lhpe zh pcxsalc pch vklcp
k okv'p lsvvo is wcop k isv'p wovp ps
k'z lsvvo jkyh zu jkgh
eckvkvl jkbh o ikozsvi, xsjjkvl wkpc pch ikfh
epovikvl sv pch jhilh, k ecsw pch wkvi csw ps gju
wchv pch wsxji lhpe kv zu gofh
k eou, coyh o vkfh iou
coyh o vkfh iou
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/recall.svg" style="height: 50px !important;" alt="תזכורת">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
הערך המיוחד <code>None</code> הוא דרך פייתונית להגיד "כלום".<br>
אפשר לדמיין אותו כמו רִיק (וָקוּם). לא הערך המספרי אפס, לא <code>False</code>. פשוט כלום.
</p>
</div>
</div>
<span style="align: right; direction: rtl; float: right; clear: both;">מונחים</span>
<dl style="text-align: right; direction: rtl; float: right; clear: both;">
<dt>מילון</dt><dd>טיפוס פייתוני שמאפשר לנו לשמור זוגות סדורים של מפתחות וערכים, שבהם כל מפתח מצביע על ערך.</dd>
<dt>מפתח</dt><dd>הנתון שלפיו נחפש את הערך הרצוי במילון, ויופיע כאיבר הראשון בזיווג שבין מפתח לערך.</dd>
<dt>ערך</dt><dd>הנתון שעליו מצביע המפתח במילון, יתקבל כאשר נחפש במילון לפי אותו מפתח. יופיע כאיבר השני בזיווג שבין מפתח לערך.<dd>
<dt>זוג סדור</dt><dd>זוג של שני איברים הקשורים זה לזה. במקרה של מילון, מפתח וערך.</dd>
</dl>
<span style="align: right; direction: rtl; float: right; clear: both;">תרגילים</span>
<span style="align: right; direction: rtl; float: right; clear: both;">מסר של יום טוב</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יוגב נבו קיבל מסר מוצפן מאדון יום טוב, והצליח לשים את ידו על שיטה לפענוח המסר.<br>
כדי לפענח את המסר, החליפו כל אות במסר הסודי באות התואמת לה, לפי המילון המופיע למטה.<br>
לדוגמה, דאגו שכל המופעים של האות O במסר <var>SONG</var> יוחלפו באות A.
</p>
End of explanation
encryption_key = {
'T': '1', 'F': '6', 'W': 'c', 'Y': 'h', 'B': 'k',
'P': '~', 'H': 'q', 'S': 's', 'E': 'w', 'Q': '@',
'U': '$', 'M': 'i', 'I': 'l', 'N': 'o', 'J': 'y',
'Z': 'z', 'G': '!', 'L': '#', 'A': '&', 'O': '+',
'D': ',', 'R': '-', 'C': ':', 'V': '?', 'X': '^',
'K': '|',
}
SONG = """
l1's ih #l6w
l1's o+c +- ow?w-
l &lo'1 !+oo& #l?w 6+-w?w-
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
(l1's ih #l6w)
ih qw&-1 ls #l|w &o +~wo ql!qc&h
#l|w 6-&o|lw s&l,
l ,l, l1 ih c&h
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
l1's ih #l6w
"""
Explanation: <span style="align: right; direction: rtl; float: right; clear: both;">מראה מראה שעל הקיר</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
חברו של יום טוב, חיים, שלח ליום טוב מסר מוצפן.<br>
למרבה הצער יוגב שם את ידיו רק על מפת ההצפנה, ולא על מפת הפענוח.<br>
צרו ממילון ההצפנה מילון פענוח, שבו:<br>
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>הערכים במילון הפענוח שתיצרו הם המפתחות ממילון ההצפנה.</li>
<li>המפתחות במילון הפענוח שתיצרו הם הערכים ממילון ההצפנה.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לדוגמה, המילון <code dir="ltr" style="direction: ltr;">{'a': '1', 'b': 2}</code> יהפוך למילון <code dir="ltr" style="direction: ltr;">{'1': 'a', '2': 'b'}</code>.<br>
השתמשו במילון הפענוח שיצרתם כדי לפענח את המסר שנשלח.
</p>
End of explanation |
10,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gas-Phase Calculations
https
Step2: Add Master, Solution Species and Phases by executing PHREEQC input code
Step3: Run Calculation
Step4: Total Gas Pressure and Volume
Step5: Fixed Pressure Gas Composition | Python Code:
%pylab inline
import phreeqpython
import pandas as pd
pp = phreeqpython.PhreeqPython(database='phreeqc.dat')
Explanation: Gas-Phase Calculations
https://wwwbrr.cr.usgs.gov/projects/GWC_coupled/phreeqc/phreeqc3-html/phreeqc3-62.htm#50528271_44022
End of explanation
pp.ip.run_string("""
SOLUTION_MASTER_SPECIES
N(-3) NH4+ 0.0 N
SOLUTION_SPECIES
NH4+ = NH3 + H+
    log_k -9.252
    delta_h 12.48 kcal
    -analytic 0.6322 -0.001225 -2835.76
NO3- + 10 H+ + 8 e- = NH4+ + 3 H2O
    log_k 119.077
    delta_h -187.055 kcal
    -gamma 2.5000 0.0000
PHASES
NH3(g)
    NH3 = NH3
    log_k 1.770
    delta_h -8.170 kcal
""")
Explanation: Add Master, Solution Species and Phases by executing PHREEQC input code
End of explanation
# add empty solution 1
solution1 = pp.add_solution({})
# equalize solution 1 with Calcite and CO2
solution1.equalize(['Calcite', 'CO2(g)'], [0,-1.5])
# create a fixed pressure gas phase
fixed_pressure = pp.add_gas({
'CO2(g)': 0,
'CH4(g)': 0,
'N2(g)': 0,
'H2O(g)': 0,
}, pressure=1.1, fixed_pressure=True)
# create a fixed volume gas phase
fixed_volume = pp.add_gas({
'CO2(g)': 0,
'CH4(g)': 0,
'N2(g)': 0,
'H2O(g)': 0,
}, volume=23.19, fixed_pressure=False, fixed_volume=True, equilibrate_with=solution1)
mmol = [1, 2, 3, 4, 8, 16, 32, 64, 125, 250, 500, 1000]
# instantiate result lists
fp_vol = []; fp_pres = []; fp_frac = []; fv_vol = []; fv_pres = []; fv_frac = []
for m in mmol:
sol = solution1.copy()
fp = fixed_pressure.copy()
# equlibriate with solution
sol.add('CH2O(NH3)0.07', m, 'mmol')
sol.interact(fp)
fp_vol.append(fp.volume)
fp_pres.append(fp.pressure)
fp_frac.append(fp.partial_pressures)
sol.forget(); fp.forget() # clean up solutions after use
sol = solution1.copy()
fv = fixed_volume.copy()
sol.add('CH2O(NH3)0.07', m, 'mmol')
sol.interact(fv)
fv_vol.append(fv.volume)
fv_pres.append(fv.pressure)
fv_frac.append(fv.partial_pressures)
sol.forget(); fv.forget() # clean up solutions after use
Explanation: Run Calculation
End of explanation
plt.figure(figsize=[8,5])
# create two y axes
ax1 = plt.gca()
ax2 = ax1.twinx()
# plot pressures
ax1.plot(mmol, np.log10(fp_pres), 'x-', color='tab:purple', label='Fixed_P - Pressure')
ax1.plot(mmol, np.log10(fv_pres), 's-', color='tab:purple', label='Fixed_V - Pressure')
# add dummy handlers for legend
ax1.plot(np.nan, np.nan, 'x-', color='tab:blue', label='Fixed_P - Volume')
ax1.plot(np.nan, np.nan, 's-', color='tab:blue', label='Fixed_V - Volume')
# plot volumes
ax2.plot(mmol, fp_vol, 'x-')
ax2.plot(mmol, fv_vol, 's-', color='tab:blue')
# set log scale to both y axes
ax2.set_xscale('log')
ax2.set_yscale('log')
# set axes limits
ax1.set_xlim([1e0, 1e3])
ax2.set_xlim([1e0, 1e3])
ax1.set_ylim([-5,1])
ax2.set_ylim([1e-3,1e5])
# add legend and gridlines
ax1.legend(loc=4)
ax1.grid()
# set labels
ax1.set_xlabel('Organic matter reacted, in millimoles')
ax1.set_ylabel('Log(Pressure, in atmospheres)')
ax2.set_ylabel('Volume, in liters')
Explanation: Total Gas Pressure and Volume
End of explanation
fig = plt.figure(figsize=[16,5])
# plot fixed pressure gas composition
fig.add_subplot(1,2,1)
pd.DataFrame(fp_frac, index=mmol).apply(np.log10)[2:].plot(style='-x', ax=plt.gca())
plt.title('Fixed Pressure gas composition')
plt.xscale('log')
plt.ylim([-5,1])
plt.grid()
plt.xlim(1e0, 1e3)
plt.xlabel('Organic matter reacted, in millimoles')
plt.ylabel('Log(Partial pressure, in atmospheres)')
# plot fixed volume gas composition
fig.add_subplot(1,2,2)
pd.DataFrame(fv_frac, index=mmol).apply(np.log10).plot(style='-o', ax=plt.gca())
plt.title('Fixed Volume gas composition')
plt.xscale('log')
plt.xlabel('Organic matter reacted, in millimoles')
plt.ylabel('Log(Partial pressure, in atmospheres)')
plt.grid()
plt.ylim([-5,1])
Explanation: Fixed Pressure Gas Composition
End of explanation |
10,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 17, Plot of overall Froude number vs dimensionless amplitude
Start by loading some boiler plate
Step1: And some more specialized dependencies
Step2: Helper routines
Step3: Configuration for this figure.
Step4: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step5: We want to plot the spike depth, which is the 'H' field in the chest.
Chests can prefetch lists of keys more quickly than individual ones, so we'll prefetch the keys we want.
Step6: Use a spline to compute the derivative of 'H' vs time
Step7: Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.
The dotted line is the theoretical prediction of Goncharov. The solid black line is the farthest that Wilkinson and Jacobs were able to get. | Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.interpolate import UnivariateSpline
import json
import pandas as pd
from functools import partial
class Foo: pass
Explanation: Figure 17, Plot of overall Froude number vs dimensionless amplitude
Start by loading some boiler plate: matplotlib, numpy, scipy, json, functools, and a convenience class.
End of explanation
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
Explanation: And some more specialized dependencies:
1. Slict provides a convenient slice-able dictionary interface
2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using...
3. glopen is an open-like context manager for remote globus files
End of explanation
def load_from_archive(names, arch):
cs = []
for name in names:
cs.append(Chest(path = "{:s}-results".format(name),
open = partial(glopen, endpoint=arch),
open_many = partial(glopen_many, endpoint=arch)))
scs = [CachedSlict(c) for c in cs]
ps = []
for name in names:
with glopen(
"{:s}.json".format(name), mode='r',
endpoint = arch,
) as f:
ps.append(json.load(f))
return cs, scs, ps
Explanation: Helper routines
End of explanation
config = Foo()
config.names = [
# "Wilk/Wilk_kmin_2.5/Wilk_kmin_2.5",
# "Wilk/Wilk_kmin_3.5/Wilk_kmin_3.5",
# "Wilk/Wilk_kmin_4.5/Wilk_kmin_4.5",
"Wilk/Wilk_long/Wilk_long",
]
#config.arch_end = "maxhutch#alpha-admin/~/pub/"
#config.arch_end = "alcf#dtn_mira/projects/alpha-nek/experiments/"
config.arch_end = "alcf#dtn_mira/projects/PetaCESAR/maxhutch/"
height = 'H_exp'
Explanation: Configuration for this figure.
End of explanation
cs, scs, ps = load_from_archive(config.names, config.arch_end);
Explanation: Open a chest located on a remote globus endpoint and load a remote json configuration file.
End of explanation
for c,sc in zip(cs, scs):
c.prefetch(sc[:,height].full_keys())
Explanation: We want to plot the spike depth, which is the 'H' field in the chest.
Chests can prefetch lists of keys more quickly than individual ones, so we'll prefetch the keys we want.
End of explanation
spls = []
for sc, p in zip(scs, ps):
T = np.array(sc[:,height].keys())
H = np.array(sc[:,height].values()) #- 2 * np.sqrt(p['conductivity']* (T + p['delta']**2 / p['conductivity'] / 4))
spls.append(UnivariateSpline(T,
H,
k = 5,
s = 1.e-12))
Frs = [spl.derivative() for spl in spls]
Tss = [np.linspace(sc[:,height].keys()[0], sc[:,height].keys()[-1], 1000) for sc in scs]
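# Tab-separated experimental data exports (the Wilkinson & Jacobs runs referred to in the caption below)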
Run37 = pd.DataFrame.from_csv('WRun37 4.49.56 PM 7_3_07.txt', sep='\t')
Run58 = pd.DataFrame.from_csv('WRun058 4.32.52 PM 7_3_07.txt', sep='\t')
Run78 = pd.DataFrame.from_csv('WRun078 4.49.56 PM 7_3_07.txt', sep='\t')
def plot_exp(data, n, fmt):
norm = .5*( np.sqrt(data["Atwood"]/(1-data["Atwood"])*data["Accel. [mm/sec^2]"]* 76 / n)
+ np.sqrt(data["Atwood"]/(1+data["Atwood"])*data["Accel. [mm/sec^2]"]* 76 / n))
axs.plot(
data["AvgAmp (mm)"] * n / 76,
data["Average Velocity"]/norm, fmt);
#data["Froude Average"], fmt);
return
Explanation: Use a spline to compute the derivative of 'H' vs time: the Froude number.
End of explanation
fig, axs = plt.subplots(1,1)
for p, spl, Fr, T in zip(ps, spls, Frs, Tss):
axs.plot(
spl(T) * p["kmin"],
Fr(T)/ np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
label="{:3.1f} modes".format(p["kmin"]));
#axs.plot(Run37["AvgAmp (mm)"] * 2.5 / 76, Run37["Froude Average"], "bx");
#plot_exp(Run37, 2.5, "bx")
#plot_exp(Run78, 3.5, "gx")
plot_exp(Run58, 4.5, "bx")
axs.plot([0,10], [np.sqrt(1/np.pi), np.sqrt(1/np.pi)], 'k--')
axs.axvline(x=1.4, color='k');
axs.set_ylabel(r'Fr')
axs.set_xlabel(r'$h/\lambda$');
axs.legend(loc=4);
axs.set_xbound(0,3);
axs.set_ybound(0,1.5);
plt.savefig('Figure17_long.png')
fig, axs = plt.subplots(1,1)
for sc, p, spl, Fr, T in zip(scs, ps, spls, Frs, Tss):
axs.plot(
T,
spl(T) * p["kmin"],
label="{:3.1f} modes".format(p["kmin"]));
axs.plot(
sc[:,height].keys(),
np.array(sc[:,height].values())*p['kmin'],
'bo');
#axs.plot(Run37["Time (sec)"]-.5, Run37["AvgAmp (mm)"] * 2.5 / 76, "bx");
axs.plot(Run58["Time (sec)"]-.515, Run58["AvgAmp (mm)"] * 4.5 / 78, "bx");
#axs.plot(Run78["Time (sec)"]-.5, Run78["AvgAmp (mm)"] * 3.5 / 76, "gx");
axs.set_ylabel(r'$h/\lambda$')
axs.set_xlabel(r'T (s)');
axs.set_xbound(0.0,1.5);
axs.set_ybound(-0.0,4);
axs.legend(loc=4);
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, matplotlib, slict, chest, glopen, globussh
Explanation: Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.
The dotted line is the theoretical prediction of Goncharov. The solid black line is the farthest that Wilkinson and Jacobs were able to get.
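Written out, the normalization computed in the plotting cell above (with $A$ the Atwood number, $g$ the acceleration, and $k_{\min}$ the wave number) is
$$\mathrm{Fr} = \frac{dh/dt}{\sqrt{A\,g/k_{\min}}},$$
and the dotted reference line is drawn at $\mathrm{Fr} = \sqrt{1/\pi}$, matching the np.sqrt(1/np.pi) constant in the code.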
End of explanation |
10,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the data
2MASS => effective resolution of the 2MASS system is approximately 5"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350—1750Å) and Near UV band (1750—2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
Step1: Matching coordinates
Step2: Plot W1-J vs W1
Step3: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Only 2 objects are galaxies?
Step4: Filter all Cats | Python Code:
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1]
# name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
Explanation: Get the data
2MASS => effective resolution of the 2MASS system is approximately 5"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350—1750Å) and Near UV band (1750—2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
End of explanation
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 6.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only the nearest one if there are more in the search region (minimum separation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
Explanation: Matching coordinates
End of explanation
# from matching of 2 cats (2MASS and WISE) coordinate
w1 = data_wise[idx_wise]['w1mpro']
j = data_2mass[idx_2mass]['j_m']
w1j = w1-j
# match between WISE and 2MASS
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
cutw1j = -1.7
galaxy = data_wise_matchwith_2mass[w1j < cutw1j] # https://academic.oup.com/mnras/article/448/2/1305/1055284
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
Explanation: Plot W1-J vs W1
End of explanation
# GALEX
###
# coords of objects in 2MASS which match WISE (first object/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of match in 2MASS cut (with WISE): ", num_galex1)
print("Number of match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
Explanation: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Only 2 objects are galaxies?
End of explanation
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
else:
select_from_galex = idx_galex2
print(len(select_from_galex))
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# Select other catalog using GALEX again
# 2MASS
idx_2mass, idx_galex1, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
# WISE
idx_wise, idx_galex2, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
print("Number of match in GALEX: ", len(c_selected_galex))
print("Number of match in 2MASS: ", len(idx_2mass))
print("Number of match in WISE : ", len(idx_wise))
Explanation: Filter all Cats
End of explanation |
10,415 | Given the following text description, write Python code to implement the functionality described.
Description:
Sort a nearly sorted array using STL
Given an array of size n, where every element is at most k away from its target position, sort the array in O(n log n) time. ; Sort the array using the inbuilt sort function ; A utility function to print array elements ; Driver code
| Python Code:
def sortK(arr, n, k):
    # k (the maximum displacement of any element) is unused here; the built-in sort is O(n log n)
    arr.sort()

def printArray(arr, size):
    for i in range(size):
        print(arr[i], end=" ")
    print()

k = 3
arr = [2, 6, 3, 12, 56, 8]
n = len(arr)
sortK(arr, n, k)
print("Following is sorted array")
printArray(arr, n)
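The sortK above does not use k at all. For comparison, a sketch of the standard approach that does exploit k — a sliding min-heap of size k+1, giving O(n log k) — is shown below; this is an illustrative addition, not part of the original snippet:

import heapq

def sortK_heap(arr, n, k):
    # In a nearly sorted array, the element that belongs at the next output position
    # is always within the next k+1 unplaced elements, so a heap of size k+1 suffices.
    heap = arr[:k + 1]
    heapq.heapify(heap)
    idx = 0
    for i in range(k + 1, n):
        arr[idx] = heapq.heappushpop(heap, arr[i])
        idx += 1
    while heap:
        arr[idx] = heapq.heappop(heap)
        idx += 1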
|
10,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cookbook recipe
Step1: Either create a new ipyrad assembly or load an existing one
Step2: Or load a finished assembly from its JSON file
Step3: Look at the stats summary for this assembly
Step4: Load R-language extension
Step5: Transfer Python object to R
There are a few odd tricks to using this module. One is that you shouldn't try to transfer objects with '.' in their names; R doesn't like that. So simply rename these objects before passing them. Below I rename the stats data frame and then use the '-i' flag in R to import it into R namespace. The "%R" at the beginning of the line tells IPython to execute just that line as R code.
Step6: Now R knows about statsDF
We can access it just like a normal R DataFrame, and even create plots. Using the cell header %%R everything in the cell will execute as R code.
Step7: Let's transfer more data from Python to R
Step8: Plot coverage among samples
Kinda boring in this example...
Step9: Plot the distribution of SNPs among loci | Python Code:
## import ipyrad and give it a shorter name
import ipyrad as ip
Explanation: Cookbook recipe: Access and plot ipyrad stats in R
Jupyter notebooks provide a convenient interface for sharing data and functions between Python and R through use of the Python rpy2 module. By combining all of your code from across multiple languages inside a single notebook your workflow from analyses to plots is easy to follow and reproduce. In this notebook, I show an example of exporting data from an ipyrad JSON object into R so that we can create high quality plots using plotting libraries in R.
Why do this?
A large motivation for creating the JSON storage object for ipyrad Assemblies is that this object stores all of the information for an entire assembly, and thus provides a useful portable format. This way you can execute very large assemblies on a remote HPC cluster and simply import the small JSON file onto your laptop to analyze and compare its size and other stats. It is of course easiest to analyze the stats in Python since ipyrad has a built-in parser for these JSON objects. However, many users may prefer to use R for plotting, and so here we show how to easily transfer results from ipyrad to R.
Two ways of accessing data in the ipyrad JSON file:
Load the data in with the ipyrad API and save to CSV.
Load the data in with the ipyrad API and export to R using rpy2.
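For reference, option 1 above is a one-liner once an assembly object is loaded (a sketch; the output filename is illustrative):

## Option 1 sketch: dump the stats DataFrame to CSV (illustrative filename)
data.stats.to_csv("data_stats.csv")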
Start by importing ipyrad
End of explanation
## create a test assembly
data = ip.Assembly("data")
data.set_params('project_dir', 'test')
data.set_params('raw_fastq_path', 'ipsimdata/rad_example_R1_.fastq.gz')
data.set_params('barcodes_path', 'ipsimdata/rad_example_barcodes.txt')
## Assemble data set; runs steps 1-7
data.run('1')
Explanation: Either create a new ipyrad assembly or load an existing one:
Here we use the API to run the example RAD data set, which only takes about 3 minutes on a 4-core laptop.
End of explanation
## load the JSON file for this assembly
data = ip.load_json("test/data.json")
Explanation: Or load a finished assembly from its JSON file
End of explanation
## Data can be accessed from the object's stats and stats_df attributes
print data.stats
Explanation: Look at the stats summary for this assembly
End of explanation
## This requires that you have the Python module `rpy2` installed.
## If you do not, it can be installed in anaconda with:
## conda install rpy2
Explanation: Load R-language extension
End of explanation
%load_ext rpy2.ipython
## rename data.stats as statsDF
statsDF = data.stats
## import statsDF into R namespace
%R -i statsDF
Explanation: Transfer Python object to R
There are a few odd tricks to using this module. One is that you shouldn't try to transfer objects with '.' in their names; R doesn't like that. So simply rename these objects before passing them. Below I rename the stats data frame and then use the '-i' flag in R to import it into R namespace. The "%R" at the beginning of the line tells IPython to execute just that line as R code.
End of explanation
%%R
print(statsDF)
%%R -w 350 -h 350
## the dimensions above tell IPython how big to make the embedded figure
## alternatively you can adjust the size when you save the figure
plot(statsDF$reads_raw,
statsDF$reads_filtered,
pch=20, cex=3)
Explanation: Now R knows about statsDF
We can access it just like a normal R DataFrame, and even create plots. Using the cell header %%R everything in the cell will execute as R code.
End of explanation
### Other stats from our assembly are also available.
### First store names and then import into R
s5 = data.stats_dfs.s5
s7L = data.stats_dfs.s7_loci
s7S = data.stats_dfs.s7_snps
s7N = data.stats_dfs.s7_samples
## no spaces allowed between comma-separated names when
## transferring multiple objects to R
%R -i s5,s7L,s7S,s7N
Explanation: Let's transfer more data from Python to R
End of explanation
%%R -w 800 -h 320
##
barplot(s7N$sample_coverage,
col='grey30', names=rownames(s7N),
ylab="N loci",
xlab="Sample")
Explanation: Plot coverage among samples
Kinda boring in this example...
End of explanation
%%R -w 450 -h 400
print(s7S)
barplot(s7S$var,
col=rgb(0,0,1,1/4),
names=rownames(s7S),
ylab="N loci", ylim=c(0, 400),
xlab="N variable sites")
barplot(s7S$pis,
col=rgb(1,0,0,1/4),
names=rownames(s7S),
ylab="N loci", ylim=c(0, 400),
xlab="N variable sites",
add=TRUE)
Explanation: Plot the distribution of SNPs among loci
End of explanation |
10,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example, we will use the tensorflow.keras package to create a Keras image classification application using the MobileNetV2 model, and transfer the application to Cluster Serving step by step.
Original Keras application
We will first show an original Keras application, which downloads the data, preprocesses it, and then creates the MobileNetV2 model for prediction.
Step1: In Keras, the input can be an ndarray or a generator. We could just use model.predict(test_generator), but to keep things simple we feed only the first record to the model.
Step2: Great! Now the Keras application is completed.
Export TensorFlow Saved Model
Next, we transfer the application to Cluster Serving. The first step is to save the model to SavedModel format.
Step3: Deploy Cluster Serving
After the model is prepared, we start to deploy it on Cluster Serving.
First, install Cluster Serving
Step4: We config the model path in config.yaml to following (the detail of config is at Cluster Serving Configuration)
Step5: Start Cluster Serving
Cluster Serving requires Flink and Redis to be installed, with the corresponding environment variables set; check the Cluster Serving Installation Guide for details.
The Flink cluster should be started before Cluster Serving. If it is not running yet, call the following to start a local Flink cluster.
Step6: After configuration, start Cluster Serving by cluster-serving-start (the detail is at Cluster Serving Programming Guide)
Step7: Prediction using Cluster Serving
Next we start Cluster Serving code at python client.
Step8: In Cluster Serving, only NdArray is supported as input. Thus, we first transform the generator to ndarray (If you do not know how to transform your input to NdArray, you may get help at data transform guide)
Step9: If everything works well, the resulting prediction will be exactly the same NdArray as the output of the original Keras model.
Next, here is how to use the HTTP service through Python.
Step10: If you do not know how to find the jar or other http service, you may get help at Cluster Serving http guide
Step11: Cluster Serving provides a Python util, http_response_to_ndarray, which lets the user parse an HTTP response directly into an ndarray, as follows. | Python Code:
import tensorflow as tf
import os
import PIL
tf.__version__
# Obtain data from url:"https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"
zip_file = tf.keras.utils.get_file(origin="https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip",
fname="cats_and_dogs_filtered.zip", extract=True)
# Find the directory of validation set
base_dir, _ = os.path.splitext(zip_file)
test_dir = os.path.join(base_dir, 'validation')
# Set images size to 160x160x3
image_size = 160
# Rescale all images by 1./255 and apply image augmentation
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# Flow images using generator to the test_generator
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(image_size, image_size),
batch_size=1,
class_mode='binary')
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE=(160,160,3)
model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
Explanation: In this example, we will use the tensorflow.keras package to create a Keras image classification application using the MobileNetV2 model, and transfer the application to Cluster Serving step by step.
Original Keras application
We will first show an original Keras application, which downloads the data, preprocesses it, and then creates the MobileNetV2 model for prediction.
End of explanation
prediction=model.predict(test_generator.next()[0])
print(prediction)
Explanation: In Keras, the input can be an ndarray or a generator. We could just use model.predict(test_generator), but to keep things simple we feed only the first record to the model.
End of explanation
# Save trained model to ./transfer_learning_mobilenetv2
model.save('/tmp/transfer_learning_mobilenetv2')
! ls /tmp/transfer_learning_mobilenetv2
Explanation: Great! Now the Keras application is completed.
Export TensorFlow Saved Model
Next, we transfer the application to Cluster Serving. The first step is to save the model to SavedModel format.
End of explanation
! pip install analytics-zoo-serving
# we go to a new directory and initialize the environment
! mkdir cluster-serving
os.chdir('cluster-serving')
! cluster-serving-init
! tail wget-log.2
# if you encounter slow download issue like above, you can just use following command to download
# ! wget https://repo1.maven.org/maven2/com/intel/analytics/zoo/analytics-zoo-bigdl_0.12.1-spark_2.4.3/0.9.0/analytics-zoo-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar
# if you are using wget to download, call mv *serving.jar zoo.jar again after downloaded.
# After initialization finished, check the directory
! ls
Explanation: Deploy Cluster Serving
After the model is prepared, we start to deploy it on Cluster Serving.
First, install Cluster Serving
End of explanation
## Analytics-zoo Cluster Serving
model:
# model path must be provided
path: /tmp/transfer_learning_mobilenetv2
! head config.yaml
Explanation: We config the model path in config.yaml to following (the detail of config is at Cluster Serving Configuration)
End of explanation
! $FLINK_HOME/bin/start-cluster.sh
Explanation: Start Cluster Serving
Cluster Serving requires Flink and Redis to be installed, with the corresponding environment variables set; check the Cluster Serving Installation Guide for details.
The Flink cluster should be started before Cluster Serving. If it is not running yet, call the following to start a local Flink cluster.
End of explanation
! cluster-serving-start
Explanation: After configuration, start Cluster Serving with cluster-serving-start (details are in the Cluster Serving Programming Guide)
End of explanation
from zoo.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()
Explanation: Prediction using Cluster Serving
Next we start Cluster Serving code at python client.
End of explanation
arr = test_generator.next()[0]
arr
# Use the async API to put and get; you have to pass a name argument and use the same name to get the result
input_queue.enqueue('my-input', t=arr)
output_queue = OutputQueue()
prediction = output_queue.query('my-input')
# Use the sync API to predict; this will block until the result is returned or a timeout occurs
prediction = input_queue.predict(arr)
prediction
Explanation: In Cluster Serving, only NdArray is supported as input. Thus, we first transform the generator output to an ndarray (if you do not know how to transform your input to an NdArray, you may get help at the data transform guide)
End of explanation
# start the http server via jar
# ! java -jar analytics-zoo-bigdl_0.10.0-spark_2.4.3-0.9.0-SNAPSHOT-http.jar
Explanation: If everything works well, the resulting prediction will be exactly the same NdArray as the output of the original Keras model.
Next, here is how to use the HTTP service through Python.
End of explanation
! curl http://localhost:10020
Explanation: If you do not know how to find the jar or other http service, you may get help at Cluster Serving http guide
End of explanation
import json
import requests
import numpy as np
from zoo.serving.client import http_response_to_ndarray
url = 'http://localhost:10020/predict'
d = json.dumps({"instances":[{"floatTensor": arr.tolist()}]})
r = requests.post(url, data=d)
http_prediction = http_response_to_ndarray(r)
http_prediction
# don't forget to delete the model you save for this tutorial
! rm -rf /tmp/transfer_learning_mobilenetv2
Explanation: Cluster Serving provides a Python util, http_response_to_ndarray, which lets the user parse an HTTP response directly into an ndarray, as follows.
End of explanation |
10,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Dataset Preparation
Overview
In this phase, a startups dataset will be properly created and prepared for further feature analysis. Different features will be created here by combining information from the CSV files we have available
Step1: Start main dataset by USA companies from companies.csv
We'll be using in the analysis only USA based companies since companies from other countries have a large amount of missing data
Step2: Extract company category features
Now that we have a first version of our dataset, we'll expand the category_list attribute into dummy variables for categories.
Step3: Since there are too many categories, we'll be selecting the top 50 most frequent ones.
We see from the chart above, that with these 50 (out of 60813) categories we cover 46% of the companies.
Step4: So now we have added 50 more categories to our dataset.
Analyzing total funding and funding round features
Step5: Analyzing date variables
Extract investment rounds features
Here, we'll extract from the rounds.csv file the number of rounds and total amount invested for each different type of investment.
Step6: Change dataset index
We'll set the company id (permalink attribute) as the index for the dataset. This simple change will make it easier to attach new features to the dataset.
Step7: Extract acquisitions features
Here, we'll extract the number of acquisitions that were made by each company in our dataset.
Step8: Extract investments feature
Here, we'll extract the number of investments made by each company in our dataset.
Note
Step9: Extract average number of investors and amount invested per round
Here we'll extract two more features
The average number of investors that participated in each round of investment
The average amount invested among all the investment rounds a startup had
Step10: Drop useless features
Here we'll drop homepage_url, category_list, region, city, country_code We'll also move status to the end of the dataframe
Step11: Normalize numeric variables
Here we'll set all the numeric variables into the same scale (0 to 1)
Step12: Normalize date variables
Here we'll convert dates to ages in months up to the first day of 2017
Step13: Extract state_code features
Step14: As we did for the categories variable, in order to decrease the number of features in our dataset, let's just select the top 15 most frequent states (which already cover 82% of our companies)
Step15: Move status to the end of dataframe and save to file | Python Code:
#All imports here
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from datetime import datetime
from dateutil import relativedelta
%matplotlib inline
#Let's start by importing our csv files into dataframes
df_companies = pd.read_csv('data/companies.csv')
df_acquisitions = pd.read_csv('data/acquisitions.csv')
df_investments = pd.read_csv('data/investments.csv')
df_rounds = pd.read_csv('data/rounds.csv')
Explanation: 1. Dataset Preparation
Overview
In this phase, a startups dataset will be properly created and prepared for further feature analysis. Different features will be created here by combining information from the CSV files we have available: acquisitions.csv, investments.csv, rounds.csv and companies.csv.
Load all available data from CSV general files
End of explanation
#Our final database will be stored in 'startups_USA'
startups_USA = df_companies[df_companies['country_code'] == 'USA']
startups_USA.head()
Explanation: Start main dataset by USA companies from companies.csv
In the analysis we'll be using only USA-based companies, since companies from other countries have a large amount of missing data
End of explanation
from operator import methodcaller
def split_categories(categories):
#get a unique list of the categories
splitted_categories = list(categories.astype('str').unique())
#split each category by |
splitted_categories = map(methodcaller("split", "|"), splitted_categories)
#flatten the list of sub categories
splitted_categories = [item for sublist in splitted_categories for item in sublist]
return splitted_categories
def explore_categories(categories, top_n_categories):
cat = split_categories(categories)
print 'There are in total {} different categories'.format(len(cat))
prob = pd.Series(cat).value_counts()
print prob.head()
#select first <top_n_categories>
mask = prob > prob[top_n_categories]
head_prob = prob.loc[mask].sum()
tail_prob = prob.loc[~mask].sum()
total_sum = prob.sum()
prob = prob.loc[mask]
prob2 = pd.DataFrame({'top '+str(top_n_categories)+' categories': head_prob, 'others': tail_prob},index=[0])
fig, axs = plt.subplots(2,1, figsize=(15,6))
prob.plot(kind='bar', ax=axs[0])
prob2.plot(kind='bar', ax=axs[1])
for bar in axs[1].patches:
height = bar.get_height()
axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height, '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
fig.tight_layout()
plt.xticks(rotation=90)
plt.show()
explore_categories(startups_USA['category_list'], top_n_categories=50)
Explanation: Extract company category features
Now that we have a first version of our dataset, we'll expand the category_list attribute into dummy variables for categories.
End of explanation
def expand_top_categories_into_dummy_variables(df):
cat = df['category_list'].astype('str')
cat_count = cat.str.split('|').apply(lambda x: pd.Series(x).value_counts()).sum()
#Get a dummy dataset for categories
dummies = cat.str.get_dummies(sep='|')
#Count of categories splitted first 50)
top50categories = list(cat_count.sort_values(ascending=False).index[:50])
#Create a dataframe with the 50 top categories to be concatenated later to the complete dataframe
categories_df = dummies[top50categories]
categories_df = categories_df.add_prefix('Category_')
return pd.concat([df, categories_df], axis=1, ignore_index=False)
startups_USA = expand_top_categories_into_dummy_variables(startups_USA)
startups_USA.head()
Explanation: Since there are too many categories, we'll be selecting the top 50 most frequent ones.
We see from the chart above, that with these 50 (out of 60813) categories we cover 46% of the companies.
End of explanation
startups_USA['funding_rounds'].hist(bins=range(1,10))
plt.title("Histogram of the number of funding rounds")
plt.ylabel('Number of companies')
plt.xlabel('Number of funding rounds')
#funding_total_usd
#funding_rounds
plt.subplot()
startups_USA[startups_USA['funding_total_usd'] != '-']. \
set_index('name')['funding_total_usd'] \
.astype(float) \
.sort_values(ascending=False)\
[:30].plot(kind='barh', figsize=(5,7))
plt.gca().invert_yaxis()
plt.title('Companies with highest total funding')
plt.ylabel('Companies')
plt.xlabel('Total amount of funding (USD)')
Explanation: So now we have added 50 more categories to our dataset.
Analyzing total funding and funding round features
End of explanation
# Investment types
df_rounds['funding_round_type'].value_counts()
import warnings
warnings.filterwarnings('ignore')
#Iterate over each kind of funding type, and add two new features for each into the dataframe
def add_dummy_for_funding_type(df, aggr_rounds, funding_type):
funding_df = aggr_rounds.iloc[aggr_rounds.index.get_level_values('funding_round_type') == funding_type].reset_index()
funding_df.columns = funding_df.columns.droplevel()
funding_df.columns = ['company_permalink', funding_type, funding_type+'_funding_total_usd', funding_type+'_funding_rounds']
funding_df = funding_df.drop(funding_type,1)
new_df = pd.merge(df, funding_df, on='company_permalink', how='left')
new_df = new_df.fillna(0)
return new_df
def expand_investment_rounds(df, df_rounds):
#Prepare an aggregated rounds dataframe grouped by company and funding type
rounds_agg = df_rounds.groupby(['company_permalink', 'funding_round_type'])['raised_amount_usd'].agg({'amount': [ pd.Series.sum, pd.Series.count]})
#Get available unique funding types
funding_types = list(rounds_agg.index.levels[1])
#Prepare the dataframe where all the dummy features for each funding type will be added (number of rounds and total sum for each type)
rounds_df = df[['permalink']]
rounds_df = rounds_df.rename(columns = {'permalink':'company_permalink'})
#For each funding type, add two more columns to rounds_df
for funding_type in funding_types:
rounds_df = add_dummy_for_funding_type(rounds_df, rounds_agg, funding_type)
#remove the company_permalink variable, since it's already available in the companies dataframe
rounds_df = rounds_df.drop('company_permalink', 1)
#set rounds_df to have the same index of the other dataframes
rounds_df.index = df.index
return pd.concat([df, rounds_df], axis=1, ignore_index=False)
startups_USA = expand_investment_rounds(startups_USA, df_rounds)
startups_USA.head()
Explanation: Analyzing date variables
Extract investment rounds features
Here, we'll extract from the rounds.csv file the number of rounds and total amount invested for each different type of investment.
End of explanation
startups_USA = startups_USA.set_index('permalink')
Explanation: Change dataset index
We'll set the company id (permalink attribute) as the index for the dataset. This simple change will make it easier to attach new features to the dataset.
End of explanation
import warnings
warnings.filterwarnings('ignore')
def extract_feature_number_of_acquisitions(df, df_acquisitions):
number_of_acquisitions = df_acquisitions.groupby(['acquirer_permalink'])['acquirer_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_acquisitions.columns = number_of_acquisitions.columns.droplevel()
number_of_acquisitions.columns = ['permalink', 'number_of_acquisitions']
number_of_acquisitions = number_of_acquisitions.set_index('permalink')
number_of_acquisitions = number_of_acquisitions.fillna(0)
new_df = df.join(number_of_acquisitions)
new_df['number_of_acquisitions'] = new_df['number_of_acquisitions'].fillna(0)
return new_df
startups_USA = extract_feature_number_of_acquisitions(startups_USA, df_acquisitions)
Explanation: Extract acquisitions features
Here, we'll extract the number of acquisitions that were made by each company in our dataset.
End of explanation
import warnings
warnings.filterwarnings('ignore')
def extract_feature_number_of_investments(df, df_investments):
number_of_investments = df_investments.groupby(['investor_permalink'])['investor_permalink'].agg({'amount': [ pd.Series.count]}).reset_index()
number_of_investments.columns = number_of_investments.columns.droplevel()
number_of_investments.columns = ['permalink', 'number_of_investments']
number_of_investments = number_of_investments.set_index('permalink')
number_of_unique_investments = df_investments.groupby(['investor_permalink'])['company_permalink'].agg({'amount': [ pd.Series.nunique]}).reset_index()
number_of_unique_investments.columns = number_of_unique_investments.columns.droplevel()
number_of_unique_investments.columns = ['permalink', 'number_of_unique_investments']
number_of_unique_investments = number_of_unique_investments.set_index('permalink')
new_df = df.join(number_of_investments)
new_df['number_of_investments'] = new_df['number_of_investments'].fillna(0)
new_df = new_df.join(number_of_unique_investments)
new_df['number_of_unique_investments'] = new_df['number_of_unique_investments'].fillna(0)
return new_df
startups_USA = extract_feature_number_of_investments(startups_USA, df_investments)
Explanation: Extract investments feature
Here, we'll extract the number of investments made by each company in our dataset.
Note: This is not the number of times in which someone invested in the startup. It is the number of times each startup has made an investment in another company.
End of explanation
import warnings
warnings.filterwarnings('ignore')
def extract_feature_avg_investors_per_round(df, investments):
number_of_investors_per_round = investments.groupby(['company_permalink', 'funding_round_permalink'])['investor_permalink'].agg({'investor_permalink': [ pd.Series.count]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'funding_round_permalink', 'count']
number_of_investors_per_round = number_of_investors_per_round.groupby(['company_permalink']).agg({'count': [ pd.Series.mean]}).reset_index()
number_of_investors_per_round.columns = number_of_investors_per_round.columns.droplevel(0)
number_of_investors_per_round.columns = ['company_permalink', 'number_of_investors_per_round']
number_of_investors_per_round = number_of_investors_per_round.set_index('company_permalink')
new_df = df.join(number_of_investors_per_round)
new_df['number_of_investors_per_round'] = new_df['number_of_investors_per_round'].fillna(-1)
return new_df
def extract_feature_avg_amount_invested_per_round(df, investments):
investmentsdf = investments.copy()
investmentsdf['raised_amount_usd'] = investmentsdf['raised_amount_usd'].astype(float)
avg_amount_invested_per_round = investmentsdf.groupby(['company_permalink', 'funding_round_permalink'])['raised_amount_usd'].agg({'raised_amount_usd': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'funding_round_permalink', 'mean']
avg_amount_invested_per_round = avg_amount_invested_per_round.groupby(['company_permalink']).agg({'mean': [ pd.Series.mean]}).reset_index()
avg_amount_invested_per_round.columns = avg_amount_invested_per_round.columns.droplevel(0)
avg_amount_invested_per_round.columns = ['company_permalink', 'avg_amount_invested_per_round']
avg_amount_invested_per_round = avg_amount_invested_per_round.set_index('company_permalink')
new_df = df.join(avg_amount_invested_per_round)
new_df['avg_amount_invested_per_round'] = new_df['avg_amount_invested_per_round'].fillna(-1)
return new_df
startups_USA = extract_feature_avg_investors_per_round(startups_USA, df_investments)
startups_USA = extract_feature_avg_amount_invested_per_round(startups_USA, df_investments)
startups_USA.head()
Explanation: Extract average number of investors and amount invested per round
Here we'll extract two more features
The average number of investors that participated in each round of investment
The average amount invested among all the investment rounds a startup had
End of explanation
#drop features
startups_USA = startups_USA.drop(['name','homepage_url', 'category_list', 'region', 'city', 'country_code'], 1)
#move status to the end of the dataframe
cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA.ix[:, cols]
Explanation: Drop useless features
Here we'll drop homepage_url, category_list, region, city, country_code We'll also move status to the end of the dataframe
End of explanation
def normalize_numeric_features(df, columns_to_scale = None):
min_max_scaler = preprocessing.MinMaxScaler()
startups_normalized = df.copy()
#Convert '-' to zeros in funding_total_usd
startups_normalized['funding_total_usd'] = startups_normalized['funding_total_usd'].replace('-', 0)
#scale numeric features
startups_normalized[columns_to_scale] = min_max_scaler.fit_transform(startups_normalized[columns_to_scale])
return startups_normalized
columns_to_scale = list(startups_USA.filter(regex=(".*(funding_rounds|funding_total_usd)|(number_of|avg_).*")).columns)
startups_USA = normalize_numeric_features(startups_USA, columns_to_scale)
Explanation: Normalize numeric variables
Here we'll set all the numeric variables into the same scale (0 to 1)
End of explanation
def date_to_age_in_months(date):
if date != date or date == 0: #is NaN
return 0
date1 = datetime.strptime(date, '%Y-%m-%d')
date2 = datetime.strptime('2017-01-01', '%Y-%m-%d') #get age until 01/01/2017
delta = relativedelta.relativedelta(date2, date1)
return delta.years * 12 + delta.months
def normalize_date_variables(df):
date_vars = ['founded_at', 'first_funding_at', 'last_funding_at']
for var in date_vars:
df[var] = df[var].map(date_to_age_in_months)
df = normalize_numeric_features(df, date_vars)
return df
startups_USA = normalize_date_variables(startups_USA)
Explanation: Normalize date variables
Here we'll convert dates to ages in months up to the first day of 2017
End of explanation
def explore_states(states, top_n_states):
print 'There are in total {} different states'.format(len(states.unique()))
prob = pd.Series(states).value_counts()
print prob.head()
#select first <top_n_categories>
mask = prob > prob[top_n_states]
head_prob = prob.loc[mask].sum()
tail_prob = prob.loc[~mask].sum()
total_sum = prob.sum()
prob = prob.loc[mask]
prob2 = pd.DataFrame({'top '+str(top_n_states)+' states': head_prob, 'others': tail_prob},index=[0])
fig, axs = plt.subplots(2,1, figsize=(15,6))
prob.plot(kind='bar', ax=axs[0])
prob2.plot(kind='bar', ax=axs[1])
for bar in axs[1].patches:
height = bar.get_height()
axs[1].text(bar.get_x() + bar.get_width()/2., 0.50*height, '%.2f' % (float(height)/float(total_sum)*100) + "%", ha='center', va='top')
fig.tight_layout()
plt.xticks(rotation=90)
plt.show()
explore_states(startups_USA['state_code'], top_n_states=15)
Explanation: Extract state_code features
End of explanation
def expand_top_states_into_dummy_variables(df):
states = df['state_code'].astype('str')
#Get a dummy dataset for categories
dummies = pd.get_dummies(states)
#select top most frequent states
top15states = list(states.value_counts().sort_values(ascending=False).index[:15])
#Create a dataframe with the 15 top states to be concatenated later to the complete dataframe
states_df = dummies[top15states]
states_df = states_df.add_prefix('State_')
new_df = pd.concat([df, states_df], axis=1, ignore_index=False)
new_df = new_df.drop(['state_code'], axis=1)
return new_df
startups_USA = expand_top_states_into_dummy_variables(startups_USA)
Explanation: As we did for the categories variable, in order to decrease the number of features in our dataset, let's just select the top 15 most frequent states (which already cover 82% of our companies)
End of explanation
cols = list(startups_USA)
cols.append(cols.pop(cols.index('status')))
startups_USA = startups_USA.ix[:, cols]
startups_USA.to_csv('data/startups_pre_processed.csv')
startups_USA.head()
Explanation: Move status to the end of dataframe and save to file
End of explanation |
10,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The series, $1^1 + 2^2 + 3^3 + ... + 10^{10} = 10405071317$.
Find the last ten digits of the series, $1^1 + 2^2 + 3^3 + ... + 1000^{1000}$.
Version 1
Step1: <!-- TEASER_END -->
This leaves much to be desired since we have to compute an integer with more than 3000 digits (the sum is on the order of $10^{3000}$) only to throw away all but the last ten. Note that in most other languages such as Java, we would have had to resort to some library like BigInteger to perform this computation. In Python, all numbers are represented by a (theoretically) infinite number of bits | Python Code:
from six.moves import map, range, reduce
sum(map(lambda k: k**k, range(1, 1000+1))) % 10**10
Explanation: The series, $1^1 + 2^2 + 3^3 + ... + 10^{10} = 10405071317$.
Find the last ten digits of the series, $1^1 + 2^2 + 3^3 + ... + 1000^{1000}$.
Version 1: The obvious way
End of explanation
def prod_mod(nums, m):
"Multiply all nums modulo m"
return reduce(lambda p, q: (p*q)%m, map(lambda n: n%m, nums))
from itertools import repeat
pow_mod = lambda n, p, m: prod_mod(repeat(n, p), m)
pow_mod(3, 3, 8)
sum_mod = lambda nums, m: reduce(lambda p, q: (p+q)%m, map(lambda n: n%m, nums))
sum_mod(map(lambda k: pow_mod(k, k, 10**10), range(1, 1000+1)), 10**10)
Explanation: <!-- TEASER_END -->
This leaves much to be desired since we have to compute an integer with more than 3000 digits (the sum is on the order of $10^{3000}$) only to throw away all but the last ten. Note that in most other languages such as Java, we would have had to resort to some library like BigInteger to perform this computation. In Python, all numbers are represented by a (theoretically) infinite number of bits:
Integers (int)
These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2’s complement which gives the illusion of an infinite string of sign bits extending to the left.
https://docs.python.org/3/reference/datamodel.html
Version 2: Some simple modulo arithmetic
We're asked to find $1^1 + 2^2 + 3^3 + ... + 1000^{1000} \mod 10^{10}$. Note that
$$(a + b) \bmod n = \big((a \bmod n) + (b \bmod n)\big) \bmod n$$
and that
$$(a \cdot b) \bmod n = \big((a \bmod n) \cdot (b \bmod n)\big) \bmod n$$
so we can implement modulo sum and prod functions.
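As a quick sanity check, the helpers reproduce the last ten digits of the $n = 10$ sum quoted above ($10405071317$); the second line below uses Python's built-in three-argument pow, which performs the same modular exponentiation far more efficiently than pow_mod:

# Both lines should print 405071317, i.e. 10405071317 mod 10**10.
print(sum_mod(map(lambda k: pow_mod(k, k, 10**10), range(1, 10 + 1)), 10**10))
print(sum(pow(k, k, 10**10) for k in range(1, 10 + 1)) % 10**10)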
End of explanation |
10,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hardware simulators - gem5 target support
The gem5 simulator is a modular platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture.
Before creating the gem5 target, the inputs needed by gem5 should have been created (eg gem5 binary, kernel suitable for gem5, disk image, device tree blob, etc). For more information, see GEM5 - Main Page.
Environment setup
Step1: Target configuration
The definitions below need to be changed to the paths pointing to the gem5 binaries on your development machine.
M5_PATH needs to be set in your environment
platform - the currently supported platforms are
Step2: Run workloads on gem5
This is an example of running a workload and extracting stats from the simulation using m5 commands. For more information about m5 commands, see http://gem5.org/M5ops
Step3: Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb. | Python Code:
from conf import LisaLogging
LisaLogging.setup()
# One initial cell for imports
import json
import logging
import os
from env import TestEnv
# Support for FTrace events parsing and visualization
import trappy
from trappy.ftrace import FTrace
from trace import Trace
# Support for plotting
# Generate plots inline
%matplotlib inline
import numpy
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Hardware simulators - gem5 target support
The gem5 simulator is a modular platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture.
Before creating the gem5 target, the inputs needed by gem5 should have been created (eg gem5 binary, kernel suitable for gem5, disk image, device tree blob, etc). For more information, see GEM5 - Main Page.
Environment setup
End of explanation
# Root path of the gem5 workspace
base = "/home/vagrant/gem5/"
conf = {
# Only 'linux' is supported by gem5 for now
# 'android' is a WIP
"platform" : 'linux',
# Preload settings for a specific target
"board" : 'gem5',
# Host that will run the gem5 instance
"host" : "workstation-lin",
"gem5" : {
# System to simulate
"system" : {
# Platform description
"platform" : {
# Gem5 platform description
# LISA will also look for an optional gem5<platform> board file
# located in the same directory as the description file.
"description" : os.path.join(base, "juno.py"),
"args" : [
"--atomic",
# Resume simulation from a previous checkpoint
# Checkpoint must be taken before Virtio folders are mounted
# "--checkpoint-indir " + os.path.join(base, "Juno/atomic/",
# "checkpoints"),
# "--checkpoint-resume 1",
]
},
# Kernel compiled for gem5 with Virtio flags
"kernel" : os.path.join(base, "platform_juno/", "vmlinux"),
# DTB of the system to simulate
"dtb" : os.path.join(base, "platform_juno/", "armv8_juno_r2.dtb"),
# Disk of the distrib to run
"disk" : os.path.join(base, "binaries/", "aarch64-ubuntu-trusty-headless.img")
},
# gem5 settings
"simulator" : {
# Path to gem5 binary
"bin" : os.path.join(base, "gem5/build/ARM/gem5.fast"),
# Args to be given to the binary
"args" : [
# Zilch
],
}
},
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_load_waking_task",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"sched_energy_diff"
],
"buffsize" : 100 * 1024,
},
"modules" : ["cpufreq", "bl", "gem5stats"],
# Tools required by the experiments
"tools" : ['trace-cmd', 'sysbench'],
# Output directory on host
"results_dir" : "gem5_res"
}
# Create the hardware target. Patience is required :
# ~40 minutes to resume from a checkpoint (detailed)
# ~5 minutes to resume from a checkpoint (atomic)
# ~3 hours to start from scratch (detailed)
# ~15 minutes to start from scratch (atomic)
te = TestEnv(conf)
target = te.target
Explanation: Target configuration
The definitions below need to be changed to the paths pointing to the gem5 binaries on your development machine.
M5_PATH needs to be set in your environment
platform - the currently supported platforms are:
- linux - accessed via SSH connection
board - the currently supported boards are:
- gem5 - target is a gem5 simulator
host - target IP or MAC address of the platform hosting the simulator
gem5 - the settings for the simulation are:
system
platform
description - python description of the platform to simulate
args - arguments to be given to the python script (./gem5.fast model.py --help)
kernel - kernel image to run on the simulated platform
dtb - dtb of the platform to simulate
disk - disk image to run on the platform
simulator
bin - path to the gem5 simulator binary
args - arguments to be given to the gem5 binary (./gem5.fast --help)
modules - devlib modules to be enabled
exclude_modules - devlib modules to be disabled
tools - binary tools (available under ./tools/$ARCH/) to install by default
ping_time - wait time before trying to access the target after reboot
reboot_time - maximum time to wait after rebooting the target
features - list of test environment features to enable
- no-kernel - do not deploy kernel/dtb images
- no-reboot - do not force reboot the target at each configuration change
- debug - enable debugging messages
ftrace - ftrace configuration
events
functions
buffsize
results_dir - location of results of the experiments
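For completeness, the optional keys listed above but not used in this notebook's conf dict would sit at its top level, next to "modules" and "tools"; a hedged sketch with illustrative values only:

# Hypothetical snippet -- optional keys described above; the values are illustrative, not defaults.
optional_conf = {
    "exclude_modules" : [],                          # devlib modules to disable
    "ping_time"       : 15,                          # seconds to wait before pinging the target after reboot
    "reboot_time"     : 360,                         # maximum seconds to wait after rebooting the target
    "features"        : ['no-kernel', 'no-reboot'],  # test environment features to enable
}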
End of explanation
# This function is an example use of gem5's ROI functionality
def record_time(command):
roi = 'time'
target.gem5stats.book_roi(roi)
target.gem5stats.roi_start(roi)
target.execute(command)
target.gem5stats.roi_end(roi)
res = target.gem5stats.match(['host_seconds', 'sim_seconds'], [roi])
target.gem5stats.free_roi(roi)
return res
# Initialise command: [binary/script, arguments]
workload = 'sysbench'
args = '--test=cpu --max-time=1 run'
# Install binary if needed
path = target.install_if_needed("/home/vagrant/lisa/tools/arm64/" + workload)
command = path + " " + args
# FTrace the execution of this workload
te.ftrace.start()
res = record_time(command)
te.ftrace.stop()
print "{} -> {}s wall-clock execution time, {}s simulation-clock execution time".format(command,
sum(map(float, res['host_seconds']['time'])),
sum(map(float, res['sim_seconds']['time'])))
Explanation: Run workloads on gem5
This is an example of running a workload and extracting stats from the simulation using m5 commands. For more information about m5 commands, see http://gem5.org/M5ops
End of explanation
# Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
te.platform_dump(te.res_dir, platform_file)
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
trace = Trace(platform, trace_file, events=conf['ftrace']['events'], normalize_time=False)
# Plot some stuff
trace.analysis.cpus.plotCPU()
# Simulations done
target.disconnect()
Explanation: Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
End of explanation |
10,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear regression homework with Yelp votes
Introduction
This assignment uses a small subset of the data from Kaggle's Yelp Business Rating Prediction competition.
Description of the data
Step1: Task 1 (Bonus)
Ignore the yelp.csv file, and construct this DataFrame yourself from yelp.json. This involves reading the data into Python, decoding the JSON, converting it to a DataFrame, and adding individual columns for each of the vote types.
Step2: Task 2
Explore the relationship between each of the vote types (cool/useful/funny) and the number of stars.
Step3: Task 3
Define cool/useful/funny as the feature matrix X, and stars as the response vector y.
Step4: Task 4
Fit a linear regression model and interpret the coefficients. Do the coefficients make intuitive sense to you? Explore the Yelp website to see if you detect similar trends.
Step5: Task 5
Evaluate the model by splitting it into training and testing sets and computing the RMSE. Does the RMSE make intuitive sense to you?
Step6: Task 6
Try removing some of the features and see if the RMSE improves.
Step7: Task 7 (Bonus)
Think of some new features you could create from the existing data that might be predictive of the response. Figure out how to create those features in Pandas, add them to your model, and see if the RMSE improves.
Step8: Task 8 (Bonus)
Compare your best RMSE on the testing set with the RMSE for the "null model", which is the model that ignores all features and simply predicts the mean response value in the testing set. | Python Code:
# access yelp.csv using a relative path
import pandas as pd
yelp = pd.read_csv('../data/yelp.csv')
yelp.head(1)
Explanation: Linear regression homework with Yelp votes
Introduction
This assignment uses a small subset of the data from Kaggle's Yelp Business Rating Prediction competition.
Description of the data:
yelp.json is the original format of the file. yelp.csv contains the same data, in a more convenient format. Both of the files are in this repo, so there is no need to download the data from the Kaggle website.
Each observation in this dataset is a review of a particular business by a particular user.
The "stars" column is the number of stars (1 through 5) assigned by the reviewer to the business. (Higher stars is better.) In other words, it is the rating of the business by the person who wrote the review.
The "cool" column is the number of "cool" votes this review received from other Yelp users. All reviews start with 0 "cool" votes, and there is no limit to how many "cool" votes a review can receive. In other words, it is a rating of the review itself, not a rating of the business.
The "useful" and "funny" columns are similar to the "cool" column.
Task 1
Read yelp.csv into a DataFrame.
End of explanation
# read the data from yelp.json into a list of rows
# each row is decoded into a dictionary using json.loads() and collected in a list named "data"
import json
with open('../data/yelp.json', 'rU') as f:
data = [json.loads(row) for row in f]
# show the first review
data[0]
# convert the list of dictionaries to a DataFrame
ydata = pd.DataFrame(data)
ydata.head(2)
# add DataFrame columns for cool, useful, and funny
x = pd.DataFrame.from_records(ydata.votes)
ydata= pd.concat([ydata, x], axis=1)
ydata.head(2)
# drop the votes column and then display the head
ydata.drop("votes", axis=1, inplace=True)
ydata.head(2)
Explanation: Task 1 (Bonus)
Ignore the yelp.csv file, and construct this DataFrame yourself from yelp.json. This involves reading the data into Python, decoding the JSON, converting it to a DataFrame, and adding individual columns for each of the vote types.
End of explanation
# treat stars as a categorical variable and look for differences between groups by comparing the means of the groups
ydata.groupby(['stars'])['cool','funny','useful'].mean().T
# display a correlation matrix of the vote types (cool/useful/funny) and stars
%matplotlib inline
import seaborn as sns
sns.heatmap(yelp.corr())
# display multiple scatter plots (cool, useful, funny) with linear regression line
feat_cols = ['cool', 'useful', 'funny']
sns.pairplot(ydata, x_vars=feat_cols, y_vars='stars', kind='reg', size=5)
Explanation: Task 2
Explore the relationship between each of the vote types (cool/useful/funny) and the number of stars.
End of explanation
X = ydata[['cool', 'useful', 'funny']]
y = ydata['stars']
Explanation: Task 3
Define cool/useful/funny as the feature matrix X, and stars as the response vector y.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X, y)
# print the coefficients
print lr.intercept_
print lr.coef_
zip(X, lr.coef_)
Explanation: Task 4
Fit a linear regression model and interpret the coefficients. Do the coefficients make intuitive sense to you? Explore the Yelp website to see if you detect similar trends.
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn import metrics
import numpy as np
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feat_cols):
X = ydata[feat_cols]
y = ydata.stars
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
linreg = LinearRegression()
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)
return np.sqrt(metrics.mean_squared_error(y_test, y_pred))
train_test_split(X, y, random_state=123)
# calculate RMSE with all three features
print train_test_rmse(['cool', 'funny', 'useful'])
Explanation: Task 5
Evaluate the model by splitting it into training and testing sets and computing the RMSE. Does the RMSE make intuitive sense to you?
End of explanation
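Because a single random split can be noisy, an optional refinement (not part of the original homework) is to average the testing RMSE over several splits; the sketch below just repeats the same fit/evaluate loop with different random_state values.
# optional: average testing RMSE over several random splits (illustrative sketch)
def train_test_rmse_avg(feat_cols, seeds=[1, 2, 3, 4, 5]):
    rmses = []
    for seed in seeds:
        X = ydata[feat_cols]
        y = ydata.stars
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)
        linreg = LinearRegression()
        linreg.fit(X_train, y_train)
        y_pred = linreg.predict(X_test)
        rmses.append(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
    return np.mean(rmses)
print train_test_rmse_avg(['cool', 'useful', 'funny'])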
print train_test_rmse(['cool', 'funny', 'useful'])
print train_test_rmse(['cool', 'funny'])
print train_test_rmse(['cool'])
### RMSE is best with all 3 features
Explanation: Task 6
Try removing some of the features and see if the RMSE improves.
End of explanation
# new feature: number of reviews per business_id (more reviews may indicate a more popular business)
# add the count of occurrences for each business_id
ydata['review_freq']= ydata.groupby(['business_id'])['stars'].transform('count')
# new features:
# add 0 if occurs < 4 or 1 if >= 4
ydata["favored"] = [1 if x > 3 else 0 for x in ydata.review_freq]
# add new features to the model and calculate RMSE
print train_test_rmse(['cool', 'funny', 'useful','review_freq'])
Explanation: Task 7 (Bonus)
Think of some new features you could create from the existing data that might be predictive of the response. Figure out how to create those features in Pandas, add them to your model, and see if the RMSE improves.
End of explanation
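One more candidate feature worth sketching (again, not part of the original solution) is the length of the review text; this assumes the DataFrame built from yelp.json still carries the raw 'text' column from the Yelp review schema.
# another candidate feature (sketch): length of the review text, assuming a 'text' column exists
ydata['review_length'] = ydata['text'].apply(len)
print train_test_rmse(['cool', 'funny', 'useful', 'review_length'])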
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)
# fill the array with the mean value of y_test
y_null.fill(y_test.mean())
y_null
np.sqrt(metrics.mean_squared_error(y_test, y_null))
Explanation: Task 8 (Bonus)
Compare your best RMSE on the testing set with the RMSE for the "null model", which is the model that ignores all features and simply predicts the mean response value in the testing set.
End of explanation |
10,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Craftcans.com - cleaning
Craftcans.com provides a database of 2692 crafted canned beers. The data on beers includes the following variables
Step1: As it can be seen above, the header row is saved as a simple observation unit. Let's rename the columns with the real headers.
Step2: Fine, but the very first row still remains. We have to drop it, and for that we will use the drop() function from the pandas library, which takes 2 arguments
Step3: Let's do the same for row names. Rows are called indices in Pandas. Thus, let's take the values from the "ENTRY" column and use them to rename rows. Then, of course, we should drop the additional column too.
Step4: Nice, now let's clean some variables. Let's start from SIZE. It includes information on size which is presented in oz, ounces or differently. We need to have numbers only. Let's first see what the available options are. For that purpose, we can convert that column to a list and then use the set() function to get the unique values from the list.
Step5: Excellent. This means we can write a regular expression that will find all the digits (including those that have a dot inside) and substitute whatever comes afterwards with an empty string.
Step6: Done! Let's now go for the ABV variable. It is given in %-s, so we can keep the number only and divide it by 100 to get the float value. But there may be some wrongly inputted values in the column as well. So let's divide only those that have correct values and assign a "missing value" to the others.
Step7: Great! Let's now get some info on our dataframe
Step8: As you can see, the ABV column is left with only 2348 values out of 2410, as we assigned "nan" to incorrect values. Let's impute those missing values. As it is a numeric variable, we can impute with the mean using the fillna() function from pandas.
Step9: Done! But there is another variable with missing values | Python Code:
import pandas, re
data = pandas.read_excel("craftcans.xlsx")
data.head()
Explanation: Craftcans.com - cleaning
Craftcans.com provides a database of 2692 crafted canned beers. The data on beers includes the following variables:
Name
Style
Size
Alcohol by volume (ABV)
IBU’s
Brewer name
Brewer location
However, some of the variables include both number and text values (e.g. Size), while others include missing values (e.g. IBUs).
In order to make the dataset ready for analysis, one needs to clean it first. We will do that using pandas and regular expressions.
End of explanation
data.columns = data.iloc[0]
data.head()
Explanation: As it can be seen above, the header row is saved as a simple observation unit. Let's rename the columns with the real headers.
End of explanation
data = data.drop(0,axis=0)
data.head()
Explanation: Fine, but the very first row still remains. We have to drop it, and for that we will use the drop() function from the pandas library, which takes 2 arguments: the dropable row/column name and the axis (0 for rows and 1 for columns).
End of explanation
data.index = data["ENTRY"]
data.head()
data = data.drop("ENTRY",axis=1)
data.head()
Explanation: Let's do the same for row names. Rows are called indices in Pandas. Thus, let's take the values from the "ENTRY" column and use them to rename rows. Then, of course, we should drop the additional column too.
End of explanation
data_list = data["SIZE"].tolist()
unique_values = set(data_list)
print(unique_values)
Explanation: Nice, now let's clean some variables. Let's start from SIZE. It includes information on size which is presented in oz, ounces or differently. We need to have numbers only. Let's first see what the available options are. For that purpose, we can convert that column to a list and then use the set() function to get the unique values from the list.
End of explanation
for i in range(0,len(data['SIZE'])):
data['SIZE'][i] = re.sub('(^.*\d)(\s*.*$)',r'\1',data['SIZE'][i])
data.head()
Explanation: Excellent. This means we can write a regular expression that will find all the digits (including those that have a dot inside) and substitute whatever comes afterwards with an empty string.
End of explanation
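For reference, here is an alternative sketch that extracts the leading number into a new column instead of overwriting SIZE element by element; the SIZE_clean column name is just an illustrative choice.
# alternative sketch: pull the leading number out of each SIZE value into a new float column
size_numbers = []
for value in data['SIZE']:
    match = re.match(r'(\d+\.?\d*)', str(value))
    size_numbers.append(float(match.group(1)) if match else float('nan'))
data['SIZE_clean'] = size_numbers
data[['SIZE', 'SIZE_clean']].head()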
# for all values in that columns
for i in range(0,len(data['ABV'])):
# if match exists, which means it is a correct value
if re.match('(^.*\d)(%)',data['ABV'][i]) is not None:
# substitute the % sign with nothing, convert result to float and divide by 100
data['ABV'][i] = float(re.sub('(^.*\d)(%)',r'\1',data['ABV'][i]))/100
else: # which is when the value is incorrect
# give it the value of "nan" which stands for missing values
data['ABV'][i] = float("nan")
data['ABV'].head(100)
Explanation: Done! Let's now go for the ABV variable. It is given in %-s, so we can keep the number only and divide it by 100 to get the float value. But there may be some wrongly inputted values in the column as well. So let's divide only those that have correct values and assign a "missing value" to the others.
End of explanation
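If the installed pandas version provides to_numeric (added around pandas 0.17), the same percent-to-float conversion can be written more compactly; the sketch below runs on a few made-up example values rather than on the already-cleaned ABV column.
# compact alternative on made-up raw values, assuming pandas.to_numeric is available
raw_abv_examples = pandas.Series(['6.6%', '5.0%', 'N/A'])
pandas.to_numeric(raw_abv_examples.str.rstrip('%'), errors='coerce') / 100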
data.info()
Explanation: Great! Let's now get some info on our dataframe
End of explanation
data['ABV'] = data['ABV'].fillna(data['ABV'].mean())
data.info()
Explanation: As you can see, the ABV column is left with only 2348 values out of 2410, as we assigned "nan" to incorrect values. Let's impute those missing values. As it is a numeric variable, we can impute with the mean using the fillna() function from pandas.
End of explanation
data['IBUs'] = data['IBUs'].fillna(method = "bfill")
data.info()
Explanation: Done! But there is another variable with missing values: IBUs. Let's make an imputation for that one also, but this time instead of mean let's use the backward/forward filling method.
End of explanation |
10,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note
Step5: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: We convert both the training and validation sets into NumPy arrays.
Warning
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step8: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
Step9: Quiz question
Step11: Quiz question
Step12: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step13: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step14: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step15: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Step16: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step17: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step18: Quiz Question
Step19: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models. | Python Code:
from __future__ import division
import graphlab
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
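A quick optional sanity check on the arrays just built; later cells use np.zeros(194), i.e. the 193 important words plus the intercept column.
# optional sanity check: later cells assume 194 columns (193 important words + intercept)
print feature_matrix_train.shape, sentiment_train.shape
print feature_matrix_valid.shape, sentiment_valid.shape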
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = 1 / (1 + np.exp(-scores))
return predictions
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative = derivative - 2 * l2_penalty * coefficient
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
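As a quick sanity check (not part of the original assignment), the derivative function defined above can be evaluated on tiny hand-made arrays where the expected values are easy to verify.
# tiny hand-made example: errors . feature = 0.5*1 + (-0.25)*2 = 0
example_errors = np.array([0.5, -0.25])
example_feature = np.array([1., 2.])
print feature_derivative_with_L2(example_errors, example_feature, 1.0, 10.0, True)   # intercept: no penalty -> 0.0
print feature_derivative_with_L2(example_errors, example_feature, 1.0, 10.0, False)  # penalized: 0.0 - 2*10*1.0 = -20.0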
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def compute_derivatives_with_L2(features, indicator, weights, L2):
    """indicator can only consist of 1 (positive) and 0 (negative)"""
scores = np.dot(features, weights)
predictions = sigmoid(scores)
differences = indicator - predictions
    return np.dot(differences, features) - 2 * L2 * weights  # note: unlike feature_derivative_with_L2, this vectorized form also penalizes the intercept
def my_logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the derivatives
derivatives = compute_derivatives_with_L2(feature_matrix, indicator, coefficients, l2_penalty)
# Update weights
coefficients = coefficients + derivatives * step_size
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],
coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] = coefficients[j] + derivative * step_size
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
my_logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
word_weights = zip(coefficients_0_penalty[1:],important_words)
sorted_word_weights = sorted(word_weights, reverse=True)
positive_words = [word for (weight, word) in sorted_word_weights[:5]]
negative_words = [word for (weight, word) in sorted_word_weights[-5:]]
positive_words
negative_words
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
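To make the thresholding rule concrete, here is a tiny illustration on made-up scores; note that a score of exactly zero is mapped to -1.
# made-up scores just to illustrate the +1 / -1 thresholding rule
example_scores = np.array([0.8, -1.2, 0.0])
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
print apply_threshold(example_scores)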
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation |
10,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating the phase diagram
To generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen.
Step1: Plotting the phase diagram
To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
Step2: Calculating energy above hull and other phase equilibria properties
To perform more sophisticated analyses, use the PDAnalyzer object. | Python Code:
# imports assumed for the legacy pymatgen API used throughout this notebook
from pymatgen.matproj.rest import MPRester
from pymatgen.phasediagram.pdmaker import PhaseDiagram
#This initializes the REST adaptor. You may need to put your own API key in as an arg.
a = MPRester()
#Entries are the basic unit for thermodynamic and other analyses in pymatgen.
#This gets all entries belonging to the Ca-C-O system.
entries = a.get_entries_in_chemsys(['Ca', 'C', 'O'])
#With entries, you can do many sophisticated analyses, like creating phase diagrams.
pd = PhaseDiagram(entries)
Explanation: Generating the phase diagram
To generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen.
End of explanation
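As an optional check before plotting, the phases on the convex hull can be listed; the stable_entries attribute is assumed here from the same legacy PhaseDiagram API used elsewhere in this notebook.
# optional: list the phases on the convex hull (stable_entries assumed from the legacy PhaseDiagram API)
print([entry.composition.reduced_formula for entry in pd.stable_entries])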
from pymatgen.phasediagram.plotter import PDPlotter
#Let's show all phases, including unstable ones
plotter = PDPlotter(pd, show_unstable=True)
plotter.show()
Explanation: Plotting the phase diagram
To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
End of explanation
from pymatgen.phasediagram.pdanalyzer import PDAnalyzer
a = PDAnalyzer(pd)
import collections
data = collections.defaultdict(list)
for e in entries:
decomp, ehull = a.get_decomp_and_e_above_hull(e)
data["Materials ID"].append(e.entry_id)
data["Composition"].append(e.composition.reduced_formula)
data["Ehull"].append(ehull)
data["Decomposition"].append(" + ".join(["%.2f %s" % (v, k.composition.formula) for k, v in decomp.items()]))
from pandas import DataFrame
df = DataFrame(data, columns=["Materials ID", "Composition", "Ehull", "Decomposition"])
print(df)
Explanation: Calculating energy above hull and other phase equilibria properties
To perform more sophisticated analyses, use the PDAnalyzer object.
End of explanation |
10,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Unsupervised Embeddings for Molecules
In this tutorial, we will use a SeqToSeq model to generate fingerprints for classifying molecules. This is based on the following paper, although some of the implementation details are different
Step1: Learning Embeddings with SeqToSeq
Many types of models require their inputs to have a fixed shape. Since molecules can vary widely in the numbers of atoms and bonds they contain, this makes it hard to apply those models to them. We need a way of generating a fixed length "fingerprint" for each molecule. Various ways of doing this have been designed, such as the Extended-Connectivity Fingerprints (ECFPs) we used in earlier tutorials. But in this example, instead of designing a fingerprint by hand, we will let a SeqToSeq model learn its own method of creating fingerprints.
A SeqToSeq model performs sequence to sequence translation. For example, they are often used to translate text from one language to another. It consists of two parts called the "encoder" and "decoder". The encoder is a stack of recurrent layers. The input sequence is fed into it, one token at a time, and it generates a fixed length vector called the "embedding vector". The decoder is another stack of recurrent layers that performs the inverse operation
Step2: We need to define the "alphabet" for our SeqToSeq model, the list of all tokens that can appear in sequences. (It's also possible for input and output sequences to have different alphabets, but since we're training it as an autoencoder, they're identical in this case.) Make a list of every character that appears in any training sequence.
Step3: Create the model and define the optimization method to use. In this case, learning works much better if we gradually decrease the learning rate. We use an ExponentialDecay to multiply the learning rate by 0.9 after each epoch.
Step4: Let's train it! The input to fit_sequences() is a generator that produces input/output pairs. On a good GPU, this should take a few hours or less.
Step5: Let's see how well it works as an autoencoder. We'll run the first 500 molecules from the validation set through it, and see how many of them are exactly reproduced.
Step6: Now we'll try using the encoder as a way to generate molecular fingerprints. We compute the embedding vectors for all molecules in the training and validation datasets, and create new datasets that have those as their feature vectors. The amount of data is small enough that we can just store everything in memory.
Step7: For classification, we'll use a simple fully connected network with one hidden layer.
Step8: Find out how well it worked. Compute the ROC AUC for the training and validation datasets. | Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Learning Unsupervised Embeddings for Molecules
In this tutorial, we will use a SeqToSeq model to generate fingerprints for classifying molecules. This is based on the following paper, although some of the implementation details are different: Xu et al., "Seq2seq Fingerprint: An Unsupervised Deep Molecular Embedding for Drug Discovery" (https://doi.org/10.1145/3107411.3107424).
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_muv(split='stratified')
train_dataset, valid_dataset, test_dataset = datasets
train_smiles = train_dataset.ids
valid_smiles = valid_dataset.ids
Explanation: Learning Embeddings with SeqToSeq
Many types of models require their inputs to have a fixed shape. Since molecules can vary widely in the numbers of atoms and bonds they contain, this makes it hard to apply those models to them. We need a way of generating a fixed length "fingerprint" for each molecule. Various ways of doing this have been designed, such as the Extended-Connectivity Fingerprints (ECFPs) we used in earlier tutorials. But in this example, instead of designing a fingerprint by hand, we will let a SeqToSeq model learn its own method of creating fingerprints.
A SeqToSeq model performs sequence to sequence translation. For example, they are often used to translate text from one language to another. It consists of two parts called the "encoder" and "decoder". The encoder is a stack of recurrent layers. The input sequence is fed into it, one token at a time, and it generates a fixed length vector called the "embedding vector". The decoder is another stack of recurrent layers that performs the inverse operation: it takes the embedding vector as input, and generates the output sequence. By training it on appropriately chosen input/output pairs, you can create a model that performs many sorts of transformations.
In this case, we will use SMILES strings describing molecules as the input sequences. We will train the model as an autoencoder, so it tries to make the output sequences identical to the input sequences. For that to work, the encoder must create embedding vectors that contain all information from the original sequence. That's exactly what we want in a fingerprint, so perhaps those embedding vectors will then be useful as a way to represent molecules in other models!
Let's start by loading the data. We will use the MUV dataset. It includes 74,501 molecules in the training set, and 9313 molecules in the validation set, so it gives us plenty of SMILES strings to work with.
End of explanation
tokens = set()
for s in train_smiles:
tokens = tokens.union(set(c for c in s))
tokens = sorted(list(tokens))
Explanation: We need to define the "alphabet" for our SeqToSeq model, the list of all tokens that can appear in sequences. (It's also possible for input and output sequences to have different alphabets, but since we're training it as an autoencoder, they're identical in this case.) Make a list of every character that appears in any training sequence.
End of explanation
from deepchem.models.optimizers import Adam, ExponentialDecay
max_length = max(len(s) for s in train_smiles)
batch_size = 100
batches_per_epoch = len(train_smiles)/batch_size
model = dc.models.SeqToSeq(tokens,
tokens,
max_length,
encoder_layers=2,
decoder_layers=2,
embedding_dimension=256,
model_dir='fingerprint',
batch_size=batch_size,
learning_rate=ExponentialDecay(0.001, 0.9, batches_per_epoch))
Explanation: Create the model and define the optimization method to use. In this case, learning works much better if we gradually decrease the learning rate. We use an ExponentialDecay to multiply the learning rate by 0.9 after each epoch.
End of explanation
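Just to make the schedule concrete, under this decay the nominal learning rate at the start of epoch n is 0.001 * 0.9**n; a quick illustration (not part of the original tutorial):
# rough illustration of the exponential decay schedule configured above
for epoch in [0, 10, 20, 30, 40]:
    print('epoch %d: nominal learning rate ~ %.6f' % (epoch, 0.001 * 0.9**epoch))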
def generate_sequences(epochs):
for i in range(epochs):
for s in train_smiles:
yield (s, s)
model.fit_sequences(generate_sequences(40))
Explanation: Let's train it! The input to fit_sequences() is a generator that produces input/output pairs. On a good GPU, this should take a few hours or less.
End of explanation
predicted = model.predict_from_sequences(valid_smiles[:500])
count = 0
for s,p in zip(valid_smiles[:500], predicted):
if ''.join(p) == s:
count += 1
print('reproduced', count, 'of 500 validation SMILES strings')
Explanation: Let's see how well it works as an autoencoder. We'll run the first 500 molecules from the validation set through it, and see how many of them are exactly reproduced.
End of explanation
import numpy as np
train_embeddings = model.predict_embeddings(train_smiles)
train_embeddings_dataset = dc.data.NumpyDataset(train_embeddings,
train_dataset.y,
train_dataset.w.astype(np.float32),
train_dataset.ids)
valid_embeddings = model.predict_embeddings(valid_smiles)
valid_embeddings_dataset = dc.data.NumpyDataset(valid_embeddings,
valid_dataset.y,
valid_dataset.w.astype(np.float32),
valid_dataset.ids)
Explanation: Now we'll try using the encoder as a way to generate molecular fingerprints. We compute the embedding vectors for all molecules in the training and validation datasets, and create new datasets that have those as their feature vectors. The amount of data is small enough that we can just store everything in memory.
End of explanation
classifier = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=256,
layer_sizes=[512])
classifier.fit(train_embeddings_dataset, nb_epoch=10)
Explanation: For classification, we'll use a simple fully connected network with one hidden layer.
End of explanation
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean, mode="classification")
train_score = classifier.evaluate(train_embeddings_dataset, [metric], transformers)
valid_score = classifier.evaluate(valid_embeddings_dataset, [metric], transformers)
print('Training set ROC AUC:', train_score)
print('Validation set ROC AUC:', valid_score)
Explanation: Find out how well it worked. Compute the ROC AUC for the training and validation datasets.
End of explanation |
10,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
MODFLOW-USG $-$ Discontinuous water table configuration over a stairway impervious base
One of the most challenging numerical cases for MODFLOW arises from drying-rewetting problems often associated with abrupt changes in the elevations of impervious base of a thin unconfined aquifer. This problem simulates a discontinuous water table configuration over a stairway impervious base and flow between constant-head boundaries in column 1 and 200. This problem is based on
Zaidel, J. (2013), Discontinuous Steady-State Analytical Solutions of the Boussinesq Equation and Their Numerical Representation by Modflow. Groundwater, 51
Step1: Model parameters
Step2: Create and run the MODFLOW-USG model
Step3: Read the simulated MODFLOW-USG model results
Step4: Plot MODFLOW-USG results | Python Code:
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfusg'
if platform.system() == 'Windows':
exe_name += '.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'zaidel'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
Explanation: FloPy
MODFLOW-USG $-$ Discontinuous water table configuration over a stairway impervious base
One of the most challenging numerical cases for MODFLOW arises from drying-rewetting problems often associated with abrupt changes in the elevations of impervious base of a thin unconfined aquifer. This problem simulates a discontinuous water table configuration over a stairway impervious base and flow between constant-head boundaries in column 1 and 200. This problem is based on
Zaidel, J. (2013), Discontinuous Steady-State Analytical Solutions of the Boussinesq Equation and Their Numerical Representation by Modflow. Groundwater, 51: 952–959. doi: 10.1111/gwat.12019
The model consists of a grid of 200 columns, 1 row, and 1 layer; a bottom altitude ranging from 20 to 0 m; constant heads of 23 and 5 m in column 1 and 200, respectively; and a horizontal hydraulic conductivity of $1\times10^{-4}$ m/d. The discretization is 5 m in the row direction for all cells.
In this example results from MODFLOW-USG will be evaluated.
End of explanation
# model dimensions
nlay, nrow, ncol = 1, 1, 200
delr = 50.
delc = 1.
# boundary heads
h1 = 23.
h2 = 5.
# cell centroid locations
x = np.arange(0., float(ncol)*delr, delr) + delr / 2.
# ibound
ibound = np.ones((nlay, nrow, ncol), dtype=np.int)
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
# bottom of the model
botm = 25 * np.ones((nlay + 1, nrow, ncol), dtype=np.float)
base = 20.
for j in range(ncol):
botm[1, :, j] = base
#if j > 0 and j % 40 == 0:
if j+1 in [40,80,120,160]:
base -= 5
# starting heads
strt = h1 * np.ones((nlay, nrow, ncol), dtype=np.float)
strt[:, :, -1] = h2
Explanation: Model parameters
End of explanation
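A quick optional check that the stairway bottom built above really steps from 20 m down to 0 m in 5 m increments:
# optional check of the stairway bottom elevations
print('unique bottom elevations (m):', np.unique(botm[1]))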
#make the flopy model
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=botm[0, :, :], botm=botm[1:, :, :],
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowLpf(mf, hk=0.0001, laytyp=4)
oc = flopy.modflow.ModflowOc(mf,
stress_period_data={(0,0): ['print budget', 'print head',
'save head', 'save budget']})
sms = flopy.modflow.ModflowSms(mf, nonlinmeth=1, linmeth=1,
numtrack=50, btol=1.1, breduc=0.70, reslim = 0.0,
theta=0.85, akappa=0.0001, gamma=0., amomentum=0.1,
iacl=2, norder=0, level=5, north=7, iredsys=0, rrctol=0.,
idroptol=1, epsrn=1.e-5,
mxiter=500, hclose=1.e-3, hiclose=1.e-3, iter1=50)
mf.write_input()
# remove any existing head files
try:
    os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
# run the model
mf.run_model()
Explanation: Create and run the MODFLOW-USG model
End of explanation
# Create the mfusg headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = flopy.utils.HeadFile(headfile)
times = headobj.get_times()
mfusghead = headobj.get_data(totim=times[-1])
Explanation: Read the simulated MODFLOW-USG model results
End of explanation
fig = plt.figure(figsize=(8,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, mfusghead[0, 0, :], linewidth=0.75, color='blue', label='MODFLOW-USG')
ax.fill_between(x, y1=botm[1, 0, :], y2=-5, color='0.5', alpha=0.5)
leg = ax.legend(loc='upper right')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax.set_ylim(-5,25);
Explanation: Plot MODFLOW-USG results
End of explanation |
10,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Singular Value Decomposition Notes
Code examples from andrew.gibiansky.com tutorial
Step1: Step 1
Step2: Step 2
Step4: Step 3 | Python Code:
%matplotlib inline
Explanation: Singular Value Decomposition Notes
Code examples from andrew.gibiansky.com tutorial
End of explanation
from scipy import ndimage, misc
import matplotlib.pyplot as plt
tiger = misc.imread('tiger.jpg', flatten=True)
def show_grayscale(values):
plt.gray()
plt.imshow(values)
plt.show()
show_grayscale(tiger)
Explanation: Step 1: Load an image and show it in grayscale
End of explanation
from scipy import linalg
import numpy as np
U, s, Vh = linalg.svd(tiger, full_matrices=0)
print "U, V, s:", U.shape, Vh.shape, s.shape
plt.plot(np.log10(s))
plt.title("Singular values")
plt.show()
s_sum=sum(s)
s_cumsum=np.cumsum(s)
s_cumsum_norm=s_cumsum / s_sum
plt.title('Cumulative Percent of Total Sigmas')
plt.plot(s_cumsum_norm)
plt.ylim(0,1)
plt.show()
Explanation: Step 2: Compute the SVD and observe the scale of the components; most of the information is in the largest singular values.
End of explanation
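A natural follow-up is to ask how many singular values are needed to reach a given share of the cumulative sum; the 90% threshold below is an arbitrary illustrative choice.
# number of singular values needed to reach 90% of the cumulative sum
k90 = np.argmax(s_cumsum_norm >= 0.9) + 1
print "singular values needed for 90% of the total:", k90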
def approx_image(U, s, Vh, rank):
    """
    U: first argument from scipy.linalg.svd output
    s: second argument from scipy.linalg.svd output. The diagonal elements.
    Vh: third argument from scipy.linalg.svd output
    """
    approx_sigma = s.copy()  # copy so the caller's singular values are not modified in place
approx_sigma[rank:] = 0
approx_S = np.diag(approx_sigma)
approx_tiger = U.dot(approx_S).dot(Vh)
return approx_tiger
U, s, Vh = linalg.svd(tiger, full_matrices=0)
ranks = [1000, 100, 50, 10]
for i, rank in enumerate(ranks):
plt.subplot(2, 2, i + 1)
plt.title("Approximation Rank %d" % rank)
approx_tiger = approx_image(U, s, Vh, rank)
plt.gray()
plt.imshow(approx_tiger)
plt.show()
Explanation: Step 3: Generate some compressed images using SVD!
The higher values of s in the SVD correspond to the "more important" subcomponents of the image. We can remove some of the less important stuff and
End of explanation |
10,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous Factors
Base Class for Continuous Factors
Joint Gaussian Distributions
Canonical Factors
Linear Gaussian CPD
In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case.
Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD, P(X | Y1, Y2, ..., Yn), represent, for every assignment of values y1 ∈ Val(Y1), y2 ∈ Val(Y2), ..., yn ∈ Val(Yn), a distribution over X. In this case, X might be continuous, in which case the CPD would need to represent distributions over a continuum of values; we might also have X's parents continuous, so that the CPD would also need to represent a continuum of different probability distributions. There exist implicit representations for CPDs of this type, allowing us to apply all the network machinery to the continuous case as well.
Base Class for Continuous Factors
This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class.
Step1: This class supports methods like marginalize, reduce, product and divide just like what we have with discrete classes. One caveat is that when there are a number of variables involved, these methods prove to be inefficient and hence we resort to certain Gaussian or some other approximations which are discussed later.
Step2: The ContinuousFactor class also has a method discretize that takes a pgmpy Discretizer class as input. It will output a list of discrete probability masses or a Factor or TabularCPD object depending upon the discretization method used. Although, we do not have inbuilt discretization algorithms for multivariate distributions for now, the users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over X1, ..., Xn is characterized by an n-dimensional mean vector μ and a symmetric n x n covariance matrix Σ. The density function is defined as:
$$
p(x) = \dfrac{1}{(2\pi)^{n/2}|Σ|^{1/2}} exp[-0.5*(x-μ)^TΣ^{-1}(x-μ)]
$$
The class pgmpy.JointGaussianDistribution provides its representation. This is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector and a covariance matrix for its initialization. It will automatically compute the pdf function given these parameters.
Step3: This class overrides the basic operation methods (marginalize, reduce, normalize, product and divide) as these operations here are more efficient than the ones in its parent class. Most of these operations involve a matrix inversion which is O(n^3) with respect to the number of variables.
Step4: The other methods can also be used in a similar fashion.
Canonical Factors
While the Joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it can also not be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above involve matrix inversions at each step.
So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation is used known as the Canonical Factor. This representation is closed under the basic operations used in inference
Step5: This class also has a method, to_joint_gaussian, to convert the canonical representation back into the joint gaussian distribution.
Step6: Linear Gaussian CPD
A linear gaussian conditional probability distribution is defined on a continuous variable. All the parents of this variable are also continuous. The mean of this variable is linearly dependent on the mean of its parent variables and the variance is independent.
For example,
$$
P(Y ; x1, x2, x3) = N(β_1x_1 + β_2x_2 + β_3x_3 + β_0 ; σ^2)
$$
Let Y be a linear Gaussian of its parents X1,...,Xk
Step7: A Gaussian Bayesian is defined as a network all of whose variables are continuous, and where all of the CPDs are linear Gaussians. These networks are of particular interest as these are an alternate form of representation of the Joint Gaussian distribution.
These networks are implemented as the LinearGaussianBayesianNetwork class in the module, pgmpy.models.continuous. This class is a subclass of the BayesianModel class in pgmpy.models and will inherit most of the methods from it. It will have a special method known as to_joint_gaussian that will return an equivalent JointGaussianDistribution object for the model. | Python Code:
import numpy as np
from scipy.special import beta
# Two variable drichlet ditribution with alpha = (1,2)
def drichlet_pdf(x, y):
return (np.power(x, 1)*np.power(y, 2))/beta(x, y)
from pgmpy.factors import ContinuousFactor
drichlet_factor = ContinuousFactor(['x', 'y'], drichlet_pdf)
drichlet_factor.scope(), drichlet_factor.assignment(5,6)
Explanation: Continuous Factors
Base Class for Continuous Factors
Joint Gaussian Distributions
Canonical Factors
Linear Gaussian CPD
In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case.
Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD, P(X | Y1, Y2, ... Yn) represent, for every assignment of values y1 ∈ Val(Y1), y2 ∈ Val(Y2), .....yn ∈ val(Yn), a distribution over X. In this case, X might be continuous, in which case the CPD would need to represent distributions over a continuum of values; we might also have X’s parents continuous, so that the CPD would also need to represent a continuum of different probability distributions. There exist implicit representations for CPDs of this type, allowing us to apply all the network machinery for the continuous case as well.
Base Class for Continuous Factors
This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class.
End of explanation
def custom_pdf(x, y, z):
return z*(np.power(x, 1)*np.power(y, 2))/beta(x, y)
custom_factor = ContinuousFactor(['x', 'y', 'z'], custom_pdf)
custom_factor.scope(), custom_factor.assignment(1, 2, 3)
custom_factor.reduce([('y', 2)])
custom_factor.scope(), custom_factor.assignment(1, 3)
from scipy.stats import multivariate_normal
std_normal_pdf = lambda *x: multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]])
std_normal = ContinuousFactor(['x1', 'x2'], std_normal_pdf)
std_normal.scope(), std_normal.assignment([1, 1])
std_normal.marginalize(['x2'])
std_normal.scope(), std_normal.assignment(1)
sn_pdf1 = lambda x: multivariate_normal.pdf([x], [0], [[1]])
sn_pdf2 = lambda x1,x2: multivariate_normal.pdf([x1, x2], [0, 0], [[1, 0], [0, 1]])
sn1 = ContinuousFactor(['x2'], sn_pdf1)
sn2 = ContinuousFactor(['x1', 'x2'], sn_pdf2)
sn3 = sn1 * sn2
sn4 = sn2 / sn1
sn3.assignment(0, 0), sn4.assignment(0, 0)
Explanation: This class supports methods like marginalize, reduce, product and divide just like what we have with discrete classes. One caveat is that when there are a number of variables involved, these methods prove to be inefficient and hence we resort to certain Gaussian or some other approximations which are discussed later.
End of explanation
from pgmpy.factors import JointGaussianDistribution as JGD
dis = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis.variables
dis.mean
dis.covariance
dis.pdf([0,0,0])
dis.mean
dis.covariance
dis.pdf([0,0,0])
Explanation: The ContinuousFactor class also has a method discretize that takes a pgmpy Discretizer class as input. It will output a list of discrete probability masses or a Factor or TabularCPD object depending upon the discretization method used. Although, we do not have inbuilt discretization algorithms for multivariate distributions for now, the users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over X1, ..., Xn is characterized by an n-dimensional mean vector μ and a symmetric n x n covariance matrix Σ. The density function is defined as:
$$
p(x) = \dfrac{1}{(2\pi)^{n/2}|Σ|^{1/2}} exp[-0.5*(x-μ)^TΣ^{-1}(x-μ)]
$$
The class pgmpy.JointGaussianDistribution provides its representation. This is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector and a covariance matrix for its initialization. It will automatically compute the pdf function given these parameters.
End of explanation
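As a quick sanity check of the density formula above, here is a minimal sketch (not part of the original notebook) that evaluates the pdf by hand with numpy and compares it against scipy for the same mean and covariance used in the example:
# Minimal sketch: evaluate the multivariate normal density directly and compare with scipy
from scipy.stats import multivariate_normal
mu = np.array([1., -3., 4.])
cov = np.array([[4., 2., -2.], [2., 5., -5.], [-2., -5., 8.]])
x = np.zeros(3)
diff = x - mu
by_hand = np.exp(-0.5 * diff.dot(np.linalg.inv(cov)).dot(diff)) / np.sqrt(((2 * np.pi) ** 3) * np.linalg.det(cov))
print(by_hand, multivariate_normal.pdf(x, mu, cov))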
dis1 = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis2 = JGD(['x3', 'x4'], [1, 2], [[2, 3], [5, 6]])
dis3 = dis1 * dis2
dis3.scope()
dis3.mean
dis3.covariance
Explanation: This class overrides the basic operation methods (marginalize, reduce, normalize, product and divide) as these operations here are more efficient than the ones in its parent class. Most of these operations involve a matrix inversion which is O(n^3) with respect to the number of variables.
End of explanation
from pgmpy.factors import CanonicalFactor
phi1 = CanonicalFactor(['x1', 'x2', 'x3'],
np.array([[1, -1, 0], [-1, 4, -2], [0, -2, 4]]),
np.array([[1], [4], [-1]]), -2)
phi2 = CanonicalFactor(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
phi3 = phi1 * phi2
phi3.scope()
phi3.h
phi3.K
phi3.g
Explanation: The other methods can also be used in a similar fashion.
Canonical Factors
While the Joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it can also not be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above involve matrix inversions at each step.
So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation is used known as the Canonical Factor. This representation is closed under the basic operations used in inference: factor product, factor division, factor reduction, and marginalization. Thus, we can define a set of simple data structures that allow the inference process to be performed. Moreover, the integration operation required by marginalization is always well defined, and it is guaranteed to produce a finite integral under certain conditions; when it is well defined, it has a simple analytical solution.
A canonical form C (X; K,h, g) is defined as:
$$C(X; K,h,g) = exp(-0.5X^TKX + h^TX + g)$$
We can represent every Gaussian as a canonical form. Rewriting the joint Gaussian pdf we obtain,
N (μ; Σ) = C (K, h, g) where:
$$
K = Σ^{-1}
$$
$$
h = Σ^{-1}μ
$$
$$
g = -0.5μ^TΣ^{-1}μ - log((2π)^{n/2}|Σ|^{1/2})
$$
Similar to the JointGaussianDistribution class, the CanonicalFactor class is also derived from the ContinuousFactor class but with its own implementations of the methods required for the sum-product algorithms that are much more efficient than its parent class methods. Let us have a look at the API of a few methods in this class.
End of explanation
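To make the K, h, g formulas concrete, here is a minimal numpy sketch (not part of the original notebook; the mean and covariance below are assumed values chosen only for illustration) that converts a joint Gaussian into its canonical parameters:
# Minimal sketch: convert (mu, Sigma) into canonical parameters (K, h, g) with the formulas above
mu = np.array([[1.], [-1.]])              # assumed mean, for illustration only
sigma = np.array([[2., .5], [.5, 1.]])    # assumed covariance, for illustration only
K = np.linalg.inv(sigma)
h = K.dot(mu)
g = -0.5 * mu.T.dot(K).dot(mu) - np.log(((2 * np.pi) ** (len(mu) / 2.0)) * np.sqrt(np.linalg.det(sigma)))
print(K, h, g)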
phi = CanonicalFactor(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
jgd = phi.to_joint_gaussian()
jgd.variables
jgd.covariance
jgd.mean
Explanation: This class also has a method, to_joint_gaussian, to convert the canonical representation back into the joint gaussian distribution.
End of explanation
# For P(Y| X1, X2, X3) = N(-2x1 + 3x2 + 7x3 + 0.2; 9.6)
from pgmpy.factors import LinearGaussianCPD
cpd = LinearGaussianCPD('Y', 0.2, 9.6, ['X1', 'X2', 'X3'], [-2, 3, 7])
print(cpd)
Explanation: Linear Gaussian CPD
A linear gaussian conditional probability distribution is defined on a continuous variable. All the parents of this variable are also continuous. The mean of this variable is linearly dependent on the mean of its parent variables and the variance is independent.
For example,
$$
P(Y ; x1, x2, x3) = N(β_1x_1 + β_2x_2 + β_3x_3 + β_0 ; σ^2)
$$
Let Y be a linear Gaussian of its parents X1,...,Xk:
$$
p(Y | x) = N(β_0 + β^T x ; σ^2)
$$
Assume that X1,...,Xk are jointly Gaussian with distribution N(μ; Σ). Then:
The distribution of Y is a normal distribution p(Y) where:
$$
μ_Y = β_0 + β^Tμ
$$
$$
{σ^2}_Y = σ^2 + β^TΣβ
$$
The joint distribution over {X, Y} is a normal distribution where:
$$Cov[X_i; Y] = {\sum_{j=1}^{k} β_jΣ_{i,j}}$$
For its representation pgmpy has a class named LinearGaussianCPD in the module pgmpy.factors.continuous. To instantiate an object of this class, one needs to provide a variable name, the value of the beta_0 term, the variance, a list of the parent variable names and a list of the coefficient values of the linear equation (beta_vector), where the list of parent variable names and beta_vector list is optional and defaults to None.
End of explanation
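As a quick numeric illustration of the two formulas above, here is a minimal sketch (not part of the original notebook; the parent means and covariance are made-up values used only for the example) that evaluates the marginal mean and variance of Y for the CPD defined above:
# Minimal sketch: mu_Y = beta_0 + beta^T mu and sigma^2_Y = sigma^2 + beta^T Sigma beta
beta_0, sigma2 = 0.2, 9.6
beta = np.array([-2., 3., 7.])
mu_parents = np.array([1., 2., 3.])   # assumed parent means, for illustration only
cov_parents = np.eye(3)               # assumed parent covariance, for illustration only
mu_Y = beta_0 + beta.dot(mu_parents)
var_Y = sigma2 + beta.dot(cov_parents).dot(beta)
print(mu_Y, var_Y)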
from pgmpy.models import LinearGaussianBayesianNetwork
model = LinearGaussianBayesianNetwork([('x1', 'x2'), ('x2', 'x3')])
cpd1 = LinearGaussianCPD('x1', 1, 4)
cpd2 = LinearGaussianCPD('x2', -5, 4, ['x1'], [0.5])
cpd3 = LinearGaussianCPD('x3', 4, 3, ['x2'], [-1])
model.add_cpds(cpd1, cpd2, cpd3)
jgd = model.to_joint_gaussian()
jgd.variables
jgd.mean
jgd.covariance
Explanation: A Gaussian Bayesian is defined as a network all of whose variables are continuous, and where all of the CPDs are linear Gaussians. These networks are of particular interest as these are an alternate form of representation of the Joint Gaussian distribution.
These networks are implemented as the LinearGaussianBayesianNetwork class in the module, pgmpy.models.continuous. This class is a subclass of the BayesianModel class in pgmpy.models and will inherit most of the methods from it. It will have a special method known as to_joint_gaussian that will return an equivalent JointGaussianDistribution object for the model.
End of explanation |
10,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 1
Step1: First we make a GeMpy instance with most of the parameters left at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on letting compiled functions be loaded from disk, even though in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I still have to look more closely at what this parameter exactly means.
Step2: All input data is stored in pandas dataframes under self.Data.Interfaces and self.Data.Foliations
Step3: In case of disconformities, we can define which formation belongs to which series using a dictionary. Before Python 3.6 it is important to specify the order of the series, otherwise it is random
Step4: Now in the data frame we should have the series column too
Step5: The next step is the creation of a grid. So far only regular grids are supported. By default it takes the extent and the resolution given in the import_data method.
Step6: Plotting raw data
The object Plot is created automatically as we call the methods above. This object contains some methods to plot the data and the results.
It is possible to plot a 2D projection of the data in a specific direction using the following method. It is also possible to choose the series you want to plot. Additionally all the key arguments of seaborn lmplot can be used.
Step7: Class Interpolator
This class will take the data from the class Data and calculate potential fields and the block. We can pass as key arguments all the variables of the interpolation. I recommend not touching them if you do not know what you are doing. The default values should be good enough. Also the first time we execute the method, we will compile the theano function so it can take a bit of time.
Step8: Now we could visualize the individual potential fields as follows
Step9: BIF Series
Step10: Simple mafic
Step11: Optimizing the export of lithologies
But usually the final result we want to get is the final block. The method compute_block_model will compute the block model, updating the attribute block. This attribute is a theano shared function that can return a 3D array (raveled) using the method get_value().
Step12: And again after computing the model in the Plot object we can use the method plot_block_section to see a 2D section of the model
Step14: Export to vtk. (Under development)
Step15: Performance Analysis
One of the advantages of theano is the possibility to create a full profile of the function. This has to be included at the time the function is created. At the moment it should be active (the downside is a larger compilation time and, I think, a slightly slower computation, so be careful if you need a fast call).
CPU
The following profile is with a 2 core laptop. Nothing spectacular.
Step16: Looking at the profile we can see that most of the time is spent in the pow operation (the exponential). This is probably because the extent is huge and we are computing it with too much precision. I am working on it
Step17: GPU | Python Code:
# Importing
import theano.tensor as T
import sys, os
sys.path.append("../GeMpy")
# Importing GeMpy modules
import GeMpy_core
import Visualization
# Reloading (only for development purposes)
import importlib
importlib.reload(GeMpy_core)
importlib.reload(Visualization)
# Usuful packages
import numpy as np
import pandas as pn
import matplotlib.pyplot as plt
# This was to choose the gpu
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# Default options of printin
np.set_printoptions(precision = 6, linewidth= 130, suppress = True)
%matplotlib inline
#%matplotlib notebook
Explanation: Example 1: Sandstone Model
End of explanation
# Setting extent, grid and compile
# Setting the extent
sandstone = GeMpy_core.GeMpy()
# Create Data class with raw data
sandstone.import_data( [696000,747000,6863000,6950000,-20000, 2000],[ 40, 40, 80],
path_f = os.pardir+"/input_data/a_Foliations.csv",
path_i = os.pardir+"/input_data/a_Points.csv")
Explanation: First we make a GeMpy instance with most of the parameters left at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on letting compiled functions be loaded from disk, even though in our case it is not a big deal).
General note. So far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I still have to look more closely at what this parameter exactly means.
End of explanation
sandstone.Data.Foliations.head()
Explanation: All input data is stored in pandas dataframes under self.Data.Interfaces and self.Data.Foliations:
End of explanation
sandstone.Data.set_series({"EarlyGranite_Series":sandstone.Data.formations[-1],
"BIF_Series":(sandstone.Data.formations[0], sandstone.Data.formations[1]),
"SimpleMafic_Series":sandstone.Data.formations[2]},
order = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"])
Explanation: In case of disconformities, we can define which formation belongs to which series using a dictionary. Before Python 3.6 it is important to specify the order of the series, otherwise it is random
End of explanation
sandstone.Data.Foliations.head()
Explanation: Now in the data frame we should have the series column too
End of explanation
# Create a class Grid so far just regular grid
sandstone.create_grid()
sandstone.Grid.grid
Explanation: The next step is the creation of a grid. So far only regular grids are supported. By default it takes the extent and the resolution given in the import_data method.
End of explanation
sandstone.Plot.plot_data(series = sandstone.Data.series.columns.values[1])
Explanation: Plotting raw data
The object Plot is created automatically as we call the methods above. This object contains some methods to plot the data and the results.
It is possible to plot a 2D projection of the data in a specific direction using the following method. It is also possible to choose the series you want to plot. Additionally all the key arguments of seaborn lmplot can be used.
End of explanation
sandstone.set_interpolator()
Explanation: Class Interpolator
This class will take the data from the class Data and calculate potential fields and the block. We can pass as key arguments all the variables of the interpolation. I recommend not touching them if you do not know what you are doing. The default values should be good enough. Also the first time we execute the method, we will compile the theano function so it can take a bit of time.
End of explanation
sandstone.Plot.plot_potential_field(10, n_pf=0)
Explanation: Now we could visualize the individual potential fields as follows:
Early granite
End of explanation
sandstone.Plot.plot_potential_field(13, n_pf=1, cmap = "magma", plot_data = True,
verbose = 5 )
Explanation: BIF Series
End of explanation
sandstone.Plot.plot_potential_field(10, n_pf=2)
Explanation: Simple mafic
End of explanation
# Reset the block
sandstone.Interpolator.block.set_value(np.zeros_like(sandstone.Grid.grid[:,0]))
# Compute the block
sandstone.Interpolator.compute_block_model([0,1,2], verbose = 0)
sandstone.Interpolator.block.get_value(), np.unique(sandstone.Interpolator.block.get_value())
Explanation: Optimizing the export of lithologies
But usually the final result we want to get is the final block. The method compute_block_model will compute the block model, updating the attribute block. This attribute is a theano shared function that can return a 3D array (raveled) using the method get_value().
End of explanation
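Since get_value() returns the block raveled, here is a minimal sketch (not part of the original notebook) of putting it back into the grid shape, assuming the [40, 40, 80] resolution passed to import_data above; the axis order may need adjusting depending on how the grid is raveled.
# Minimal sketch: reshape the raveled block model back to the assumed 40 x 40 x 80 grid
block_3d = sandstone.Interpolator.block.get_value().reshape(40, 40, 80)
print(block_3d.shape)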
sandstone.Plot.plot_block_section(13, interpolation = 'nearest', direction='y')
plt.savefig("sandstone_example.png")
Explanation: And again after computing the model in the Plot object we can use the method plot_block_section to see a 2D section of the model
End of explanation
"""
Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VTK (default: entire block model)
"""
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
# NOTE: `sol` is not defined in this (under development) snippet; it is assumed to be the
# computed block model reshaped to the cell dimensions implied by x, y and z.
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
Explanation: Export to vtk. (Under development)
End of explanation
%%timeit
# Reset the block
sandstone.Interpolator.block.set_value(np.zeros_like(sandstone.Grid.grid[:,0]))
# Compute the block
sandstone.Interpolator.compute_block_model([0,1,2], verbose = 0)
Explanation: Performance Analysis
One of the advantages of theano is the possibility to create a full profile of the function. This has to be included at the time the function is created. At the moment it should be active (the downside is a larger compilation time and, I think, a slightly slower computation, so be careful if you need a fast call).
CPU
The following profile is with a 2 core laptop. Nothing spectacular.
End of explanation
sandstone.Interpolator._interpolate.profile.summary()
Explanation: Looking at the profile we can see that most of the time is spent in the pow operation (the exponential). This is probably because the extent is huge and we are computing it with too much precision. I am working on it
End of explanation
%%timeit
# Reset the block
sandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))
# Compute the block
sandstone.compute_block_model([0,1,2], verbose = 0)
sandstone.block_export.profile.summary()
Explanation: GPU
End of explanation |
10,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Set up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
Verify that you have previously trained your Keras model. If not, go back to train_keras_ai_platform_babyweight.ipynb to create them.
In this notebook, we'll be deploying our Keras model to Cloud AI Platform and creating predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
Step1: Lab Task #1
Step2: Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
Step3: Lab Task #2
Step4: Lab Task #3
Step5: The predictions for the four instances were
Step6: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
Step7: Lab Task #4 | Python Code:
import os
Explanation: Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Set up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
Verify that you have previously trained your Keras model. If not, go back to train_keras_ai_platform_babyweight.ipynb to create them.
In this notebook, we'll be deploying our Keras model to Cloud AI Platform and creating predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
End of explanation
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "cloud-training-demos" # TODO 1: Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # TODO 1: Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
%%bash
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global
Explanation: Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
Explanation: Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
End of explanation
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=# TODO 2: Add GCS path to saved_model.pb file.
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7
Explanation: Lab Task #2: Deploy trained model.
Deploying the trained model to act as a REST web service is a simple gcloud call. Complete #TODO by providing location of saved_model.pb file to Cloud AI Platform model deployment service. The deployment will take a few minutes.
End of explanation
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO 3a: Add model name
MODEL_VERSION = # TODO 3a: Add model version
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO 3a: Create another instance
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
Explanation: Lab Task #3: Use model to make online prediction.
Complete __#TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction.
Python API
We can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses is the order of the instances.
End of explanation
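If you want to pull the numbers out of the response programmatically, here is a minimal sketch (not part of the lab; the exact structure of each prediction depends on how the Keras model's output layer was named, so treat the key layout as an assumption):
# Minimal sketch: the online prediction response body carries a "predictions" list
for i, pred in enumerate(response.json().get("predictions", [])):
    print("Instance {}: {}".format(i, pred))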
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
Explanation: The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
gcloud shell API
Instead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud.
End of explanation
%%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=# TODO 3b: Add model version
Explanation: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
End of explanation
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=# TODO 4: Add model version
Explanation: Lab Task #4: Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete __#TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction.
End of explanation |
10,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DM_08_04
Import packages
We'll create a hidden Markov model to examine the state-shifting in the dataset.
Step1: Import data
Read CSV file into "df."
Step2: Drop the row number and "corr" so we can focus on the influence of "prev" and "Pacc" on "rt." Also define "prev" as a factor.
Step3: Create model
Make an HMM with 2 states. (The choice of 2 is based on theory.)
Step4: Predict the hidden state for each record and get count of predicted states.
Step5: Get the mean reaction time (rt) for each of the two states.
Step6: Visualize results
Make a graph to show the change of states. | Python Code:
% matplotlib inline
import pylab
import numpy as np
import pandas as pd
from hmmlearn.hmm import GaussianHMM
Explanation: DM_08_04
Import packages
We'll create a hidden Markov model to examine the state-shifting in the dataset.
End of explanation
df = pd.read_csv("speed.csv", sep = ",")
df.head(5)
Explanation: Import data
Read CSV file into "df."
End of explanation
x = df.drop(["row", "corr"], axis = 1)
x["prev"] = pd.factorize(x["prev"])[0]
Explanation: Drop the row number and "corr" so we can focus on the influence of "prev" and "Pacc" on "rt." Also define "prev" as a factor.
End of explanation
model = GaussianHMM(n_components=2, n_iter=10000, random_state=1).fit(x)
model.monitor_
Explanation: Create model
Make an HMM with 2 states. (The choice of 2 is based on theory.)
End of explanation
states = model.predict(x)
pd.Series(states).value_counts()
Explanation: Predict the hidden state for each record and get count of predicted states.
End of explanation
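To look a bit more closely at the state-shifting itself, here is a minimal sketch (not part of the original notebook) that inspects the fitted transition matrix:
# Minimal sketch: each row of the transition matrix gives P(next state | current state)
print(model.transmat_)
# Rough expected run length of each state, from the diagonal "stay" probabilities
print(1.0 / (1.0 - np.diag(model.transmat_)))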
model.means_[:, 0]
Explanation: Get the mean reaction time (rt) for each of the two states.
End of explanation
fig = pylab.figure(figsize=(20, 1))
ax = fig.add_subplot(111)
ax.grid(True)
ax.set_xlabel("Record number")
ax.set_ylabel("State")
ax.plot(states)
Explanation: Visualize results
Make a graph to show the change of states.
End of explanation |
10,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
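# Illustrative example only (not from the ES-DOC template): for a BOOLEAN property
# such as 7.5 above, the value is passed to DOC.set_value() unquoted. A model whose
# chemistry grid resolution never changes during execution would record the
# documented default. Left commented out because the real value must come from the
# modelling group.
# DOC.set_value(False)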
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
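# Illustrative example only (not from the ES-DOC template): for an ENUM property with
# cardinality 0.N such as 10.1 above, the "Set as follows: DOC.set_value("value")"
# pattern is simply repeated once per applicable choice, using the strings listed
# under "Valid Choices" in the cell above. The choices below are placeholders; the
# actual selection depends on the model being documented.
# DOC.set_value("Anthropogenic")
# DOC.set_value("Biomass burning")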
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted at the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
10,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mitchell-Schaeffer - First Version
This is my first pass, which ended up somewhat similar to the rat data Ian showed.
Model is Mitchell-Schaeffer as shown in Eqn 3.1 from Ian's thesis
Step1: Function-based model
Step2: Odd | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
# h steady-state value
def h_inf(Vm=0.0):
return 0.0 # TODO??
# Input stimulus
def Id(t):
if 5.0 < t < 6.0:
return 1.0
elif 20.0 < t < 21.0:
return 1.0
return 0.0
# Compute derivatives
def compute_derivatives(y, t0):
dy = np.zeros((2,))
v = y[0]
h = y[1]
dy[0] = ((h*v**2) * (1-v))/t_in - v/t_out + Id(t0) # TODO remove forcing?
#dy[0] = ((h*v**2) * (1-v))/t_in - v/t_out
# dh/dt
if v >= vgate:
dy[1] = -h/t_close
else:
dy[1] = (1-h)/t_open
return dy
# Set random seed (for reproducibility)
np.random.seed(10)
# Start and end time (in milliseconds)
tmin = 0.0
tmax = 50.0
# Time values
T = np.linspace(tmin, tmax, 10000)
#Parameters (wild ass guess based on statement that these are ordered as follows by size):
t_in = 0.1
t_out = 1
t_close = 5
t_open = 7
vgate = 0.13 # Mitchell paper describes this as the assumption
# initial state (v, h)
Y = np.array([0.0, h_inf()])
# Solve ODE system
# Vy = (Vm[t0:tmax], n[t0:tmax], m[t0:tmax], h[t0:tmax])
Vy = odeint(compute_derivatives, Y, T)
Idv = [Id(t) for t in T]
fig, ax = plt.subplots(figsize=(12, 7))
# stimulus
color = 'tab:blue'
ax.plot(T, Idv, color=color)
ax.set_xlabel('Time (ms)')
ax.set_ylabel(r'Current density (uA/$cm^2$)',color=color)
ax.tick_params(axis='y', labelcolor=color)
# potential
color = 'tab:orange'
ax2 = ax.twinx()
ax2.set_ylabel('v',color=color)
ax2.plot(T, Vy[:, 0],color=color)
ax2.tick_params(axis='y', labelcolor=color)
#plt.grid()
# gate
color = 'tab:green'
ax3 = ax.twinx()
ax3.spines['right'].set_position(('outward', 50))
ax3.set_ylabel('h',color=color)
ax3.plot(T, Vy[:, 1],color=color)
ax3.tick_params(axis='y', labelcolor=color)
# ax3.set_ylim(ax2.get_ylim())
#plt.grid()
# Trajectories with limit cycles
fig, ax = plt.subplots(figsize=(10, 10))
ax.plot(Vy[:, 0], Vy[:, 1],label='v - h')
ax.set_xlabel("v")
ax.set_ylabel("h")
ax.set_title('Limit cycles')
ax.legend()
#plt.grid()
Explanation: Mitchell-Schaeffer - First Version
This is my first pass, which ended up somewhat similar to the rat data Ian showed.
Model is Mitchell-Schaeffer as shown in Eqn 3.1 from Ian's thesis:
$$\frac{dv}{dt}=\frac{hv^2(1-v)}{\tau_{in}} - \frac{v}{\tau_{out}}$$
$$
\frac{dh}{dt} = \left\{
\begin{array}{ll}
\frac{-h}{\tau_{close}} & \quad v \geq v_{gate} \\
\frac{1-h}{\tau_{open}} & \quad v < v_{gate}
\end{array}
\right.
$$
Code adapted from: giuseppebonaccorso/hodgkin-huxley-main.py on gist:
https://gist.github.com/giuseppebonaccorso/60ce3eb3a829b94abf64ab2b7a56aaef
End of explanation
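# Supplementary sketch (not in the original notebook): the "Limit cycles" figure is
# easier to interpret with the v-nullcline overlaid. Setting dv/dt = 0 in Eqn 3.1
# (for v != 0) gives h = t_in / (t_out * v * (1 - v)), which separates the excited
# and recovering regions of the (v, h) plane. Reuses Vy, t_in and t_out from above.
v_grid = np.linspace(0.05, 0.95, 200)                 # avoid the v = 0 and v = 1 poles
h_null = t_in / (t_out * v_grid * (1.0 - v_grid))     # v-nullcline from dv/dt = 0
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(Vy[:, 0], Vy[:, 1], label='trajectory (v, h)')
ax.plot(v_grid, h_null, '--', label='v-nullcline')
ax.set_xlabel('v')
ax.set_ylabel('h')
ax.set_ylim(0, 1.1)
ax.legend()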
T = np.linspace(0, 50.0, 10000)
def MitchellSchaeffer(t_in = 0.1, t_out = 1, t_close = 5, t_open = 7):
# h steady-state value
def h_inf(Vm=0.0):
return 0.0 # TODO??
# Input stimulus
def Id(t):
if 5.0 < t < 6.0:
return 0.5
elif 30.0 < t < 31.0:
return 0.5
return 0.0
# Compute derivatives
def compute_derivatives(y, t0):
dy = np.zeros((2,))
v = y[0]
h = y[1]
dy[0] = ((h*v**2) * (1-v))/t_in - v/t_out + Id(t0) # TODO remove forcing?
#dy[0] = ((h*v**2) * (1-v))/t_in - v/t_out
# dh/dt
if v >= vgate:
dy[1] = -h/t_close
else:
dy[1] = (1-h)/t_open
return dy
# initial state (v, h)
Y = np.array([0.0, h_inf()])
# Solve ODE system
# V = (v[t0:tmax], h[t0:tmax])
V = odeint(compute_derivatives, Y, T)
return V
plt.plot(T, MitchellSchaeffer()[:,0])
# t_out = 1.0
fig, axes = plt.subplots(2,1,figsize=(12,6))
for t in np.linspace(0.5,1.5,5):
V = MitchellSchaeffer(t_out=t)
axes[0].plot(V[:,0],label="t_out = %.2f"%t)
axes[0].legend()
axes[1].plot(V[:,1])
Explanation: Function-based model:
For multiple runs and parameter estimation.
End of explanation
# t_in = 0.1
fig, axes = plt.subplots(2,1,figsize=(12,6))
for t in np.linspace(0.05,0.2,5):
V = MitchellSchaeffer(t_in=t)
axes[0].plot(V[:,0])
axes[1].plot(V[:,1])
# t_close = 5
fig, axes = plt.subplots(2,1,figsize=(12,6))
for t in np.linspace(3,7,5):
V = MitchellSchaeffer(t_close=t)
axes[0].plot(V[:,0])
axes[1].plot(V[:,1])
# t_open = 7
fig, axes = plt.subplots(2,1,figsize=(12,6))
for t in np.linspace(4,9,5):
V = MitchellSchaeffer(t_open=t)
axes[0].plot(V[:,0])
axes[1].plot(V[:,1])
Explanation: Odd: at a stimulus amplitude of 0.5 there is no second response.
End of explanation |
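# Supplementary sketch (not in the original notebook): the stimulus amplitude is fixed
# at 0.5 inside MitchellSchaeffer(), so the missing second response noted above cannot
# be probed directly. This hypothetical variant exposes the amplitude of both pulses as
# a parameter; everything else (vgate, h_inf, T, odeint) is reused from the cells above.
def MitchellSchaefferAmp(amp, t_in=0.1, t_out=1, t_close=5, t_open=7):
    def Id(t):
        if 5.0 < t < 6.0 or 30.0 < t < 31.0:
            return amp
        return 0.0
    def rhs(y, t0):
        v, h = y
        dv = (h * v**2 * (1 - v)) / t_in - v / t_out + Id(t0)
        dh = -h / t_close if v >= vgate else (1 - h) / t_open
        return [dv, dh]
    return odeint(rhs, [0.0, h_inf()], T)
fig, ax = plt.subplots(figsize=(12, 4))
for amp in [0.3, 0.5, 0.7, 1.0]:
    ax.plot(T, MitchellSchaefferAmp(amp)[:, 0], label="amp = %.1f" % amp)
ax.set_xlabel('Time (ms)')
ax.set_ylabel('v')
ax.legend()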
10,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NLTK
NLTK is the Natural Language Toolkit, a fairly large Python library for doing many sorts of linguistic analysis of text. NLTK comes with a selection of sample texts that we'll use today, to get familiar with what sorts of analysis you can do.
To run this notebook you will need the nltk, matplotlib, and tkinter modules. If you are new to Python and programming, the best way to have these is to make sure you are using the Anaconda Python distribution, which includes all of these and a whole host of other useful libraries. You can check whether you have the libraries by running the following commands in a Terminal or Powershell window
Step1: This import statement reads the book samples, which include nine sentences and nine book-length texts. It has also helpfully put each of these texts into a variable for us, from sent1 to sent9 and text1 to text9.
Step2: Let's look at the texts now.
Step3: Each of these texts is an nltk.text.Text object, and has methods to let you see what the text contains. But you can also treat it as a plain old list!
Step4: We can do simple concordancing, printing the context for each use of a word throughout the text
Step5: The default is to show no more than 25 results for any given word, but we can change that.
Step6: We can adjust the amount of context we show in our concordance
Step7: ...or get the number of times any individual word appears in the text.
Step8: We can generate a vocabulary for the text, and use the vocabulary to find the most frequent words as well as the ones that appear only once (a.k.a. the hapaxes.)
Step9: You've now seen two methods for getting the number of times a word appears in a text
Step10: We can try and find interesting words in the text, such as words of a minimum length (the longer a word, the less common it probably is) that occur more than once or twice...
Step11: And we can look for pairs of words that go together more often than chance would suggest.
Step12: NLTK can also provide us with a few simple graph visualizations, when we have matplotlib installed. To make this work in iPython, we need the following magic line. If you are running in PyCharm, then you do not need this line - it will throw an error if you try to use it!
Step13: The vocabulary we get from the .vocab() method is something called a "frequency distribution", which means it's a giant tally of each unique word and the number of times that word appears in the text. We can also make a frequency distribution of other features, such as "each possible word length and the number of times a word that length is used". Let's do that and plot it.
Step14: We can plot where in the text a word occurs, and compare it to other words, with a dispersion plot. For example, the following dispersion plots show respectively (among other things) that the words 'coconut' and 'swallow' almost always appear in the same part of the Holy Grail text, and that Willoughby and Lucy do not appear in Sense and Sensibility until some time after the beginning of the book.
Step15: We can go a little crazy with text statistics. This block of code computes the average word length for each text, as well as a measure known as the "lexical diversity" that measures how much word re-use there is in a text.
Step16: A text of your own
So far we have been using the sample texts, but we can also use any text that we have lying around on our computer. The easiest sort of text to read in is plaintext, not PDF or HTML or anything else. Once we have made the text into an NLTK text with the Text() function, we can use all the same methods on it as we did for the sample texts above.
Step17: Using text corpora
NLTK comes with several pre-existing corpora of texts, some of which are the main body of text used for certain sorts of linguistic research. Using a corpus of texts, as opposed to an individual text, brings us a few more features.
Step18: Paradise Lost is now a Text object, just like the ones we have worked on before. But we accessed it through the NLTK corpus reader, which means that we get some extra bits of functionality
Step19: We can also make our own corpus if we have our own collection of files, e.g. the Federalist Papers from last week. But we have to pay attention to how those files are arranged! In this case, if you look in the text file, the paragraphs are set apart with 'hanging indentation' - all the lines after the first line of each paragraph are indented, so a new paragraph begins whenever a line starts with a letter.
Step20: And just like before, from this corpus we can make individual Text objects, on which we can use the methods we have seen above.
Step21: Filtering out stopwords
In linguistics, stopwords or function words are words that are so frequent in a particular language that they say little to nothing about the meaning of a text. You can make your own list of stopwords, but NLTK also provides a list for each of several common languages. These sets of stopwords are provided as another corpus.
Step22: So reading in the stopword list, we can use it to filter out vocabulary we don't want to see. Let's look at our 50 most frequent words in Holy Grail again.
Step23: Maybe we should get rid of punctuation and all-caps words too...
Step24: Getting word stems
Quite frequently we might want to treat different forms of a word - e.g. 'make / makes / made / making' - as the same word. A common way to do this is to find the stem of the word and use that in your analysis, in place of the word itself. There are several different approaches that can be taken. None of them are perfect, and quite frequently linguists will write their own stemmers.
Let's chop out a paragraph of Alice in Wonderland to play with.
Step25: NLTK comes with a few different stemming algorithms; we can also use WordNet (a system for analyzing semantic relationships between words) to look for the lemma form of each word and "stem" it that way. Here are some results.
Step26: Part-of-speech tagging
This is where corpus linguistics starts to get interesting. In order to analyze a text computationally, it is useful to know its syntactic structure - what words are nouns, what are verbs, and so on? This can be done (again, imperfectly) by using part-of-speech tagging.
Step27: NLTK part-of-speech tags (simplified tagset)
| Tag | Meaning | Examples |
|-----|--------------------|--------------------------------------|
| JJ | adjective | new, good, high, special, big, local |
| RB | adverb | really, already, still, early, now |
| CC | conjunction | and, or, but, if, while, although |
| DT | determiner | the, a, some, most, every, no |
| EX | existential | there, there's |
| FW | foreign word | dolce, ersatz, esprit, quo, maitre |
| MD | modal verb | will, can, would, may, must, should |
| NN | noun | year, home, costs, time, education |
| NNP | proper noun | Alison, Africa, April, Washington |
| NUM | number | twenty-four, fourth, 1991, 14:24 |
Step28: We can even do a frequency plot of the different parts of speech in the corpus (if we have matplotlib installed!)
Step29: Named-entity recognition
As well as the parts of speech of individual words, it is useful to be able to analyze the structure of an entire sentence. This generally involves breaking the sentence up into its component phrases, otherwise known as chunking.
Not going to cover chunking here as there is no out-of-the-box chunker for NLTK! You are expected to define the grammar (or at least some approximation of the grammar), and once you have done that then it becomes possible.
But one application of chunking is named-entity recognition - parsing a sentence to identify the named people, places, and organizations therein. This is more difficult than it looks, e.g. "Yankee", "May", "North".
Here's how to do it. We will use the example sentences that were loaded in sent1 through sent9 to try it out. Notice the difference (in iPython only!) between printing the result and just looking at the result - if you try to show the graph for more than one sentence at a time then you'll be waiting a long time. So don't try it.
Step30: Here is a function that takes the result of ne_chunk (the plain-text form, not the graph form!) and spits out only the named entities that were found. | Python Code:
from nltk.book import *
Explanation: Introduction to NLTK
NLTK is the Natural Language Toolkit, a fairly large Python library for doing many sorts of linguistic analysis of text. NLTK comes with a selection of sample texts that we'll use today, to get familiar with what sorts of analysis you can do.
To run this notebook you will need the nltk, matplotlib, and tkinter modules. If you are new to Python and programming, the best way to have these is to make sure you are using the Anaconda Python distribution, which includes all of these and a whole host of other useful libraries. You can check whether you have the libraries by running the following commands in a Terminal or Powershell window:
python -c 'import nltk'
python -c 'import matplotlib'
python -c 'import tkinter'
If you don't have NLTK, you can install it using the pip command (or possibly pip3 if you're on a Mac) as usual.
pip install nltk
If you don't have Matplotlib or TkInter, and don't want to download Anaconda, you will be able to follow along with most but not all of this notebook.
Once all this package installation work is done, you can run
python -c 'import nltk; nltk.download()'
or, if you are on a Mac with Python 3.4 installed via the standard Python installer:
python3 -c 'import nltk; nltk.download()'
and use the dialog that appears to download the 'book' package.
Examining features of a text
We will start by loading the example texts in the 'book' package that we just downloaded.
End of explanation
print(sent1)
print(sent3)
print(sent5)
Explanation: This import statement reads the book samples, which include nine sentences and nine book-length texts. It has also helpfully put each of these texts into a variable for us, from sent1 to sent9 and text1 to text9.
End of explanation
print(text6)
print(text6.name)
print("This text has %d words" % len(text6.tokens))
print("The first hundred words are:", " ".join( text6.tokens[:100] ))
Explanation: Let's look at the texts now.
End of explanation
print(text5[0])
print(text3[0:11])
print(text4[0:51])
Explanation: Each of these texts is an nltk.text.Text object, and has methods to let you see what the text contains. But you can also treat it as a plain old list!
End of explanation
text6.concordance( "swallow" )
Explanation: We can do simple concordancing, printing the context for each use of a word throughout the text:
End of explanation
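# Supplementary example (not in the original notebook): two related Text methods.
# similar() lists words that occur in contexts similar to the given word, and
# common_contexts() shows contexts shared by two or more words (here using the
# classic example from the NLTK book on text1).
print("Words used in contexts similar to 'swallow':")
text6.similar("swallow")
print("Contexts shared by 'monstrous' and 'very' in Moby Dick:")
text1.common_contexts(["monstrous", "very"])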
text6.concordance('Arthur', lines=37)
Explanation: The default is to show no more than 25 results for any given word, but we can change that.
End of explanation
text6.concordance('Arthur', width=100)
Explanation: We can adjust the amount of context we show in our concordance:
End of explanation
word_to_count = "KNIGHT"
print("The word %s appears %d times." % ( word_to_count, text6.count( word_to_count ) ))
Explanation: ...or get the number of times any individual word appears in the text.
End of explanation
t6_vocab = text6.vocab()
t6_words = list(t6_vocab.keys())
print("The text has %d different words" % ( len( t6_words ) ))
print("Some arbitrary 50 of these are:", t6_words[:50])
print("The most frequent 50 words are:", t6_vocab.most_common(50))
print("The word swallow appears %d times" % ( t6_vocab['swallow'] ))
print("The text has %d words that appear only once" % ( len( t6_vocab.hapaxes() ) ))
print("Some arbitrary 100 of these are:", t6_vocab.hapaxes()[:100])
Explanation: We can generate a vocabulary for the text, and use the vocabulary to find the most frequent words as well as the ones that appear only once (a.k.a. the hapaxes.)
End of explanation
print("Here we assert something that is true.")
for w in t6_words:
assert text6.count( w ) == t6_vocab[w]
print("See, that worked! Now we will assert something that is false, and we will get an error.")
for w in t6_words:
assert w.lower() == w
Explanation: You've now seen two methods for getting the number of times a word appears in a text: t6.count(word) and t6_vocab[word]. These are in fact identical, and the following bit of code is just to prove that. An assert statement is used to test whether something is true - if it ever isn't true, the code will throw up an error! This is a basic building block for writing tests for your code.
End of explanation
# With a list comprehension
long_words = [ w for w in t6_words if len( w ) > 5 and t6_vocab[w] > 3 ]
# The long way, with a for loop. This is identical to the above.
long_words = []
for w in t6_words:
if( len ( w ) > 5 and t6_vocab[w] > 3 ):
long_words.append( w )
print("The reasonably frequent long words in the text are:", long_words)
Explanation: We can try and find interesting words in the text, such as words of a minimum length (the longer a word, the less common it probably is) that occur more than once or twice...
End of explanation
print("\nUp to twenty collocations")
text6.collocations()
print("\nUp to fifty collocations")
text6.collocations(num=50)
print("\nCollocations that might have one word in between")
text6.collocations(window_size=3)
Explanation: And we can look for pairs of words that go together more often than chance would suggest.
End of explanation
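# Supplementary example (not in the original notebook): collocations() is a
# convenience wrapper. For scored results you can use the collocation finder
# classes directly, e.g. ranking bigrams by pointwise mutual information after
# dropping very rare pairs.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(text6)
finder.apply_freq_filter(3)      # ignore bigrams that occur fewer than 3 times
print(finder.nbest(bigram_measures.pmi, 15))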
%pylab --no-import-all inline
Explanation: NLTK can also provide us with a few simple graph visualizations, when we have matplotlib installed. To make this work in iPython, we need the following magic line. If you are running in PyCharm, then you do not need this line - it will throw an error if you try to use it!
End of explanation
word_length_dist = FreqDist( [ len(w) for w in t6_vocab.keys() ] )
word_length_dist.plot()
Explanation: The vocabulary we get from the .vocab() method is something called a "frequency distribution", which means it's a giant tally of each unique word and the number of times that word appears in the text. We can also make a frequency distribution of other features, such as "each possible word length and the number of times a word that length is used". Let's do that and plot it.
End of explanation
text6.dispersion_plot(["coconut", "swallow", "KNIGHT", "witch", "ARTHUR"])
text2.dispersion_plot(["Elinor", "Marianne", "Edward", "Willoughby", "Lucy"])
Explanation: We can plot where in the text a word occurs, and compare it to other words, with a dispersion plot. For example, the following dispersion plots show respectively (among other things) that the words 'coconut' and 'swallow' almost always appear in the same part of the Holy Grail text, and that Willoughby and Lucy do not appear in Sense and Sensibility until some time after the beginning of the book.
End of explanation
def print_text_stats( thetext ):
# Average word length
awl = sum([len(w) for w in thetext]) / len( thetext )
ld = len( thetext ) / len( thetext.vocab() )
print("%.2f\t%.2f\t%s" % ( awl, ld, thetext.name ))
all_texts = [ text1, text2, text3, text4, text5, text6, text7, text8, text9 ]
print("Wlen\tLdiv\tTitle")
for t in all_texts:
print_text_stats( t )
Explanation: We can go a little crazy with text statistics. This block of code computes the average word length for each text, as well as a measure known as the "lexical diversity" that measures how much word re-use there is in a text.
End of explanation
from nltk import word_tokenize
# You can read the file this way:
f = open('alice.txt', encoding='utf-8')
raw = f.read()
f.close()
# or you can read it this way.
with open('alice.txt', encoding='utf-8') as f:
raw = f.read()
# Use NLTK to break the text up into words, and put the result into a
# Text object.
alice = Text( word_tokenize( raw ) )
alice.name = "Alice's Adventures in Wonderland"
print(alice.name)
alice.concordance( "cat" )
print_text_stats( alice )
Explanation: A text of your own
So far we have been using the sample texts, but we can also use any text that we have lying around on our computer. The easiest sort of text to read in is plaintext, not PDF or HTML or anything else. Once we have made the text into an NLTK text with the Text() function, we can use all the same methods on it as we did for the sample texts above.
End of explanation
from nltk.corpus import gutenberg
print(gutenberg.fileids())
paradise_lost = Text( gutenberg.words( "milton-paradise.txt" ) )
paradise_lost
Explanation: Using text corpora
NLTK comes with several pre-existing corpora of texts, some of which are the main body of text used for certain sorts of linguistic research. Using a corpus of texts, as opposed to an individual text, brings us a few more features.
End of explanation
print("Length of text is:", len( gutenberg.raw( "milton-paradise.txt" )))
print("Number of words is:", len( gutenberg.words( "milton-paradise.txt" )))
assert( len( gutenberg.words( "milton-paradise.txt" )) == len( paradise_lost ))
print("Number of sentences is:", len( gutenberg.sents( "milton-paradise.txt" )))
print("Number of paragraphs is:", len( gutenberg.paras( "milton-paradise.txt" )))
Explanation: Paradise Lost is now a Text object, just like the ones we have worked on before. But we accessed it through the NLTK corpus reader, which means that we get some extra bits of functionality:
End of explanation
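# Supplementary example (not in the original notebook): because the corpus reader
# exposes raw(), words() and sents() for every file, we can compare simple statistics
# across the whole Gutenberg sample: average word length, average sentence length,
# and average number of uses per vocabulary item.
for fileid in gutenberg.fileids():
    n_chars = len(gutenberg.raw(fileid))
    n_words = len(gutenberg.words(fileid))
    n_sents = len(gutenberg.sents(fileid))
    n_vocab = len(set(w.lower() for w in gutenberg.words(fileid)))
    print("%.1f\t%.1f\t%.1f\t%s" % (n_chars / n_words, n_words / n_sents, n_words / n_vocab, fileid))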
from nltk.corpus import PlaintextCorpusReader
from nltk.corpus.reader.util import read_regexp_block
# Define how paragraphs look in our text files.
def read_hanging_block( stream ):
return read_regexp_block( stream, "^[A-Za-z]" )
corpus_root = 'federalist'
file_pattern = 'federalist_.*\.txt'
federalist = PlaintextCorpusReader( corpus_root, file_pattern, para_block_reader=read_hanging_block )
print("List of texts in corpus:", federalist.fileids())
print("\nHere is the fourth paragraph of the first text:")
print(federalist.paras("federalist_1.txt")[3])
Explanation: We can also make our own corpus if we have our own collection of files, e.g. the Federalist Papers from last week. But we have to pay attention to how those files are arranged! In this case, if you look in the text file, the paragraphs are set apart with 'hanging indentation' - all the lines after the first line of each paragraph are indented, so a new paragraph begins whenever a line starts with a letter (this is what the regular expression in read_hanging_block looks for).
End of explanation
fed1 = Text( federalist.words( "federalist_1.txt" ))
print("The first Federalist Paper has the following word collocations:")
fed1.collocations()
print("\n...and the following most frequent words.")
fed1.vocab().most_common(50)
Explanation: And just like before, from this corpus we can make individual Text objects, on which we can use the methods we have seen above.
End of explanation
from nltk.corpus import stopwords
print("We have stopword lists for the following languages:")
print(stopwords.fileids())
print("\nThese are the NLTK-provided stopwords for the German language:")
print(", ".join( stopwords.words('german') ))
Explanation: Filtering out stopwords
In linguistics, stopwords or function words are words that are so frequent in a particular language that they say little to nothing about the meaning of a text. You can make your own list of stopwords, but NLTK also provides a list for each of several common languages. These sets of stopwords are provided as another corpus.
End of explanation
print("The most frequent words are: ")
print([word[0] for word in t6_vocab.most_common(50)])
f1_most_frequent = [ w[0] for w in t6_vocab.most_common() if w[0].lower() not in stopwords.words('english') ]
print("\nThe most frequent interesting words are: ", " ".join( f1_most_frequent[:50] ))
Explanation: So reading in the stopword list, we can use it to filter out vocabulary we don't want to see. Let's look at our 50 most frequent words in Holy Grail again.
End of explanation
import re
def is_interesting( w ):
if( w.lower() in stopwords.words('english') ):
return False
if( w.isupper() ):
return False
return w.isalpha()
f1_most_frequent = [ w[0] for w in t6_vocab.most_common() if is_interesting( w[0] ) ]
print("The most frequent interesting words are: ", " ".join( f1_most_frequent[:50] ))
Explanation: Maybe we should get rid of punctuation and all-caps words too...
End of explanation
my_text = alice[305:549]
print(" ". join( my_text ))
print(len( set( my_text )), "words")
Explanation: Getting word stems
Quite frequently we might want to treat different forms of a word - e.g. 'make / makes / made / making' - as the same word. A common way to do this is to find the stem of the word and use that in your analysis, in place of the word itself. There are several different approaches that can be taken. None of them are perfect, and quite frequently linguists will write their own stemmers.
Let's chop out a paragraph of Alice in Wonderland to play with.
End of explanation
from nltk import PorterStemmer, LancasterStemmer, WordNetLemmatizer
porter = PorterStemmer()
lanc = LancasterStemmer()
wnl = WordNetLemmatizer()
porterlist = [porter.stem(w) for w in my_text]
print(" ".join( porterlist ))
print(len( set( porterlist )), "Porter stems")
lanclist = [lanc.stem(w) for w in my_text]
print(" ".join( lanclist ))
print(len( set( lanclist )), "Lancaster stems")
wnllist = [ wnl.lemmatize(w) for w in my_text ]
print(" ".join( wnllist ))
print(len( set( wnllist )), "Wordnet lemmata")
Explanation: NLTK comes with a few different stemming algorithms; we can also use WordNet (a system for analyzing semantic relationships between words) to look for the lemma form of each word and "stem" it that way. Here are some results.
End of explanation
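# Supplementary example (not in the original notebook): by default WordNetLemmatizer
# treats every word as a noun, which is why many inflected verb forms in the output
# above are left unchanged. Passing a part of speech gives better lemmata.
print(wnl.lemmatize("sitting"))             # treated as a noun -> unchanged
print(wnl.lemmatize("sitting", pos="v"))    # treated as a verb -> 'sit'
print(wnl.lemmatize("made", pos="v"))       # -> 'make'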
from nltk import pos_tag
print(pos_tag(my_text))
Explanation: Part-of-speech tagging
This is where corpus linguistics starts to get interesting. In order to analyze a text computationally, it is useful to know its syntactic structure - what words are nouns, what are verbs, and so on? This can be done (again, imperfectly) by using part-of-speech tagging.
End of explanation
from nltk.corpus import brown
print(brown.tagged_words()[:25])
print(brown.tagged_words(tagset='universal')[:25])
Explanation: NLTK part-of-speech tags (simplified tagset)
| Tag | Meaning | Examples |
|-----|--------------------|--------------------------------------|
| JJ | adjective | new, good, high, special, big, local |
| RB | adverb | really, already, still, early, now |
| CC | conjunction | and, or, but, if, while, although |
| DT | determiner | the, a, some, most, every, no |
| EX | existential | there, there's |
| FW | foreign word | dolce, ersatz, esprit, quo, maitre |
| MD | modal verb | will, can, would, may, must, should |
| NN | noun | year, home, costs, time, education |
| NNP | proper noun | Alison, Africa, April, Washington |
| NUM | number | twenty-four, fourth, 1991, 14:24 |
| PRO | pronoun | he, their, her, its, my, I, us |
| IN | preposition | on, of, at, with, by, into, under |
| TO | the word to | to |
| UH | interjection | ah, bang, ha, whee, hmpf, oops |
| VB | verb | is, has, get, do, make, see, run |
| VBD | past tense | said, took, told, made, asked |
| VBG | present participle | making, going, playing, working |
| VN | past participle | given, taken, begun, sung |
| WRB | wh determiner | who, which, when, what, where, how |
Automated tagging is pretty good, but not perfect. There are other taggers out there, such as the Brill tagger and the TreeTagger, but these aren't set up to run 'out of the box' and, with TreeTagger in particular, you will have to download extra software.
Some of the bigger corpora in NLTK come pre-tagged; this is a useful way to train a tagger that uses machine-learning methods (such as Brill), and a good way to test any new tagging method that is developed. This is also the data from which our knowledge of how language is used comes. (At least, for English and some other major Western languages.)
End of explanation
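# Supplementary example (not in the original notebook): pos_tag() returns Penn
# Treebank tags by default, but it can also map its output onto the simpler
# 'universal' tagset (NOUN, VERB, ADJ, ...), just as we requested for the Brown
# corpus above.
print(pos_tag(word_tokenize("The knights demand a shrubbery."), tagset='universal'))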
tagged_word_fd = FreqDist([ w[1] for w in brown.tagged_words(tagset='universal') ])
tagged_word_fd.plot()
Explanation: We can even do a frequency plot of the different parts of speech in the corpus (if we have matplotlib installed!)
End of explanation
from nltk import ne_chunk
tagged_text = pos_tag(sent2)
ner_text = ne_chunk( tagged_text )
print(ner_text)
ner_text
Explanation: Named-entity recognition
As well as the parts of speech of individual words, it is useful to be able to analyze the structure of an entire sentence. This generally involves breaking the sentence up into its component phrases, otherwise known as chunking.
Not going to cover chunking here as there is no out-of-the-box chunker for NLTK! You are expected to define the grammar (or at least some approximation of the grammar), and once you have done that then it becomes possible.
But one application of chunking is named-entity recognition - parsing a sentence to identify the named people, places, and organizations therein. This is more difficult than it looks, e.g. "Yankee", "May", "North".
Here's how to do it. We will use the example sentences that were loaded in sent1 through sent9 to try it out. Notice the difference (in iPython only!) between printing the result and just looking at the result - if you try to show the graph for more than one sentence at a time then you'll be waiting a long time. So don't try it.
End of explanation
def list_named_entities( tree ):
try:
tree.label()
except AttributeError:
return
if( tree.label() != "S" ):
print(tree)
else:
for child in tree:
list_named_entities( child )
list_named_entities( ner_text )
Explanation: Here is a function that takes the result of ne_chunk (the plain-text form, not the graph form!) and spits out only the named entities that were found.
End of explanation |
10,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Variational Equations With the Chain Rule
For a complete introduction to variational equations, please read the paper by Rein and Tamayo (2016).
Variational equations can be used to calculate derivatives in an $N$-body simulation. More specifically, given a set of initial conditions $\alpha_i$ and a set of variables at the end of the simulation $v_k$, we can calculate all first order derivatives
$$\frac{\partial v_k}{\partial \alpha_i}$$
as well as all second order derivatives
$$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$$
For this tutorial, we work with a two planet system.
We first choose the semi-major axis $a$ of the outer planet as an initial condition (this is our $\alpha_i$). At the end of the simulation we output the velocity of the star in the $x$ direction (this is our $v_k$).
To do that, let us first import REBOUND and numpy.
Step1: The following function takes $a$ as a parameter, then integrates the two planet system and returns the velocity of the star at the end of the simulation.
Step2: If we run the simulation again, with a different initial $a$, we get a different velocity
Step3: We could now run many different simulations to map out the parameter space. This is a very simple example of a typical use case
Step4: Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $a$, of the particle with index 2 (the outer planet).
Step5: We can use the derivative to construct a Taylor series expansion of the velocity around $a_0=1.5$
Step6: Compare this value with the explicitly calculated one above. They are almost the same! But we can do even better, by using second order variational equations to calculate second order derivatives.
Step7: Using a Taylor series expansion to second order gives a better estimate of v(1.51).
Step8: Now that we know how to calculate first and second order derivatives of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivatives. For example, instead of the velocity $v_x$, you might be interested in the quantity $w\equiv(v_x - c)^2$ where $c$ is a constant. This is something that typically appears in a $\chi^2$ fit. The chain rule gives us
Step9: Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. For example, suppose you want to work in some fancy coordinate system, using $h\equiv e\sin(\omega)$ and $k\equiv e \cos(\omega)$ variables instead of $e$ and $\omega$. You might want to do that because $h$ and $k$ variables are often better behaved near $e\sim0$. In that case the chain rule gives us | Python Code:
import rebound
import numpy as np
Explanation: Using Variational Equations With the Chain Rule
For a complete introduction to variational equations, please read the paper by Rein and Tamayo (2016).
Variational equations can be used to calculate derivatives in an $N$-body simulation. More specifically, given a set of initial conditions $\alpha_i$ and a set of variables at the end of the simulation $v_k$, we can calculate all first order derivatives
$$\frac{\partial v_k}{\partial \alpha_i}$$
as well as all second order derivatives
$$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$$
For this tutorial, we work with a two planet system.
We first choose the semi-major axis $a$ of the outer planet as an initial condition (this is our $\alpha_i$). At the end of the simulation we output the velocity of the star in the $x$ direction (this is our $v_k$).
To do that, let us first import REBOUND and numpy.
End of explanation
def calculate_vx(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
return sim.particles[0].vx # return star's velocity in the x direction
calculate_vx(a=1.5) # initial semi-major axis of the outer planet is 1.5
Explanation: The following function takes $a$ as a parameter, then integrates the two planet system and returns the velocity of the star at the end of the simulation.
End of explanation
calculate_vx(a=1.51) # initial semi-major axis of the outer planet is 1.51
Explanation: If we run the simulation again, with a different initial $a$, we get a different velocity:
End of explanation
def calculate_vx_derivative(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation() # add a set of variational particles
v1.vary(2,"a") # initialize the variational particles
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
return sim.particles[0].vx, v1.particles[0].vx # return star's velocity and its derivative
Explanation: We could now run many different simulations to map out the parameter space. This is a very simple example of a typical use case: the fitting of a radial velocity datapoint.
However, we can be smarter than simply running an almost identical simulation over and over again by using variational equations. These will allow us to calculate the derivative of the stellar velocity at the end of the simulation. We can take the derivative with respect to any of the initial conditions, e.g. a particle's mass, semi-major axis, x-coordinate, etc. Here, we want to take the derivative with respect to the semi-major axis of the outer planet. The following function does exactly that:
End of explanation
calculate_vx_derivative(a=1.5)
Explanation: Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $a$, of the particle with index 2 (the outer planet).
End of explanation
a0=1.5
va0, dva0 = calculate_vx_derivative(a=a0)
def v(a):
return va0 + (a-a0)*dva0
print(v(1.51))
Explanation: We can use the derivative to construct a Taylor series expansion of the velocity around $a_0=1.5$:
$$v(a) \approx v(a_0) + (a-a_0) \frac{\partial v}{\partial a}$$
End of explanation
def calculate_vx_derivative_2ndorder(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation()
v1.vary(2,"a")
# The following lines add and initialize second order variational particles
v2 = sim.add_variation(order=2, first_order=v1)
v2.vary(2,"a")
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
# return star's velocity and its first and second derivatives
return sim.particles[0].vx, v1.particles[0].vx, v2.particles[0].vx
Explanation: Compare this value with the explicitly calculated one above. They are almost the same! But we can do even better by using second order variational equations to calculate second order derivatives.
End of explanation
a0=1.5
va0, dva0, ddva0 = calculate_vx_derivative_2ndorder(a=a0)
def v(a):
return va0 + (a-a0)*dva0 + 0.5*(a-a0)**2*ddva0
print(v(1.51))
Explanation: Using a Taylor series expansion to second order gives a better estimate of v(1.51).
End of explanation
def calculate_w_derivative(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation() # add a set of variational particles
v1.vary(2,"a") # initialize the variational particles
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
c = 1.02 # some constant
w = (sim.particles[0].vx-c)**2
dwda = 2.*v1.particles[0].vx * (sim.particles[0].vx-c)
return w, dwda # return w and its derivative
calculate_w_derivative(1.5)
Explanation: Now that we know how to calculate first and second order derivatives of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivatives. For example, instead of the velocity $v_x$, you might be interested in the quantity $w\equiv(v_x - c)^2$ where $c$ is a constant. This is something that typically appears in a $\chi^2$ fit. The chain rule gives us:
$$ \frac{\partial w}{\partial a} = 2 \cdot (v_x-c)\cdot \frac{\partial v_x}{\partial a}$$
The variational equations provide the $\frac{\partial v_x}{\partial a}$ part, the ordinary particles provide $v_x$.
End of explanation
def calculate_vx_derivative_h():
h, k = 0.1, 0.2
e = float(np.sqrt(h**2+k**2))
omega = np.arctan2(k,h)
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=1.5, e=e, omega=omega) # outer planet
v1 = sim.add_variation()
dpde = rebound.Particle(simulation=sim, particle=sim.particles[2], variation="e")
dpdomega = rebound.Particle(simulation=sim, particle=sim.particles[2], m=1e-3, a=1.5, e=e, omega=omega, variation="omega")
v1.particles[2] = h/e * dpde - k/(e*e) * dpdomega
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
# return star's velocity and its first derivatives
return sim.particles[0].vx, v1.particles[0].vx
calculate_vx_derivative_h()
Explanation: Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. For example, suppose you want to work in some fancy coordinate system, using $h\equiv e\sin(\omega)$ and $k\equiv e \cos(\omega)$ variables instead of $e$ and $\omega$. You might want to do that because $h$ and $k$ variables are often better behaved near $e\sim0$. In that case the chain rule gives us:
$$\frac{\partial p(e(h, k), \omega(h, k))}{\partial h} = \frac{\partial p}{\partial e}\frac{\partial e}{\partial h} + \frac{\partial p}{\partial \omega}\frac{\partial \omega}{\partial h}$$
where $p$ is any of the particle's initial coordinates. In our case the derivatives of $e$ and $\omega$ with respect to $h$ are:
$$\frac{\partial \omega}{\partial h} = -\frac{k}{e^2}\quad\text{and}\quad \frac{\partial e}{\partial h} = \frac{h}{e}$$
With REBOUND, you can easily implement this. The following function calculates the derivative of the star's velocity with respect to the outer planet's $h$ variable.
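As a quick sanity check (a sketch added for illustration, not part of the original example), the chain-rule derivative can be compared against a central finite difference in $h$, re-running a plain simulation with slightly perturbed $h$ values and the same angle convention as the function above:
def calculate_vx_of_h(h, k=0.2):
    # same setup as above, but with h as a free parameter (k fixed at 0.2)
    e = float(np.sqrt(h**2 + k**2))
    omega = np.arctan2(k, h)   # same convention as in calculate_vx_derivative_h
    sim = rebound.Simulation()
    sim.add(m=1.)                                                        # star
    sim.add(primary=sim.particles[0], m=1e-3, a=1)                       # inner planet
    sim.add(primary=sim.particles[0], m=1e-3, a=1.5, e=e, omega=omega)   # outer planet
    sim.integrate(2.*np.pi*10.)
    return sim.particles[0].vx

dh = 1e-6
finite_diff = (calculate_vx_of_h(0.1 + dh) - calculate_vx_of_h(0.1 - dh)) / (2.*dh)
print(finite_diff)   # should be close to the variational derivative returned above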
End of explanation |
10,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Modify Module
Step1: The last row of this data set repeats the labels. We're going to go ahead and omit it.
Step2: We're going to predict whether a job is still open, so our label will ultimately be the "Status" column.
Step3: We'll remove the label from the rest of the data later. First, let's do some cleaning. Notice that we have some missing data for our floating point variables (encoded as numpy.nan)
Step4: Sklearn can't tolerate these missing values, so we have to do something with them. Probably, a statistically sound thing to do with this data would be to leave these rows out, but for pedagogical purposes, let's assume it makes sense to impute the data. We can do that with
Step5: Looks like there were a few entries that had 0 for a zip code already.
For the purposes of this tutorial, we will go ahead and replace missing values with the most frequent value in the column
Step6: Our data also has a number of string columns. Strings must be converted to numbers before Scikit-Learn can analyze them, so we will use
Step7: Note that classes is a dictionary of arrays where each key is the column name and each value is an array of which string each number represents. For example, if we wanted to find out what category 1 represents, we would look at
Step8: and find that category 1 is 'Alley'
Selection
Diogenes provides a number of functions to retain only columns and rows matching specific criteria
Step9: Notice that "Type of Service Request" has been removed, since every value in the column was the same
Next, let's assume that we're only interested in requests made during the year 2015 and select only those rows using the
Step10: Finally, let's remove rows which the "Status" column claims are duplicates. We review our classes variable to find
Step11: We want to remove rows that have either 1 or 3 in the status column. We don't have a row selection function already defined to select rows that have one of several discrete values, so we will create one
Step12: Feature Generation
We can also create new features based on existing data. We'll start out by generating a feature that calculates the distance of the service request from Cloud Gate in downtown Chicago (41.882773, -87.623304) using
Step13: Now we'll put those distances into 10 bins using
Step14: Now we'll make a binary feature that is true if and only if the tree is in a parkway in ward 10 using
Step15: We note that "Parkway" is category 2, so we will select items that equal 2 in the "If Yes, where is the debris located?" column and 10 in the "Ward" column.
Step16: Finally, we'll add all of our generated features to our data using
Step17: Last steps
Now, all we have to do is remove the "Status" column from the rest of the data (along with the highly correlated "Completion Date") and we're ready to run an experiment. | Python Code:
import diogenes
data = diogenes.read.open_csv_url('https://data.cityofchicago.org/api/views/mab8-y9h3/rows.csv?accessType=DOWNLOAD',
parse_datetimes=['Creation Date', 'Completion Date'])
Explanation: The Modify Module
:mod:diogenes.modify provides tools for manipulating arrays and generating features.
Cleaning
:func:diogenes.modify.modify.replace_missing_vals
:func:diogenes.modify.modify.label_encode
Selection
:func:diogenes.modify.modify.choose_cols_where
:func:diogenes.modify.modify.choose_rows_where
:func:diogenes.modify.modify.remove_cols_where
:func:diogenes.modify.modify.remove_rows_where
Feature generation
:func:diogenes.modify.modify.generate_bin
:func:diogenes.modify.modify.normalize
:func:diogenes.modify.modify.combine_cols
:func:diogenes.modify.modify.distance_from_point
:func:diogenes.modify.modify.where_all_are_true
In-place Cleaning
Diogenes provides two functions for data cleaning:
:func:diogenes.modify.modify.replace_missing_vals, which replaces missing values with valid ones.
:func:diogenes.modify.modify.label_encode which replaces strings with corresponding integers.
For this example, we'll look at Chicago's "311 Service Requests - Tree Debris" data on the Chicago data portal (https://data.cityofchicago.org/)
End of explanation
data = data[:-1]
data.dtype
Explanation: The last row of this data set repeats the labels. We're going to go ahead and omit it.
End of explanation
from collections import Counter
print Counter(data['Status']).most_common()
Explanation: We're going to predict whether a job is still open, so our label will ultimately be the "Status" column.
End of explanation
import numpy as np
print sum(np.isnan(data['ZIP Code']))
print sum(np.isnan(data['Ward']))
print sum(np.isnan(data['X Coordinate']))
Explanation: We'll remove the label from the rest of the data later. First, let's do some cleaning. Notice that we have some missing data for our floating point variables (encoded as numpy.nan)
End of explanation
data_with_zeros = diogenes.modify.replace_missing_vals(data, strategy='constant', constant=0)
print sum(np.isnan(data_with_zeros['ZIP Code']))
print sum(data_with_zeros['ZIP Code'] == 0)
Explanation: Sklearn can't tolerate these missing values, so we have to do something with them. Probably, a statistically sound thing to do with this data would be to leave these rows out, but for pedagogical purposes, let's assume it makes sense to impute the data. We can do that with :func:diogenes.modify.modify.replace_missing_vals.
We could, for instance, replace every nan with a 0:
End of explanation
data = diogenes.modify.replace_missing_vals(data, strategy='most_frequent')
Explanation: Looks like there were a few entries that had 0 for a zip code already.
For the purposes of this tutorial, we will go ahead and replace missing values with the most frequent value in the column:
End of explanation
print Counter(data['If Yes, where is the debris located?']).most_common()
data, classes = diogenes.modify.label_encode(data)
print Counter(data['If Yes, where is the debris located?']).most_common()
print classes['If Yes, where is the debris located?']
Explanation: Our data also has a number of string columns. Strings must be converted to numbers before Scikit-Learn can analyze them, so we will use :func:diogenes.modify.modify.label_encode to convert them
End of explanation
classes['If Yes, where is the debris located?'][1]
Explanation: Note that classes is a dictionary of arrays where each key is the column name and each value is an array of which string each number represents. For example, if we wanted to find out what category 1 represents, we would look at:
End of explanation
print data.dtype.names
print
print Counter(data['Type of Service Request'])
print
arguments = [{'func': diogenes.modify.col_val_eq_any, 'vals': None}]
data = diogenes.modify.remove_cols_where(data, arguments)
print data.dtype.names
Explanation: and find that category 1 is 'Alley'
Selection
Diogenes provides a number of functions to retain only columns and rows matching specific criteria:
:func:diogenes.modify.modify.choose_cols_where
:func:diogenes.modify.modify.remove_cols_where
:func:diogenes.modify.modify.choose_rows_where
:func:diogenes.modify.modify.remove_rows_where
These are explained in detail in the module documentation for :mod:diogenes.modify.modify. Explaining all the different things you can do with these selection operators is outside the scope of this tutorial.
We'll start out by removing any columns for which every row is the same value by employing the :func:diogenes.modify.modify.col_val_eq_any column selection function:
End of explanation
print data.shape
print data['Creation Date'].min()
print data['Creation Date'].max()
print
arguments = [{'func': diogenes.modify.row_val_between,
'vals': [np.datetime64('2015-01-01T00:00:00', 'ns'), np.datetime64('2016-01-01T00:00:00', 'ns')],
'col_name': 'Creation Date'}]
data = diogenes.modify.choose_rows_where(data, arguments)
print data.shape
print data['Creation Date'].min()
print data['Creation Date'].max()
Explanation: Notice that "Type of Service Request" has been removed, since every value in the column was the same
Next, let's assume that we're only interested in requests made during the year 2015 and select only those rows using the :func:diogenes.modify.modify.row_val_between row selection function:
End of explanation
classes['Status']
Explanation: Finally, let's remove rows which the "Status" column claims are duplicates. We review our classes variable to find:
End of explanation
def row_val_in(M, col_name, vals):
return np.logical_or(M[col_name] == vals[0], M[col_name] == vals[1])
print data.shape
print Counter(data['Status']).most_common()
print
arguments = [{'func': row_val_in, 'vals': [1, 3], 'col_name': 'Status'}]
data2 = diogenes.modify.remove_rows_where(data, arguments)
print data2.shape
print Counter(data2['Status']).most_common()
Explanation: We want to remove rows that have either 1 or 3 in the status column. We don't have a row selection function already defined to select rows that have one of several discrete values, so we will create one:
End of explanation
dist_from_cloud_gate = diogenes.modify.distance_from_point(41.882773, -87.623304, data['Latitude'], data['Longitude'])
print dist_from_cloud_gate[:10]
Explanation: Feature Generation
We can also create new features based on existing data. We'll start out by generating a feature that calculates the distance of the service request from Cloud Gate in downtown Chicago (41.882773, -87.623304) using :func:diogenes.modify.modify.distance_from_point.
End of explanation
dist_binned = diogenes.modify.generate_bin(dist_from_cloud_gate, 10)
print dist_binned[:10]
Explanation: Now we'll put those distances into 10 bins using :func:diogenes.modify.modify.generate_bin.
End of explanation
print classes['If Yes, where is the debris located?']
Explanation: Now we'll make a binary feature that is true if and only if the tree is in a parkway in ward 10 using :func:diogenes.modify.modify.where_all_are_true (which has similar syntax to the selection functions).
End of explanation
arguments = [{'func': diogenes.modify.row_val_eq,
'col_name': 'If Yes, where is the debris located?',
'vals': 2},
{'func': diogenes.modify.row_val_eq,
'col_name': 'Ward',
'vals': 10}]
parkway_in_ward_10 = diogenes.modify.where_all_are_true(data, arguments)
print np.where(parkway_in_ward_10)
Explanation: We note that "Parkway" is category 2, so we will select items that equal 2 in the "If Yes, where is the debris located?" column and 10 in the "Ward" column.
End of explanation
data = diogenes.utils.append_cols(data, [dist_from_cloud_gate, dist_binned, parkway_in_ward_10],
['dist_from_cloud_gate', 'dist_binned', 'parkway_in_ward_10'])
print data.dtype
Explanation: Finally, we'll add all of our generated features to our data using :func:diogenes.utils.append_cols
End of explanation
labels = data['Status']
M = diogenes.utils.remove_cols(data, ['Status', 'Completion Date'])
exp = diogenes.grid_search.experiment.Experiment(M, labels)
exp.run()
Explanation: Last steps
Now, all we have to do is remove the "Status" column from the rest of the data (along with the highly correlated "Completion Date") and we're ready to run an experiment.
End of explanation |
10,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div id="toc"></div>
Step1: Method
Step2: Load det_df, channel lists
Step3: Load bhp data
Do I have a bhp distribution saved that I can load directly? I would rather not have to load and revive bhm because it requires so much memory.
Work with data from Cf072115_to_Cf072215b analysis.
Step4: Sum along first axis to make distribution across all angles. Multiply by norm_factor so we're working with counts (instead of normalized).
Step5: Coarsen time binning.
Step6: Sum along a specific slice for $\Delta t_1$
Step7: This works, but I need to work with the full dataset and coarsen the timing.
Take both slices
The way I am plotting the distribution above, I am holding $\Delta t_1$ constant and looking at $\Delta t_2$. This means that I am looking at the distribution of detector 2 with detector 1 held constant.
This introduces some bias, or more precisely, removes a biased set of channels from the distribution.
Since detector pairs are organized such that det1ch < det2ch, I am looking at the distribution of neutron times for the higher detector channels.
Step8: In order to include all data, I need to take the sum of all detector pairs in both directions. Try it out.
Step9: Normalize it
In order to compare multiple traces, I need to normalize them either by the peak or the total number. For now I am going to go with the total number.
Create slices at a few times.
Step10: Functionalize it
I want to automate the process for producing this based on a given time stamp.
Functionalize calculating the slice
Step11: Functionalize creating bhp_slices
Need to create a bunch of slices at once from t_slices. For now just do t_slices, don't specify max time range for each slice.
Step12: Functionalize plotting
Step13: Convert to energy space
Time to energy
Step14: How do time and energy relate? | Python Code:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
Explanation: <div id="toc"></div>
End of explanation
import numpy as np
import scipy.io as sio
import os
import sys
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib.pyplot import cm
import inspect
import seaborn as sns
sns.set(style='ticks')
# Load the bicorr.py functions I have already developed
sys.path.append('../scripts')
import bicorr as bicorr
import bicorr_plot as bicorr_plot
import bicorr_math as bicorr_math
%load_ext autoreload
%autoreload 2
Explanation: Method: bhp slices
Develop a method for slicing bhp by $\Delta t_1$ or $\Delta t_2$.
Load bhp (or bhm $\rightarrow$ bhp)
Plot bhp
Select by $\Delta t_1$ or $\Delta t_2$
Plot that distribution
Look at average energies
End of explanation
os.listdir('../meas_info/')
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv',plot_flag=True)
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag=True)
Explanation: Load det_df, channel lists
End of explanation
os.listdir('../analysis/Cf072115_to_Cf072215b/')
bhp_nn_gif_data = np.load('../analysis/Cf072115_to_Cf072215b/bhp_nn_gif.npz')
print(bhp_nn_gif_data.files)
norm_factor = bhp_nn_gif_data['norm_factor']
bhp_nn_pos = bhp_nn_gif_data['bhp_nn_pos']
bhp_nn_neg = bhp_nn_gif_data['bhp_nn_neg']
bhp_nn_diff = bhp_nn_gif_data['bhp_nn_diff']
th_bin_edges= bhp_nn_gif_data['th_bin_edges']
th_bin_centers = (th_bin_edges[:-1]+th_bin_edges[1:])/2
dt_bin_edges= bhp_nn_gif_data['dt_bin_edges']
dt_bin_edges_neg= bhp_nn_gif_data['dt_bin_edges_neg']
bhp_nn_diff.shape
Explanation: Load bhp data
Do I have a bhp distribution saved that I can load directly? I would rather not have to load and revive bhm because it requires so much memory.
Work with data from Cf072115_to_Cf072215b analysis.
End of explanation
for i in range(len(norm_factor)):
bhp_nn_diff[i,:,:] = norm_factor[i] * bhp_nn_diff[i,:,:]
bhp = np.sum(bhp_nn_diff,axis=0)
bhp.shape
bicorr.bhp_plot(bhp,dt_bin_edges,show_flag = True,vmin=1)
Explanation: Sum along first axis to make distribution across all angles. Multiply by norm_factor so we're working with counts (instead of normalized).
End of explanation
bhp, dt_bin_edges = bicorr.coarsen_bhp(bhp, dt_bin_edges, 8, normalized = True, print_flag = True)
bicorr.bhp_plot(bhp,dt_bin_edges,show_flag = True, vmin=1)
Explanation: Coarsen time binning.
End of explanation
dt_bin_centers = bicorr.calc_centers(dt_bin_edges)
i = 25
print(dt_bin_edges[i])
print(dt_bin_edges[i+1])
print(dt_bin_centers[i])
t = 51
i = np.digitize(t,dt_bin_edges)-1
print(dt_bin_edges[i])
print(dt_bin_edges[i+1])
print(dt_bin_centers[i])
plt.figure(figsize=(4,4))
plt.plot(dt_bin_centers,bhp[i,:],'.-k',linewidth=.5)
plt.xlabel('$\Delta t_2$')
plt.ylabel('Normalized counts')
plt.title('Slice at $\Delta t_1$ = {}'.format(dt_bin_centers[i]))
plt.tight_layout()
plt.show()
plt.figure(figsize=(4,4))
plt.plot(dt_bin_centers,bhp[i,:],'.-k',linewidth=.5)
plt.xlabel('$\Delta t_2$')
plt.ylabel('Normalized counts')
plt.title('$\Delta t_1$ = {}'.format(dt_bin_centers[i]))
plt.tight_layout()
plt.show()
Explanation: Sum along a specific slice for $\Delta t_1$
End of explanation
bicorr.plot_det_df(det_df, which='index')
Explanation: This works, but I need to work with the full dataset and coarsen the timing.
Take both slices
The way I am plotting the distribution above, I am holding $\Delta t_1$ constant and looking at $\Delta t_2$. This means that I am looking at the distribution of detector 2 with detector 1 held constant.
This introduces some bias, or more precisely, removes a biased set of channels from the distribution.
Since detector pairs are organized such that det1ch < det2ch, I am looking at the distribution of neutron times for the higher detector channels.
End of explanation
plt.figure(figsize=(4,4))
plt.plot(dt_bin_centers,bhp[i,:]+bhp[:,i],'.-',linewidth=.5)
plt.xlabel('$\Delta t_2$')
plt.ylabel('Normalized counts')
plt.title('$\Delta t_1$ = {}'.format(dt_bin_centers[i]))
plt.tight_layout()
plt.show()
Explanation: In order to include all data, I need to take the sum of all detector pairs in both directions. Try it out.
End of explanation
t_slices = [30,40,50,60,70,80,90]
bicorr.bhp_plot(bhp,dt_bin_edges,title='bhp',clear=False,vmin=1)
for t in t_slices:
plt.axvline(t,c='w')
plt.show()
bhp_slices = np.zeros((len(t_slices),len(dt_bin_centers)))
plt.figure(figsize=(4,3))
for t in t_slices:
    i = t_slices.index(t)             # index into t_slices; works as long as t_slices is unique
    j = np.digitize(t,dt_bin_edges)-1 # time bin that actually contains t (same pattern as above)
    bhp_slices[i,:] = bhp[j,:]+bhp[:,j]
plt.plot(dt_bin_centers,bhp_slices[i,:],'.-',linewidth=.5)
plt.xlabel('Time (ns)')
plt.ylabel('Counts / (fission-ns2-pair)')
plt.legend([str(t) for t in t_slices])
plt.title('Non-normalized bhp slices')
sns.despine(right=False)
plt.show()
plt.figure(figsize=(4,3))
for t in t_slices:
    i = t_slices.index(t)             # index into t_slices; works as long as t_slices is unique
    j = np.digitize(t,dt_bin_edges)-1 # time bin that actually contains t (same pattern as above)
    bhp_slices[i,:] = bhp[j,:]+bhp[:,j]
plt.plot(dt_bin_centers,bhp_slices[i,:]/np.sum(bhp_slices[i,:]),'.-',linewidth=.5)
plt.xlabel('Time (ns)')
plt.ylabel('Counts / (fission-ns2-pair)')
plt.legend([str(t) for t in t_slices])
plt.title('Normalized bhp slices by integral')
sns.despine(right=False)
plt.show()
plt.figure(figsize=(4,3))
for t in t_slices:
    i = t_slices.index(t)             # index into t_slices; works as long as t_slices is unique
    j = np.digitize(t,dt_bin_edges)-1 # time bin that actually contains t (same pattern as above)
    bhp_slices[i,:] = bhp[j,:]+bhp[:,j]
plt.plot(dt_bin_centers,bhp_slices[i,:]/np.max(bhp_slices[i,:]),'.-',linewidth=.5)
plt.xlabel('Time (ns)')
plt.ylabel('Counts / (fission-ns2-pair)')
plt.legend([str(t) for t in t_slices])
plt.title('Normalized bhp slices by height')
sns.despine(right=False)
plt.show()
Explanation: Normalize it
In order to compare multiple traces, I need to normalize them either by the peak or the total number. For now I am going to go with the total number.
Create slices at a few times.
End of explanation
help(bicorr.slice_bhp)
bhp_slice, slice_dt_range = bicorr.slice_bhp(bhp,dt_bin_edges,50.0,53.0,True)
Explanation: Functionalize it
I want to automate the process for producing this based on a given time stamp.
Functionalize calculating the slice
End of explanation
t_slices
print(t_slices)
bhp_slices, slice_dt_ranges = bicorr.slices_bhp(bhp,dt_bin_edges,t_slices)
Explanation: Functionalize creating bhp_slices
Need to create a bunch of slices at once from t_slices. For now just do t_slices, don't specify max time range for each slice.
End of explanation
help(bicorr.plot_bhp_slice)
bicorr_plot.plot_bhp_slice(bhp_slice, dt_bin_edges, slice_range = slice_dt_range, show_flag = True, normalized='max', title='Normalized by max')
bicorr_plot.plot_bhp_slice(bhp_slice, dt_bin_edges, slice_range = slice_dt_range, show_flag = True, normalized='int', title='Normalized by integral')
bicorr_plot.plot_bhp_slices(bhp_slices,dt_bin_edges,slice_dt_ranges);
Explanation: Functionalize plotting
End of explanation
energy_bin_edges = np.asarray(np.insert([bicorr.convert_time_to_energy(t) for t in dt_bin_edges[1:]],0,10000))
Explanation: Convert to energy space
Time to energy
End of explanation
plt.figure(figsize=(4,3))
plt.plot(dt_bin_edges,energy_bin_edges,'.-k',linewidth=.5)
plt.yscale('log')
plt.xlabel('time (ns)')
plt.ylabel('energy (MeV)')
sns.despine(right=False)
plt.title('Relationship between time and energy')
plt.show()
Explanation: How do time and energy relate?
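The conversion above is the standard non-relativistic time-of-flight relation $E = \frac{1}{2} m_n (d/t)^2$, where $d$ is the flight path from the source to the detector. A minimal sketch of that formula is shown below; the 1 m flight path is an assumed placeholder for illustration, while bicorr.convert_time_to_energy encapsulates the actual detector geometry.
def time_to_energy(t_ns, distance_m=1.0):
    # non-relativistic neutron time-of-flight conversion to energy in MeV
    m_n_c2 = 939.565                # neutron rest mass energy (MeV)
    c = 299792458.0                 # speed of light (m/s)
    v = distance_m / (t_ns * 1e-9)  # neutron speed (m/s)
    return 0.5 * m_n_c2 * (v / c)**2

print(time_to_energy(50.0))         # ~2 MeV for a 50 ns flight over the assumed 1 m path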
End of explanation |
10,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook contains a simple image classification convolutional neural network using the MNIST data. <br>
It is highly recommended to read the following blog post while going through the notebook.<br>
Enjoy!
Step1: Load the MNIST data
Step2: Explore the dataset
Training images
Step3: Targets (train and test) distributions
Step4: Our first multiclass classification neural network
This is a model that is mostly inspired from here
Step5: Model definition
Step6: Model architecture
Step7: Visualize the model
Step8: Model training and evaluation
Step9: Save the model and load it if necessary
Save the model weights
Step10: Save the model architecture
Step11: Load the model (architecture and weights)
Step12: Model predictions (predicted classes)
Step13: Make prediction on unseen images | Python Code:
## Keras related imports
from keras.datasets import mnist
from keras.models import Sequential, model_from_json
from keras.layers import Activation, Dropout, Flatten, Dense, Convolution2D, MaxPooling2D
from keras.utils import np_utils, data_utils, visualize_util
from keras.preprocessing.image import load_img, img_to_array
## Other data science libraries
import numpy as np
import pandas as pd
import json
from PIL import Image
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
## Some model and data processing constants
batch_size = 128
nb_classes = 10
nb_epoch = 12
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel sizeb
nb_conv = 3
Explanation: This notebook contains a simple image classification convolutional neural network using the MNIST data. <br>
It is highly recommended to read the following blog post while going through the notebook.<br>
Enjoy!
End of explanation
(X_train, y_train), (X_test, y_test) = mnist.load_data()
y_train.shape
y_test.shape
Explanation: Load the MNIST data
End of explanation
fig, axes = plt.subplots(5, 5, figsize=(8,8), sharex=True, sharey=True)
for id, ax in enumerate(axes.ravel()):
image = Image.fromarray(X_train[id, :, :])
ax.imshow(image, cmap='Greys_r')
Explanation: Explore the dataset
Training images
End of explanation
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 10), sharex=True)
sns.distplot(y_train, kde=False, ax=ax1, color='blue', hist_kws={"width": 0.3, "align": "mid"})
ax1.set_ylabel('Train samples')
sns.distplot(y_test, kde=False, ax=ax2, color='green', hist_kws={"width": 0.3, "align": "mid"})
ax2.set_ylabel('Test samples')
ax2.set_xlabel('Labels')
Explanation: Targets (train and test) distributions
End of explanation
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Our first multiclass classification neural network
This is a model that is mostly inspired from here: https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py
Data processing
End of explanation
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
Explanation: Model definition
End of explanation
model.summary()
Explanation: Model architecture
End of explanation
visualize_util.plot(model,  # plot the model built above (loaded_model is only created in a later step)
to_file='simple_image_classification_architecture.png', show_shapes=True)
load_img('simple_image_classification_architecture.png')
Explanation: Visualize the model
End of explanation
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Explanation: Model training and evaluation
End of explanation
model.save_weights('simple_image_classification_weights.h5')
Explanation: Save the model and load it if necessary
Save the model weights
End of explanation
saved_model = model.to_json()
with open('simple_image_classification_architecture.json', 'w') as outfile:
json.dump(saved_model, outfile)
Explanation: Save the model architecture
End of explanation
### Load architecture
with open('simple_image_classification_architecture.json', 'r') as architecture_file:
model_architecture = json.load(architecture_file)
loaded_model = model_from_json(model_architecture)
### Load weights
loaded_model.load_weights('simple_image_classification_weights.h5')
Explanation: Load the model (architecture and weights)
End of explanation
predictions = model.predict_classes(X_test)
(predictions == y_test).sum() / len(predictions)
Explanation: Model predictions (predicted classes)
End of explanation
### Load, resize, scale and reshape the new digit image (the output is a 4D array)
def load_resize_scale_reshape(img_path):
img = load_img(img_path, grayscale=True, target_size=(img_rows, img_cols))
array_img = img_to_array(img) / 255.0
reshaped_array_img = array_img.reshape(1, *array_img.shape)
return reshaped_array_img
two = load_resize_scale_reshape('data/digits/2.png')
five = load_resize_scale_reshape('data/digits/5.png')
images_list = [two, five]
digits = np.stack(images_list).reshape(len(images_list), *two.shape[1:])  # stack into a single (n_images, 1, 28, 28) batch
loaded_model.predict_classes(digits)
Explanation: Make prediction on unseen images
End of explanation |
10,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JointAnalyzer assumes the individual audio analysis and score analysis is applied earlier.
Step1: First we compute the input score and audio features for joint analysis.
Step2: Next, you can use the single line call "analyze," which does all the available analysis simultaneously. You can then update the audio analysis using the joint analysis results.
Step3: ... or the individual calls are given below. | Python Code:
data_folder = os.path.join('..', 'sample-data')
# score inputs
symbtr_name = 'ussak--sazsemaisi--aksaksemai----neyzen_aziz_dede'
txt_score_filename = os.path.join(data_folder, symbtr_name, symbtr_name + '.txt')
mu2_score_filename = os.path.join(data_folder, symbtr_name, symbtr_name + '.mu2')
# instantiate
audio_mbid = 'f970f1e0-0be9-4914-8302-709a0eac088e'
audio_filename = os.path.join(data_folder, symbtr_name, audio_mbid, audio_mbid + '.mp3')
# instantiate analyzer objects
scoreAnalyzer = SymbTrAnalyzer(verbose=True)
audioAnalyzer = AudioAnalyzer(verbose=True)
jointAnalyzer = JointAnalyzer(verbose=True)
Explanation: JointAnalyzer assumes the individual audio analysis and score analysis is applied earlier.
End of explanation
# score (meta)data analysis
score_features = scoreAnalyzer.analyze(
txt_score_filename, mu2_score_filename)
# predominant melody extraction
audio_pitch = audioAnalyzer.extract_pitch(audio_filename)
# NOTE: do not call pitch filter later as aligned_pitch_filter will be more effective
Explanation: First we compute the input score and audio features for joint analysis.
End of explanation
# joint analysis
joint_features, score_informed_audio_features = jointAnalyzer.analyze(
txt_score_filename, score_features, audio_filename, audio_pitch)
# redo some steps in audio analysis
score_informed_audio_features = audioAnalyzer.analyze(
metadata=False, pitch=False, **score_informed_audio_features)
# get a summary of the analysis
summarized_features = jointAnalyzer.summarize(
audio_features={'pitch': audio_pitch},
score_features=score_features, joint_features=joint_features,
score_informed_audio_features=score_informed_audio_features)
# plot
plt.rcParams['figure.figsize'] = [20, 8]
fig, ax = jointAnalyzer.plot(summarized_features)
ax[0].set_ylim([50, 500])
plt.show()
Explanation: Next, you can use the single line call "analyze," which does all the available analysis simultaneously. You can then update the audio analysis using the joint analysis results.
End of explanation
# score (meta)data analysis
score_features = scoreAnalyzer.analyze(
txt_score_filename, mu2_score_filename, symbtr_name=symbtr_name)
# predominant melody extraction
audio_pitch = audioAnalyzer.extract_pitch(audio_filename)
# joint analysis
# score-informed tonic and tempo estimation
tonic, tempo = jointAnalyzer.extract_tonic_tempo(
txt_score_filename, score_features, audio_filename, audio_pitch)
# section linking and note-level alignment
aligned_sections, notes, section_links, section_candidates = jointAnalyzer.align_audio_score(
txt_score_filename, score_features, audio_filename, audio_pitch, tonic, tempo)
# aligned pitch filter
pitch_filtered, notes_filtered = jointAnalyzer.filter_pitch(audio_pitch, notes)
# aligned note model
note_models, pitch_distribution, aligned_tonic = jointAnalyzer.compute_note_models(
pitch_filtered, notes_filtered, tonic['symbol'])
# recompute the audio features using the filtered pitch and tonic
# pitch histograms
pitch_class_distribution = copy.deepcopy(pitch_distribution)
pitch_class_distribution.to_pcd()
# get the melodic progression model
melodic_progression = audioAnalyzer.compute_melodic_progression(pitch_filtered)
# transposition (ahenk) identification
transposition = audioAnalyzer.identify_transposition(aligned_tonic, aligned_tonic['symbol'])
Explanation: ... or the individual calls are given below.
End of explanation |
10,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collective intelligence
Step1: Build models
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
Step2: 4) Majority vote on classifications
We could manually code up a simple implementation of majority voting. It might look something like this
Step3: And we could assess the performance of the majority voted predictions like so
Step4: Luckily, we do not have to do all of this manually, but can use scikit's VotingClassifier class
Step5: We can also do a weighted majority vote, where the different base learners are associated with a weight (often reflecting the accuracies of the models, i.e. more accurate models should have a higher weight). These weight the occurence of predicted class labels, which allows certain algorithms to have more of a say in the majority voting.
Step6: You may have noticed the voting='hard' argument we passed to the VotingClassifier. Setting voting='soft' would predict the class labels based on how certain each algorithm in the ensemble was about their individual predictions. This involves calculating the predicted probabilities p for the classifier. Note that scikit only recommends this approach if the classifiers are already tuned well, which should be the case here. | Python Code:
import wget
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/wine/winequality-red.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=";")
# Using a lambda function to bin quality scores
dataset['quality_is_high'] = dataset.quality.apply(lambda x: 1 if x >= 6 else 0)
# Convert the dataframe to a numpy array and split the
# data into an input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-2].astype(float)
y = npArray[:,-1]
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
Explanation: Collective intelligence: combining different machine learning algorithms into an ensemble model
Model ensembling is a class of techniques for aggregating together multiple different predictive algorithms into a meta-algorithm, which tends to increase accuracy and reduce overfitting. Ensembling approaches often work surprisingly well. Many winners of competitive data science competitions use model ensembling in one form or another. In previous tutorials, we discussed how tuning models allows you to get the best performance from individual machine learning algorithms. Here, we will take you through the steps of building your own ensemble for a classification problem, consisting of an individually optimized:
Random forest (which is already itself an ensemble model)
Support vector machine and
Regularized logistic regression classifier
These different models have quite different structures, which suggests they might capture different aspects of the dataset and could work well in an ensemble. We’ll continue working on the popular wine dataset, which captures chemical properties of wines and associated wine quality rankings. The goal is to predict wine quality from the chemical properties. In this post, you'll use the following techniques to build model ensembles: simple majority voting, weighted majority voting, and model stacking/blending.
The motivation behind ensembling
There are also fundamental reasons for why ensembling together different algorithms often improves accuracy, which is extremely well explained in this Kaggle ensembling guide. Briefly, majority voting between models can correct errors in the predictions of individual models.
The general idea behind ensembling is this: different classes of algorithms (or differently parameterized versions of the same type of algorithm) might be good at picking up on different signals in the dataset. Combining them means that you can model the data better, leading to better predictions. Furthermore, different algorithms might be overfitting to the data in various ways, but by combining them, you can effectively average away some of this overfitting. Furthermore, if you're trying to improve your model to chase accuracy points, ensembling is a more computationally effective way to do this than trying to tune a single model by searching for more and more optimal hyperparameters.
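To make the error-correction claim concrete, here is a rough back-of-the-envelope illustration (an idealised example added here, assuming the three classifiers make independent mistakes): if each model is correct 70% of the time, a simple majority vote is right whenever at least two of the three are right.
p = 0.7
majority_accuracy = p**3 + 3 * p**2 * (1 - p)   # all three correct, or exactly two correct
print(majority_accuracy)                        # ~0.784, better than any single 0.7-accurate model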
It is best to ensemble together models which are less correlated, because then they can capture different aspects of the data (see an excellent explanation here).
See an excellent explanation of ensembling here.
Examples of ensemble learning
You have probably already encountered several uses of model ensembling. Random forests are a type of ensemble algorithm that aggregates together many individual classification tree base learners. They are a good system for intuitively understanding what ensembling is. [Explanation here].
So, a random forest is already an ensemble. But, a random forest will be just one model in the ensemble we build here. 'Ensembling' is a broad term, and is a recurrent concept throughout machine learning, but the general idea is that ensembling can correct the individual parts that may go wrong, and allow different models to capture different signals in the dataset, thereby improving overall performance.
If you’re interested in deep learning, one common technique for improving classification accuracy is training different neural networks and getting them to vote on classifications for test instances. An ensemble-like technique for training individual neural networks is called dropout, and involves training different subnetworks during the same training phase. Combinging different models is a recurring trend in machine learning, different incarnations. If you’re familiar with bagging or boosting algorithms, these are very explicit examples of ensembling.
In this post
We will be working on ensembling different algorithms, using both majority voting and stacking, in order to get improved classification accuracy on the wine dataset. We won't do fancy visualizations of the dataset, but check out a previous tutorial or our bootcamp to learn Plotly and matplotlib if you're interested. Here, we focus on combining different algorithms to boost performance.
Let's get started!
1. Loading up the data
Load dataset. We often want our input data to be a matrix (X) and the vector of instance labels as a separate vector (y).
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.linear_model import LogisticRegression
# Build rf model
best_n_estimators, best_max_features = 73, 5
rf = RandomForestClassifier(n_estimators=best_n_estimators, max_features=best_max_features)
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
# Build SVM model
best_C_svm, best_gamma = 1.07, 0.01
rbf_svm = svm.SVC(kernel='rbf', C=best_C_svm, gamma=best_gamma)
rbf_svm.fit(XTrain, yTrain)
svm_predictions = rbf_svm.predict(XTest)
# Build LR model
best_penalty, best_C_lr = "l2", 0.52
lr = LogisticRegression(penalty=best_penalty, C=best_C_lr)
lr.fit(XTrain, yTrain)
lr_predictions = lr.predict(XTest)
# Train SVM and output predictions
# rbfSVM = svm.SVC(kernel='rbf', C=best_C, gamma=best_gamma)
# rbfSVM.fit(XTrain, yTrain)
# svm_predictions = rbfSVM.predict(XTest)
# Evaluate the tuned SVM on the test set
from sklearn.metrics import classification_report, accuracy_score

print(classification_report(yTest, svm_predictions))
print("Overall Accuracy:", round(accuracy_score(yTest, svm_predictions), 4))
Explanation: Build models
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
End of explanation
import collections
# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([rf_predictions, svm_predictions, lr_predictions])).T
predictions.columns = ['RF', 'SVM', 'LR']
# initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)
# majority vote and output final predictions
for test_point in range(predictions.shape[0]):
row = predictions.iloc[test_point,:]
counts = collections.Counter(row)
majority_vote = counts.most_common(1)[0][0]
# output votes
ensembled_predictions[test_point] = majority_vote.astype(int)
#print "The majority vote for test point", test_point, "is: ", majority_vote
print(ensembled_predictions)
Explanation: 4) Majority vote on classifications
We could manually code up a simple implementation of majority voting. It might look something like this:
End of explanation
# Get final accuracy of ensembled model
from sklearn.metrics import classification_report, accuracy_score
for individual_predictions in [rf_predictions, svm_predictions, lr_predictions]:
# classification_report(yTest.astype(int), individual_predictions.astype(int))
print "Accuracy:", round(accuracy_score(yTest.astype(int), individual_predictions.astype(int)),2)
print classification_report(yTest.astype(int), ensembled_predictions.astype(int))
print "Ensemble Accuracy:", round(accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)),2)
Explanation: And we could assess the performance of the majority voted predictions like so:
End of explanation
from sklearn.ensemble import VotingClassifier

# Build and fit majority vote classifier
ensemble_1 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm), ('lr', lr)], voting='hard')
ensemble_1.fit(XTrain, yTrain)

simple_ensemble_predictions = ensemble_1.predict(XTest)
print(classification_report(yTest, simple_ensemble_predictions))
print("Ensemble Overall Accuracy:", round(accuracy_score(yTest, simple_ensemble_predictions), 2))
Explanation: Luckily, we do not have to do all of this manually, but can use scikit's VotingClassifier class:
End of explanation
# Getting weights: with equal weights this reduces to the plain majority vote above
ensemble_2 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm), ('lr', lr)], weights=[1, 1, 1], voting='hard')
ensemble_2.fit(XTrain, yTrain)
weighted_ensemble_predictions = ensemble_2.predict(XTest)

print(classification_report(yTest, weighted_ensemble_predictions))
print("Ensemble_2 Overall Accuracy:", round(accuracy_score(yTest, weighted_ensemble_predictions), 2))
Explanation: We can also do a weighted majority vote, where the different base learners are associated with a weight (often reflecting the accuracies of the models, i.e. more accurate models should have a higher weight). These weights scale the occurrence of predicted class labels, which allows certain algorithms to have more of a say in the majority voting.
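One way to choose non-uniform weights (a sketch using the models built above; the 5-fold split is an arbitrary choice) is to scale each model's weight by its cross-validated training accuracy:
from sklearn.cross_validation import cross_val_score

cv_scores = [cross_val_score(clf, XTrain, yTrain, cv=5).mean() for clf in [rf, rbf_svm, lr]]
ensemble_cv_weighted = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm), ('lr', lr)],
                                        weights=cv_scores, voting='hard')
ensemble_cv_weighted.fit(XTrain, yTrain)
print("CV-weighted ensemble accuracy:", round(accuracy_score(yTest, ensemble_cv_weighted.predict(XTest)), 2))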
End of explanation
# Soft voting averages predicted class probabilities, so the SVM needs probability=True
rbf_svm_prob = svm.SVC(kernel='rbf', C=best_C_svm, gamma=best_gamma, probability=True)
ensemble_3 = VotingClassifier(estimators=[('rf', rf), ('svm', rbf_svm_prob), ('lr', lr)], voting='soft')
ensemble_3.fit(XTrain, yTrain)
soft_ensemble_predictions = ensemble_3.predict(XTest)

print(classification_report(yTest, soft_ensemble_predictions))
print("Ensemble_3 Overall Accuracy:", round(accuracy_score(yTest, soft_ensemble_predictions), 2))
# Model stacking would be a natural next step beyond voting (not covered here)
Explanation: You may have noticed the voting='hard' argument we passed to the VotingClassifier. Setting voting='soft' would predict the class labels based on how certain each algorithm in the ensemble was about their individual predictions. This involves calculating the predicted probabilities p for the classifier. Note that scikit only recommends this approach if the classifiers are already tuned well, which should be the case here.
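Under the hood, soft voting averages the predicted class probabilities of the base learners and picks the class with the highest average probability. A rough manual sketch of that idea is shown below (for illustration only; it fits its own probability-enabled SVM instead of reusing the one inside the VotingClassifier):
svm_prob = svm.SVC(kernel='rbf', C=best_C_svm, gamma=best_gamma, probability=True).fit(XTrain, yTrain)
avg_proba = (rf.predict_proba(XTest) + svm_prob.predict_proba(XTest) + lr.predict_proba(XTest)) / 3.0
manual_soft_predictions = rf.classes_[np.argmax(avg_proba, axis=1)]
print("Manual soft-vote accuracy:", round(accuracy_score(yTest, manual_soft_predictions), 2))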
End of explanation |
10,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing Data using Python and SQLite3
SQLite basics
Create a connection
conn = sqlite3.connect('database_file')
cur = conn.cursor()
Execute SQL commands
execute
Step1: Setup/create a table
Step2: Read data using pandas and store them in sqlite
Step3: Summarizing Queries
Total number of tweets
sql
SELECT COUNT(*)
FROM Tweets;
Total number of neutral/positive/negative tweets
sql
SELECT sentiment,COUNT(*)
FROM Tweets
GROUP BY sentiment;
Sum of sentiment values in each state for each party
sql
SELECT state,party,SUM(sentiment)
FROM Tweets
GROUP BY state,party; | Python Code:
import sqlite3
conn = sqlite3.connect('election_tweets.sqlite')
cur = conn.cursor()
Explanation: Analyzing Data using Python and SQLite3
SQLite basics
Create a connection
conn = sqlite3.connect('database_file')
cur = conn.cursor()
Execute SQL commands
execute: cur.execute('SQL COMMANDS')
commit to save changes made to the database: conn.commit()
Retrieve results
execute the retrieval query cur.execute('SELECT QUERY')
Fetch the results
Fetch all the results at once: cur.fetchall()
Fetch only one result: cur.fetchone()
Close the connection
conn.close()
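Putting the outline above together, a minimal end-to-end sketch (using a throwaway in-memory database, so nothing is written to disk) looks like this:
import sqlite3
conn = sqlite3.connect(':memory:')   # in-memory database, for illustration only
cur = conn.cursor()
cur.execute('CREATE TABLE demo (name TEXT, score INT2)')
cur.execute('INSERT INTO demo (name, score) VALUES (?, ?)', ('a', 1))
conn.commit()                        # save the change
cur.execute('SELECT * FROM demo')
print(cur.fetchall())                # [('a', 1)]
conn.close()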
Data types
|Data Type| Affinity|
|:--|:--:|
|INT INTEGER TINYINT SMALLINT <br> MEDIUMINT BIGINT UNSIGNED<br> BIG INT INT2 INT8| INTEGER|
|CHARACTER(20) VARCHAR(255) <br>VARYING CHARACTER(255) <br>NCHAR(55) NATIVE CHARACTER(70) <br> NVARCHAR(100) TEXT CLOB| TEXT|
|BLOB no datatype specified |NONE|
|REAL DOUBLE <br>DOUBLE PRECISION FLOAT |REAL|
Example: create a table to store data from a textfile
First, create a connection
End of explanation
cur.execute("DROP TABLE IF EXISTS Tweets")
cur.execute("CREATE TABLE Tweets(state VARCHAR(10), party VARCHAR(20), sentiment INT2)")
conn.commit()
Explanation: Setup/create a table
End of explanation
import pandas as pd
reader = pd.read_table('http://vahidmirjalili.com/election-2016/opFromNLP-2.txt',
sep='|', header=None, chunksize=100)
sentiment={'Neutral':0,
'Positive':1,
'Negative':-1}
for chunk in reader:
for i in range(chunk.shape[0]):
line = chunk.iloc[[i]].values[0]
cur.execute("INSERT INTO Tweets (state, party, sentiment) \
VALUES (?,?,?)",
(line[0], line[1], sentiment[line[2]]))
conn.commit()
Explanation: Read data using pandas and store them in sqlite
End of explanation
cur.execute('SELECT count(*) FROM Tweets')
num_tweets = cur.fetchall()
print('Total number of tweets: %d'%(num_tweets[0]))
cur.execute('SELECT sentiment,COUNT(*) FROM Tweets GROUP BY sentiment')
results = cur.fetchall()
for res in results:
print("Count of %d tweets: %d"%(res[0], res[1]))
import seaborn as sns
import matplotlib
import numpy as np
import pandas as pd
results = pd.DataFrame(results)
results.columns = ['Mood', 'Freq']
results['Freq'] = results['Freq']/np.sum(results['Freq'])
%matplotlib inline
ax = sns.barplot(x="Mood", y="Freq", data=results)
cur.execute('SELECT state,SUM(sentiment),count(*) \
FROM Tweets WHERE party="Democrat" GROUP BY state')
dem_results = cur.fetchall()
cur.execute('SELECT state,SUM(sentiment),count(*) \
FROM Tweets WHERE party="Republican" GROUP BY state')
rep_results = cur.fetchall()
for dem_res,rep_res in zip(dem_results,rep_results):
if(len(dem_res[0]) == 2):
print("%s\tDemocrat: %6.2f\tRepublican: %6.2f"%(
dem_res[0], dem_res[1]/dem_res[2], rep_res[1]/rep_res[2]))
dem_df = pd.DataFrame(dem_results)
rep_df = pd.DataFrame(rep_results)
df = pd.DataFrame({'state':dem_df[0], 'dem':dem_df[2], 'rep':rep_df[2], 'tot':dem_df[2]+rep_df[2]})
df.to_csv('/tmp/res', sep=' ')
ax = sns.barplot(x="state", y="tot", data=df)
for dem_res,rep_res in zip(dem_results,rep_results):
if(len(dem_res[0]) == 2):
if (dem_res[1]/dem_res[2] > rep_res[1]/rep_res[2]):
print("%s\tDemocrat \t%.3f"%(
dem_res[0], dem_res[1]/dem_res[2] -rep_res[1]/rep_res[2]))
else:
print("%s\tRepublican\t%.3f"%(
rep_res[0], rep_res[1]/rep_res[2] - dem_res[1]/dem_res[2]))
Explanation: Summarizing Queries
Total number of tweets
sql
SELECT COUNT(*)
FROM Tweets;
Total number of neutral/positive/negative tweets
sql
SELECT sentiment,COUNT(*)
FROM Tweets
GROUP BY sentiment;
Sum of sentiment values in each state for each party
sql
SELECT state,party,SUM(sentiment)
FROM Tweets
GROUP BY state,party;
End of explanation |
10,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook compares the annotated results of the "blocked" vs. "random" dataset of wikipedia talk pages. The "blocked" dataset consists of the few last comments before a user is blocked for personal harassment. The "random" dataset randomly samples all of the wikipedia talk page revisions. Both of these datasets are cleaned and filtered to remove common administrator messages. These datasets are annotated via crowdflower to measure friendliness, aggressiveness and whether the comment constitutes a personal attack. Below we plot a histogram of the results, pull out a few comments to examine, and compute inter-annotator agreement.
On Crowdflower, each revision is rated 7 times. The raters are given three questions
Step1: Plot histogram of average ratings by comment
For each revision, we take the average of all the ratings by level of harassment. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a significantly higher proportion of attacking comments (approximately 20%).
Step2: For each revision, we take the average of all the ratings by level of friendliness/aggressiveness. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a more even distribution of aggressiveness scores.
Step3: Selected harassing and aggressive comments by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower. This slicing is done on the aggregate of both the blocked and random dataset.
Step4: Most harassing comments in aggregated dataset
Step5: Most aggressive comments in aggregated dataset
Step6: Median aggressive comments in aggregated dataset
Step7: Least aggressive comments in aggregated dataset
Step8: Selected revisions by multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 ('Is this an example of harassment or a personal attack?') and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
Step9: Inter-Annotator Agreement
Below, we compute the Krippendorf's Alpha, which is a measure of the inter-annotator agreement of our Crowdflower responses. We achieve an Alpha value of 0.489 on our dataset, which is relatively low. We have since decided to reframe our questions and have achieved a higher Alpha score (see Experiment v. 2). | Python Code:
%matplotlib inline
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', 1000)
# Download data from google drive (Respect Eng / Wiki Collab): wikipdia data/v2_annotated
blocked_dat = pd.read_csv('../data/annotated_1k_no_admin_blocked_user_post_sample.csv')
random_dat = pd.read_csv('../data/annotated_1k_no_admin_post_sample.csv')
# Removing irrelevant differing columns
del blocked_dat['na_gold']
del random_dat['unnamed_0']
blocked_dat['dat_type'] = 'blocked'
random_dat['dat_type'] = 'random'
# Do both arrays now have the same columns?
(random_dat.columns == blocked_dat.columns).all()
dat = pd.concat([blocked_dat, random_dat])
# Remove test questions
dat = dat[dat['_golden'] == False]
# Replace missing data with 'False'
dat = dat.replace(np.nan, False, regex=True)
# Reshape the data for later analysis
def create_column_of_counts_from_nums(df, col):
return df.apply(lambda x: int(col) == x)
aggressive_columns = ['1', '2', '3', '4', '5', '6', '7']
for col in aggressive_columns:
dat[col] = create_column_of_counts_from_nums(dat['how_aggressive_or_friendly_is_the_tone_of_this_comment'], col)
blocked_columns = ['0','1']
for col in blocked_columns:
dat['blocked_'+col] = create_column_of_counts_from_nums(dat['is_harassment_or_attack'], col)
blocked_columns = ['blocked_0','blocked_1']
# Group the data
agg_dict = dict.fromkeys(aggressive_columns, 'sum')
agg_dict.update(dict.fromkeys(blocked_columns, 'sum'))
agg_dict.update({'clean_diff': 'first', 'is_harassment_or_attack': 'mean',
'how_aggressive_or_friendly_is_the_tone_of_this_comment': 'mean', 'na': 'mean'})
grouped_dat = dat.groupby(['dat_type','rev_id'], as_index=False).agg(agg_dict)
# Get rid of data which the majority thinks is not in English or not readable
grouped_dat = grouped_dat[grouped_dat['na'] < 0.5]
Explanation: Introduction
This notebook compares the annotated results of the "blocked" vs. "random" dataset of wikipedia talk pages. The "blocked" dataset consists of the few last comments before a user is blocked for personal harassment. The "random" dataset randomly samples all of the wikipedia talk page revisions. Both of these datasets are cleaned and filtered to remove common administrator messages. These datasets are annotated via crowdflower to measure friendliness, aggressiveness and whether the comment constitutes a personal attack. Below we plot a histogram of the results, pull out a few comments to examine, and compute inter-annotator agreement.
On Crowdflower, each revision is rated 7 times. The raters are given three questions:
Is this comment not English or not human readable?
Column 'na'
How aggressive or friendly is the tone of this comment?
Column 'how_aggressive_or_friendly_is_the_tone_of_this_comment'
Ranges from 1 (Friendly) to 7 (Aggressive)
Is this an example of harassment or a personal attack?
Column 'is_harassment_or_attack'
Loading packages and data
End of explanation
def hist_comments(df, bins, dat_type, plot_by, title):
sliced_array = df[df['dat_type'] == dat_type][[plot_by]]
weights = np.ones_like(sliced_array)/len(sliced_array)
sliced_array.plot.hist(bins = bins, legend = False, title = title, weights=weights)
plt.ylabel('Proportion')
plt.xlabel('Average Score')
plt.figure()
bins = np.linspace(0,1,11)
hist_comments(grouped_dat, bins, 'blocked', 'is_harassment_or_attack', 'Average Harassment Rating for Blocked Data')
hist_comments(grouped_dat, bins, 'random', 'is_harassment_or_attack', 'Average Harassment Rating for Random Data')
Explanation: Plot histogram of average ratings by comment
For each revision, we take the average of all the ratings by level of harassment. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a significantly higher proportion of attacking comments (approximately 20%).
End of explanation
bins = np.linspace(1,7,61)
plt.figure()
hist_comments(grouped_dat, bins, 'blocked', 'how_aggressive_or_friendly_is_the_tone_of_this_comment',
'Average Aggressiveness Rating for Blocked Data')
hist_comments(grouped_dat, bins, 'random', 'how_aggressive_or_friendly_is_the_tone_of_this_comment',
'Average Aggressiveness Rating for Random Data')
Explanation: For each revision, we take the average of all the ratings by level of friendliness/aggressiveness. The histogram of these averages for both the blocked and random dataset are displayed below. We notice that the blocked dataset has a more even distribution of aggressiveness scores.
End of explanation
def sorted_comments(df, sort_by, is_ascending, quartile, num, dat_type = None):
if dat_type:
sub_df = df[df['dat_type'] == dat_type]
else:
sub_df = df
n = sub_df.shape[0]
start_index = int(quartile*n)
if dat_type:
return sub_df[['clean_diff', 'is_harassment_or_attack',
'how_aggressive_or_friendly_is_the_tone_of_this_comment']].sort_values(
by=sort_by, ascending = is_ascending)[start_index:start_index + num]
return df[['clean_diff', 'dat_type', 'is_harassment_or_attack',
'how_aggressive_or_friendly_is_the_tone_of_this_comment']].sort_values(
by=sort_by, ascending = is_ascending)[start_index:start_index + num]
Explanation: Selected harassing and aggressive comments by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower. This slicing is done on the aggregate of both the blocked and random dataset.
End of explanation
sorted_comments(grouped_dat, 'is_harassment_or_attack', False, 0, 5)
Explanation: Most harassing comments in aggregated dataset
End of explanation
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0, 5)
Explanation: Most aggressive comments in aggregated dataset
End of explanation
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0.5, 5)
Explanation: Median aggressive comments in aggregated dataset
End of explanation
sorted_comments(grouped_dat, 'how_aggressive_or_friendly_is_the_tone_of_this_comment', True, 0, 5)
Explanation: Least aggressive comments in aggregated dataset
End of explanation
# Least aggressive comments that are considered harassment or a personal attack
sorted_comments(grouped_dat[grouped_dat['is_harassment_or_attack'] > 0.5], 'how_aggressive_or_friendly_is_the_tone_of_this_comment', True, 0, 5)
# Most aggressive comments that are NOT considered harassment or a personal attack
sorted_comments(grouped_dat[grouped_dat['is_harassment_or_attack'] < 0.5], 'how_aggressive_or_friendly_is_the_tone_of_this_comment', False, 0, 5)
Explanation: Selected revisions by multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 ('Is this an example of harassment or a personal attack?') and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
End of explanation
def add_row_to_coincidence(o, row, columns):
m_u = row.sum(1)
for i in columns:
for j in columns:
if i == j:
o[i][j] = o[i][j] + row[i]*(row[i]-1)/(m_u-1)
else:
o[i][j] = o[i][j] + row[i]*row[j]/(m_u-1)
return o
def make_coincidence_matrix(df, columns):
df = df[columns]
n = df.shape[0]
num_cols = len(columns)
o = pd.DataFrame(np.zeros((num_cols,num_cols)), index = columns, columns=columns)
for i in xrange(n):
o = add_row_to_coincidence(o, df[i:i+1], columns)
return o
def binary_distance(i,j):
return i!=j
def interval_distance(i,j):
return (int(i)-int(j))**2
def e(n, i, j):
    # Expected coincidences for Krippendorff's alpha use (n - 1) in the denominator,
    # where n is the total number of pairable values.
    if i == j:
        return n[i]*(n[i]-1)/(sum(n)-1)
    else:
        return n[i]*n[j]/(sum(n)-1)
def D_e(o, columns, distance):
n = o.sum(1)
output = 0
for i in columns:
for j in columns:
output = output + e(n,i,j)*distance(i,j)
return output
def D_o(o, columns, distance):
output = 0
for i in columns:
for j in columns:
output = output + o[i][j]*distance(i,j)
return output
def Krippendorf_alpha(df, columns, distance = binary_distance, o = None):
if o is None:
o = make_coincidence_matrix(df, columns)
d_o = D_o(o, columns, distance)
d_e = D_e(o, columns, distance)
return (1 - d_o/d_e)
df = grouped_dat[grouped_dat['dat_type'] == 'blocked']
Krippendorf_alpha(df, aggressive_columns, distance = interval_distance)
Krippendorf_alpha(df, blocked_columns)
df = grouped_dat[grouped_dat['dat_type'] == 'random']
Krippendorf_alpha(df, aggressive_columns, distance = interval_distance)
Krippendorf_alpha(df, blocked_columns)
Explanation: Inter-Annotator Agreement
Below, we compute Krippendorff's Alpha, which is a measure of the inter-annotator agreement of our Crowdflower responses. We achieve an Alpha value of 0.489 on our dataset, which is relatively low. We have since decided to reframe our questions and have achieved a higher Alpha score (see Experiment v. 2).
End of explanation |
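As a quick sanity check of the alpha helpers above (my addition, reusing only the functions and imports already defined in this notebook): when every rater picks the same category for every unit, the observed disagreement is zero and the returned alpha should be 1.
toy = pd.DataFrame({'a': [4, 4, 0], 'b': [0, 0, 4]})  # 3 units, 4 raters each, perfect agreement
print(Krippendorf_alpha(toy, ['a', 'b']))             # expected value: 1.0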
10,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporatation for emperical data
Using parameters inferred from emperical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show similar distribution to the emperical data
Simulating higher levels of richness
Init
Step1: Nestly
assuming fragments already simulated
Step2: Notes
richness of 8000 & 12000 failed due to memory errors
BD min/max
what is the min/max BD that we care about?
Step3: Loading simulated OTU tables
Step4: Plotting number of taxa in each fraction
Empirical data (fullCyc)
Step5: w/ simulated data
Step6: Total sequence count
Step7: Plotting Shannon diversity for each
Step8: min/max abundances of taxa
Step9: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range of the empirical data?
Simulated
Step10: Empirical | Python Code:
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
Explanation: Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporatation for emperical data
Using parameters inferred from emperical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show similar distribution to the emperical data
Simulating higher levels of richness
Init
End of explanation
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'Day1_xRich')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'
targetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_taxa.txt'
physeqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq_bulkCore = 'bulk-core'
physeq_SIP_core = 'SIP-core_unk'
prefrac_comm_abundance = ['1e9']
richness = [2503, 4000, 8000, 120000] # 2503= chao1 estimate for bulk Day 1
seq_per_fraction = ['lognormal', 9.432, 0.5, 10000, 30000] # dist, mean, scale, min, max
bulk_days = [1]
nprocs = 24
# building tree structure
nest = nestly.Nest()
## varying params
nest.add('richness', richness)
## set params
nest.add('abs', prefrac_comm_abundance, create_dir=False)
nest.add('bulk_day', bulk_days, create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [nprocs], create_dir=False)
nest.add('subsample_dist', [seq_per_fraction[0]], create_dir=False)
nest.add('subsample_mean', [seq_per_fraction[1]], create_dir=False)
nest.add('subsample_scale', [seq_per_fraction[2]], create_dir=False)
nest.add('subsample_min', [seq_per_fraction[3]], create_dir=False)
nest.add('subsample_max', [seq_per_fraction[4]], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('targetFile', [targetFile], create_dir=False)
nest.add('physeqDir', [physeqDir], create_dir=False)
nest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
#-- making DNA pool similar to gradient of interest
echo '# Creating comm file from phyloseq'
phyloseq2comm.r {physeqDir}{physeq_bulkCore} -s 12C-Con -d {bulk_day} > {physeq_bulkCore}_comm.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm.txt
echo '## Adding target taxa to comm file'
comm_add_target.r {physeq_bulkCore}_comm.txt {targetFile} > {physeq_bulkCore}_comm_target.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm_target.txt
echo '# Adding extra richness to community file'
printf "1\t{richness}\n" > richness_needed.txt
comm_add_richness.r -s {physeq_bulkCore}_comm_target.txt richness_needed.txt > {physeq_bulkCore}_comm_all.txt
### renaming comm file for downstream pipeline
cat {physeq_bulkCore}_comm_all.txt > {physeq_bulkCore}_comm_target.txt
rm -f {physeq_bulkCore}_comm_all.txt
echo '## parsing out genome fragments to make simulated DNA pool resembling the gradient of interest'
## all OTUs without an associated reference genome will be assigned a random reference (of the reference genome pool)
### this is done through --NA-random
SIPSim fragment_KDE_parse {fragFile} {physeq_bulkCore}_comm_target.txt \
--rename taxon_name --NA-random > fragsParsed.pkl
echo '#-- SIPSim pipeline --#'
echo '# converting fragments to KDE'
SIPSim fragment_KDE \
fragsParsed.pkl \
> fragsParsed_KDE.pkl
echo '# adding diffusion'
SIPSim diffusion \
fragsParsed_KDE.pkl \
--np {np} \
> fragsParsed_KDE_dif.pkl
echo '# adding DBL contamination'
SIPSim DBL \
fragsParsed_KDE_dif.pkl \
--np {np} \
> fragsParsed_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
fragsParsed_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm {physeq_bulkCore}_comm_target.txt \
--np {np} \
> fragsParsed_KDE_dif_DBL_inc.pkl
#echo '# calculating BD shift from isotope incorporation'
#SIPSim BD_shift \
# fragsParsed_KDE_dif_DBL.pkl \
# fragsParsed_KDE_dif_DBL_inc.pkl \
# --np {np} \
# > fragsParsed_KDE_dif_DBL_inc_BD-shift.txt
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
{physeq_bulkCore}_comm_target.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
fragsParsed_KDE_dif_DBL_inc.pkl \
{physeq_bulkCore}_comm_target.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d Day1_xRich --log-file log.txt -j 1
Explanation: Nestly
assuming fragments already simulated
End of explanation
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
Explanation: Notes
richness of 8000 & 12000 failed due to memory errors
BD min/max
what is the min/max BD that we care about?
End of explanation
%%R -i OTU_files -i buildDir
# loading files
OTU_files = c('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_xRich/2503/OTU_abs1e9_PCR_sub.txt',
'/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_xRich/4000/OTU_abs1e9_PCR_sub.txt')
df.SIM = list()
for (x in OTU_files){
richness = gsub(paste0(buildDir, '/'), '', x)
richness = gsub('/OTU_abs1e9_PCR_sub.txt', '', richness)
df.SIM[[richness]] = read.delim(x, sep='\t')
}
df.SIM = do.call('rbind', df.SIM)
df.SIM$richness = gsub('\\.[0-9]+$', '', rownames(df.SIM))
rownames(df.SIM) = 1:nrow(df.SIM)
df.SIM %>% head
%%R
## edit table
df.SIM.nt = df.SIM %>%
filter(count > 0) %>%
group_by(richness, library, BD_mid) %>%
summarize(n_taxa = n()) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.nt %>% head
Explanation: Loading simulated OTU tables
End of explanation
%%R
# simulated OTU table file
OTU.table.dir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/Day1_default_run/1e9/'
OTU.table.file = 'OTU_abs1e9_PCR_sub.txt'
#OTU.table.file = 'OTU_abs1e9_sub.txt'
#OTU.table.file = 'OTU_abs1e9.txt'
%%R -i physeqDir -i physeq_SIP_core -i bulk_days
# bulk core samples
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP.core = readRDS(F)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' &
physeq.SIP.core.m$Day %in% bulk_days,
physeq.SIP.core) %>%
filter_taxa(function(x) sum(x) > 0, TRUE)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core
%%R -w 800 -h 300
## dataframe
df.EMP = physeq.SIP.core %>% otu_table %>%
as.matrix %>% as.data.frame
df.EMP$OTU = rownames(df.EMP)
df.EMP = df.EMP %>%
gather(sample, abundance, 1:(ncol(df.EMP)-1))
df.EMP = inner_join(df.EMP, physeq.SIP.core.m, c('sample' = 'X.Sample'))
df.EMP.nt = df.EMP %>%
group_by(sample) %>%
mutate(n_taxa = sum(abundance > 0)) %>%
ungroup() %>%
distinct(sample) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
## plotting
p = ggplot(df.EMP.nt, aes(Buoyant_density, n_taxa)) +
geom_point(color='blue') +
geom_line(color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
Explanation: Plotting number of taxa in each fraction
Empirical data (fullCyc)
End of explanation
%%R -w 800 -h 300
# plotting
p = ggplot(df.SIM.nt, aes(BD_mid, n_taxa)) +
geom_point(aes(color=richness, group=richness)) +
geom_line(aes(color=richness, group=richness), alpha=0.5) +
#geom_smooth(color='red') +
geom_point(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16)#,
# legend.position = 'none'
)
p
Explanation: w/ simulated data
End of explanation
%%R -w 800 -h 300
# simulated
df.SIM.s = df.SIM %>%
group_by(richness, library, BD_mid) %>%
summarize(total_abund = sum(count)) %>%
rename('Day' = library, 'Buoyant_density' = BD_mid) %>%
ungroup() %>%
mutate(dataset='simulated')
# emperical
df.EMP.s = df.EMP %>%
group_by(Day, Buoyant_density) %>%
summarize(total_abund = sum(abundance)) %>%
ungroup() %>%
mutate(dataset='emperical', richness = NA)
# join
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.SIM.s = df.EMP.s = ""
# plot
ggplot(df.j, aes(Buoyant_density, total_abund, color=dataset, group=richness)) +
geom_point() +
geom_line(alpha=0.3) +
geom_line(data=df.j %>% filter(dataset=='emperical')) +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Total sequences per sample') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Total sequence count
End of explanation
%%R
shannon_index_long = function(df, abundance_col, ...){
# calculating shannon diversity index from a 'long' formated table
## community_col = name of column defining communities
## abundance_col = name of column defining taxon abundances
df = df %>% as.data.frame
cmd = paste0(abundance_col, '/sum(', abundance_col, ')')
df.s = df %>%
group_by_(...) %>%
mutate_(REL_abundance = cmd) %>%
mutate(pi__ln_pi = REL_abundance * log(REL_abundance),
shannon = -sum(pi__ln_pi, na.rm=TRUE)) %>%
ungroup() %>%
dplyr::select(-REL_abundance, -pi__ln_pi) %>%
distinct_(...)
return(df.s)
}
%%R
# calculating shannon
df.SIM.shan = shannon_index_long(df.SIM, 'count', 'richness', 'library', 'fraction') %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.EMP.shan = shannon_index_long(df.EMP, 'abundance', 'sample') %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
%%R -w 800 -h 300
# plotting
p = ggplot(df.SIM.shan, aes(BD_mid, shannon, group=richness)) +
geom_point(aes(color=richness)) +
geom_line(aes(color=richness), alpha=0.3) +
geom_point(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
scale_y_continuous(limits=c(4, 7.5)) +
labs(x='Buoyant density', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16)#,
#legend.position = 'none'
)
p
Explanation: Plotting Shannon diversity for each
End of explanation
%%R -h 300 -w 800
# simulated
df.SIM.s = df.SIM %>%
filter(rel_abund > 0) %>%
group_by(richness, BD_mid) %>%
summarize(min_abund = min(rel_abund),
max_abund = max(rel_abund)) %>%
ungroup() %>%
rename('Buoyant_density' = BD_mid) %>%
mutate(dataset = 'simulated')
# emperical
df.EMP.s = df.EMP %>%
group_by(Buoyant_density) %>%
mutate(rel_abund = abundance / sum(abundance)) %>%
filter(rel_abund > 0) %>%
summarize(min_abund = min(rel_abund),
max_abund = max(rel_abund)) %>%
ungroup() %>%
mutate(dataset = 'emperical',
richness = NA)
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
# plotting
ggplot(df.j, aes(Buoyant_density, max_abund, color=dataset, group=richness)) +
geom_point(aes(color=richness)) +
geom_line(aes(color=richness), alpha=0.3) +
geom_line(data=df.j %>% filter(dataset=='emperical')) +
scale_color_manual(values=c('green', 'red', 'blue')) +
labs(x='Buoyant density', y='Maximum relative abundance') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: min/max abundances of taxa
End of explanation
%%R
comm_files = c('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_xRich/2503/bulk-core_comm_target.txt',
'/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_xRich/4000/bulk-core_comm_target.txt')
%%R
df.comm = list()
for (f in comm_files){
rep = gsub('.+/Day1_xRich/([0-9]+)/.+', '\\1', f)
df.comm[[rep]] = read.delim(f, sep='\t') %>%
dplyr::select(library, taxon_name, rel_abund_perc) %>%
rename('bulk_abund' = rel_abund_perc) %>%
mutate(bulk_abund = bulk_abund / 100)
}
df.comm = do.call('rbind', df.comm)
df.comm$richness = gsub('\\.[0-9]+$', '', rownames(df.comm))
rownames(df.comm) = 1:nrow(df.comm)
df.comm %>% head(n=3)
%%R
## joining
df.SIM.j = inner_join(df.SIM, df.comm, c('richness' = 'richness',
'library' = 'library',
'taxon' = 'taxon_name')) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.j %>% head(n=3)
Explanation: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range of the empirical data?
Simulated
End of explanation
%%R
bulk_days = c(1)
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
dplyr::select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
df.EMP.j = inner_join(df.EMP, bulk.otu, c('OTU' = 'OTU')) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.EMP.j %>% head(n=3)
%%R
# filtering & combining emperical w/ simulated data
## emperical
max_BD_range = max(df.EMP.j$Buoyant_density) - min(df.EMP.j$Buoyant_density)
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
mutate(dataset = 'emperical',
richness = NA)
## simulated
max_BD_range = max(df.SIM.j$BD_mid) - min(df.SIM.j$BD_mid)
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
group_by(richness, taxon) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(BD_mid),
max_BD = max(BD_mid),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
rename('OTU' = taxon) %>%
mutate(dataset = 'simulated')
## join
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
filter(BD_range_perc > 0,
mean_rel_abund > 0)
df.j %>% head(n=3)
%%R -h 400
## plotting
ggplot(df.j, aes(mean_rel_abund, BD_range_perc, color=richness)) +
geom_point(alpha=0.5, shape='O') +
scale_x_log10() +
scale_y_continuous() +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank()#,
#legend.position = 'none'
)
Explanation: Empirical
End of explanation |
10,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 22
Step1: We start by using the ordinary free energy of the pure components
Step2: $$L(\phi,\nabla\phi) = \int_V \Big[ ~~f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2~\Big]~ dV$$
From here it should be clear that the free energy space is homogeneous in the order parameter and temperature. Now that the description of the bulk energy is complete we can return to the Euler-Lagrange equations and proceed to develop our equilibrium solution and our equations of motion.
Top of Page
Analytical Solution
The full expression for the equation of motion is
Step3: W and $\epsilon$ can be parameterized in terms of the surface energy and the interface thickness to make a connection with the physical world.
Top of Page
Numerical Solution (FiPy)
Step4: This cell sets the initial conditions. There is a helper attribute cellCenters that fetches a list of the x points. The setValue helper functions and the 'where' keyword help you to set the initial conditions. FiPy is linked to Matplotlib and once you created the viewer object you call .plot() to update.
Step5: Top of Page
DIY
Step6: $$\frac{\partial \phi}{\partial t} = - M_\phi \Big[\frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi \Big]$$
$$f(\phi,T)_A = W_A~g(\phi) + L_A \frac{(T_M^A - T)}{T_M^A}p(\phi)$$
Step7: This is our general statement of a diffusive PDE. There is a transient term and a source term. Translate from the description of the phase field model above.
Step8: Just re-execute this cell after you change parameters. You can execute it over and over until you are satisfied that you've reached equilibrium.
You can try changing $\kappa$, W, and T. Changing T from the melting temperature will result in a moving interface. This is where things get interesting!
Top of Page
In 2D
This one is important. We will simulate a pair of curved particles (each with a different radius) at the melting temperature. What do you think will happen?
Step9: The strength of FiPy is that you can use the same code here in 2D as above. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
def plot_p_and_g():
phi = np.linspace(-0.1, 1.1, 200)
g=phi**2*(1-phi)**2
p=phi**3*(6*phi**2-15*phi+10)
# Changed 3 to 1 in the figure call.
plt.figure(1, figsize=(12,6))
plt.subplot(121)
plt.plot(phi, g, linewidth=1.0);
plt.xlabel('$\phi$', fontsize=18)
plt.ylabel('$g(\phi)$', fontsize=18)
plt.subplot(122)
plt.plot(phi, p, linewidth=1.0);
plt.xlabel('$\phi$', fontsize=18)
plt.ylabel('$p(\phi)$', fontsize=18)
return
plot_p_and_g()
Explanation: Lecture 22: Phase Field Models
Sections
Introduction
Learning Goals
On Your Own
The Order Parameter
In Class
The Free Energy Functional
The Equation of Motion
Analytical Solution
[Numerical Solution (FiPy)](#Numerical-Solution-(FiPy)
In 2D
Homework
Summary
Looking Ahead
Reading Assignments and Practice
Introduction
This workbook/lecture is derived from Boettinger, et al. in Annual Review of Materials Research, v32, p163-194 (2002).
doi: 10.1146/annurev.matsci.32.101901.155803
The phase field method makes possible the study of complex microstructural morphologies such as dendritic and eutectic solidification as well as polycrystalline growth.
The major contribution of the method is the introduction of an order parameter used to delineate phases such as solid/liquid, $\alpha~/~\beta$, etc. The concept of an order parameter is not new. However, smoothly varying this order parameter through an interphase interface frees us from tracking the interface position and applying boundary conditions at interfaces having complex morphologies.
Top of Page
Learning Goals
Introduction to the idea of an "order parameter".
Observe a practical use for the Calculus of Variations.
Introduction to what is meant by a non-homogeneous thermodynamic system.
Code a simple microstructure simulation.
Top of Page
On Your Own
Read this small excerpt from Boettinger's paper:
The method employs a phase-field variable, e.g., $\phi$, which is a function of position and time, to describe whether the material is liquid or solid. The behavior
of this variable is governed by an equation that is coupled to equations for heat
and solute transport. Interfaces between liquid and solid are described by smooth
but highly localized changes of this variable between fixed values that represent
solid and liquid, (in this review, 0 and 1, respectively).
Therein is the key feature of phase field models. The order (or the phase) is described by a field variable ($\phi$) coupled to heat and mass transfer. The result is that complex interface shapes do not require tracking of the position of the interface.
This may not have significance to you; however, there was a time when knowledge of the position of the interface was required for solidification calculations. This boundary condition made dendrite computation difficult, if not impossible.
The Order Parameter
The order parameter can be thought of as an envelope around probability amplitudes of atomic positions. In the picture below we have a probability density of finding an atom at a particular position. In this picture $\phi = 0$ might be considered the solid, and $\phi = 1$ would be the liquid.
Using the order parameter in this way makes it easier to calculate solidification microstructures - we no longer have to track the interface (you'll see how this works below).
The shape of this interface is a balance between two forces. The energy increase for intermediate states between solid and liquid (from the bulk free energy) and energy costs associated with steep gradients in the phase-field order parameter.
Top of Page
Review: Calculus of Variations
The calculus of variations is a rich mathematical subject. There are many books on the topic. One of the canonical problems in the subject area is to compute the shortest arc between to points on a plane. This, is a good place to begin your study of the topic. For now I'll describe the major output of the CoV and the points relevant to phase field.
The analogy between calculus and CoV is good. If finding the minimum of a function can be done by inspecting derivatives then the minimum of a functional can be found by inspecting the so-called 'variational derivative'. In particular the first chapter of Lev. D. Elsgolc's book "Calculus of Variations" presents this idea nicely.
The CoV gives us the Euler-Lagrange equation. This is the main tool of CoV:
$$ \frac{\delta F}{\delta \phi} = \frac{\partial F}{\partial \phi} - \frac{\partial}{\partial x} \frac{\partial F}{\partial \nabla \phi} = 0$$
The scalar and gradient terms in $\phi$ are treated as independent variables. This equation is telling us that the function that minimizes the functional is the solution to the above differential equation.
Top of Page
In Class
The Free Energy Functional
The Lagrangian is constructed from the free energy functional (integrated over all space, V), which in turn is constructed from the bulk free energy and the gradient energy thus:
$$L(\phi,\nabla\phi) = \int_V \Big[ ~~f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2~\Big]~ dV$$
Where L is the Lagrangian, the functional (hereafter F) is in the square brackets, V is the volume of the system, $\phi$ is the order parameter, T is the temperature, $\epsilon$ is the gradient energy coefficient and $\nabla$ has the usual meaning.
$f(\phi,T)$ is the free energy density. This is often referred to as the 'bulk' free energy term and the terms in $\nabla\phi$ are the gradient energy terms. You may also hear these terms referred to as the 'non-classical' terms.
At equilibrium the variational derivatives must satisfy the following:
$$ \frac{\delta F}{\delta \phi} = \frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi = 0$$
Recall the definition of the Euler-Lagrange equation:
$$ \frac{\delta F}{\delta \phi} = \frac{\partial F}{\partial \phi} - \frac{\partial}{\partial x} \frac{\partial F}{\partial \nabla \phi} = 0$$
and the free energy functional:
$$f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2$$
The first equation above tells us that the function $\phi(x,t)$ is unchanging. We will compute this interface profile below. To develop a kinetic expression we make an educated guess (also known as an "ansatz" in the phase field literature) about the relaxation of a system towards equilibrium.
Top of Page
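As a side check (my addition, not part of the original lecture), sympy's Euler-Lagrange helper recovers the equilibrium condition above from the one-dimensional functional density; the symbol names here are illustrative only.
import sympy as sp
from sympy.calculus.euler import euler_equations
x = sp.symbols('x')
W, eps = sp.symbols('W epsilon', positive=True)
phi = sp.Function('phi')
# density = f(phi) + (eps^2/2)*(dphi/dx)^2 with the double-well f = W*phi^2*(1-phi)^2
density = W*phi(x)**2*(1 - phi(x))**2 + sp.Rational(1, 2)*eps**2*sp.Derivative(phi(x), x)**2
# prints an equation equivalent to df/dphi - eps^2 * phi''(x) = 0
print(euler_equations(density, phi(x), x))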
The Equation of Motion
We assume the following functional form for the equation of motion:
$$\frac{\partial \phi}{\partial t} = - M_\phi \frac{\delta F}{\delta \phi}$$
This is the simplest expression that guarantees the free energy of the system will decrease over time. In this form the phase-field variable, $\phi$, is non-conserved. The conserved form takes the divergence of the expression above, as is done when expressing accumulation in a control volume (as in Fick's second law).
Our equation of motion is therefore:
$$\frac{\partial \phi}{\partial t} = - M_\phi \Big[\frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi \Big]$$
Top of Page
Building the Bulk Free Energy '$f$'
There are two so-called helper functions that homogenize the free energy. The interpolating function, $p(\phi)$ and the double well function $g(\phi)$.
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import LinearLocator, FormatStrFormatter
def plot_homogeneous_F():
plt.fig = plt.figure(2, figsize=(10,10))
plt.ax = plt.fig.gca(projection='3d')
phi = np.linspace(0.0, 1.0, 100)
temperature = np.linspace(0.0, 1.0, 100)
phi,temperature = np.meshgrid(phi,temperature)
W=30.0
L=1.0
Tm=0.5
g=phi**2*(1-phi)**2
p=phi**3*(6*phi**2-15*phi+10)
f = W*g+L*p*(Tm-temperature)/Tm
energyPlot = plt.ax.plot_surface(phi, temperature, f, label=None,
cmap=plt.cm.coolwarm, rstride=5, cstride=5, alpha=0.5)
energyPlot = plt.contour(phi, temperature, f,20)
plt.clabel(energyPlot, inline=1, fontsize=10)
plt.ax.set_xlabel('$\phi$')
plt.ax.set_ylabel('T')
plt.ax.set_zlabel('$f(\phi,t)$')
return
plot_homogeneous_F()
Explanation: We start by using the ordinary free energy of the pure components:
Pure A, liquid phase - $f_A^L(T)$
Pure A, solid phase - $f_A^S(T)$
As we will be limiting ourselves to a pure material at this time, these are the only two free energies we need. Near the melting point these free energies are often modeled as straight lines using the relationship for the Gibbs free energy:
$$G = H - TS$$
taking H and S to be constants. Following conventions of the phase diagram modeling community we take the reference state of the component A to be the equilibrium phase at STP. If this were a metal like Cu then the reference state would be the FCC phase. For us, that will be the SOLID. This sets:
$$f_A^S(T) = 0$$
Expanding the difference in free energy between the solid and the liquid around the melting point results in:
$$f_A^L(T)-f_A^S(T) = L_A \frac{(T_M^A - T)}{T_M^A}$$
The next step is to homogenize the free energy for component A. We build the free energy $f(\phi,T)_A$ as follows:
$$f(\phi,T)_A = W_A~g(\phi) + f_L p(\phi) + f_S (1-p(\phi))$$
so that:
$$f(\phi,T)_A = W_A~g(\phi) + L_A \frac{(T_M^A - T)}{T_M^A}p(\phi)$$
Let us plot this and see what it looks like.
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, fixed
fig = None
def plot_equilibrium(W=500.0, epsilon=1.0):
global fig
if fig: plt.close(fig)
fig = plt.figure()
x = np.linspace(-1.0, 1.0, 200)
phi = 0.5*(1+np.tanh(x*np.sqrt(2*W)/(2*epsilon)))
plt.plot(x, phi, linewidth=1.0)
plt.xlabel('$x$', fontsize=24)
plt.ylabel('$\phi(x)$', fontsize=24)
return
print 'Hello!'
interact(plot_equilibrium, W=(1,1000,10), epsilon=fixed(1.0))
Explanation: $$L(\phi,\nabla\phi) = \int_V \Big[ ~~f(\phi,T) + \frac{\epsilon^2_\phi}{2}|\nabla \phi|^2~\Big]~ dV$$
From here it should be clear that the free energy space is homogeneous in the order parameter and temperature. Now that the description of the bulk energy is complete we can return to the Euler-Lagrange equations and proceed to develop our equilibrium solution and our equations of motion.
Top of Page
Analytical Solution
The full expression for the equation of motion is:
$$\frac{\partial \phi}{\partial t} = - M_\phi \epsilon^2
\Big[\nabla^2\phi- \frac{2W_A}{\epsilon^2} \phi(1-\phi)(1-2\phi)\Big]-\frac{30 M_\phi L_A}{T_M^A}(T_M^A - T)\phi^2(1-\phi)^2$$
While this is correct - it is often not explicitly written out like this when solving the equations numerically. Further, it is better if you don't fully expand the derivatives when attempting the analytical solution.
There is a fair bit of algebra and a few assumptions that enable the solution to the Euler-Lagrange equation above. I will state the procedue and leave out the gory detail. Remember we are after the expression that gives us $\phi(x)$. Keep this in mind as you read through the following bullet points.
First - assume that you are at the melting temperature. The rationale is that this is the only temperature where BOTH phases CO-EXIST. Any other temperature and it does not make sense to discuss an interface. This removes the second term on the RHS of the above expression.
Second - you can use $\frac{d\phi}{dx}$ as an integrating factor to take the first integral of the Euler-Lagrange equation.
Third - after evaluating the constant (C=0 is the answer, but, why it is zero will test your reasoning skills) the equation is seperable and can be integrated.
The result is:
$$\phi(x) = \frac{1}{2} \Big[ 1 + \tanh \Big( \frac{x}{2\delta} \Big) \Big]$$
where $\delta$ is related to the W and $\epsilon$ parameters.
End of explanation
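For reference, a short sketch (my addition) of the relations implied by this profile for the double-well form used here: the interface half-width is $\delta = \epsilon/\sqrt{2W}$, and integrating the excess free energy of the profile gives a surface energy $\sigma = \epsilon\sqrt{2W}/6$. Treat the $\sigma$ expression as an assumption if your convention for $g(\phi)$ differs.
import numpy as np
def interface_parameters(W, epsilon):
    # delta comes from the tanh argument: x*sqrt(2W)/(2*epsilon) = x/(2*delta)
    delta = epsilon / np.sqrt(2.0 * W)
    # sigma = integral of epsilon*sqrt(2*f(phi)) dphi from 0 to 1, with f = W*phi^2*(1-phi)^2
    sigma = epsilon * np.sqrt(2.0 * W) / 6.0
    return delta, sigma
print(interface_parameters(W=500.0, epsilon=1.0))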
%matplotlib osx
from fipy import *
L = 1.
nx = 400
dx = L/nx
mesh = Grid1D(dx=dx, nx=nx)
phase = CellVariable(name="phase",mesh=mesh)
viewer = MatplotlibViewer(vars=(phase,),datamin=-0.1, datamax=1.1, legend=None)
Explanation: W and $\epsilon$ can be parameterized in terms of the surface energy and the interface thickness to make a connection with the physical world.
Top of Page
Numerical Solution (FiPy)
End of explanation
x = mesh.cellCenters
phase.setValue(1.)
phase.setValue(0., where=x > L/2)
viewer.plot()
Explanation: This cell sets the initial conditions. There is a helper attribute cellCenters that fetches a list of the x points. The setValue helper functions and the 'where' keyword help you to set the initial conditions. FiPy is linked to Matplotlib and once you created the viewer object you call .plot() to update.
End of explanation
import sympy as sp
phi = sp.symbols('phi')
sp.init_printing()
((1-phi)**2*(phi**2)).diff(phi).simplify()
(phi**3*(6*phi**2-15*phi+10)).diff(phi).simplify()
Explanation: Top of Page
DIY: Code the terms to complete the phase field description.
$$\frac{\partial \phi}{\partial t} = - M_\phi \epsilon^2
\Big[\nabla^2\phi- \frac{2W_A}{\epsilon^2} \phi(1-\phi)(1-2\phi)\Big]-\frac{30 M_\phi L_A}{T_M^A}(T_M^A - T)\phi^2(1-\phi)^2$$
End of explanation
eps_sqrd = 0.00025
M = 1.0
W = 0.5
Lv = 1.
Tm = 1.
T = 1.0
enthalpy = Lv*(Tm-T)/Tm
S0 = W*2.0*phase*(phase-1.0)*(2*phase-1.0) + 30*phase**2*(phase**2-2*phase+1)*enthalpy
Explanation: $$\frac{\partial \phi}{\partial t} = - M_\phi \Big[\frac{\partial f}{\partial \phi} - \epsilon^2_\phi \nabla^2 \phi \Big]$$
$$f(\phi,T)_A = W_A~g(\phi) + L_A \frac{(T_M^A - T)}{T_M^A}p(\phi)$$
End of explanation
eq = TransientTerm() == DiffusionTerm(coeff=eps_sqrd*M) - S0
for i in range(50):
eq.solve(var = phase, dt=0.1)
viewer.plot()
Explanation: This is our general statement of a diffusive PDE. There is a transient term and a source term. Translate from the description of the phase field model above.
End of explanation
%matplotlib osx
from fipy import *
L = 1.
nx = 200
dx = L/nx
dy = L/nx
mesh = Grid2D(dx=dx, dy=dx, nx=nx, ny=nx)
phase = CellVariable(name="phase", mesh=mesh)
x = mesh.cellCenters()[0]
y = mesh.cellCenters()[1]
phase.setValue(1.)
x0 = 0.0
y0 = 0.0
#phase.setValue(0., where=(
# ((x-x0)**2+(y-y0)**2 > L/3) & ((x-L)**2+(y-L)**2 > 0.2)
# )
# )
phase.setValue(ExponentialNoiseVariable(mesh=mesh, mean=0.5))
viewer = Matplotlib2DGridViewer(vars=phase, datamin=0.0, datamax=1.0)
viewer.plot()
Explanation: Just re-execute this cell after you change parameters. You can execute it over and over until you are satisfied that you've reached equilibrium.
You can try changing $\kappa$, W, and T. Changing T from the melting temperature will result in a moving interface. This is where things get interesting!
Top of Page
In 2D
This one is important. We will simulate a pair of curved particles (each with a different radius) at the melting temperature. What do you think will happen?
End of explanation
eps_sqrd = 0.00025
M = 1.0
W = 0.5
Lv = 1.
Tm = 1.
T = 1.
enthalpy = Lv*(Tm-T)/Tm
S0 = W*2.0*phase*(phase-1.0)*(2*phase-1.0) + 30*phase**2*(phase**2-2*phase+1)*enthalpy
eq = TransientTerm() == DiffusionTerm(coeff=eps_sqrd) - S0
for i in range(500):
eq.solve(var = phase, dt=0.05)
viewer.plot()
Explanation: The strength of FiPy is that you can use the same code here in 2D as above.
End of explanation |
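One way to quantify what happens (my addition, a sketch): keep stepping and record the plain mean of the phi values over the uniform grid; if curvature drives the evolution, the average drifts as the smaller features shrink first.
avg_history = []
for i in range(100):
    eq.solve(var=phase, dt=0.05)
    avg_history.append(phase.value.mean())  # mean of phi over the (uniform) grid
print(avg_history[::20])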
10,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage
Basic usage example, where the code table is built based on given symbol frequencies
Step1: You can also "train" the codec by providing it data directly
Step2: Non-string sequences
Using dahuffman with sequences of symbols, e.g. country codes
Step4: Pre-trained codecs | Python Code:
codec = dahuffman.HuffmanCodec.from_frequencies({'e': 100, 'n':20, 'x':1, 'i': 40, 'q':3})
encoded = codec.encode('exeneeeexniqneieini')
print(encoded)
print(encoded.hex())
print(len(encoded))
codec.decode(encoded)
codec.print_code_table()
Explanation: Usage
Basic usage example, where the code table is built based on given symbol frequencies:
End of explanation
codec = dahuffman.HuffmanCodec.from_data('hello world how are you doing today foo bar lorem ipsum')
len(codec.encode('do lo er ad od'))
Explanation: You can also "train" the codec by providing it data directly:
End of explanation
countries = ['FR', 'UK', 'BE', 'IT', 'FR', 'IT', 'GR', 'FR', 'NL', 'BE', 'DE']
codec = dahuffman.HuffmanCodec.from_data(countries)
encoded = codec.encode(['FR', 'IT', 'BE', 'FR', 'UK'])
len(encoded), encoded.hex()
codec.decode(encoded)
Explanation: Non-string sequences
Using dahuffman with sequences of symbols, e.g. country codes:
End of explanation
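To see how the codec spent its bits (my addition, reusing only methods already shown above), print the code table for the country codec and compare the encoded size against a naive 2-bytes-per-code baseline.
codec.print_code_table()
print(len(encoded), 'bytes vs', 2 * 5, 'bytes at 2 bytes per country code')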
codecs = {
'shakespeare': dahuffman.load_shakespeare(),
'json': dahuffman.load_json(),
'xml': dahuffman.load_xml()
}
def try_codecs(data):
print("{n:12s} {s:5d} bytes".format(n="original", s=len(data)))
for name, codec in codecs.items():
try:
encoded = codec.encode(data)
except KeyError:
continue
print("{n:12s} {s:5d} bytes ({p:.1f}%)".format(n=name, s=len(encoded), p=100.0*len(encoded)/len(data)))
try_codecs(To be, or not to be; that is the question;
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing, end them. To die, to sleep)
try_codecs('''{
"firstName": "John",
"lastName": "Smith",
"isAlive": true,
"age": 27,
"children": [],
"spouse": null
}''')
try_codecs('''<?xml version="1.0"?>
<catalog>
<book id="bk101">
<author>Gambardella, Matthew</author>
<title>XML Developer's Guide</title>
<price>44.95</price>
</book>
<book id="bk102">
<author>Ralls, Kim</author>
<title>Midnight Rain</title>
<price>5.95</price>
</book>
</catalog>''')
Explanation: Pre-trained codecs
End of explanation |
10,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: About Python
Created in 1991 by Guido van Rossum
Step2: Control flow
Built-in functions and types
Sequence types
References and mutability
Dicts and sets
Comprehensions
Functions are objects | Python Code:
def fibonacci(n):
return Nth number in the Fibonacci series
a, b = 0, 1
while n:
a, b = b, a + b
n -= 1
return a
for n in range(20):
print(fibonacci(n))
for i, n in enumerate(range(20)):
print('%2d -> %4d' % (i, fibonacci(n)))
Explanation: About Python
Created in 1991 by Guido van Rossum: a pragmatic take on the ABC language by Geurts, Meertens and Pemberton, which was designed for non-programmers;
Open Source and multi-platform: pre-installed in practically any Unix-like system;
Object-oriented but not strictly so: some functional programming traits;
Strongly typed (few implicit type conversions)
Dynamically typed (no type declarations for variables, functions)
Current versions are Python 2.7 and 3.5;
Minor but significant incompatibilities: there will be no Python 2.8.
Basic syntax
no type declarations
no semi-colons
no braces
comments (#) and docstrings
End of explanation
fibonacci
fibonacci.__doc__
help(fibonacci)
fibonacci?
Explanation: Control flow
Built-in functions and types
Sequence types
References and mutability
Dicts and sets
Comprehensions
Functions are objects
End of explanation |
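A tiny illustration (my addition) of two of the topics listed above, comprehensions and functions-as-objects, reusing the fibonacci function defined earlier.
fib20 = [fibonacci(n) for n in range(20)]                 # list comprehension
evens = {n for n in fib20 if n % 2 == 0}                  # set comprehension
registry = {'fib': fibonacci, 'double': lambda x: 2 * x}  # functions stored like any other object
print(registry['fib'](10))
print(registry['double'](10))
print(sorted(evens)[:5])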
10,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic Dataset
Step1: Preprocessing
Cleaning
Step2: Feature Engineering
We can also generate new features. Here are some ideas
Step3: Using The Title
We can extract the title of the passenger from their name. The titles take the form of Master., Mr., Mrs.. There are a few very commonly used titles, and a "long tail" of one-off titles that only one or two passengers have.
We'll first extract the titles with a regular expression, and then map each unique title to an integer value.
We'll then have a numeric column that corresponds to the appropriate Title.
Step4: Family Groups
We can also generate a feature indicating which family people are in. Because survival was likely highly dependent on your family and the people around you, this has a good chance at being a good feature.
To get this, we'll concatenate someone's last name with FamilySize to get a unique family id. We'll then be able to assign a code to each person based on their family id.
Step5: Feature Selection
Feature engineering is the most important part of any machine learning task, and there are lots more features we could calculate. But we also need a way to figure out which features are the best.
One way to do this is to use univariate feature selection. This essentially goes column by column, and figures out which columns correlate most closely with what we're trying to predict (Survived).
As usual, sklearn has a function that will help us with feature selection, SelectKBest. This selects the best features from the data, and allows us to specify how many it selects.
Step6: The final preprocessed dataframe. Now it is ready for training and evaluation
Step7: Training & Evaluation
Step8: Decision Tree
Step9: Random Forest
Parameter Tuning
The first, and easiest, thing we can do to improve the accuracy of the random forest is to increase the number of trees we're using. Training more trees will take more time, but because of the fact that we're averaging many predictions made on different subsets of the data, having more trees will increase accuracy greatly (up to a point).
We can also tweak the min_samples_split and min_samples_leaf variables to reduce overfitting. Because of how a decision tree works (as we explained in the video), having splits that go all the way down, or overly deep in the tree can result in fitting to quirks in the dataset, and not true signal. Thus, increasing min_samples_split and min_samples_leaf can reduce overfitting, which will actually improve our score, as we're making predictions on unseen data. A model that is less overfit, and that can generalize better, will actually perform better on unseen data, but worse on seen data.
Step10: Gradient Boosted
Another method that builds on decision trees is a gradient boosting classifier. Boosting involves training decision trees one after another, and feeding the errors from one tree into the next tree. So each tree is building on all the other trees that came before it. This can lead to overfitting if we build too many trees, though. As you get above 100 trees or so, it's very easy to overfit and train to quirks in the dataset. As our dataset is extremely small, we'll limit the tree count to just 25.
Another way to limit overfitting is to limit the depth to which each tree in the gradient boosting process can be built. We'll limit the tree depth to 3 to avoid overfitting.
We'll try boosting instead of our random forest approach and see if we can improve our accuracy.
Step11: Kaggle Submission
Step12: Decision Surfaces | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
Explanation: Titanic Dataset
End of explanation
titanic = pd.read_csv("data/train.csv")
# The titanic variable is available here.
titanic.Age = titanic.Age.fillna(titanic.Age.median())
# Replace all the occurences of male with the number 0.
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
# Find all the unique values for "Embarked".
print(titanic["Embarked"].unique())
titanic["Embarked"] = titanic["Embarked"].fillna("S")
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2
Explanation: Preprocessing
Cleaning
End of explanation
# Generating a familysize column
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]
# The .apply method generates a new series
titanic["NameLength"] = titanic["Name"].apply(lambda x: len(x))
Explanation: Feature Engineering
We can also generate new features. Here are some ideas:
The length of the name -- this could pertain to how rich the person was, and therefore their position in the Titanic.
The total number of people in a family (SibSp + Parch).
An easy way to generate features is to use the .apply method on pandas dataframes. This applies a function you pass in to each element in a dataframe or series. We can pass in a lambda function, which enables us to define a function inline.
To write a lambda function, you write lambda x: len(x). x will take on the value of the input that is passed in -- in this case, the passenger name. The function to the right of the colon is then applied to x, and the result returned. The .apply method takes all of these outputs and constructs a pandas series from them. We can assign this series to a dataframe column.
End of explanation
import re
# A function to get the title from a name.
def get_title(name):
# Use a regular expression to search for a title. Titles always consist of capital and lowercase letters, and end with a period.
title_search = re.search(' ([A-Za-z]+)\.', name)
# If the title exists, extract and return it.
if title_search:
return title_search.group(1)
return ""
# Get all the titles and print how often each one occurs.
titles = titanic["Name"].apply(get_title)
print(pd.value_counts(titles))
# Map each title to an integer. Some titles are very rare, and are compressed into the same codes as other titles.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2}
for k,v in title_mapping.items():
titles[titles == k] = v
# Verify that we converted everything.
print(pd.value_counts(titles))
# Add in the title column.
titanic["Title"] = titles
Explanation: Using The Title
We can extract the title of the passenger from their name. The titles take the form of Master., Mr., Mrs.. There are a few very commonly used titles, and a "long tail" of one-off titles that only one or two passengers have.
We'll first extract the titles with a regular expression, and then map each unique title to an integer value.
We'll then have a numeric column that corresponds to the appropriate Title.
End of explanation
import operator
# A dictionary mapping family name to id
family_id_mapping = {}
# A function to get the id given a row
def get_family_id(row):
# Find the last name by splitting on a comma
last_name = row["Name"].split(",")[0]
# Create the family id
family_id = "{0}{1}".format(last_name, row["FamilySize"])
# Look up the id in the mapping
if family_id not in family_id_mapping:
if len(family_id_mapping) == 0:
current_id = 1
else:
# Get the maximum id from the mapping and add one to it if we don't have an id
current_id = (max(family_id_mapping.items(), key=operator.itemgetter(1))[1] + 1)
family_id_mapping[family_id] = current_id
return family_id_mapping[family_id]
# Get the family ids with the apply method
family_ids = titanic.apply(get_family_id, axis=1)
# There are a lot of family ids, so we'll compress all of the families under 3 members into one code.
family_ids[titanic["FamilySize"] < 3] = -1
# Print the count of each unique id.
print(pd.value_counts(family_ids))
titanic["FamilyId"] = family_ids
Explanation: Family Groups
We can also generate a feature indicating which family people are in. Because survival was likely highly dependent on your family and the people around you, this has a good chance at being a good feature.
To get this, we'll concatenate someone's last name with FamilySize to get a unique family id. We'll then be able to assign a code to each person based on their family id.
End of explanation
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])
# Get the raw p-values for each feature, and transform from p-values into scores
scores = -np.log10(selector.pvalues_)
# Plot the scores. See how "Pclass", "Sex", "Title", and "Fare" are the best?
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
# Pick only the four best features.
predictors = ["Pclass", "Sex", "Fare", "Title"]
Explanation: Feature Selection
Feature engineering is the most important part of any machine learning task, and there are lots more features we could calculate. But we also need a way to figure out which features are the best.
One way to do this is to use univariate feature selection. This essentially goes column by column, and figures out which columns correlate most closely with what we're trying to predict (Survived).
As usual, sklearn has a function that will help us with feature selection, SelectKBest. This selects the best features from the data, and allows us to specify how many it selects.
End of explanation
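For comparison (my addition), SelectKBest can also report its own selection directly; with k=5 it keeps five columns, one more than the hand-picked list above, so expect overlap rather than an exact match.
all_predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]
print([p for p, keep in zip(all_predictors, selector.get_support()) if keep])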
titanic.head()
#My routine that encapsulates all the preprocessing work documented above
def preprocess(data):
data["Age"] = data["Age"].fillna(data["Age"].median())
data["Fare"] = data["Fare"].fillna(data["Fare"].median())
data.loc[data["Sex"] == "male", "Sex"] = 0
data.loc[data["Sex"] == "female", "Sex"] = 1
data["Embarked"] = data["Embarked"].fillna("S")
data.loc[data["Embarked"] == "S", "Embarked"] = 0
data.loc[data["Embarked"] == "C", "Embarked"] = 1
data.loc[data["Embarked"] == "Q", "Embarked"] = 2
# Generating a familysize column
data["FamilySize"] = data["SibSp"] + data["Parch"]
# The .apply method generates a new series
data["NameLength"] = data["Name"].apply(lambda x: len(x))
import re
# Get all the titles and print how often each one occurs.
titles = data["Name"].apply(get_title)
# print(pd.value_counts(titles))
# Map each title to an integer. Some titles are very rare, and are compressed into the same codes as other titles.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2, "Dona": 10}
for k,v in title_mapping.items():
titles[titles == k] = v
# Verify that we converted everything.
# print(pd.value_counts(titles))
# Add in the title column.
data["Title"] = titles
import operator
# A dictionary mapping family name to id
family_id_mapping = {}
# Get the family ids with the apply method
family_ids = data.apply(get_family_id, axis=1)
# There are a lot of family ids, so we'll compress all of the families under 3 members into one code.
family_ids[data["FamilySize"] < 3] = -1
# Print the count of each unique id.
# print(pd.value_counts(family_ids))
data["FamilyId"] = family_ids
return data
df = titanic = pd.read_csv("data/train.csv")
titanic = preprocess(df)
Explanation: The final preprocessed dataframe. Now it is ready for training and evaluation:
End of explanation
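A quick check (my addition) that the selected predictors are clean and numeric before training.
print(titanic[predictors].isnull().sum())
print(titanic[predictors].dtypes)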
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import grid_search
from sklearn.metrics import classification_report
Explanation: Training & Evaluation
End of explanation
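Before tuning anything, a quick baseline (my addition) with a default decision tree and 3-fold cross-validation on the four selected predictors gives a number to beat; it reuses the cross_validation module imported at the top of the notebook.
baseline = DecisionTreeClassifier(random_state=1)
scores = cross_validation.cross_val_score(baseline, titanic[predictors], titanic["Survived"], cv=3)
print(scores.mean())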
parameters = {'max_depth':range(1,20),
'max_features': ["auto", "sqrt", "log2", None]
}
dtgscv = grid_search.GridSearchCV(DecisionTreeClassifier(), parameters, cv=10, n_jobs=-1)
dtgscv.fit(titanic[predictors], titanic["Survived"])
dtree = dtgscv.best_estimator_
print("Best parameters set found on training set:")
print
print(dtgscv.best_params_)
print("Best cross-validation score found on training set:")
print
print(dtgscv.best_score_)
print
print("Grid scores on training set:")
print
for params, mean_score, scores in dtgscv.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
Explanation: Decision Tree
End of explanation
# alg = RandomForestClassifier(random_state=1, n_estimators=150, min_samples_split=4, min_samples_leaf=2)
parameters = {
"n_estimators": [100, 150, 200],
"min_samples_split": range(1,10),
# "min_samples_leaf": range(1,10)
}
rff_gscv = grid_search.GridSearchCV(RandomForestClassifier(), parameters, cv=10, n_jobs=-1)
rff_gscv.fit(titanic[predictors], titanic["Survived"])
rfc = rff_gscv.best_estimator_
print("Best parameters set found on training set:")
print
print(rff_gscv.best_params_)
print
print("Best cross-validation score found on training set:")
print
print(rff_gscv.best_score_)
print
print("Grid scores on training set:")
print
for params, mean_score, scores in rff_gscv.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
Explanation: Random Forest
Parameter Tuning
The first, and easiest, thing we can do to improve the accuracy of the random forest is to increase the number of trees we're using. Training more trees will take more time, but because of the fact that we're averaging many predictions made on different subsets of the data, having more trees will increase accuracy greatly (up to a point).
We can also tweak the min_samples_split and min_samples_leaf parameters to reduce overfitting. Because of how a decision tree works (as we explained in the video), splits that go all the way down, or overly deep into the tree, can end up fitting quirks in the dataset rather than true signal. Increasing min_samples_split and min_samples_leaf therefore reduces overfitting, which improves our score when we make predictions on unseen data: a model that is less overfit and generalizes better will perform better on unseen data, but worse on the data it was trained on.
End of explanation
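As an optional follow-up that is not in the original notebook, it can be informative to look at which predictors the tuned forest actually leans on via its impurity-based importances.
import pandas as pd
# Impurity-based importances of the tuned random forest, highest first.
rf_importances = pd.Series(rfc.feature_importances_, index=predictors)
print(rf_importances.sort_values(ascending=False))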
# alg = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
parameters = {
"learning_rate": range(1,10),
"n_estimators": [ 100, 150],
"max_depth": range(1,10)
}
gbc_gscv = grid_search.GridSearchCV(GradientBoostingClassifier(), parameters, cv=3, n_jobs = -1)
gbc_gscv.fit(titanic[predictors], titanic["Survived"])
gbc = gbc_gscv.best_estimator_
print("Best parameters set found on training set:")
print
print(gbc_gscv.best_params_)
print("Best cross-validation score found on training set:")
print
print(gbc_gscv.best_score_)
print
print("Grid scores on training set:")
print
for params, mean_score, scores in gbc_gscv.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
Explanation: Gradient Boosted
Another method that builds on decision trees is a gradient boosting classifier. Boosting trains decision trees one after another, feeding the errors from one tree into the next, so each tree builds on all the trees that came before it. This can lead to overfitting if we build too many trees: once you get above 100 trees or so it becomes very easy to train to quirks in the dataset, and our dataset is quite small, so the number of trees (n_estimators) is one of the parameters we grid-search below.
Another way to limit overfitting is to limit the depth to which each tree in the boosting process can be built, so max_depth is also included in the grid search.
We'll try boosting alongside our random forest approach and see if we can improve our accuracy.
End of explanation
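One optional extension, sketched here rather than taken from the original code, is to blend the tuned ensembles by averaging their predicted probabilities (simple soft voting). The accuracy below is computed on the training set, so it will be optimistic.
import numpy as np
# Average class-1 probabilities from the tuned random forest and gradient boosting models.
blend_probs = (rfc.predict_proba(titanic[predictors])[:, 1] +
               gbc.predict_proba(titanic[predictors])[:, 1]) / 2.0
blend_preds = (blend_probs > 0.5).astype(int)
print("Blended accuracy on the training set:", np.mean(blend_preds == titanic["Survived"]))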
test_data = pd.read_csv("data/test.csv")
titanic_test = preprocess(test_data.copy())
models = [dtree, rfc, gbc]
for model in models:
# Train the algorithm using all the training data
model.fit(titanic[predictors], titanic["Survived"])
# Make predictions using the test set.
predictions = model.predict(titanic_test[predictors])
# Create a new dataframe with only the columns Kaggle wants from the dataset.
submission = pd.DataFrame({
"PassengerId": titanic_test["PassengerId"],
"Survived": predictions
})
submission.to_csv("%s.csv"%model.__class__.__name__, index=False)
Explanation: Kaggle Submission
End of explanation
# Plotting decision boundaries with columns Age and Fare.
from itertools import product
#Picking Age and Fare as they are continuous and highly correlated with Survived
df = titanic
X = df[['Age', 'Fare']].values
y = df['Survived'].values
# Training classifiers
clf1 = DecisionTreeClassifier()
clf2 = RandomForestClassifier()
clf3 = GradientBoostingClassifier()
#Fit models
clf1.fit(X, y)
clf2.fit(X, y)
clf3.fit(X, y)
# Plotting decision regions
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(1, 3, sharex='col', sharey='row', figsize=(15,4))
for idx, clf, tt in zip(range(3),
[clf1, clf2, clf3],
['Decision Tree', 'Random Forest',
'Gradient Boost']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx].contourf(xx, yy, Z, alpha=0.4)
axarr[idx].scatter(X[:, 0], X[:, 1], c=y, alpha=0.8)
axarr[idx].set_title(tt, fontsize=14)
axarr[idx].axis('off')
Explanation: Decision Surfaces
End of explanation |
10,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 勾配ブースティング木
Step2: 特徴量の説明については、前のチュートリアルをご覧ください。
特徴量カラム、input_fn、を作成して Estimator をトレーニングする
データを処理する
元の数値カラムをそのまま、そして One-Hot エンコーディングカテゴリ変数を使用して、特徴量カラムを作成します。
Step3: 入力パイプラインを構築する
Pandas から直接データを読み取るために、tf.data API の from_tensor_slices メソッドを使用して入力関数を作成します。
Step4: モデルをトレーニングする
Step5: パフォーマンスの理由により、データがメモリに収まる場合は、tf.estimator.BoostedTreesClassifier 関数の train_in_memory=True 引数を使用することをお勧めします。ただし、トレーニング時間が問題ではない場合、または非常に大規模なデータセットを使用しており、分散型トレーニングを実施する場合は、上記に示される tf.estimator.BoostedTrees API を使用してください。
このメソッドを使用する際は、メソッドはデータセット全体に対して処理を行うため、入力データをバッチ処理することはできません。
Step6: モデルの解釈とプロットの作成
Step7: ローカル解釈可能性
次に、Interpreting Random Forests(Palczewska et al および Saabas)に概説されたアプローチを使用して(このメソッドは、treeinterpreter パッケージの Random Forests の scikit-learn にもあります)、DFC(指向特徴量貢献度)を出力します。DCS は次のようにして生成します。
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
(注意
Step8: DFC の便利な特性は、貢献度とバイアスの和が特定の例の予測と同等であるということです。
Step11: 個別の乗船者に対し、DFC を描画します。貢献度の指向性に基づいてプロットに色を付け、図に特徴量値を追加しましょう。
Step12: 貢献度が大きくなると、モデルの予測に対する影響度も増します。負の貢献度は、この例の特徴量値によってモデルの予測が減少したことを示し、正の値は予測が増加したことを示します。
また、例の DFC を分散全体に比較したバイオリンプロットも作成できます。
Step13: この例をプロットに描画します。
Step14: 最後に、LIME や shap といったサードパーティ製ツールを使用すると、モデルの個別の予測を理解しやすくなります。
グローバル特徴量重要度
さらに、個別の予測を調べる代わりに、モデル全体を理解することができます。以下に、計算してし使用する内容を示します。
est.experimental_feature_importances を使用したゲインベースの特徴量重要度
パーミュテーション重要度
est.experimental_predict_with_explanations による DFC の総計
ゲインベースの特徴量重要度は、特定の特徴量を分割したときの損失の変化を測定し、パーミュテーション特徴量重要度は、各特徴量を 1 つずつシャッフルして評価セットのモデルパフォーマンスを評価し、モデルパフォーマンスの変化をシャッフルされた特徴量によるものとすることで、計算されます。
一般的に、ゲインベースの特徴量重要度よりパーミュテーション特徴量重要度の方が優先されますが、両方の手法は、潜在的な予測変数が測定のスケーリングまたはカテゴリ数が異なり、特徴量が相関する際に、信頼性がなくなることがあります(出典)。特徴量重要度の種類に関するより詳細な概要と素晴らしい議論については、こちらの記事をご覧ください。
ゲインベースの特徴量重要度
ゲインベースの特徴量重要度は、est.experimental_feature_importances を使用して TensorFlow ブースティング木 Estimator に組み込まれています。
Step15: 平均絶対 DFC
グローバルレベルでの影響を理解するために、DFC の絶対値の平均を割り出すこともできます。
Step16: また、特徴量値が変化するにつれ、DFC がどのように変化するのかも確認できます。
Step19: パーミュテーション特徴量重要度
Step20: モデルの適合性を視覚化する
まず、次の式に従って、トレーニングデータをシミュレーション/作成しましょう。
$$z=x* e^{-x^2 - y^2}$$
上記の z は予測しようとしている従属変数で、x と y は特徴量です。
Step22: 関数を視覚化できます。赤味が強くなるほど、より大きな関数値に対応します。
Step23: まず、線形モデルをデータにフィットしてみましょう。
Step24: あまり良いフィットではありません。次に、モデルが関数にどのようにフィットするかを理解するために、GBDT モデルをフィットしてみましょう。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install statsmodels
import numpy as np
import pandas as pd
from IPython.display import clear_output
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
import tensorflow as tf
tf.random.set_seed(123)
Explanation: Gradient boosted trees: model understanding
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/boosted_trees_model_understanding"><img src="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/boosted_trees_model_understanding.ipynb">TensorFlow.org で表示</a></td>
<td> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/boosted_trees_model_understanding.ipynb">Google Colab で実行</a> </td>
<td> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/estimator/boosted_trees_model_understanding.ipynb">GitHub でソースを表示</a> </td>
<td> <img src="https://www.tensorflow.org/images/download_logo_32px.png"><a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/boosted_trees_model_understanding.ipynb">ノートブックをダウンロード</a> </td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code, which is more difficult to write correctly and can behave unexpectedly, especially when combined with TF 2 code. Estimators fall under the compatibility guarantees, but will receive no fixes other than for security vulnerabilities. See the migration guide for details.
Note: Modern Keras-based implementations of many state-of-the-art decision forest algorithms are available in TensorFlow Decision Forests.
For an end-to-end walkthrough of training a gradient boosting model, check out the boosted decision trees tutorial. In this tutorial you will:
Learn how to interpret a boosted trees model both locally and globally
Gain intuition for how a boosted trees model fits a dataset
How to interpret boosted trees models both locally and globally
Local interpretability refers to an understanding of a model's predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during the model-development stage.
For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish these from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability, you will retrieve and visualize gain-based feature importances and permutation feature importances, and also show aggregated DFCs.
Load the Titanic dataset
You will be using the Titanic dataset, where the (rather morbid) goal is to predict passenger survival given characteristics such as gender, age, class, etc.
End of explanation
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
Explanation: For a description of the features, see the previous tutorial.
Create feature columns and input_fn, and train the Estimator
Preprocess the data
Create the feature columns, using the original numeric columns as-is and one-hot encoding the categorical variables.
End of explanation
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle through the dataset as many times as needed (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(NUM_EXAMPLES))
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
Explanation: Build the input pipeline
Create the input functions using the from_tensor_slices method in the tf.data API to read data directly from Pandas.
End of explanation
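As an optional sanity check that is not part of the original tutorial, you can pull a single batch out of the training input function to confirm the feature dictionary and label shapes look as expected (eager iteration over a tf.data.Dataset works in TF 2).
# Peek at one batch from the training pipeline.
example_batch, label_batch = next(iter(make_input_fn(dftrain, y_train, n_epochs=1)()))
print(sorted(example_batch.keys()))
print('labels shape:', label_batch.shape)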
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)
# Train model.
est.train(train_input_fn, max_steps=100)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
Explanation: Train the model
End of explanation
in_memory_params = dict(params)
in_memory_params['n_batches_per_layer'] = 1
# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
y = np.expand_dims(y, axis=1)
def input_fn():
return dict(X), y
return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
# Train the model.
est = tf.estimator.BoostedTreesClassifier(
feature_columns,
train_in_memory=True,
**in_memory_params)
est.train(train_input_fn)
print(est.evaluate(eval_input_fn))
Explanation: For performance reasons, when your data fits in memory, it is recommended to use the train_in_memory=True argument of the tf.estimator.BoostedTreesClassifier function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the tf.estimator.BoostedTrees API shown above.
When using this method you should not batch your input data, since the method operates on the entire dataset.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
Explanation: Model interpretation and plotting
End of explanation
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
Explanation: Local interpretability
Next, you will output the directional feature contributions (DFCs) using the approach outlined in Interpreting Random Forests (Palczewska et al. and Saabas) (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). The DFCs are generated with:
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
(Note: The method is named experimental because the API may be modified before the experimental prefix is dropped.)
End of explanation
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
Explanation: A nice property of DFCs is that the sum of the contributions plus the bias is equal to the prediction for a given example.
End of explanation
# Boilerplate code for plotting :)
def _get_color(value):
To make positive DFCs plot green, negative DFCs plot red.
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
Display feature's values on left of plot.
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
def plot_example(example):
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=colors,
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
return ax
# Plot results.
ID = 182
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
Explanation: Plot the DFCs for an individual passenger. Let's color the plot by the directionality of the contributions and add the feature values to the figure.
End of explanation
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
# Initialize plot.
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
# Create example dataframe.
TOP_N = 8 # View top 8 features.
example = df_dfc.iloc[ID]
ix = example.abs().sort_values()[-TOP_N:].index
example = example[ix]
example_df = example.to_frame(name='dfc')
# Add contributions of entire distribution.
parts=ax.violinplot([df_dfc[w] for w in ix],
vert=False,
showextrema=False,
widths=0.7,
positions=np.arange(len(ix)))
face_color = sns_colors[0]
alpha = 0.15
for pc in parts['bodies']:
pc.set_facecolor(face_color)
pc.set_alpha(alpha)
# Add feature values.
_add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
# Add local contributions.
ax.scatter(example,
np.arange(example.shape[0]),
color=sns.color_palette()[2],
s=100,
marker="s",
label='contributions for example')
# Legend
# Proxy plot, to show violinplot dist on legend.
ax.plot([0,0], [1,1], label='eval set contributions\ndistributions',
color=face_color, alpha=alpha, linewidth=10)
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
frameon=True)
legend.get_frame().set_facecolor('white')
# Format plot.
ax.set_yticks(np.arange(example.shape[0]))
ax.set_yticklabels(example.index)
ax.grid(False, axis='y')
ax.set_xlabel('Contribution to predicted probability', size=14)
Explanation: Larger magnitude contributions have a larger impact on the model's prediction. Negative contributions indicate that the feature value for this example reduced the model's prediction, while positive values increase the prediction.
You can also plot the example's DFCs against a violin plot of the full distribution.
End of explanation
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
plt.show()
Explanation: Plot this example.
End of explanation
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)
# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6)))
ax.grid(False, axis='y')
Explanation: Finally, third-party tools such as LIME and shap can also help you understand a model's individual predictions.
Global feature importances
Additionally, instead of studying individual predictions, you can understand the model as a whole. Below is what you will compute and use:
Gain-based feature importances using est.experimental_feature_importances
Permutation importances
Aggregate DFCs using est.experimental_predict_with_explanations
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one by one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importance is preferred over gain-based feature importance, though both methods can be unreliable when potential predictor variables vary in their scale of measurement or their number of categories, and when features are correlated (source). See this article for an in-depth overview and a great discussion of the different feature-importance types.
Gain-based feature importances
Gain-based feature importances are built into the TensorFlow boosted trees Estimators via est.experimental_feature_importances.
End of explanation
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
Explanation: Average absolute DFCs
You can also average the absolute values of the DFCs to understand their impact at a global level.
End of explanation
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(feature.index.values, feature.values, lowess=True)
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
Explanation: You can also see how the DFCs vary as a feature value varies.
End of explanation
def permutation_importances(est, X_eval, y_eval, metric, features):
Column by column, shuffle values and observe effect on eval set.
source: http://explained.ai/rf-importance/index.html
A similar approach can be done during training. See "Drop-column importance"
in the above article.
baseline = metric(est, X_eval, y_eval)
imp = []
for col in features:
save = X_eval[col].copy()
X_eval[col] = np.random.permutation(X_eval[col])
m = metric(est, X_eval, y_eval)
X_eval[col] = save
imp.append(baseline - m)
return np.array(imp)
def accuracy_metric(est, X, y):
TensorFlow estimator accuracy.
eval_input_fn = make_input_fn(X,
y=y,
shuffle=False,
n_epochs=1)
return est.evaluate(input_fn=eval_input_fn)['accuracy']
features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric,
features)
df_imp = pd.Series(importances, index=features)
sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
Explanation: Permutation feature importance
End of explanation
from numpy.random import uniform, seed
from scipy.interpolate import griddata
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
xy = np.zeros((2,np.size(x)))
xy[0] = x
xy[1] = y
xy = xy.T
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200),
yi = np.linspace(-2.1, 2.1, 210),
xi,yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
CS = plt.contourf(x, y, z, 15,
vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
Explanation: Visualizing the model's fit
First, let's simulate/create the training data using the following formula:
$$z=x* e^{-x^2 - y^2}$$
where z is the dependent variable you are trying to predict and x and y are the features.
End of explanation
zi = griddata(xy, z, (xi, yi), method='linear', fill_value='0')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
def predict(est):
Predictions from a given estimator.
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
Explanation: You can visualize the function. Redder colors correspond to larger function values.
End of explanation
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
Explanation: First, let's try to fit a linear model to the data.
End of explanation
n_trees = 37 #@param {type: "slider", min: 1, max: 80, step: 1}
est = tf.estimator.BoostedTreesRegressor(fc, n_batches_per_layer=1, n_trees=n_trees)
est.train(train_input_fn, max_steps=500)
clear_output()
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
plt.show()
Explanation: Not a very good fit. Next, let's try to fit a GBDT model to it and try to understand how the model fits the function.
End of explanation |
10,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features
Step1: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step2: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pins.
Step3: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step5: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step6: Likewise, we can construct a control rod guide tube with the same surfaces.
Step7: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step8: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step9: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step10: We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step12: Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully.
Step13: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
Step14: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define 20-energy-group, 1-energy-group, and 6-delayed-group structures.
Step15: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
Step16: Now, we can run OpenMC to generate the cross sections.
Step17: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
Step18: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step19: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations
Step20: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step21: Cross Section Visualizations
In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group. | Python Code:
import math
import pickle
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
import openmoc
import openmoc.process
from openmoc.opencg_compatible import get_openmoc_geometry
from openmoc.materialize import load_openmc_mgxs_lib
%matplotlib inline
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:
Calculation of multi-energy-group and multi-delayed-group cross sections for a fuel assembly
Automated creation, manipulation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin delayed neutron fractions (beta) for each delayed group.
Generation of surface currents on the interfaces and surfaces of a Mesh.
Generate Input Files
End of explanation
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
b10 = openmc.Nuclide('B10')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
Explanation: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, water, zircaloy))
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:,:] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.pixels = [250, 250]
plot.width = [-10.71*2, -10.71*2]
plot.color = 'mat'
# Instantiate a Plots object, add Plot, and export to "plots.xml"
plot_file = openmc.Plots([plot])
plot_file.export_to_xml()
Explanation: Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully.
End of explanation
# Run openmc in plotting mode
openmc.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
End of explanation
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
# Instantiate a 6-delayed-group list
delayed_groups = list(range(1,7))
Explanation: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define 20-energy-group, 1-energy-group, and 6-delayed-group structures.
End of explanation
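As a small optional check that is not in the original notebook, we can confirm the group structures we just defined before building the library with them.
# The 20-group structure should have 21 energy boundaries; print them along with the delayed groups.
print('Number of energy groups:', len(energy_groups.group_edges) - 1)
print('Group edges:', energy_groups.group_edges)
print('Delayed groups:', delayed_groups)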
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.delayed_groups = delayed_groups
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Now, we can run OpenMC to generate the cross sections.
End of explanation
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
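Before processing the tallies, it can be reassuring to glance at the eigenvalue stored in the statepoint. This is a small optional addition; the k_combined attribute is assumed here and its exact form can differ between OpenMC versions.
# Combined k-effective estimate accumulated over the active batches.
print('Combined k-effective:', sp.k_combined)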
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Set the time constants for the delayed precursors (in seconds^-1)
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = -np.log(0.5) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
Explanation: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations:
$$\frac{\partial}{\partial t} C_{k,d} (t) = \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t) \nu_d \sigma_{f,x}(\mathbf{r},E',t)\Phi(\mathbf{r},E',t) - \lambda_{d} C_{k,d} (t) $$
$$C_{k,d} (t=0) = \frac{1}{\lambda_{d}} \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t=0) \nu_d \sigma_{f,x}(\mathbf{r},E',t=0)\Phi(\mathbf{r},E',t=0) $$
End of explanation
current_tally.get_pandas_dataframe().head(10)
Explanation: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
End of explanation
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
Explanation: Cross Section Visualizations
In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group.
End of explanation |
10,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Setup
Step2: Data and model
Step3: HMC
Step4: Blackjax | Python Code:
import jax
print(jax.devices())
!git clone https://github.com/google-research/google-research.git
%cd /content/google-research
!ls bnn_hmc
!pip install optax
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/bnn_hmc_gaussian.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
(SG)HMC for inferring params of a 2d Gaussian
Based on
https://github.com/google-research/google-research/blob/master/bnn_hmc/notebooks/mcmc_gaussian_test.ipynb
End of explanation
from jax.config import config
import jax
from jax import numpy as jnp
import numpy as onp
import numpy as np
import os
import sys
import time
import tqdm
import optax
import functools
from matplotlib import pyplot as plt
from bnn_hmc.utils import losses
from bnn_hmc.utils import train_utils
from bnn_hmc.utils import tree_utils
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: Setup
End of explanation
mu = jnp.zeros(
[
2,
]
)
# sigma = jnp.array([[1., .5], [.5, 1.]])
sigma = jnp.array([[1.0e-4, 0], [0.0, 1.0]])
sigma_l = jnp.linalg.cholesky(sigma)
sigma_inv = jnp.linalg.inv(sigma)
sigma_det = jnp.linalg.det(sigma)
onp.random.seed(0)
samples = onp.random.multivariate_normal(onp.asarray(mu), onp.asarray(sigma), size=1000)
plt.scatter(samples[:, 0], samples[:, 1], alpha=0.3)
plt.grid()
def log_density_fn(params):
assert params.shape == mu.shape, "Shape error"
diff = params - mu
k = mu.size
log_density = -jnp.log(2 * jnp.pi) * k / 2
log_density -= jnp.log(sigma_det) / 2
log_density -= diff.T @ sigma_inv @ diff / 2
return log_density
def log_likelihood_fn(_, params, *args, **kwargs):
return log_density_fn(params), jnp.array(jnp.nan)
def log_prior_fn(_):
return 0.0
def log_prior_diff_fn(*args):
return 0.0
fake_net_apply = None
fake_data = jnp.array([[jnp.nan]]), jnp.array([[jnp.nan]])
fake_net_state = jnp.array([jnp.nan])
Explanation: Data and model
End of explanation
step_size = 1e-1
trajectory_len = jnp.pi / 2
max_num_leapfrog_steps = int(trajectory_len // step_size + 1)
print("Leapfrog steps per iteration:", max_num_leapfrog_steps)
update, get_log_prob_and_grad = train_utils.make_hmc_update(
fake_net_apply, log_likelihood_fn, log_prior_fn, log_prior_diff_fn, max_num_leapfrog_steps, 1.0, 0.0
)
# Initial log-prob and grad values
# params = jnp.ones_like(mu)[None, :]
params = jnp.ones_like(mu)
log_prob, state_grad, log_likelihood, net_state = get_log_prob_and_grad(fake_data, params, fake_net_state)
%%time
num_iterations = 500
all_samples = []
key = jax.random.PRNGKey(0)
for iteration in tqdm.tqdm(range(num_iterations)):
(params, net_state, log_likelihood, state_grad, step_size, key, accept_prob, accepted) = update(
fake_data, params, net_state, log_likelihood, state_grad, key, step_size, trajectory_len, True
)
if accepted:
all_samples.append(onp.asarray(params).copy())
# print("It: {} \t Accept P: {} \t Accepted {} \t Log-likelihood: {}".format(
# iteration, accept_prob, accepted, log_likelihood))
len(all_samples)
log_prob, state_grad, log_likelihood, net_state
all_samples_cat = onp.stack(all_samples)
plt.scatter(all_samples_cat[:, 0], all_samples_cat[:, 1], alpha=0.3)
plt.grid()
Explanation: HMC
End of explanation
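As a quick diagnostic that is not part of the original notebook, the accepted samples should roughly reproduce the target mean and covariance; with only a few hundred draws the agreement will be loose, especially along the tiny-variance first axis.
print('acceptance fraction:', len(all_samples) / num_iterations)
print('sample mean:', all_samples_cat.mean(axis=0))
print('sample covariance:\n', onp.cov(all_samples_cat.T))
print('target covariance:\n', onp.asarray(sigma))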
!pip install blackjax
import jax
import jax.numpy as jnp
import jax.scipy.stats as stats
import matplotlib.pyplot as plt
import numpy as np
import blackjax.hmc as hmc
import blackjax.nuts as nuts
import blackjax.stan_warmup as stan_warmup
print(jax.devices())
potential = lambda x: -log_density_fn(**x)
num_integration_steps = 30
kernel_generator = lambda step_size, inverse_mass_matrix: hmc.kernel(
potential, step_size, inverse_mass_matrix, num_integration_steps
)
rng_key = jax.random.PRNGKey(0)
initial_position = {"params": np.zeros(2)}
initial_state = hmc.new_state(initial_position, potential)
print(initial_state)
%%time
nsteps = 500
final_state, (step_size, inverse_mass_matrix), info = stan_warmup.run(
rng_key,
kernel_generator,
initial_state,
nsteps,
)
%%time
kernel = nuts.kernel(potential, step_size, inverse_mass_matrix)
kernel = jax.jit(kernel)
def inference_loop(rng_key, kernel, initial_state, num_samples):
def one_step(state, rng_key):
state, _ = kernel(rng_key, state)
return state, state
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
%%time
nsamples = 500
states = inference_loop(rng_key, kernel, initial_state, nsamples)
samples = states.position["params"].block_until_ready()
print(samples.shape)
plt.scatter(samples[:, 0], samples[:, 1], alpha=0.3)
plt.grid()
Explanation: Blackjax
End of explanation |
10,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas 데이터 입출력
이 노트북의 예제를 실행하기 위해서는 datascienceschool/rpython 도커 이미지의 다음 디렉토리로 이동해야 한다.
Step1: pandas 데이터 입출력 종류
CSV
Clipboard
Excel
JSON
HTML
Python Pickling
HDF5
SAS
STATA
SQL
Google BigQuery
CSV 파일 입력
Comma Separated Values
MicroSoft EXCEL에서 export 가능
pandas.from_csv()
Step2: 컬럼 이름이 없는 경우에는 names 인수로 설정 가능
Step3: 특정한 컬럼을 인덱스로 지정하고 싶으면 index_col 인수 사용
Step4: 구분자가 comma가 아닌 경우에는 sep 인수 사용
Step5: 건너 뛰어야 할 행이 있으면 skiprows 사용
Step6: 특정한 값을 NA로 취급하고 싶으면 na_values 인수 사용
Step7: 일부 행만 읽고 싶다면 nrows 인수 사용
Step8: CSV 파일 출력
DataFrame.to_csv()
Step9: sep 인수로 구분자 변경 가능
Step10: na_rep 인수로 NA 표시 변경 가능
Step11: index, header 인수로 인덱스 및 헤더 출력 여부 결정 가능
Step12: 인터넷 상의 CSV 파일 입력
파일 path 대신 URL을 지정하면 다운로드하여 import
Step13: 인터넷 상의 데이터 베이스 자료 입력
다음과 같은 인터넷 상의 자료는 pandas_datareader 패키지의 DataReader 을 써서 바로 pandas로 입력 가능
Yahoo! Finance
Google Finance
St.Louis FED (FRED)
Kenneth French’s data library
World Bank
Google Analytics
Step14: http
Step15: https
Step16: https | Python Code:
%cd /home/dockeruser/data/pydata-book-master/
Explanation: Pandas 데이터 입출력
이 노트북의 예제를 실행하기 위해서는 datascienceschool/rpython 도커 이미지의 다음 디렉토리로 이동해야 한다.
End of explanation
!cat ../../pydata-book-master/ch06/ex1.csv
!cat ch06/ex1.csv
df = pd.read_csv('../../pydata-book-master/ch06/ex1.csv')
df
Explanation: pandas 데이터 입출력 종류
CSV
Clipboard
Excel
JSON
HTML
Python Pickling
HDF5
SAS
STATA
SQL
Google BigQuery
CSV 파일 입력
Comma Separated Values
MicroSoft EXCEL에서 export 가능
pandas.from_csv(): csv file -> DataFrame
End of explanation
!cat ch06/ex2.csv
pd.read_csv('../../pydata-book-master/ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
Explanation: 컬럼 이름이 없는 경우에는 names 인수로 설정 가능
End of explanation
!cat ch06/csv_mindex.csv
pd.read_csv('../../pydata-book-master/ch06/csv_mindex.csv', index_col=['key1', 'key2'])
Explanation: 특정한 컬럼을 인덱스로 지정하고 싶으면 index_col 인수 사용
End of explanation
!cat 'ch06/ex3.txt'
pd.read_table('../../pydata-book-master/ch06/ex3.txt', sep='\s+')
Explanation: 구분자가 comma가 아닌 경우에는 sep 인수 사용
End of explanation
!cat ch06/ex4.csv
pd.read_csv('../../pydata-book-master/ch06/ex4.csv', skiprows=[0, 2, 3])
Explanation: 건너 뛰어야 할 행이 있으면 skiprows 사용
End of explanation
!cat ch06/ex5.csv
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('../../pydata-book-master/ch06/ex5.csv', na_values=sentinels)
Explanation: 특정한 값을 NA로 취급하고 싶으면 na_values 인수 사용
End of explanation
!head ch06/ex6.csv
pd.read_csv('../../pydata-book-master/ch06/ex6.csv', nrows=3)
Explanation: 일부 행만 읽고 싶다면 nrows 인수 사용
End of explanation
df.to_csv('../../pydata-book-master/ch06/out.csv')
!cat ch06/out.csv
Explanation: CSV 파일 출력
DataFrame.to_csv(): DataFrame -> csv file
End of explanation
import sys
df.to_csv(sys.stdout, sep='|')
Explanation: sep 인수로 구분자 변경 가능
End of explanation
df.to_csv(sys.stdout, na_rep='NULL')
Explanation: na_rep 인수로 NA 표시 변경 가능
End of explanation
df.to_csv(sys.stdout, index=False, header=False)
Explanation: index, header 인수로 인덱스 및 헤더 출력 여부 결정 가능
End of explanation
titanic = pd.read_csv('http://dato.com/files/titanic.csv', index_col=0)
titanic.tail()
Explanation: 인터넷 상의 CSV 파일 입력
파일 path 대신 URL을 지정하면 다운로드하여 import
End of explanation
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2015, 1, 1)
end = datetime.datetime(2016, 8, 25)
Explanation: 인터넷 상의 데이터 베이스 자료 입력
다음과 같은 인터넷 상의 자료는 pandas_datareader 패키지의 DataReader 을 써서 바로 pandas로 입력 가능
Yahoo! Finance
Google Finance
St.Louis FED (FRED)
Kenneth French’s data library
World Bank
Google Analytics
End of explanation
df = web.DataReader("005930.KS", 'yahoo', start, end)
df.tail()
Explanation: http://finance.yahoo.com/q?s=005930.ks
End of explanation
df = web.DataReader("KRX:005930", "google", start, end)
df.tail()
Explanation: https://www.google.com/finance?cid=151610035517112
End of explanation
gdp = web.DataReader("GDP", "fred", start, end)
gdp
inflation = web.DataReader(["CPIAUCSL", "CPILFESL"], "fred", start, end)
inflation
Explanation: https://fred.stlouisfed.org/series/GDP
https://fred.stlouisfed.org/series/CPIAUCSL
https://fred.stlouisfed.org/series/CPILFESL
End of explanation |
10,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Projections
This section of the tutorial discusses map projections. If you don't know what a projection is, or are looking to learn more about how they work in geoplot, this page is for you!
I recommend following along with this tutorial interactively using Binder.
Projection and unprojection
Step1: This map is an example of an unprojected plot
Step2: But there is a better way
Step3: For a list of projections implemented in geoplot, refer to the projections reference in the cartopy documentation (cartopy is the library geoplot relies on for its projections).
Stacking projected plots
A key feature of geoplot is the ability to stack plots on top of one another.
Step4: By default, geoplot will set the extent (the area covered by the plot) to the total_bounds of the last plot stacked onto the map.
However, suppose that even though we have data for One entire United States (plus Puerto Rico) we actually want to display just data for the contiguous United States. An easy way to get this is setting the extent parameter using total_bounds.
Step5: The section of the tutorial on Customizing Plots explains the extent parameter in more detail.
Projections on subplots
It is possible to compose multiple axes together into a single panel figure in matplotlib using the subplots feature. This feature is highly useful for creating side-by-side comparisons of your plots, or for stacking your plots together into a single more informative display.
Step6: matplotlib supports subplotting projected maps using the projection argument to subplot_kw. | Python Code:
import geopandas as gpd
import geoplot as gplt
%matplotlib inline
# load the example data
contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
gplt.polyplot(contiguous_usa)
Explanation: Working with Projections
This section of the tutorial discusses map projections. If you don't know what a projection is, or are looking to learn more about how they work in geoplot, this page is for you!
I recommend following along with this tutorial interactively using Binder.
Projection and unprojection
End of explanation
boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
gplt.polyplot(boroughs)
Explanation: This map is an example of an unprojected plot: it reproduces our coordinates as if they were on a flat Cartesian plane. But remember, the Earth is not a flat surface; it's a sphere. This isn't a map of the United States that you'd seen in print anywhere because it badly distorts both of the two criteria most projections are evaluated on: shape and area.
For sufficiently small areas, the amount of distortion is very small. This map of New York City, for example, is reasonably accurate:
End of explanation
import geoplot.crs as gcrs
gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
Explanation: But there is a better way: use a projection.
A projection is a way of mapping points on the surface of the Earth into two dimensions (like a piece of paper or a computer screen). Because moving from three dimensions to two is intrinsically lossy, no projection is perfect, but some will definitely work better in certain case than others.
The most common projection used for the contiguous United States is the Albers Equal Area projection. This projection works by wrapping the Earth around a cone, one that's particularly well optimized for locations near the middle of the Northern Hemisphere (and particularly poorly for locations at the poles).
To add a projection to a map in geoplot, pass a geoplot.crs object to the projection parameter on the plot. For instance, here's what we get when we try Albers out on the contiguous United States:
End of explanation
cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax)
Explanation: For a list of projections implemented in geoplot, refer to the projections reference in the cartopy documentation (cartopy is the library geoplot relies on for its projections).
Stacking projected plots
A key feature of geoplot is the ability to stack plots on top of one another.
End of explanation
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax, extent=contiguous_usa.total_bounds)
Explanation: By default, geoplot will set the extent (the area covered by the plot) to the total_bounds of the last plot stacked onto the map.
However, suppose that even though we have data for the entire United States (plus Puerto Rico), we actually want to display just data for the contiguous United States. An easy way to get this is setting the extent parameter using total_bounds.
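If you prefer, extent can also be written out by hand as a (min longitude, min latitude, max longitude, max latitude) tuple -- the same layout that total_bounds returns. A small sketch with illustrative (not exact) coordinates for the contiguous United States:

ax = gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
# hand-written bounding box, roughly the lower 48 states
gplt.pointplot(cities, ax=ax, extent=(-125, 24, -66, 50))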
End of explanation
import matplotlib.pyplot as plt
import geoplot as gplt
f, axarr = plt.subplots(1, 2, figsize=(12, 4))
gplt.polyplot(contiguous_usa, ax=axarr[0])
gplt.polyplot(contiguous_usa, ax=axarr[1])
Explanation: The section of the tutorial on Customizing Plots explains the extent parameter in more detail.
Projections on subplots
It is possible to compose multiple axes together into a single panel figure in matplotlib using the subplots feature. This feature is highly useful for creating side-by-side comparisons of your plots, or for stacking your plots together into a single more informative display.
End of explanation
proj = gcrs.AlbersEqualArea(central_longitude=-98, central_latitude=39.5)
f, axarr = plt.subplots(1, 2, figsize=(12, 4), subplot_kw={
'projection': proj
})
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[0])
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[1])
Explanation: matplotlib supports subplotting projected maps using the projection argument to subplot_kw.
End of explanation |
10,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
오류 및 예외 처리
개요
코딩할 때 발생할 수 있는 다양한 오류 살펴 보기
오류 메시지 정보 확인 방법
예외 처리, 즉 오류가 발생할 수 있는 예외적인 상황을 미리 고려하는 방법 소개
오늘의 주요 예제
아래 코드는 raw_input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있다. 코드를 실행하면 숫자를 입력하라는 창이 나오며,
여기에 숫자 3을 입력하면 정상적으로 작동한다.
하지만, 예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
Step1: 주의
Step2: 오류를 확인하는 메시지가 처음 볼 때는 매우 생소하다.
위 오류 메시지를 간단하게 살펴보면 다음과 같다.
File "<ipython-input-37-a6097ed4dc2e>", line 1
1번 줄에서 오류 발생
sentence = 'I am a sentence
^
오류 발생 위치 명시
SyntaxError
Step3: 오류의 종류
앞서 예제들을 통해 살펴 보았듯이 다양한 종류의 오류가 발생하며,
코드가 길어지거나 복잡해지면 오류가 발생할 가능성은 점차 커진다.
오류의 종류를 파악하면 어디서 왜 오류가 발생하였는지를 보다 쉽게 파악하여
코드를 수정할 수 있게 된다.
따라서 오류의 발생원인을 바로 알아낼 수 있어야 하며 이를 위해서는 오류 메시지를
제대로 확인할 수 있어야 한다.
하지만 여기서는 언급된 예제 정도의 수준만 다루고 넘어간다.
코딩을 하다 보면 어차피 다양한 오류와 마주치게 될 텐데 그때마다
스스로 오류의 내용과 원인을 확인해 나가는 과정을 통해
보다 많은 경험을 쌓는 길 외에는 달리 방법이 없다.
예외 처리
코드에 문법 오류가 포함되어 있는 경우 아예 실행되지 않는다.
그렇지 않은 경우에는 일단 실행이 되고 중간에 오류가 발생하면 바로 멈춰버린다.
이렇게 중간에 오류가 발생할 수 있는 경우를 미리 생각하여 대비하는 과정을
예외 처리(exception handling)라고 부른다.
예를 들어, 오류가 발생하더라도 오류발생 이전까지 생성된 정보들을 저장하거나, 오류발생 이유를 좀 더 자세히 다루거나, 아니면 오류발생에 대한 보다 자세한 정보를 사용자에게 알려주기 위해 예외 처리를 사용한다.
예제
아래 코드는 raw_input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있으며, 코드에는 문법적 오류가 없다.
그리고 코드를 실행하면 숫자를 입력하라는 창이 나온다.
여기에 숫자 3을 입력하면 정상적으로 작동하지만
예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
Step4: 3.2를 입력했을 때 오류가 발생하는 이유는 int() 함수가 정수 모양의 문자열만
처리할 수 있기 때문이다.
사실 정수들의 제곱을 계산하는 프로그램을 작성하였지만 경우에 따라
정수 이외의 값을 입력하는 경우가 발생하게 되며, 이런 경우를 대비해야 한다.
즉, 오류가 발생할 것을 미리 예상해야 하며, 어떻게 대처해야 할지 준비해야 하는데,
try ... except ...문을 이용하여 예외를 처리하는 방식을 활용할 수 있다.
Step5: 오류 종류에 맞추어 다양한 대처를 하기 위해서는 오류의 종류를 명시하여 예외처리를 하면 된다.
아래 코드는 입력 값에 따라 다른 오류가 발생하고 그에 상응하는 방식으로 예외처리를 실행한다.
값 오류(ValueError)의 경우
Step6: 0으로 나누기 오류(ZeroDivisionError)의 경우
Step7: 주의
Step9: raise 함수
강제로 오류를 발생시키고자 하는 경우에 사용한다.
예제
어떤 함수를 정확히 정의하지 않은 상태에서 다른 중요한 일을 먼저 처리하고자 할 때
아래와 같이 함수를 선언하고 넘어갈 수 있다.
그런데 아래 함수를 제대로 선언하지 않은 채로 다른 곳에서 호출하면
"아직 정의되어 있지 않음"
이란 메시지로 정보를 알려주게 된다.
Step11: 주의
Step13: 코드의 안전성 문제
문법 오류 또는 실행 중에 오류가 발생하지 않는다 하더라도 코드의 안전성이 보장되지는 않는다.
코드의 안정성이라 함은 코드를 실행할 때 기대하는 결과가 산출된다는 것을 보장한다는 의미이다.
예제
아래 코드는 숫자의 제곱을 리턴하는 square() 함수를 제대로 구현하지 못한 경우를 다룬다.
Step14: 위 함수를 아래와 같이 호출하면 오류가 전혀 발생하지 않지만,
엉뚱한 값을 리턴한다.
Step15: 주의
Step16: 오류에 대한 보다 자세한 정보
파이썬에서 다루는 오류에 대한 보다 자세한 정보는 아래 사이트들에 상세하게 안내되어 있다.
파이썬 기본 내장 오류 정보 문서
Step17: 아래 내용이 충족되도록 위 코드를 수정하라.
나눗셈이 부동소수점으로 계산되도록 한다.
0이 아닌 숫자가 입력될 경우 100을 그 숫자로 나눈다.
0이 입력될 경우 0이 아닌 숫자를 입력하라고 전달한다.
숫자가 아닌 값이 입력될 경우 숫자를 입력하라고 전달한다.
견본답안 | Python Code:
from __future__ import print_function
input_number = raw_input("A number please: ")
number = int(input_number)
print("제곱의 결과는", number**2, "입니다.")
Explanation: 오류 및 예외 처리
개요
코딩할 때 발생할 수 있는 다양한 오류 살펴 보기
오류 메시지 정보 확인 방법
예외 처리, 즉 오류가 발생할 수 있는 예외적인 상황을 미리 고려하는 방법 소개
오늘의 주요 예제
아래 코드는 raw_input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있다. 코드를 실행하면 숫자를 입력하라는 창이 나오며,
여기에 숫자 3을 입력하면 정상적으로 작동한다.
하지만, 예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
End of explanation
sentence = 'I am a sentence
Explanation: 주의: 파이썬 3의 경우 input() 함수를 raw_input() 대신에 사용해야 한다.
위 코드는 정수들의 제곱을 계산하는 프로그램이다.
하지만 사용자가 경우에 따라 정수 이외의 값을 입력하면 시스템이 다운된다.
이에 대한 해결책을 다루고자 한다.
오류 예제
먼저 오류의 다양한 예제를 살펴보자.
다음 코드들은 모두 오류를 발생시킨다.
예제: 0으로 나누기 오류
python
4.6/0
오류 설명: 0으로 나눌 수 없다.
예제: 문법 오류
python
sentence = 'I am a sentence
오류 설명: 문자열 양 끝의 따옴표가 짝이 맞아야 한다.
* 작은 따옴표끼리 또는 큰 따옴표끼리
예제: 들여쓰기 문법 오류
python
for i in range(3):
j = i * 2
print(i, j)
오류 설명: 2번 줄과 3번 줄의 들여쓰기 정도가 동일해야 한다.
예제: 자료형 오류
```python
new_string = 'cat' - 'dog'
new_string = 'cat' * 'dog'
new_string = 'cat' / 'dog'
new_string = 'cat' + 3
new_string = 'cat' - 3
new_string = 'cat' / 3
```
오류 설명: 문자열 자료형끼리의 합, 문자열과 정수의 곱셈만 정의되어 있다.
예제: 이름 오류
python
print(party)
오류 설명: 미리 선언된 변수만 사용할 수 있다.
예제: 인덱스 오류
python
a_string = 'abcdefg'
a_string[12]
오류 설명: 인덱스는 문자열의 길이보다 작은 수만 사용할 수 있다.
예제: 값 오류
python
int(a_string)
오류 설명: int() 함수는 정수로만 구성된 문자열만 처리할 수 있다.
예제: 속성 오류
python
print(a_string.len())
오류 설명: 문자열 자료형에는 len() 메소드가 존재하지 않는다.
주의: len() 이라는 함수는 문자열의 길이를 확인하지만 문자열 메소드는 아니다.
이후에 다룰 리스트, 튜플 등에 대해서도 사용할 수 있는 함수이다.
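len() 함수가 여러 자료형에 공통으로 적용된다는 점은 아래와 같은 간단한 예로 확인할 수 있다.

print(len('abcdefg'))      # 7, 문자열의 길이
print(len([1, 2, 3]))      # 3, 리스트의 길이
print(len((True, False)))  # 2, 튜플의 길이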
오류 확인
앞서 언급한 코드들을 실행하면 오류가 발생하고 어디서 어떤 오류가 발생하였는가에 대한 정보를
파이썬 해석기가 바로 알려 준다.
예제
End of explanation
a = 0
4/a
Explanation: 오류를 확인하는 메시지가 처음 볼 때는 매우 생소하다.
위 오류 메시지를 간단하게 살펴보면 다음과 같다.
File "<ipython-input-37-a6097ed4dc2e>", line 1
1번 줄에서 오류 발생
sentence = 'I am a sentence
^
오류 발생 위치 명시
SyntaxError: EOL while scanning string literal
오류 종류 표시: 문법 오류(SyntaxError)
예제
아래 예제는 0으로 나눌 때 발생하는 오류를 나타낸다.
오류에 대한 정보를 잘 살펴보면서 어떤 내용을 담고 있는지 확인해 보아야 한다.
End of explanation
from __future__ import print_function
number_to_square = raw_input("A number please")
# number_to_square 변수의 자료형이 문자열(str)임에 주의하라.
# 따라서 연산을 하고 싶으면 정수형(int)으로 형변환을 먼저 해야 한다.
number = int(number_to_square)
print("제곱의 결과는", number**2, "입니다.")
Explanation: 오류의 종류
앞서 예제들을 통해 살펴 보았듯이 다양한 종류의 오류가 발생하며,
코드가 길어지거나 복잡해지면 오류가 발생할 가능성은 점차 커진다.
오류의 종류를 파악하면 어디서 왜 오류가 발생하였는지를 보다 쉽게 파악하여
코드를 수정할 수 있게 된다.
따라서 오류의 발생원인을 바로 알아낼 수 있어야 하며 이를 위해서는 오류 메시지를
제대로 확인할 수 있어야 한다.
하지만 여기서는 언급된 예제 정도의 수준만 다루고 넘어간다.
코딩을 하다 보면 어차피 다양한 오류와 마주치게 될 텐데 그때마다
스스로 오류의 내용과 원인을 확인해 나가는 과정을 통해
보다 많은 경험을 쌓는 길 외에는 달리 방법이 없다.
예외 처리
코드에 문법 오류가 포함되어 있는 경우 아예 실행되지 않는다.
그렇지 않은 경우에는 일단 실행이 되고 중간에 오류가 발생하면 바로 멈춰버린다.
이렇게 중간에 오류가 발생할 수 있는 경우를 미리 생각하여 대비하는 과정을
예외 처리(exception handling)라고 부른다.
예를 들어, 오류가 발생하더라도 오류발생 이전까지 생성된 정보들을 저장하거나, 오류발생 이유를 좀 더 자세히 다루거나, 아니면 오류발생에 대한 보다 자세한 정보를 사용자에게 알려주기 위해 예외 처리를 사용한다.
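예를 들어 여러 값을 처리하는 도중 일부에서 오류가 발생하더라도, 그때까지 계산된 결과를 잃지 않도록 하는 간단한 예는 다음과 같다.

results = []
for item in ['1', '2', 'three', '4']:
    try:
        results.append(int(item) ** 2)
    except ValueError:
        print("정수가 아닌 값은 건너뜁니다:", item)
print("지금까지 계산된 결과:", results)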
예제
아래 코드는 raw_input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있으며, 코드에는 문법적 오류가 없다.
그리고 코드를 실행하면 숫자를 입력하라는 창이 나온다.
여기에 숫자 3을 입력하면 정상적으로 작동하지만
예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.
End of explanation
number_to_square = raw_input("A number please:")
try:
number = int(number_to_square)
print("제곱의 결과는", number ** 2, "입니다.")
except:
print("정수를 입력해야 합니다.")
Explanation: 3.2를 입력했을 때 오류가 발생하는 이유는 int() 함수가 정수 모양의 문자열만
처리할 수 있기 때문이다.
사실 정수들의 제곱을 계산하는 프로그램을 작성하였지만 경우에 따라
정수 이외의 값을 입력하는 경우가 발생하게 되며, 이런 경우를 대비해야 한다.
즉, 오류가 발생할 것을 미리 예상해야 하며, 어떻게 대처해야 할지 준비해야 하는데,
try ... except ...문을 이용하여 예외를 처리하는 방식을 활용할 수 있다.
End of explanation
number_to_square = raw_input("A number please: ")
try:
number = int(number_to_square)
a = 5/(number - 4)
print("결과는", a, "입니다.")
except ValueError:
print("정수를 입력해야 합니다.")
except ZeroDivisionError:
print("4는 빼고 하세요.")
Explanation: 오류 종류에 맞추어 다양한 대처를 하기 위해서는 오류의 종류를 명시하여 예외처리를 하면 된다.
아래 코드는 입력 값에 따라 다른 오류가 발생하고 그에 상응하는 방식으로 예외처리를 실행한다.
값 오류(ValueError)의 경우
End of explanation
number_to_square = raw_input("A number please: ")
try:
number = int(number_to_square)
a = 5/(number - 4)
print("결과는", a, "입니다.")
except ValueError:
print("정수를 입력해야 합니다.")
except ZeroDivisionError:
print("4는 빼고 하세요.")
Explanation: 0으로 나누기 오류(ZeroDivisionError)의 경우
End of explanation
try:
a = 1/0
except ValueError:
print("This program stops here.")
Explanation: 주의: 이와 같이 발생할 수 있는 예외를 가능한 한 모두 염두에 두는 프로그램을 구현해야 하는 일은
매우 어려운 일이다.
앞서 보았듯이 오류의 종류를 정확히 알 필요가 발생한다.
다음 예제에서 보듯이 오류의 종류를 틀리게 명시하면 예외 처리가 제대로 작동하지 않는다.
End of explanation
def to_define():
아주 복잡하지만 지금 당장 불필요
raise NotImplementedError("아직 정의되어 있지 않음")
print(to_define())
Explanation: raise 함수
강제로 오류를 발생시키고자 하는 경우에 사용한다.
예제
어떤 함수를 정확히 정의하지 않은 상태에서 다른 중요한 일을 먼저 처리하고자 할 때
아래와 같이 함수를 선언하고 넘어갈 수 있다.
그런데 아래 함수를 제대로 선언하지 않은 채로 다른 곳에서 호출하면
"아직 정의되어 있지 않음"
이란 메시지로 정보를 알려주게 된다.
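raise 문은 아래와 같이 입력값을 검사해서 직접 오류를 발생시키는 용도로도 활용할 수 있다(간단한 예시).

def sqrt_of(x):
    if x < 0:
        raise ValueError("음수의 제곱근은 계산하지 않습니다.")
    return x ** 0.5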
End of explanation
def to_define1():
아주 복잡하지만 지금 당장 불필요
print(to_define1())
Explanation: 주의: 오류 처리를 사용하지 않으면 오류 메시지가 보이지 않을 수도 있음에 주의해야 한다.
End of explanation
def square( number ):
정수를 인자로 입력 받아 제곱을 리턴한다.
square_of_number = number * 2
return square_of_number
Explanation: 코드의 안전성 문제
문법 오류 또는 실행 중에 오류가 발생하지 않는다 하더라도 코드의 안전성이 보장되지는 않는다.
코드의 안정성이라 함은 코드를 실행할 때 기대하는 결과가 산출된다는 것을 보장한다는 의미이다.
예제
아래 코드는 숫자의 제곱을 리턴하는 square() 함수를 제대로 구현하지 못한 경우를 다룬다.
End of explanation
square(3)
Explanation: 위 함수를 아래와 같이 호출하면 오류가 전혀 발생하지 않지만,
엉뚱한 값을 리턴한다.
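참고로 아래와 같은 간단한 assert 문을 함께 작성해 두면, 잘못 구현된 square() 함수가 곧바로 AssertionError를 발생시키므로 이런 실수를 일찍 발견할 수 있다.

assert square(3) == 9, "square(3)은 9가 되어야 한다."
assert square(5) == 25, "square(5)는 25가 되어야 한다."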
End of explanation
help(square)
Explanation: 주의: help() 를 이용하여 어떤 함수가 무슨 일을 하는지 내용을 확인할 수 있다.
단, 함수를 정의할 때 함께 적힌 문서화 문자열(docstring) 내용이 확인된다.
따라서, 함수를 정의할 때 문서화 문자열에 가능한 유효한 정보를 입력해 두어야 한다.
End of explanation
from __future__ import print_function
number_to_square = raw_input("A number to divide 100: ")
number = int(number_to_square)
print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.")
Explanation: 오류에 대한 보다 자세한 정보
파이썬에서 다루는 오류에 대한 보다 자세한 정보는 아래 사이트들에 상세하게 안내되어 있다.
파이썬 기본 내장 오류 정보 문서:
https://docs.python.org/3.4/library/exceptions.html
파이썬 예외처리 정보 문서:
https://docs.python.org/3.4/tutorial/errors.html
연습문제
연습
아래 코드는 100을 입력한 값으로 나누는 함수이다.
다만 0을 입력할 경우 0으로 나누기 오류(ZeroDivisionError)가 발생한다.
End of explanation
number_to_square = raw_input("A number to divide 100: ")
try:
number = float(number_to_square)
print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.")
except ZeroDivisionError:
raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.')
except ValueError:
raise ValueError('숫자를 입력하세요.')
Explanation: 아래 내용이 충족되도록 위 코드를 수정하라.
나눗셈이 부동소수점으로 계산되도록 한다.
0이 아닌 숫자가 입력될 경우 100을 그 숫자로 나눈다.
0이 입력될 경우 0이 아닌 숫자를 입력하라고 전달한다.
숫자가 아닌 값이 입력될 경우 숫자를 입력하라고 전달한다.
견본답안:
End of explanation |
10,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: 1. Multiplication
The product of Gaussians comes up, for example, when the sampling distributions for different data points are independent Gaussians, or when the sampling distribution and prior are both Gaussian (this is a conjugate pair).
So, consider
$\mathrm{Normal}(x|\mu_1,\sigma_1) \, \mathrm{Normal}(x|\mu_2,\sigma_2)$.
This can be manipulated into a different product of two Gaussians, with $x$ appearing in only one of them. Do so. (Note that this is a proportionality, not an equality - the coefficient in front will not perfectly normalize things when you're done.)
$\mathrm{Normal}(x|\mu_1,\sigma_1) \, \mathrm{Normal}(x|\mu_2,\sigma_2) \propto \mathrm{Normal}(x|\mu_a,\sigma_a) \, \mathrm{Normal}(0|\mu_b,\sigma_b)$.
If $x$ were a model parameter, and $\mu_i$ and $\sigma_i$ were independent measurements of $x$ with error bars, how do you interpret each term of this factorization?
math, math, math, math,
Check your solution by plugging in some values for $x$, $\mu_i$ and $\sigma_i$. The function below returns the $\frac{(x-\mu)^2}{\sigma^2}$ part of the PDF, which is what we care about here (since it's where $x$ appears).
Step2: 2. Conjugacy
When the sampling distribution is normal with a fixed variance, the conjugate prior for the mean is also normal. Show this for the case of a single data point, $y$; that is,
$p(\mu|y,\sigma) \propto \mathrm{Normal}(y|\mu,\sigma)\,\mathrm{Normal}(\mu|m_0,s_0) \propto \mathrm{Normal}(\mu|m_1,s_1)$
and find $m_1$ and $s_1$ in terms of $y$, $\sigma$, $m_0$ and $s_0$.
math, math, math, math
Again, check your work by choosing some fiducial values and
looking at the ratio $\mathrm{Normal}(y|\mu,\sigma)\,\mathrm{Normal}(\mu|m_0,s_0) / \mathrm{Normal}(\mu|m_1,s_1)$ over a range of $\mu$. It should be constant.
Step3: 3. Linear transformation
Consider the distribution
$\mathrm{Normal}\left[y\,\big|\,\mu_y(x;a,b),\sigma_y\right]$,
where $\mu_y(x;a,b)=a+bx$. Re-express this in terms of a distribution over $x$, i.e.
$\mathrm{Normal}\left[x|\mu_x(y;a,b),\sigma_x(y;a,b)\right]$.
math, math, math, math
4. Classical weighted least squares
Classical WLS is a simple method for fitting a line to data that you've almost certainly seen before. Consider data consisting of $n$ triplets $(x_i,y_i,\sigma_i)$, where $x_i$ are assumed to be known perfectly and $\sigma_i$ is interpreted as a "measurement error" for $y_i$. WLS maximizes the likelihood function
$\mathcal{L}(a,b;x,y,\sigma) = \prod_{i=1}^n \mathrm{Normal}(y_i|a+bx_i,\sigma_i)$.
In fact, we can get away with being more general and allowing for the possibility that the different measurements are not independent, with their measurement errors jointly characterized by a known covariance matrix, $\Sigma$, rather than the individual $\sigma_i$
Step4: The next cell uses the statsmodels package to perform the WLS calculations. You are encouraged to implement the matrix algebra above to verify the results. What we get at the end are $\mu_\beta$ and $\Sigma_\beta$, as defined above.
Step5: Now, compute the parameters of the posterior for $\beta$ based on $\mu_\beta$ and $\Sigma_\beta$ (parameters that appear in the sampling distribution) and the parameters of the conjugate prior. Set the prior parameters to be equivalent to the uniform distribution for the check below (you can put in something different to see how it looks later).
Transform post_mean to a shape (2,) numpy array for convenience (as opposed to, say, a 2x1 matrix).
Step6: Compare the WLS and posterior parameters (they should be identical for a uniform prior)
Step7: Below, we can compare your analytic solution to a brute-force calculation of the posterior | Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import numpy as np
import scipy.stats as st
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Tutorial: Gaussians and Least Squares
So far in the notes and problems, we've mostly avoided one of the most commonly used probability distributions, the Gaussian or normal distribution:
$\mathrm{Normal}(x|\mu,\sigma) \equiv p_\mathrm{Normal}(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$. [Endnote 1]
There are two reasons for this:
1. The symmetry between $x$ and $\mu$ makes it easy to miss the distinction between the sampling distribution and the likelihood function, and to conflate the model parameter $\sigma$ with an "error bar" associated strictly with the data (which it may or may not be).
2. The assumption of Gaussian PDFs is baked into various classical statistics methods to the extent that it isn't always obvious to the user. As always, it's important to think about whether an assumption or approximation is justified, and thus to see examples of when it is not.
That said, it is certainly common to use Gaussian distributions in practice, particularly in cases where
1. the approximation is well justified, as in the large-count limit of the Poisson distribution (typical of optical astronomy and longer wavelengths); or
2. we are effectively handed a table of data with "error bars" and have no better alternative than to assume a Gaussian sampling distribution.
Gaussians have lots of nice mathematical features that make them convenient to work with when we can. For example, see a list of identities for the multivariate Gaussian here or here.
There are a couple of cases that it's useful to work through if you haven't before, to build intuition. We'll do that here, with:
the product of two Gaussians
showing conjugacy
linear transformations
extending classical weighted least squares
End of explanation
TBC()
# pick some values (where m is mu, s sigma)
# x =
# m1 =
# s1 =
# m2 =
# s2 =
# compute things
# sa =
# ma =
# mb =
# sb =
def exp_part(y, m, s):
return ((y - m) / s)**2
print('This should be a pretty small number:',
exp_part(x,m1,s1) + exp_part(x,m2,s2) - ( exp_part(x,ma,sa) + exp_part(0,mb,sb) ) )
Explanation: 1. Multiplication
The product of Gaussians comes up, for example, when the sampling distributions for different data points are independent Gaussians, or when the sampling distribution and prior are both Gaussian (this is a conjugate pair).
So, consider
$\mathrm{Normal}(x|\mu_1,\sigma_1) \, \mathrm{Normal}(x|\mu_2,\sigma_2)$.
This can be manipulated into a different product of two Gaussians, with $x$ appearing in only one of them. Do so. (Note that this is a proportionality, not an equality - the coefficient in front will not perfectly normalize things when you're done.)
$\mathrm{Normal}(x|\mu_1,\sigma_1) \, \mathrm{Normal}(x|\mu_2,\sigma_2) \propto \mathrm{Normal}(x|\mu_a,\sigma_a) \, \mathrm{Normal}(0|\mu_b,\sigma_b)$.
If $x$ were a model parameter, and $\mu_i$ and $\sigma_i$ were independent measurements of $x$ with error bars, how do you interpret each term of this factorization?
math, math, math, math,
Check your solution by plugging in some values for $x$, $\mu_i$ and $\sigma_i$. The function below returns the $\frac{(x-\mu)^2}{\sigma^2}$ part of the PDF, which is what we care about here (since it's where $x$ appears).
End of explanation
TBC()
# pick some values
# y =
# sigma =
# m0 =
# s0 =
# compute things
# s1 =
# m1 =
# plot
mugrid = np.arange(-1.0, 2.0, 0.01)
# we'll compare the log-probabilities, since that's a good habit to be in
diff = st.norm.logpdf(y, loc=mugrid, scale=sigma)+st.norm.logpdf(mugrid, loc=m0, scale=s0) - st.norm.logpdf(mugrid, loc=m1, scale=s1)
print('This should be a pretty small number, and constant:')
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.plot(mugrid, diff, 'b-');
plt.xlabel(r'$\mu$');
plt.ylabel('log-posterior difference');
Explanation: 2. Conjugacy
When the sampling distribution is normal with a fixed variance, the conjugate prior for the mean is also normal. Show this for the case of a single data point, $y$; that is,
$p(\mu|y,\sigma) \propto \mathrm{Normal}(y|\mu,\sigma)\,\mathrm{Normal}(\mu|m_0,s_0) \propto \mathrm{Normal}(\mu|m_1,s_1)$
and find $m_1$ and $s_1$ in terms of $y$, $\sigma$, $m_0$ and $s_0$.
math, math, math, math
Again, check your work by choosing some fiducial values and
looking at the ratio $\mathrm{Normal}(y|\mu,\sigma)\,\mathrm{Normal}(\mu|m_0,s_0) / \mathrm{Normal}(\mu|m_1,s_1)$ over a range of $\mu$. It should be constant.
End of explanation
# generate some fake data
a = 0.0
b = 1.0
n = 10
x = st.norm.rvs(size=n)
sigma = st.uniform.rvs(1.0, 2.0, size=n)
y = st.norm.rvs(loc=a+b*x, scale=sigma, size=n)
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(x, y, yerr=sigma, fmt='bo');
plt.xlabel('x');
plt.ylabel('y');
Explanation: 3. Linear transformation
Consider the distribution
$\mathrm{Normal}\left[y\,\big|\,\mu_y(x;a,b),\sigma_y\right]$,
where $\mu_y(x;a,b)=a+bx$. Re-express this in terms of a distribution over $x$, i.e.
$\mathrm{Normal}\left[x|\mu_x(y;a,b),\sigma_x(y;a,b)\right]$.
math, math, math, math
4. Classical weighted least squares
Classical WLS is a simple method for fitting a line to data that you've almost certainly seen before. Consider data consisting of $n$ triplets $(x_i,y_i,\sigma_i)$, where $x_i$ are assumed to be known perfectly and $\sigma_i$ is interpreted as a "measurement error" for $y_i$. WLS maximizes the likelihood function
$\mathcal{L}(a,b;x,y,\sigma) = \prod_{i=1}^n \mathrm{Normal}(y_i|a+bx_i,\sigma_i)$.
In fact, we can get away with being more general and allowing for the possibility that the different measurements are not independent, with their measurement errors jointly characterized by a known covariance matrix, $\Sigma$, rather than the individual $\sigma_i$:
$\mathcal{L}(a,b;x,y,\Sigma) = \mathrm{Normal}(y|X\beta,\Sigma) = \frac{1}{\sqrt{(2\pi)^n|\Sigma|}}\exp \left[-\frac{1}{2}(y-X\beta)^\mathrm{T}\Sigma^{-1}(y-X\beta)\right]$,
where $X$ is called the design matrix, with each row equal to $(1, x_i)$, and $\beta = \left(\begin{array}{c}a\b\end{array}\right)$.
With a certain amount of algebra, it can be shown that $\mathcal{L}$ is proportional to a bivariate Gaussian over $\beta$,
$\mathcal{L} \propto \mathrm{Normal}(\beta | \mu_\beta, \Sigma_\beta)$,
with
$\Sigma_\beta = (X^\mathrm{T}\Sigma^{-1}X)^{-1}$;
$\mu_\beta = \Sigma_\beta X^\mathrm{T}\Sigma^{-1} y$.
In classical WLS, $\mu_\beta$ is the "best fit" estimate of $a$ and $b$, and $\Sigma_\beta$ is the covariance of the standard errors on those parameters.
The relative simplicity of the computations above, not to mention the fact that they are efficiently implemented in numerous packages, can be useful even in situations beyond the assumption-heavy scenario where WLS is derived. As a simple example, consider a case where the sampling distribution corresponds to the likelihood function above, but we wish to use an informative prior on $a$ and $b$.
Taking advantage of the results you derived above (all of which have straightforward multivariate analogs),
1. What is the form of prior, $p(a,b|\alpha)$, that makes this problem conjugate? (Here $\alpha$ is a stand-in for whatever parameters determine the prior.)
2. What are the form and parameters of the posterior, $p(a,b|x,y,\Sigma,\alpha)$?
3. Verify that you recover the WLS solution in the limit of the prior being uniform over the $(a,b)$ plane.
1.
Below, we will explicitly show the correspondance in (3) for a WLS fit of some mock data.
End of explanation
import statsmodels.api as sm
model = sm.WLS(y, sm.add_constant(x), weights=sigma**-2)
wls = model.fit()
mu_beta = np.matrix(wls.params).T # cast as a column vector
Sigma_beta = np.asmatrix(wls.normalized_cov_params)
Explanation: The next cell uses the statsmodels package to perform the WLS calculations. You are encouraged to implement the matrix algebra above to verify the results. What we get at the end are $\mu_\beta$ and $\Sigma_\beta$, as defined above.
End of explanation
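If you do want to check the matrix algebra by hand, a minimal numpy sketch (using the x, y and sigma arrays generated above, and assuming independent errors so that $\Sigma$ is diagonal) looks like this; both quantities should match the statsmodels output above.

X = np.column_stack([np.ones_like(x), x])       # design matrix, rows (1, x_i)
Sigma_inv = np.diag(sigma**-2)                  # inverse covariance for independent errors
Sigma_beta_check = np.linalg.inv(X.T @ Sigma_inv @ X)
mu_beta_check = Sigma_beta_check @ X.T @ Sigma_inv @ y
print(mu_beta_check)       # compare with wls.params
print(Sigma_beta_check)    # compare with wls.normalized_cov_params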
TBC()
# define prior parameters
# do some calculations, possibly
# parameters of the posterior:
# post_cov = ...
# post_mean = ...
Explanation: Now, compute the parameters of the posterior for $\beta$ based on $\mu_\beta$ and $\Sigma_\beta$ (parameters that appear in the sampling distribution) and the parameters of the conjugate prior. Set the prior parameters to be equivalent to the uniform distribution for the check below (you can put in something different to see how it looks later).
Transform post_mean to a shape (2,) numpy array for convenience (as opposed to, say, a 2x1 matrix).
End of explanation
print('WLS mean and covariance:')
print(mu_beta)
print(Sigma_beta)
print('Posterior mean and covariance:')
print(post_mean)
print(post_cov)
Explanation: Compare the WLS and posterior parameters (they should be identical for a uniform prior):
End of explanation
def log_post_brute(a, b):
like = np.sum( st.norm.logpdf(y, loc=a+b*x, scale=sigma) )
prior = st.multivariate_normal.logpdf([a,b], mean=np.asarray(prior_mean)[:,0], cov=prior_cov)
return prior + like
print('Difference between elegant and brute-force log posteriors for some random parameter values:')
print('(The third column should be basically constant, though non-zero.)\n')
for i in range(10):
a = np.random.rand() * 10.0 - 5.0
b = np.random.rand() * 10.0 - 5.0
diff = st.multivariate_normal.logpdf([a,b], mean=post_mean, cov=post_cov) - log_post_brute(a,b)
print([a, b, diff])
Explanation: Below, we can compare your analytic solution to a brute-force calculation of the posterior:
End of explanation |
10,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
10,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Crear vocabulario
En un principio partiremos de las características de HoG para crear nuestro vocabulario, aunque se podría hacer con cualquier otras.
Importamos las características
Step1: B. Construcción del vocabulario mediante un algoritmo de clustering
La razón por la que utilizamos un algoritmo de clustering es para la agrupación de dichas palabras en un determinado número de grupos. De manera que estos grupos de palabras resulten en patrones visuales que aporten mayor información al clasificador y, por lo tanto, nos permitan llevar a cabo una clasificación más eficiente.
Vamos a proceder a entrenar dos variantes de algoritmos de clustering
Step2: Serializamos Kmeans | Python Code:
import pickle
path = '../../rsc/obj/'
X_train_path = path + 'X_train.sav'
train_features = pickle.load(open(X_train_path, 'rb'))
# import pickle # Módulo para serializar
# import numpy as np
# path = '..//..//rsc//obj//BoW_features//'
# for i in (15000,30000,45000,53688):
# daisy_features_path = path + 'BoW_features'+ str(i) +'.sav'
# if i == 15000:
# train_features = pickle.load(open(daisy_features_path, 'rb'))
# set_to_add = pickle.load(open(daisy_features_path, 'rb'))
# train_features = np.vstack((train_features,set_to_add))
Explanation: 2. Crear vocabulario
En un principio partiremos de las características de HoG para crear nuestro vocabulario, aunque se podría hacer con cualquier otras.
Importamos las características:
End of explanation
%%time
from sklearn.cluster import MiniBatchKMeans as MiniKMeans
import warnings
warnings.filterwarnings("ignore")
# Se inicializa el algoritmo de Kmeans indicando el número de clusters
mini_kmeans = MiniKMeans(500)
# se construye el cluster con todas las características del conjunto de entrenamiento
mini_kmeans.fit(train_features)
Explanation: B. Construcción del vocabulario mediante un algoritmo de clustering
La razón por la que utilizamos un algoritmo de clustering es para la agrupación de dichas palabras en un determinado número de grupos. De manera que estos grupos de palabras resulten en patrones visuales que aporten mayor información al clasificador y, por lo tanto, nos permitan llevar a cabo una clasificación más eficiente.
Vamos a proceder a entrenar dos variantes de algoritmos de clustering:
- KMeans
- MiniBatchKMeans
End of explanation
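Como referencia, un esbozo de la otra variante (KMeans completo), que se entrena de forma análoga aunque con este volumen de características suele tardar bastante más que MiniBatchKMeans:

from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=500)
kmeans.fit(train_features)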
import pickle # Módulo para serializar
path = '../../rsc/obj/'
mini_kmeans_path = path + 'mini_kmeans.sav'
pickle.dump(mini_kmeans, open(mini_kmeans_path, 'wb'))
Explanation: Serializamos Kmeans
End of explanation |
10,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
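For ENUM properties with cardinality 1.N the header reads "PROPERTY VALUE(S)"; one plausible completion (an assumption about the ES-DOC toolkit, not confirmed here) is one DOC.set_value call per selected choice, e.g.
# DOC.set_value("Mass adjustment")              # hypothetical
# DOC.set_value("Concentrations positivity")    # hypothetical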
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
10,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sveučilište u Zagrebu<br>
Fakultet elektrotehnike i računarstva
Strojno učenje
<a href="http
Step1: Sadržaj
Step2: Nagib sigmoide može se regulirati množenjem ulaza određenim faktorom
Step3: Derivacija sigmoidalne funkcije
Step4: Ako $y=1$, funkcija kažnjava model to više što je njegov izlaz manji od jedinice. Slično, ako $y=0$, funkcija kažnjava model to više što je njegov izlaz veći od nule
Intuitivno se ovakva funkcija čini u redu, ali je pitanje kako smo do nje došli
Izvod
Funkciju gubitka izvest ćemo iz funkcije pogreške
Podsjetnik
Step5: Q
Step6: Minimizacija pogreške
$$
\begin{align}
E(\mathbf{w}) &=
\sum_{i=1}^N L\big(h(\mathbf{x}^{(i)}|\mathbf{w}),y^{(i)}\big)\\
L(h(\mathbf{x}),y) &= - y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)\\
h(\mathbf{x}) &= \sigma(\mathbf{w}^\intercal\mathbf{x}) = \frac{1}{1 + \exp(-\mathbf{w}^\intercal\mathbf{x})}
\end{align}
$$
Ne postoji rješenje u zatvorenoj formi (zbog nelinearnosti funkcije $\sigma$)
Minimiziramo gradijentnim spustom | Python Code:
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
Explanation: Sveučilište u Zagrebu<br>
Fakultet elektrotehnike i računarstva
Strojno učenje
<a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a>
Ak. god. 2015./2016.
Bilježnica 7: Logistička regresija
(c) 2015 Jan Šnajder
<i>Verzija: 0.2 (2015-11-16)</i>
End of explanation
def sigm(x): return 1 / (1 + sp.exp(-x))
xs = sp.linspace(-10, 10)
plt.plot(xs, sigm(xs));
Explanation: Sadržaj:
Model logističke regresije
Gubitak unakrsne entropije
Minimizacija pogreške
Poveznica s generativnim modelom
Usporedba linearnih modela
Sažetak
Model logističke regresije
Podsjetnik: poopćeni linearni modeli
$$
h(\mathbf{x}) = \color{red}{f\big(}\mathbf{w}^\intercal\tilde{\mathbf{x}}\color{red}{\big)}
$$
$f : \mathbb{R}\to[0,1]$ ili $f : \mathbb{R}\to[-1,+1]$ je aktivacijska funkcija
Linearna granica u ulaznom prostoru (premda je $f$ nelinearna)
Međutim, ako preslikavamo s $\boldsymbol\phi(\mathbf{x})$ u prostor značajki, granica u ulaznom prostoru može biti nelinearna
Model nelinearan u parametrima (jer je $f$ nelinearna)
Komplicira optimizaciju (nema rješenja u zatvorenoj formi)
Podsjetnik: klasifikacija regresijom
Model:
$$
h(\mathbf{x}) = \mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x}) \qquad (f(\alpha)=\alpha)
$$
[Skica]
Funkcija gubitka: kvadratni gubitak
Optimizacijski postupak: izračun pseudoinverza (rješenje u zatvorenoj formi)
Prednosti:
Uvijek dobivamo rješenje
Nedostatci:
Nerobusnost: ispravno klasificirani primjeri utječu na granicu $\Rightarrow$ pogrešna klasifikacija čak i kod linearno odvojivih problema
Izlaz modela nije probabilistički
Podsjetnik: perceptron
Model:
$$
h(\mathbf{x}) = f\big(\mathbf{w}^\intercal\boldsymbol\phi(\mathbf{x})\big)
\qquad f(\alpha) = \begin{cases}
+1 & \text{ako $\alpha\geq0$}\\
-1 & \text{inače}
\end{cases}
$$
[Skica]
Funkcija gubitka: količina pogrešne klasifikacije
$$
\mathrm{max}(0,-\tilde{\mathbf{w}}^\intercal\boldsymbol{\phi}(\mathbf{x})y)
$$
Optimizacijski postupak: gradijentni spust
Prednosti:
Ispravno klasificirani primjeri ne utječu na granicu<br>
$\Rightarrow$ ispravna klasifikacija linearno odvojivih problema
Nedostatci:
Aktivacijska funkcija nije derivabilna<br>
$\Rightarrow$ funkcija gubitka nije derivabilna<br>
$\Rightarrow$ gradijent funkcije pogreške nije nula u točki minimuma<br>
$\Rightarrow$ postupak ne konvergira ako primjeri nisu linearno odvojivi
Decizijska granica ovisi o početnom izboru težina
Izlaz modela nije probabilistički
Logistička regresija
Ideja: upotrijebiti aktivacijsku funkciju s izlazima $[0,1]$ ali koja jest derivabilna
Logistička (sigmoidalna) funkcija:
$$
\sigma(\alpha) = \frac{1}{1 + \exp(-\alpha)}
$$
End of explanation
plt.plot(xs, sigm(0.5*xs), 'r');
plt.plot(xs, sigm(xs), 'g');
plt.plot(xs, sigm(2*xs), 'b');
Explanation: Nagib sigmoide može se regulirati množenjem ulaza određenim faktorom:
End of explanation
xs = linspace(0, 1)
plt.plot(xs, -sp.log(xs));
plt.plot(xs, 1 - sp.log(1 - xs));
Explanation: Derivacija sigmoidalne funkcije:
$$
\frac{\partial\sigma(\alpha)}{\partial\alpha} =
\frac{\partial}{\partial\alpha}\big(1 + \exp(-\alpha)\big) =
\sigma(\alpha)\big(1 - \sigma(\alpha)\big)
$$
Model logističke regresije:
$$
h(\mathbf{x}|\mathbf{w}) = \sigma\big(\mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x})\big) =
\frac{1}{1+\exp(-\mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x}))}
$$
NB: Logistička regresija je klasifikacijski model (unatoč nazivu)!
Probabilistički izlaz
$h(\mathbf{x})\in[0,1]$, pa $h(\mathbf{x})$ možemo tumačiti kao vjerojatnost da primjer pripada klasi $\mathcal{C}_1$ (klasi za koju $y=1$):
$$
h(\mathbf{x}|\mathbf{w}) = \sigma\big(\mathbf{w}^\intercal\mathbf{\phi}(\mathbf{x})\big) = \color{red}{P(y=1|\mathbf{x})}
$$
Vidjet ćemo kasnije da postoji i dublje opravdanje za takvu interpretaciju
Funkcija logističkog gubitka
Definirali smo model, trebamo još definirati funkciju gubitka i optimizacijski postupak
Logistička funkcija koristi gubitak unakrsne entropije
Definicija
Funkcija pokriva dva slučajeva (kada je oznaka primjera $y=1$ i kada je $y=0$):
$$
L(h(\mathbf{x}),y) =
\begin{cases}
- \ln h(\mathbf{x}) & \text{ako $y=1$}\\
- \ln \big(1-h(\mathbf{x})\big) & \text{ako $y=0$}
\end{cases}
$$
Ovo možemo napisati sažetije:
$$
L(h(\mathbf{x}),y) =
y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)
$$
End of explanation
def cross_entropy_loss(h_x, y):
return -y * sp.log(h_x) - (1 - y) * sp.log(1 - h_x)
xs = linspace(0, 1)
plt.plot(xs, cross_entropy_loss(xs, 0), label='y=0')
plt.plot(xs, cross_entropy_loss(xs, 1), label='y=1')
plt.ylabel('$L(h(\mathbf{x}),y)$')
plt.xlabel('$h(\mathbf{x}) = \sigma(w^\intercal\mathbf{x}$)')
plt.legend()
plt.show()
Explanation: Ako $y=1$, funkcija kažnjava model to više što je njegov izlaz manji od jedinice. Slično, ako $y=0$, funkcija kažnjava model to više što je njegov izlaz veći od nule
Intuitivno se ovakva funkcija čini u redu, ali je pitanje kako smo do nje došli
Izvod
Funkciju gubitka izvest ćemo iz funkcije pogreške
Podsjetnik: funkcija pogreške = očekivanje funkcije gubitka
Budući da logistička regresija daje vjerojatnosti oznaka za svaki primjer, možemo izračunati kolika je vjerojatnost označenog skupa primjera $\mathcal{D}$ pod našim modelom, odnosno kolika je izglednost parametra $\mathbf{w}$ modela
Želimo da ta izglednost bude što veća, pa ćemo funkciju pogreške definirati kao negativnu log-izglednost parametara $\mathbf{w}$:
$$
E(\mathbf{w}|\mathcal{D}) = -\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
$$
Želimo maksimizirati log-izglednost, tj. minimizirati ovu pogrešku
Log-izglednost:
$$
\begin{align}
\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
&= \ln p(\mathcal{D}|\mathbf{w})
= \ln\prod_{i=1}^N p(\mathbf{x}^{(i)}, y^{(i)}|\mathbf{w})\
&= \ln\prod_{i=1}^N P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w})p(\mathbf{x}^{(i)})\
&= \sum_{i=1}^N \ln P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) + \underbrace{\color{gray}{\sum_{i=1}^N \ln p(\mathbf{x}^{(i)})}}_{\text{ne ovisi o $\mathbf{w}$}}
\end{align}
$$
$y^{(i)}$ je oznaka $i$-tog primjera koja može biti 0 ili 1 $\Rightarrow$ Bernoullijeva varijabla
Budući da $y^{(i)}$ Bernoullijeva varijabla, njezina distribucija je:
$$
P(y^{(i)}) = \mu^{y^{(i)}}(1-\mu)^{y^{(i)}}
$$
gdje je $\mu$ vjerojatnost da $y^{(i)}=1$
Naš model upravo daje vjerojatnost da primjer $\mathcal{x}^{(i)}$ ima oznaku $y^{(i)}=1$, tj.:
$$
\mu = P(y^{(i)}=1|\mathbf{x}^{(i)},\mathbf{w}) = \color{red}{h(\mathbf{x}^{(i)} | \mathbf{w})}
$$
To znači da vjerojatnost oznake $y^{(i)}$ za dani primjer $\mathbf{x}^{i}$ možemo napisati kao:
$$
P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) =
\color{red}{h(\mathbf{x}^{(i)}|\mathbf{w})}^{y^{(i)}}\big(1-\color{red}{h(\mathbf{x}^{(i)}|\mathbf{w})}\big)^{1-y^{(i)}}
$$
Nastavljamo s izvodom log-izglednosti:
$$
\begin{align}
\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
&= \sum_{i=1}^N \ln P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) \color{gray}{+ \text{konst.}}\
&\Rightarrow \sum_{i=1}^N\ln \Big(h(\mathbf{x}^{(i)}|\mathbf{w})^{y^{(i)}}\big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)^{1-y^{(i)}}\Big) \
& = \sum_{i=1}^N \Big(y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w})+ (1-y^{(i)})\ln\big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
\end{align}
$$
Empirijsku pogrešku definiramo kao negativnu log-izglednost (do na konstantu):
$$
E(\mathbf{w}|\mathcal{D}) = \sum_{i=1}^N \Big(-y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w}) - (1-y^{(i)})\ln \big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
$$
Alternativno (kako ne bi ovisila o broju primjera):
$$
E(\mathbf{w}|\mathcal{D}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N\Big( - y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w})- (1-y^{(i)})\ln \big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
$$
$\Rightarrow$ pogreška unakrsne entropije (engl. cross-entropy error)
Iz pogreške možemo iščitati funkciju gubitka:
$$
L(h(\mathbf{x}),y) = - y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)
$$
$\Rightarrow$ gubitak unakrsne entropije (engl. cross-entropy loss)
NB: Izraz kompaktno definira grananje za dva slučaja (za $y=1$ i za $y=0$)
End of explanation
#TODO: konkretan primjer u ravnini
Explanation: Q: Koliki je gubitak na primjeru $\mathbf{x}$ za koji model daje $h(\mathbf{x})=P(y=1|\mathbf{x})=0.7$, ako je stvarna oznaka primjera $y=0$? Koliki je gubitak ako je stvarna oznaka $y=1$?
Gubitaka nema jedino onda kada je primjer savršeno točno klasificiran ($h(x)=1$ za $y=1$ odnosno $h(x)=0$ za $y=0$)
U svim drugim slučajevima postoji gubitak: čak i ako je primjer ispravno klasificiran (na ispravnoj strani granice) postoji malen gubitak, ovisno o pouzdanosti klasifikacije
Ipak, primjeri na ispravnoj strani granice ($h(\mathbf{x})\geq 0.5$ za $y=1$ odnosno $h(\mathbf{x})< 0.5$ za $y=0$) nanose puno manji gubitak od primjera na pogrešnoj strani granice
End of explanation
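A quick numeric check of the question above, using the cross_entropy_loss function defined earlier (approximate values given in the comments):
print(cross_entropy_loss(0.7, 0))   # -ln(1 - 0.7) = -ln(0.3) ≈ 1.204
print(cross_entropy_loss(0.7, 1))   # -ln(0.7) ≈ 0.357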
#TODO kod + primjer
Explanation: Minimizacija pogreške
$$
\begin{align}
E(\mathbf{w}) &=
\sum_{i=1}^N L\big(h(\mathbf{x}^{(i)}|\mathbf{w}),y^{(i)}\big)\\
L(h(\mathbf{x}),y) &= - y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)\\
h(\mathbf{x}) &= \sigma(\mathbf{w}^\intercal\mathbf{x}) = \frac{1}{1 + \exp(-\mathbf{w}^\intercal\mathbf{x})}
\end{align}
$$
Ne postoji rješenje u zatvorenoj formi (zbog nelinearnosti funkcije $\sigma$)
Minimiziramo gradijentnim spustom:
$$
\nabla E(\mathbf{w}) =
\sum_{i=1}^N \nabla L\big(h(\mathbf{x}^{(i)}|\mathbf{w}),y^{(i)}\big)
$$
Prisjetimo se:
$$
\frac{\partial\sigma(\alpha)}{\partial\alpha} =
\sigma(\alpha)\big(1 - \sigma(\alpha)\big)
$$
Dobivamo:
$$
\nabla L\big(h(\mathbf{x}),y\big) =
\Big(-\frac{y}{h(\mathbf{x})} + \frac{1-y}{1-h(\mathbf{x})}\Big)h(\mathbf{x})\big(1-h(\mathbf{x})\big)
\tilde{\mathbf{x}} = \big(h(\mathbf{x})-y\big)\tilde{\mathbf{x}}
$$
Gradijent-vektor pogreške:
$$
\nabla E(\mathbf{w}) = \sum_{i=1}^N \big(h(\mathbf{x}^{(i)})-y^{(i)}\big)\tilde{\mathbf{x}}^{(i)}
$$
Gradijentni spust (batch)
$\mathbf{w} \gets (0,0,\dots,0)$<br>
ponavljaj do konvergencije<br>
$\quad \Delta\mathbf{w} \gets (0,0,\dots,0)$<br>
$\quad$ za $i=1,\dots, N$<br>
$\qquad h \gets \sigma(\mathbf{w}^\intercal\tilde{\mathbf{x}}^{(i)})$<br>
$\qquad \Delta \mathbf{w} \gets \Delta\mathbf{w} + (h-y^{(i)})\, \tilde{\mathbf{x}}^{(i)}$<br>
$\quad \mathbf{w} \gets \mathbf{w} - \eta \Delta\mathbf{w} $
Stohastički gradijentni spust (on-line)
$\mathbf{w} \gets (0,0,\dots,0)$<br>
ponavljaj do konvergencije<br>
$\quad$ (slučajno permutiraj primjere u $\mathcal{D}$)<br>
$\quad$ za $i=1,\dots, N$<br>
$\qquad$ $h \gets \sigma(\mathbf{w}^\intercal\tilde{\mathbf{x}}^{(i)})$<br>
$\qquad$ $\mathbf{w} \gets \mathbf{w} - \eta (h-y^{(i)})\tilde{\mathbf{x}}^{(i)}$
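A minimal NumPy sketch of the batch version above (an illustration, not part of the original notebook; it assumes X is an N×(d+1) design matrix that already contains the dummy-one column, y is a vector of 0/1 labels, and that the learning rate eta is tuned to the data):
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def batch_gd_logreg(X, y, eta=0.1, n_iter=1000):
    # X: (N, d+1) design matrix with a leading column of ones; y: (N,) labels in {0, 1}
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        h = sigmoid(X @ w)        # model outputs for all examples
        grad = X.T @ (h - y)      # gradient of the cross-entropy error
        w -= eta * grad           # gradient-descent step
    return w
The stochastic (on-line) variant simply performs the same update inside a loop over individual, randomly permuted examples.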
End of explanation |
10,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Forecast with Basic RNN
Dataset is downloaded from https
Step2: Note
Scaling the variables will make optimization functions work better, so here we are going to scale the variable into the [0, 1] range
Step3: Note
In the 2D arrays above for X_train and X_val, the shape means (number of samples, number of time steps)
However, RNN input has to be a 3D array: (number of samples, number of time steps, number of features per timestep)
There is only 1 feature, which is scaled_pm2.5
So, the code below converts 2D array to 3D array | Python Code:
import pandas as pd
import numpy as np
import datetime
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv('data/pm25.csv')
print(df.shape)
df.head()
df.isnull().sum()*100/df.shape[0]
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(
lambda row: datetime.datetime(year=row['year'],
month=row['month'], day=row['day'],hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
df.head()
df['year'].value_counts()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['pm2.5'], color='g')
g.set_title('pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
Explanation: Time Series Forecast with Basic RNN
Dataset is downloaded from https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
df.head()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['scaled_pm2.5'], color='purple')
g.set_title('Scaled pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('scaled_pm2.5 readings')
# 2014 data as validation data, before 2014 as training data
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
df_val.reset_index(drop=True, inplace=True)
df_val.head()
# The way this works is to have the first nb_timesteps-1 observations as X and nb_timesteps_th as the target,
## collecting the data with 1 stride rolling window.
def makeXy(ts, nb_timesteps):
"""
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
"""
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
print(X_train[0], y_train[0])
print(X_train[1], y_train[1])
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
print(X_val[0], y_val[0])
print(X_val[1], y_val[1])
Explanation: Note
Scaling the variables will make optimization functions work better, so here we are going to scale the variable into the [0, 1] range
End of explanation
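A tiny sanity check of the rolling-window pairs produced by makeXy (the toy series is hypothetical and assumes the cells above have been run):
toy = pd.Series(range(10))
X_toy, y_toy = makeXy(toy, 3)
print(X_toy[0], y_toy[0])          # -> [0 1 2] 3 : the first 3 values predict the 4th
print(X_toy.shape, y_toy.shape)    # -> (7, 3) (7,)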
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.metrics import mean_absolute_error
tf.random.set_seed(10)
model = Sequential()
model.add(SimpleRNN(32, input_shape=(X_train.shape[1:])))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])
model.summary()
save_weights_at = 'basic_rnn_model'
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
save_freq='epoch')
history = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
# load the best model
best_model = load_model('basic_rnn_model')
# Compare the prediction with y_true
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
# Measure MAE of y_pred and y_true
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
mae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)
print('MAE for the scaled validation set:', round(mae, 4))
# Check the metrics and loss of each epoch
mae = history.history['mae']
val_mae = history.history['val_mae']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(mae))
plt.plot(epochs, mae, 'bo', label='Training MAE')
plt.plot(epochs, val_mae, 'b', label='Validation MAE')
plt.title('Training and Validation MAE')
plt.legend()
plt.figure()
# Here I was using MAE as loss too, that's why they looked almost the same...
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
Explanation: Note
In the 2D arrays above for X_train and X_val, the shape means (number of samples, number of time steps)
However, RNN input has to be a 3D array: (number of samples, number of time steps, number of features per timestep)
There is only 1 feature, which is scaled_pm2.5
So, the code below converts the 2D arrays to 3D arrays
End of explanation |
10,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 句子
我们从 Wikipedia 中获取一些句子以通过模型运行
Step3: 运行模型
我们将从 TF-Hub 加载 BERT 模型,使用 TF-Hub 中的匹配预处理模型将句子词例化,然后将词例化句子馈入模型。为了让此 Colab 变得快速而简单,我们建议在 GPU 上运行。
转到 Runtime → Change runtime type 以确保选择 GPU
Step5: 语义相似度
现在,我们看一下句子的 pooled_output 嵌入向量,并比较它们在句子中的相似程度。 | Python Code:
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip3 install --quiet tensorflow
!pip3 install --quiet tensorflow_text
import seaborn as sns
from sklearn.metrics import pairwise
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text # Imports TF ops for preprocessing.
#@title Configure the model { run: "auto" }
BERT_MODEL = "https://tfhub.dev/google/experts/bert/wiki_books/2" # @param {type: "string"} ["https://tfhub.dev/google/experts/bert/wiki_books/2", "https://tfhub.dev/google/experts/bert/wiki_books/mnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qnli/2", "https://tfhub.dev/google/experts/bert/wiki_books/qqp/2", "https://tfhub.dev/google/experts/bert/wiki_books/squad2/2", "https://tfhub.dev/google/experts/bert/wiki_books/sst2/2", "https://tfhub.dev/google/experts/bert/pubmed/2", "https://tfhub.dev/google/experts/bert/pubmed/squad2/2"]
# Preprocessing must match the model, but all the above use the same.
PREPROCESS_MODEL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
Explanation: <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/bert_experts"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View 在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/bert_experts.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
<td><a href="https://tfhub.dev/s?q=experts%2Fbert"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
TF-Hub 中的 BERT 专家模型
此 Colab 演示了如何执行以下操作:
从 TensorFlow Hub 加载已针对不同任务(包括 MNLI、SQuAD 和 PubMed)进行训练的 BERT 模型
使用匹配的预处理模型将原始文本词例化并转换为 ID
使用加载的模型从词例输入 ID 生成池化和序列输出
查看不同句子的池化输出的语义相似度
注:此 Colab 应与 GPU 运行时一起运行
设置和导入
End of explanation
sentences = [
"Here We Go Then, You And I is a 1999 album by Norwegian pop artist Morten Abel. It was Abel's second CD as a solo artist.",
"The album went straight to number one on the Norwegian album chart, and sold to double platinum.",
"Among the singles released from the album were the songs \"Be My Lover\" and \"Hard To Stay Awake\".",
"Riccardo Zegna is an Italian jazz musician.",
"Rajko Maksimović is a composer, writer, and music pedagogue.",
"One of the most significant Serbian composers of our time, Maksimović has been and remains active in creating works for different ensembles.",
"Ceylon spinach is a common name for several plants and may refer to: Basella alba Talinum fruticosum",
"A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth.",
"A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.",
]
Explanation: 句子
我们从 Wikipedia 中获取一些句子以通过模型运行
End of explanation
preprocess = hub.load(PREPROCESS_MODEL)
bert = hub.load(BERT_MODEL)
inputs = preprocess(sentences)
outputs = bert(inputs)
print("Sentences:")
print(sentences)
print("\nBERT inputs:")
print(inputs)
print("\nPooled embeddings:")
print(outputs["pooled_output"])
print("\nPer token embeddings:")
print(outputs["sequence_output"])
Explanation: 运行模型
我们将从 TF-Hub 加载 BERT 模型,使用 TF-Hub 中的匹配预处理模型将句子词例化,然后将词例化句子馈入模型。为了让此 Colab 变得快速而简单,我们建议在 GPU 上运行。
转到 Runtime → Change runtime type 以确保选择 GPU
End of explanation
#@title Helper functions
def plot_similarity(features, labels):
"""Plot a similarity matrix of the embeddings."""
cos_sim = pairwise.cosine_similarity(features)
sns.set(font_scale=1.2)
cbar_kws=dict(use_gridspec=False, location="left")
g = sns.heatmap(
cos_sim, xticklabels=labels, yticklabels=labels,
vmin=0, vmax=1, cmap="Blues", cbar_kws=cbar_kws)
g.tick_params(labelright=True, labelleft=False)
g.set_yticklabels(labels, rotation=0)
g.set_title("Semantic Textual Similarity")
plot_similarity(outputs["pooled_output"], sentences)
Explanation: 语义相似度
现在,我们看一下句子的 pooled_output 嵌入向量,并比较它们在句子中的相似程度。
End of explanation |
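As a small follow-up sketch (not part of the original Colab), the cosine similarity of any two pooled embeddings can also be inspected directly with the pairwise module imported above:
pooled = outputs["pooled_output"].numpy()
sim_0_1 = pairwise.cosine_similarity(pooled[0:1], pooled[1:2])[0, 0]
print("Similarity between the first two sentences:", sim_0_1)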
10,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Query Meta data in database Groups [v1.1]
Step1: Setup
Step2: Check one of the meta tables
Step3: Query meta with Query dict
A simple example
Step4: Another example
Step5: One more
Step6: Query meta at position
As with query catalog, the position coordinates can have a range of formats
One simple source
Step7: Multiple meta entries (GGG)
Step8: Multiple sources
Step9: Restrict on groups
Step10: Query Meta with Coordinates list
When querying with a coordinate list, there are two approaches to
the data returned. The default is to return the meta data
for the first spectrum matched to each coordinate.
The other option is to retrieve all of the meta data for each
coordinate input. The returned object is then a list of bool arrays
and a Table of all the meta data.
We provide examples for each.
Meta for first match for each coordinate
Returns a Table with a single meta data entry per coordinate even if multiple exist.
If there is no match, the row is empty in the Table.
If there are zero matches, return None.
Single source (which matches)
Step11: Single source which fails to match
Step12: Source where multiple spectra exist, but only the first record is returned
Step13: Multiple coordinates, each matched
Step14: Multiple coordinates, one fails to match by coordinate
Step15: Multiple coordinates, one fails to match input group list
Step16: All Meta Data for each input coordinate
Here, a list of bool arrays relative to stacked meta table is returned.
This is a bit convoluted, but is *much* faster for large coordinate lists.
If there is no match to a given coordinate, the entry in the list is None
Two sources. The second one has two spectra in the database
Step17: Two sources, limit by groups
Step18: Three sources; second one has no match | Python Code:
# imports
from astropy import units as u
from astropy.coordinates import SkyCoord
import specdb
from specdb.specdb import SpecDB
from specdb import specdb as spdb_spdb
from specdb.cat_utils import flags_to_groups
Explanation: Query Meta data in database Groups [v1.1]
End of explanation
db_file = specdb.__path__[0]+'/tests/files/IGMspec_DB_v02_debug.hdf5'
from importlib import reload  # reload() is a builtin only on Python 2
reload(spdb_spdb)
sdb = spdb_spdb.SpecDB(db_file=db_file)
Explanation: Setup
End of explanation
ggg_meta = sdb['GGG'].meta
ggg_meta[0:4]
Explanation: Check one of the meta tables
End of explanation
qdict = {'TELESCOPE': 'Gemini-North', 'NPIX': (1580,1583), 'DISPERSER': ['B600', 'R400']}
qmeta = sdb.query_meta(qdict)
qmeta
Explanation: Query meta with Query dict
A simple example
End of explanation
qdict = {'R': (4000.,1e9), 'WV_MIN': (0., 4000.)}
qmeta = sdb.query_meta(qdict)
qmeta
Explanation: Another example
End of explanation
qdict = {'R': (1800.,2500), 'WV_MIN': (0., 4000.)}
qmeta = sdb.query_meta(qdict)
qmeta['GROUP'].data
Explanation: One more
End of explanation
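A pattern worth noting from the three examples above (an inference from the examples, not an official API statement): scalar values appear to request exact matches, 2-tuples appear to act as (min, max) ranges, and lists appear to enumerate allowed values, e.g.
# qdict = {'DISPERSER': ['B600', 'R400'], 'R': (1000., 5000.), 'TELESCOPE': 'Gemini-North'}   # hypothetical query
# qmeta = sdb.query_meta(qdict)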
meta = sdb.meta_from_position((0.0019,17.7737), 1*u.arcsec)
meta
Explanation: Query meta at position
As with query catalog, the position coordinates can have a range of formats
One simple source
End of explanation
meta = sdb.meta_from_position('001115.23+144601.8', 1*u.arcsec)
meta['WV_MIN'].data
Explanation: Multiple meta entries (GGG)
End of explanation
meta = sdb.meta_from_position((2.813500,14.767200), 20*u.deg)
meta[0:3]
meta['GROUP'].data
Explanation: Multiple sources
End of explanation
meta = sdb.meta_from_position((2.813500,14.767200), 20*u.deg, groups=['GGG','HD-LLS_DR1'])
meta['GROUP'].data
Explanation: Restrict on groups
End of explanation
coord = SkyCoord(ra=0.0019, dec=17.7737, unit='deg')
matches, meta = sdb.meta_from_coords(coord)
meta
Explanation: Query Meta with Coordinates list
When querying with a coordinate list, there are two approaches to
the data returned. The default is to return the meta data
for the first spectrum matched to each coordinate.
The other option is to retrieve all of the meta data for each
coordinate input. The returned object is then a list of bool arrays
and a Table of all the meta data.
We provide examples for each.
Meta for first match for each coordinate
Returns a Table with a single meta data entry per coordinate even if multiple exist.
If there is no match, the row is empty in the Table.
If there are zero matches, return None.
Single source (which matches)
End of explanation
coord = SkyCoord(ra=0.0019, dec=-17.7737, unit='deg')
matches, meta = sdb.meta_from_coords(coord)
print(meta)
Explanation: Single source which fails to match
End of explanation
coord = SkyCoord(ra=2.813458, dec=14.767167, unit='deg')
_, meta = sdb.meta_from_coords(coord)
meta
Explanation: Source where multiple spectra exist, but only the first record is returned
End of explanation
coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg')
matches, meta = sdb.meta_from_coords(coords)
print(matches)
meta
Explanation: Multiple coordinates, each matched
End of explanation
coords = SkyCoord(ra=[0.0028,9.99,2.813458], dec=[14.9747,-9.99,14.767167], unit='deg')
matches, meta = sdb.meta_from_coords(coords)
print(matches)
meta
Explanation: Multiple coordinates, one fails to match by coordinate
End of explanation
coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg')
matches, meta = sdb.meta_from_coords(coords, groups=['GGG'])
print(matches)
print(meta['IGM_ID'])
meta
Explanation: Multiple coordinates, one fails to match input group list
End of explanation
coords = SkyCoord(ra=[0.0028,2.813458], dec=[14.9747,14.767167], unit='deg')
matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False)
print('Matches = ', matches)
list_of_meta, meta_stack[list_of_meta[0]]
Explanation: All Meta Data for each input coordinate
Here, a list of bool arrays relative to stacked meta table is returned.
This is a bit convoluted, but is *much* faster for large coordinate lists.
If there is no match to a given coordinate, the entry in the list is None
Two sources. The second one has two spectra in the database
End of explanation
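A small usage sketch (hypothetical, based only on the description above of list_of_meta as per-coordinate boolean masks into meta_stack, with None for unmatched coordinates):
for i, mask in enumerate(list_of_meta):
    if mask is None:
        print('Coordinate', i, 'had no match')
    else:
        print('Coordinate', i, 'matched', int(mask.sum()), 'spectra')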
matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False, groups=['GGG'])
list_of_meta, meta_stack[list_of_meta[1]]
Explanation: Two sources, limit by groups
End of explanation
coords = SkyCoord(ra=[0.0028,9.99,2.813458], dec=[14.9747,-9.99,14.767167], unit='deg')
matches, list_of_meta, meta_stack = sdb.meta_from_coords(coords, first=False)
print('Matches = ', matches)
meta_stack[list_of_meta[0]]
Explanation: Three sources; second one has no match
End of explanation |
10,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome!
Let's start by assuming you have downloaded the code and run setup.py. This demonstration will show the user how to predict the time constant of their trEFM data using the methods of statistical learning. Let's start by importing the data simulation module from the trEFMlearn package. This package contains methods to numerically simulate some experimental data.
Step1: Simulation
You can create an array of time constants that you would like to simulate the data for. This array can then be input into the simulation function, which simulates the data as well as fits it using support vector regression (SVR). This function can take a few minutes depending on the number of time constants you provide. Run this cell and wait for the function to complete. There may be an error that occurs; don't fret, as this has no effect.
Step2: Neato!
Looks like that function is all done. We now have an SVR Object called "fit_object" as well as a result of the fit called "fit_tau". Let's take a look at the result of the fit by comparing it to the actual input tau.
Step3: Clearly the SVR method is quite capable of reproducing the time constants simulated data using very simple to calculate features. We observe some lower limit to the model's ability to calculate time constants, which is quite interesting. However, this lower limit appears below 100 nanoseconds, a time-scale that is seldom seen in the real world. This could be quite useful for extracting time constant data!
Analyzing a Real Image
The Data
In order to assess the ability of the model to apply to real images, I have taken a trEFM image of an MDMO photovoltaic material. There are large aggregates of acceptor material that should show a nice contrast in the way that they generate and hold charge. Each pixel of this image has been pre-averaged before being saved with this demo program. Each pixel is a measurement of the AFM cantilever position as a function of time.
The Process
Our mission is to extract the time constant out of this signal using the SVR fit of our simulated data. We accomplish this by importing and calling the "process_image" function.
Step4: The image processing function needs two inputs. First we show the function the path to the provided image data. We then provide the function with the SVR object that was previously generated using the simulated cantilever data. Processing this image should only take 15 to 30 seconds.
Step5: Awesome. That was pretty quick huh? Without this machine learning method, the exact same image we just analyzed takes over 8 minutes to run. Yes! Now let's take a look at what we get.
Step6: You can definitely begin to make out some of the structure that is occurring in the photovoltaic performance of this device. This image looks great, but there are still many areas of improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, this is a significant improvement on our current imaging technique.
The Features
In the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features more clearly matter than others and indicate that the search for better and more representative features is desirable. However, I think this is a great start to a project I hope to continue developing in the future. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from trEFMlearn import data_sim
%matplotlib inline
Explanation: Welcome!
Let's start by assuming you have downloaded the code and run setup.py. This demonstration will show the user how to predict the time constant of their trEFM data using the methods of statistical learning. Let's start by importing the data simulation module from the trEFMlearn package. This package contains methods to numerically simulate some experimental data.
End of explanation
tau_array = np.logspace(-8, -5, 100)
fit_object, fit_tau = data_sim.sim_fit(tau_array)
Explanation: Simulation
You can create an array of time constants that you would like to simulate the data for. This array can then be input into the simulation function, which simulates the data as well as fits it using support vector regression (SVR). This function can take a few minutes depending on the number of time constants you provide. Run this cell and wait for the function to complete. There may be an error that occurs; don't fret, as this has no effect.
End of explanation
plt.figure()
plt.title('Fit Time Constant vs. Actual')
plt.plot(fit_tau, 'bo')
plt.plot(tau_array,'g')
plt.ylabel('Tau (s)')
plt.yscale('log')
plt.show()
# Calculate the error at each measurement.
error = 100 * (tau_array - fit_tau) / tau_array  # percent error, to match the 'Error (%)' axis label
plt.figure()
plt.title('Error Signal')
plt.plot(tau_array, error)
plt.ylabel('Error (%)')
plt.xlabel('Time Constant (s)')
plt.xscale('log')
plt.show()
Explanation: Neato!
Looks like that function is all done. We now have an SVR Object called "fit_object" as well as a result of the fit called "fit_tau". Let's take a look at the result of the fit by comparing it to the actual input tau.
End of explanation
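To complement the plots, a one-line summary of the same comparison (a sketch using only the arrays already defined above):
frac_err = np.abs(tau_array - fit_tau) / tau_array
print('Median fractional error of the SVR fit:', np.median(frac_err))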
from trEFMlearn import process_image
Explanation: Clearly the SVR method is quite capable of reproducing the time constants simulated data using very simple to calculate features. We observe some lower limit to the model's ability to calculate time constants, which is quite interesting. However, this lower limit appears below 100 nanoseconds, a time-scale that is seldom seen in the real world. This could be quite useful for extracting time constant data!
Analyzing a Real Image
The Data
In order to assess the ability of the model to apply to real images, I have taken a trEFM image of an MDMO photovoltaic material. There are large aggregates of acceptor material that should show a nice contrast in the way that they generate and hold charge. Each pixel of this image has been pre-averaged before being saved with this demo program. Each pixel is a measurement of the AFM cantilever position as a function of time.
The Process
Our mission is to extract the time constant out of this signal using the SVR fit of our simulated data. We accomplish this by importing and calling the "process_image" function.
End of explanation
tau_img, real_sum_img, fft_sum_img, amp_diff_img = process_image.analyze_image('.\\image data\\', fit_object)
Explanation: The image processing function needs two inputs. First we show the function the path to the provided image data. We then provide the function with the SVR object that was previously generated using the simulated cantilever data. Processing this image should only take 15 to 30 seconds.
End of explanation
# Something went wrong in the data on the first line. Let's skip it.
tau_img = tau_img[1:]
real_sum_img = real_sum_img[1:]
fft_sum_img = fft_sum_img[1:]
amp_diff_img = amp_diff_img[1:]
plt.figure()
upper_lim = (tau_img.mean() + 2*tau_img.std())
lower_lim = (tau_img.mean() - 2*tau_img.std())
plt.imshow(tau_img,vmin=lower_lim, vmax=upper_lim,cmap = 'cubehelix')
plt.show()
Explanation: Awesome. That was pretty quick huh? Without this machine learning method, the exact same image we just analyzed takes over 8 minutes to run. Yes! Now let's take a look at what we get.
End of explanation
fig, axs = plt.subplots(nrows=3)
axs[0].imshow(real_sum_img ,'hot')
axs[0].set_title('Total Signal Sum')
axs[1].imshow(fft_sum_img, cmap='hot')
axs[1].set_title('Sum of the FFT Power Spectrum')
axs[2].imshow(amp_diff_img, cmap='hot')
axs[2].set_title('Difference in Amplitude After Trigger')
plt.tight_layout()
plt.show()
Explanation: You can definitely begin to make out some of the structure that is occuring in the photovoltaic performance of this device. This image looks great, but there are still many areas of improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, this is a significant improvement on our current imaging technique.
The Features
In the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features more clearly matter than others and indicate that the search for better and more representative features is desirable. However, I think this is a great start to a project I hope to continue developing in the future.
End of explanation |
10,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: print a list of Lil's that are more popular than Lil's Kim
Step2: Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks
Step3: Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
Step4: Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
Step5: TA-COMMENT
Step6: How to automate getting all of the results | Python Code:
import requests
!pip3 install requests
response = requests.get("https://api.spotify.com/v1/search?q=Lil&type=artist&market=US&limit=50")
print(response.text)
data = response.json()
type(data)
data.keys()
data['artists'].keys()
artists=data['artists']
type(artists['items'])
artist_info = artists['items']
for artist in artist_info:
print(artist['name'], artist['popularity'])
print(artist_info[5])
artists=data['artists']
artist_info = artists['items']
separator = ", "
for artist in artist_info:
if len(artist['genres']) == 0:
print("No genres listed.")
else:
print(artist['name'], ":", separator.join(artist['genres']))
most_popular_name = ""
most_popular_score = 0
for artist in artist_info:
if artist['popularity'] > most_popular_score and artist['name'] != "Lil Wayne":
most_popular_name = artist['name']
most_popular_score = artist['popularity']
else:
pass
print(most_popular_name,most_popular_score)
Explanation: Grade: 7 / 8 -- look through the notebook for "TA-COMMENT"
I'm missing your NYTimes work -- is that included somewhere else in your repository? That portion of the homework is an additional 8 points.
End of explanation
for artist in artist_info:
print(artist['name'])
if artist['name']== "Lil' Kim":
print("Found Lil Kim")
print(artist['popularity'])
else:
pass #print
Lil_kim_popularity = 62
more_popular_than_Lil_kim = []
for artist in artist_info:
if artist['popularity'] > Lil_kim_popularity:
#If yes, let's add them to our list
print(artist['name'], "is more popular with a score of", artist['popularity'])
more_popular_than_Lil_kim.append(artist['name'])
else:
print(artist['name'], "is less popular with a score of", artist['popularity'])
for artist_name in more_popular_than_Lil_kim:
print(artist_name)
Explanation: print a list of Lil's that are more popular than Lil's Kim
End of explanation
for artist in artist_info:
print(artist['name'], artist['id'])
#I chose Lil Fate and Lil' Flip, first I want to figure out the top track of Lil Fate
response = requests.get("https://api.spotify.com/v1/artists/6JUnsP7jmvYmdhbg7lTMQj/top-tracks?country=US")
# TA-COMMENT: You don't need to do response.text because we want to just read the response in as json!
print(response.text)
data = response.json()
type(data)
data.keys()
type(data['tracks'])
print(data['tracks'])
data['tracks'][0]
for item in data['tracks']:
print(item['name'])
# now to figure out the top track of Lil' Flip #things within {} or ALL Caps means to replace them
response = requests.get("https://api.spotify.com/v1/artists/4Q5sPmM8j4SpMqL4UA1DtS/top-tracks?country=US")
print(response.text)
data = response.json()
type(data)
data.keys()
type(data['tracks'])
for item in data['tracks']:
#type(item): dict
#print(item.keys()), saw 'name'
print(item['name'])
Explanation: Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks
End of explanation
#for Lil' fate's top tracks
explicit_count = 0
non_explicit_count = 0
popularity_explicit = 0
popularity_non_explicit = 0
minutes_explicit = 0
minutes_non_explicit = 0
for track in data['tracks']:
if track['explicit']== True:
explicit_count = explicit_count + 1
popularity_explicit = popularity_explicit + track['popularity']
minutes_explicit = minutes_explicit + track['duration_ms']
elif track['explicit']== False:
non_explicit_count = non_explicit_count + 1
popularity_non_explicit = popularity_non_explicit + track['popularity']
minutes_non_explicit = minutes_non_explicit + track['duration_ms']
print("Lil' Flip has", (minutes_explicit/1000)/60, "of explicit songs")
print("Lil' Flip has", (minutes_non_explicit/1000)/60, "of non-explicit songs")
print("The average popularity of Lil' Flip explicits songs is", popularity_explicit/explicit_count)
print("The average popularity of Lil' Flip non-explicits songs is", popularity_non_explicit/non_explicit_count)
Explanation: Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
End of explanation
response = requests.get('https://api.spotify.com/v1/search?q=Lil&type=artist&market=US')
all_lil = response.json()
print(response.text)
all_lil.keys()
all_lil['artists'].keys()
print(all_lil['artists']['total'])
response = requests.get('https://api.spotify.com/v1/search?q=Biggie&type=artist&market=US')
all_biggies = response.json()
print(all_biggies['artists']['total'])
Explanation: Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
End of explanation
all_genres = []
for artist in artist_info:
print("All genres we've heard of:", all_genres)
print("Current artist has:", artist['genres'])
all_genres = all_genres + artist['genres']
all_genres.count('dirty south rap')
## There is a library that comes with Python called Collections, inside of it is a thing called Counter
from collections import Counter
Explanation: TA-COMMENT: (-1) You need to take this question one step further -- given the number of results that you found in the code above for Biggies and Lil's, how long would it take to query the API about them if each request took 5 seconds?
how to count the genres
End of explanation
response=requests.get('https://api.spotify.com/v1/search?q=Lil&type=artist&market=US&limit50')
small_data = response.json()
data['artists']
print(len(data['artists']['items'])) #we only get 10 artists
print(data['artists']['total'])
#first page: artists 1-50, offset of 0
# https://
Explanation: How to automate getting all of the results
End of explanation |
10,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Strings
Step1: <font color="red"><i>Note
Step2: <img src="../images/string_indices.png">
Step3: Formatting | Python Code:
# this is an empty string
empty_str = ''
# create a string
str1 = ' the quick brown fox jumps over the lazy dog. '
str1
# strip whitespaces from the beginning and ending of the string
str2=str1.strip()
print str2
print str1
# this capitalizes the 1st letter of the string
str2.capitalize()
# count the number of occurrences for the string o
str2.count('o')
# check if a string ends with a certain character
str2.endswith('.')
# check if a substring exists in the string
'jum' in str2
# find the index of the first occurrence
str2.find('fox')
# let's see what character is at index 19
str2[19]
Explanation: Python Strings
End of explanation
S = 'shrubbery'
S[1]='c'
Explanation: <font color="red"><i>Note: strings are immutable while lists are not. In other words immutability does not allow for in-place modification of the object.</i></font>
"Also notice in the prior examples that we were not changing the original string with any of the operations we ran on it. Every string operation is defined to produce a new string as its result, because strings are immutable in Python—they cannot be chang ed in place after they are created. In other words, you can never overwrite the values of immutable objects. For example, you can’t change a string by assigning to one of its positions, but you can always build a new one and assign it to the same name. Because Python cleans up old objects as you go (as you’ll see later), this isn’t as inefficient as it may sound:"
End of explanation
S = 'shrubbery'
print "length of string is : ", len(S)
L = list(S)
print "L = ", L
S[0]
L[1] = 'c'
print L
S1 = ''.join(L)
print 'this is S1:', S1
print S
for x in S:
print x
# another way of changing the string
S = S[0] + 'c' + S[2:] # string concatenation
S
#
line = 'aaa,bbb,cccc c,dd'
line1 = line.split(',')
print line
print line1
# list the methods and attributes for string operations
dir(S)
help(S.split)
ord(S[0])
Explanation: <img src="../images/string_indices.png">
End of explanation
# define variables
x = 3.1415926
y = 1
# 2 decimal places
print "{:.2f}".format(x)
# 2 decimal palces with sign
print "{:+.2f}".format(x)
# 2 decimal palces with sign
print "{:.2f}".format(-y)
# print with no decimal palces
print "{:.0f}".format(3.51)
# left padded with 0's - width 4
print "{:0>4d}".format(11)
for i in range(20):
print "{:0>4d}".format(i)
# right padd with x's - total width 4
print "{:x<4d}".format(33)
# right padd with x's - total width 4
print y
print "{:x<4d}".format(10*y)
# insert a comma separator
print "{:,}".format(10000000000000)
# % format
print "{:.4%}".format(0.1235676)
# exponent notation
print "{:.3e}".format(10000000000000)
# right justified, with 10
print '1234567890' # for place holders
print "{:10d}".format(10000000)
# left justified, with 10
print '12345678901234567890' # place holder
print "{:<10d}".format(100), "{:<10d}".format(100)
# center justified, with 10
print '1234567890'
print "{:^10d}".format(100)
# string substitution
s1 = 'so much depends upon {}'.format('a red wheel barrow')
s2 = 'glazed with {} water beside the {} chickens'.format('rain', 'white')
print s1
print s2
# another substitution
s1 = " {0} is better than {1} ".format("emacs", "vim")
s2 = " {1} is better than {0} ".format("emacs", "vim")
print s1
print s2
## defining formats
email_f = "Your email address was {email}".format
print email_f
## use elsewhere
var1 = "[email protected]"
var2 = '[email protected]'
var3 = '[email protected]'
print(email_f(email=var1))
print(email_f(email=var2))
print(email_f(email=var3))
Explanation: Formatting
End of explanation |
10,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Time series prediction using RNNs, with TensorFlow and Cloud ML Engine </h1>
This notebook illustrates
Step1: <h2> RNN </h2>
For more info, see
Step2: <h3> Input Fn to read CSV </h3>
Our CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time. The Estimator API in tf.contrib.learn wants the features returned as a dict. We'll just call this timeseries column 'rawdata'.
<p>
Our CSV file sequences consist of 10 numbers. We'll assume that 8 of them are inputs and we need to predict the next two.
Step3: Reading data using the Estimator API in tf.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels.
<p>
So, we read the CSV file. The Tensor format here will be batchsize x 1 -- entire line. We then decode the CSV. At this point, all_data will contain a list of Tensors. Each tensor has a shape batchsize x 1. There will be 10 of these tensors, since SEQ_LEN is 10.
<p>
We split these 10 into 8 and 2 (N_OUTPUTS is 2). Put the 8 into a dict, call it features. The other 2 are the ground truth, so labels.
Step4: <h3> Define RNN </h3>
A recursive neural network consists of possibly stacked LSTM cells.
<p>
The RNN has one output per input, so it will have 8 output cells. We use only the last output cell, but rather use it directly, we do a matrix multiplication of that cell by a set of weights to get the actual predictions. This allows for a degree of scaling between inputs and predictions if necessary (we don't really need it in this problem).
<p>
Finally, to supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects.
Step5: <h3> Experiment </h3>
Distributed training is launched off using an Experiment. The key line here is that we use tflearn.Estimator rather than, say tflearn.DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time.
Step6: <h3> Standalone Python module </h3>
To train this on Cloud ML Engine, we take the code in this notebook, make an standalone Python module.
Step7: Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine
Step8: <h3> Cloud ML Engine </h3>
Now to train on Cloud ML Engine.
Step9: <h2> Variant | Python Code:
!pip install --upgrade tensorflow
import tensorflow as tf
print tf.__version__
import numpy as np
import tensorflow as tf
import seaborn as sns
import pandas as pd
SEQ_LEN = 10
def create_time_series():
freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6
ampl = np.random.random() + 0.5 # 0.5 to 1.5
x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl
return x
for i in xrange(0, 5):
sns.tsplot( create_time_series() ); # 5 series
def to_csv(filename, N):
with open(filename, 'w') as ofp:
for lineno in xrange(0, N):
seq = create_time_series()
line = ",".join(map(str, seq))
ofp.write(line + '\n')
to_csv('train.csv', 1000) # 1000 sequences
to_csv('valid.csv', 50)
!head -5 train.csv valid.csv
Explanation: <h1> Time series prediction using RNNs, with TensorFlow and Cloud ML Engine </h1>
This notebook illustrates:
<ol>
<li> Creating a Recurrent Neural Network in TensorFlow
<li> Creating a Custom Estimator in tf.contrib.learn
<li> Training on Cloud ML Engine
</ol>
<p>
<h3> Simulate some time-series data </h3>
Essentially a set of sinusoids with random amplitudes and frequencies.
End of explanation
import tensorflow as tf
import shutil
from tensorflow.contrib.learn import ModeKeys
import tensorflow.contrib.rnn as rnn
Explanation: <h2> RNN </h2>
For more info, see:
<ol>
<li> http://colah.github.io/posts/2015-08-Understanding-LSTMs/ for the theory
<li> https://www.tensorflow.org/tutorials/recurrent for explanations
<li> https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb for sample code
</ol>
Here, we are trying to predict from 8 values of a timeseries, the next two values.
<p>
<h3> Imports </h3>
Several tensorflow packages and shutil
End of explanation
DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)]
BATCH_SIZE = 20
TIMESERIES_COL = 'rawdata'
N_OUTPUTS = 2 # in each sequence, 1-8 are features, and 9-10 is label
N_INPUTS = SEQ_LEN - N_OUTPUTS
Explanation: <h3> Input Fn to read CSV </h3>
Our CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time. The Estimator API in tf.contrib.learn wants the features returned as a dict. We'll just call this timeseries column 'rawdata'.
<p>
Our CSV file sequences consist of 10 numbers. We'll assume that 8 of them are inputs and we need to predict the next two.
End of explanation
# read data and convert to needed format
def read_dataset(filename, mode=ModeKeys.TRAIN):
def _input_fn():
num_epochs = 100 if mode == ModeKeys.TRAIN else 1
# could be a path to one file or a file pattern.
input_file_names = tf.train.match_filenames_once(filename)
filename_queue = tf.train.string_input_producer(
input_file_names, num_epochs=num_epochs, shuffle=True)
reader = tf.TextLineReader()
_, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE)
value_column = tf.expand_dims(value, -1, name='value')
print('readcsv={}'.format(value_column))
# all_data is a list of tensors
all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS)
inputs = all_data[:len(all_data)-N_OUTPUTS] # first few values
label = all_data[len(all_data)-N_OUTPUTS : ] # last few values
# from list of tensors to tensor with one more dimension
inputs = tf.concat(inputs, axis=1)
label = tf.concat(label, axis=1)
print('inputs={}'.format(inputs))
return {TIMESERIES_COL: inputs}, label # dict of features, label
return _input_fn
Explanation: Reading data using the Estimator API in tf.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels.
<p>
So, we read the CSV file. The Tensor format here will be batchsize x 1 -- entire line. We then decode the CSV. At this point, all_data will contain a list of Tensors. Each tensor has a shape batchsize x 1. There will be 10 of these tensors, since SEQ_LEN is 10.
<p>
We split these 10 into 8 and 2 (N_OUTPUTS is 2). Put the 8 into a dict, call it features. The other 2 are the ground truth, so labels.
End of explanation
LSTM_SIZE = 3 # number of hidden layers in each of the LSTM cells
# create the inference model
def simple_rnn(features, labels, mode, params):
# 0. Reformat input shape to become a sequence
x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1)
#print 'x={}'.format(x)
# 1. configure the RNN
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
# slice to keep only the last cell of the RNN
outputs = outputs[-1]
#print 'last outputs={}'.format(outputs)
# output is result of linear activation of last layer of RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias
# 2. loss function, training/eval ops
if mode == ModeKeys.TRAIN or mode == ModeKeys.EVAL:
loss = tf.losses.mean_squared_error(labels, predictions)
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=0.01,
optimizer="SGD")
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(labels, predictions)
}
else:
loss = None
train_op = None
eval_metric_ops = None
# 3. Create predictions
predictions_dict = {"predicted": predictions}
# 4. Create export outputs
export_outputs = {"predicted": tf.estimator.export.PredictOutput(predictions)}
# 5. return ModelFnOps
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
export_outputs=export_outputs)
Explanation: <h3> Define RNN </h3>
A recursive neural network consists of possibly stacked LSTM cells.
<p>
The RNN has one output per input, so it will have 8 output cells. We use only the last output cell, but rather use it directly, we do a matrix multiplication of that cell by a set of weights to get the actual predictions. This allows for a degree of scaling between inputs and predictions if necessary (we don't really need it in this problem).
<p>
Finally, to supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects.
End of explanation
def get_train():
return read_dataset('train.csv', mode=ModeKeys.TRAIN)
def get_valid():
return read_dataset('valid.csv', mode=ModeKeys.EVAL)
def serving_input_receiver_fn():
feature_placeholders = {
TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
features[TIMESERIES_COL] = tf.squeeze(features[TIMESERIES_COL], axis=[2], name='timeseries')
print('serving: features={}'.format(features[TIMESERIES_COL]))
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def experiment_fn(output_dir):
train_spec = tf.estimator.TrainSpec(input_fn=get_train(), max_steps=1000)
exporter = tf.estimator.FinalExporter('timeseries',
serving_input_receiver_fn)
eval_spec = tf.estimator.EvalSpec(input_fn=get_valid(),
exporters=[exporter])
estimator = tf.estimator.Estimator(model_fn=simple_rnn, model_dir=output_dir)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
OUTPUT_DIR = 'outputdir'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True) # start fresh each time
experiment_fn(OUTPUT_DIR)
Explanation: <h3> Experiment </h3>
Distributed training is launched off using an Experiment. The key line here is that we use tflearn.Estimator rather than, say tflearn.DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time.
End of explanation
%bash
# run module as-is
REPO=$(pwd)
echo $REPO
rm -rf outputdir
export PYTHONPATH=${PYTHONPATH}:${REPO}/simplernn
python -m trainer.task \
--train_data_paths="${REPO}/train.csv*" \
--eval_data_paths="${REPO}/valid.csv*" \
--output_dir=${REPO}/outputdir \
--job-dir=./tmp
Explanation: <h3> Standalone Python module </h3>
To train this on Cloud ML Engine, we take the code in this notebook, make an standalone Python module.
End of explanation
%writefile test.json
{"rawdata": [0.0,0.0527,0.10498,0.1561,0.2056,0.253,0.2978,0.3395]}
%bash
MODEL_DIR=$(ls ./outputdir/export/Servo/)
gcloud ml-engine local predict --model-dir=./outputdir/export/Servo/$MODEL_DIR --json-instances=test.json
Explanation: Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine
End of explanation
%bash
# run module on Cloud ML Engine
REPO=$(pwd)
BUCKET=cloud-training-demos-ml # CHANGE AS NEEDED
OUTDIR=gs://${BUCKET}/simplernn/model_trained
JOBNAME=simplernn_$(date -u +%y%m%d_%H%M%S)
REGION=us-central1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${REPO}/simplernn/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=1.2 \
-- \
--train_data_paths="gs://${BUCKET}/train.csv*" \
--eval_data_paths="gs://${BUCKET}/valid.csv*" \
--output_dir=$OUTDIR \
--num_epochs=100
Explanation: <h3> Cloud ML Engine </h3>
Now to train on Cloud ML Engine.
End of explanation
import tensorflow as tf
import numpy as np
def breakup(sess, x, lookback_len):
N = sess.run(tf.size(x))
windows = [tf.slice(x, [b], [lookback_len]) for b in xrange(0, N-lookback_len)]
windows = tf.stack(windows)
return windows
x = tf.constant(np.arange(1,11, dtype=np.float32))
with tf.Session() as sess:
print 'input=', x.eval()
seqx = breakup(sess, x, 5)
print 'output=', seqx.eval()
Explanation: <h2> Variant: long sequence </h2>
To create short sequences from a very long sequence.
End of explanation |
10,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sistemas de ecuaciones lineales
En este notebook vamos a ver conceptos básicos para resolver sistemas de ecuaciones lineales.
La estructura de esta presentación está basada en http
Step1: Sistemas de ecuaciones lineales
Un ejemplo de un sistema de ecuaciones lineales puede ser el siguiente
$
\begin{split}
a_{11} x_1 + a_{12} x_2+ a_{13}x_3 = b_1 \
a_{21} x_1 + a_{22} x_2+ a_{23} x_3 = b_2 \
a_{31} x_1 + a_{32} x_2+ a_{33} x_3 = b_3 \
\end{split}
$
que puede ser escrito de manera matricial como $Ax = b$, donde la solución se puede escribir como $x=A^{-1}b$. Esto motiva el desarrollo de métodos para encontrar la inversa de una matriz.
Step2: Construyendo un sistemas de ecuaciones lineales
Tenemos ahora el ejemplo siguiente. Tenemos tres puntos en el plano (x,y) y queremos encontrar la parábola que pasa por esos tres puntos.
La ecuación de la parábola es $y=ax^2+bx+c$, si tenemos tres puntos $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ podemos definir el siguiente sistema de ecuaciones lineales.
$
\begin{split}
x_1^2a+x_1b+c&=y_1 \
x_2^2a+x_2b+c&=y_2 \
x_3^2a+x_3b+c&=y_3 \
\end{split}
$
Que en notación matricial se ven así
$
\left(
\begin{array}{ccc}
x_1^2 & x_1 & 1 \
x_2^2 & x_2 & 1 \
x_3^2 & x_3 & 1 \
\end{array}
\right)
\left(
\begin{array}{c}
a \b \c \
\end{array}
\right)
=
\left(
\begin{array}{c}
y_1 \
y_2 \
y_3 \
\end{array}
\right)
$
Vamos a resolver este sistema lineal, asumiendo que los tres puntos son
Step3: Ejercicio 1
¿Qué pasa si los puntos no se encuentran sobre una parábola?
Ejercicio 2
Tomemos las mediciones de una cantidad $y$ a diferentes tiempos $t$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Sistemas de ecuaciones lineales
En este notebook vamos a ver conceptos básicos para resolver sistemas de ecuaciones lineales.
La estructura de esta presentación está basada en http://nbviewer.ipython.org/github/mbakker7/exploratory_computing_with_python/blob/master/notebook_adv2/py_exp_comp_adv2_sol.ipynb
End of explanation
#usando numpy se pueden resolver sistemas de este tipo.
A = np.array([[4.0,3.0,-2.0],[1.0,2.0,1.0],[-3.0,3.0,2.0]])
b = np.array([[3.0],[2.0],[1.0]])
b = np.array([[3.0],[2.0],[1.0]])
sol = np.linalg.solve(A,b)
print(A)
print(b)
print("sol",sol)
print(np.dot(A,sol))
#la inversa se puede encontrar como
Ainv = np.linalg.inv(A)
print("Ainv")
print(Ainv)
print("A * Ainv")
print(np.dot(A,Ainv))
Explanation: Sistemas de ecuaciones lineales
Un ejemplo de un sistema de ecuaciones lineales puede ser el siguiente
$
\begin{split}
a_{11} x_1 + a_{12} x_2+ a_{13}x_3 = b_1 \
a_{21} x_1 + a_{22} x_2+ a_{23} x_3 = b_2 \
a_{31} x_1 + a_{32} x_2+ a_{33} x_3 = b_3 \
\end{split}
$
que puede ser escrito de manera matricial como $Ax = b$, donde la solución se puede escribir como $x=A^{-1}b$. Esto motiva el desarrollo de métodos para encontrar la inversa de una matriz.
End of explanation
#primero construimos las matrices A y b
xp = np.array([-2, 1,4])
yp = np.array([ 2,-1,4])
A = np.zeros((3,3))
b = np.zeros(3)
for i in range(3):
A[i] = xp[i]**2, xp[i], 1 # Store one row at a time
b[i] = yp[i]
print 'Array A: '
print A
print 'b: ',b
#ahora resolvemos el sistema lineal y graficamos la solucion
sol = np.linalg.solve(A,b)
print 'solution is: ', sol
print 'A dot sol: ', np.dot(A,sol)
plt.plot([-2,1,4], [2,-1,4], 'ro')
x = np.linspace(-3,5,100)
y = sol[0]*x**2 + sol[1]*x + sol[2]
plt.plot(x,y,'b')
Explanation: Construyendo un sistemas de ecuaciones lineales
Tenemos ahora el ejemplo siguiente. Tenemos tres puntos en el plano (x,y) y queremos encontrar la parábola que pasa por esos tres puntos.
La ecuación de la parábola es $y=ax^2+bx+c$, si tenemos tres puntos $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ podemos definir el siguiente sistema de ecuaciones lineales.
$
\begin{split}
x_1^2a+x_1b+c&=y_1 \
x_2^2a+x_2b+c&=y_2 \
x_3^2a+x_3b+c&=y_3 \
\end{split}
$
Que en notación matricial se ven así
$
\left(
\begin{array}{ccc}
x_1^2 & x_1 & 1 \
x_2^2 & x_2 & 1 \
x_3^2 & x_3 & 1 \
\end{array}
\right)
\left(
\begin{array}{c}
a \b \c \
\end{array}
\right)
=
\left(
\begin{array}{c}
y_1 \
y_2 \
y_3 \
\end{array}
\right)
$
Vamos a resolver este sistema lineal, asumiendo que los tres puntos son: $(x_1,y_1)=(-2,2)$, $(x_2,y_2)=(1,-1)$, $(x_3,y_3)=(4,4)$
End of explanation
data = np.loadtxt("movimiento.dat")
plt.scatter(data[:,0], data[:,1])
Explanation: Ejercicio 1
¿Qué pasa si los puntos no se encuentran sobre una parábola?
Ejercicio 2
Tomemos las mediciones de una cantidad $y$ a diferentes tiempos $t$: $(t_0,y_0)=(0,3)$, $(t_1,y_1)=(0.25,1)$, $(t_2,y_2)=(0.5,-3)$, $(t_3,y_3)=(0.75,1)$. Estas medidas son parte de una función periódica que se puede escribir como
$y = a\cos(\pi t) + b\cos(2\pi t) + c\cos(3\pi t) + d\cos(4\pi t)$
donde $a$, $b$, $c$, and $d$ son parámetros. Construya un sistema de ecuaciones lineales y encuentre el valor de estos parámetros. Verifique su respuesta haciendo una gráfica.
Mínimos cuadrados
Volvamos por un momento al ejercicio de la parábola. Que pasaría si en realidad tuviéramos 10 mediciones? En ese caso la matriz $A$ sería de 10 por 3 y no podríamos encontrar una inversa. Aún así es interesante el problema de encontrar los parámetros de la parábola a partir de las mediciones. Aunque en este caso tenemos que olvidarnos de que la parábola pase por todos los puntos experimentales porque en general no lo va a hacer.
Para este caso tenemos que definir un criterio para decir que los parámetros son los mejores. Un posible criterio es que la suma de los cuadrados entre la curva teórica y los datos sea mínima. ¿Cómo podemos entonces encontrar una solución para este caso?
Cambiando un poco la notación pensemos que tenemos un vector $d$ de datos, un vector $m$ con los parámetros del modelo que queremos encontrar y una matriz $G$ que resume la información sobre el modelo que queremos utilizar para explicar los datos. De esta manera el problema se podría escribir como
$G m = d$
Donde $G$ en general no es invertible. Pero usando el criterio de mínimos cuadrados vamos a tener que el vector $m$ en realidad puede ser estimado por un vector $\hat{m}$ que cumple la siguiente condición
$G^T G \hat{m} = G^{T}d$
donde $T$ indica la transpuesta. Si ahora escribimos $G^{T}G=A$, $\hat{m}=x$ y $G^{T}d=b$ volvemos al problema del principio y podemos encontrar fácilmente a $\hat{m}$
Ejercicio 3
Los datos siguientes
https://raw.githubusercontent.com/ComputoCienciasUniandes/MetodosComputacionales/master/hands_on/lin_algebra/movimiento.dat
Representan una coordenada temporal y una coordenada espacial de un movimiento unidimensional en un campo gravitacional. Encuentre el mejor valor posible de la posición inicial, velocidad inicial y gravedad. Verifique que sus valores son razonables con una gráfica.
End of explanation |
10,467 | Given the following text description, write Python code to implement the functionality described.
Description:
Same Number Of Set Bits As N
; function ; __builtin_popcount function that count set bits in n ; Iterate from n - 1 to 1 ; check if the number of set bits equals to temp increment count ; Driver Code
| Python Code:
/ * returns number of set bits in a number * /
def __builtin_popcount(n ) :
t = 0
while(n > 0 ) :
d = n % 2
n = int(n / 2 )
if(d == 1 ) :
t = t + 1
return t
def smallerNumsWithSameSetBits(n ) :
temp = __builtin_popcount(n )
count = 0
for i in range(n - 1 , 0 , - 1 ) :
if(temp == __builtin_popcount(i ) ) :
count = count + 1
return count
n = 4
print(smallerNumsWithSameSetBits(n ) )
|
10,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Exploring deaths of notable people by year in Wikipedia
By R. Stuart Geiger, last updated 2016-12-28
Dual-licensed under CC-BY-SA 4.0 and the MIT License.
How many articles are in the "[year] deaths" categories in the English Wikipedia?
The first thing I tried was just counting up the number of articles in each of the "[year] deaths" categories, from 2000-2016.
Step3: Interpreting total article counts
One of the first things that we see in this graph is that the data is far from uniform, and has a distinct trend. This should make us suspicious. There are about 4,945 articles in the "2000 deaths" category, and the number steadily rises each year to 7,486 articles in the "2010 deaths" category. Is there any compelling reason we have to believe that the number of notable people in the world would steadily increase by a few percent each year from 2000 to 2010, then plateau? Or is it more of an artifact of what Wikipedia's volunteer editors choose to work on?
What if we look at this over a much longer timescale, like 1800-2016?
Step4: We can see the two big jumps in the 20th century, likely reflecting the events around World War I and II. This makes sense, as those time periods were certainly sharp increases in the total number of deaths, as well as the number of notable deaths. Remember
Step5: Querying the pageview API for all the articles in the "2016 deaths" category
I was wanting to get 2016 pageview data for 2016 deaths, 2015 pageview data for 2015 deaths, and so on. But there isn't full historical data for the pageview API. However, we can take a detour and do some interesting exploration with only the 2016 dataset.
This code iterates through the category for "2016 deaths" and for each page, queries the pageview API to get the number of total pageviews in 2016. It takes a few minutes to run. This throws some errors for a few articles (in pink boxes below), which we will ignore.
Step6: Getting the daily pageview counts for 6 most viewed articles in "2016 deaths" (includes the "Deaths in 2016" article)
Step7: Plotting pageviews per day of top 5 articles
Step9: Querying edit counts for articles in the "[year] deaths" categories using SQL/Quarry
To get data about the number of times each article the "[year] deaths" categories has been edited, we could use the API, but it would take a long time. There are over 100,000 articles in the 2000-2016 categories, and that would require a new API call for each one. This is the kind of query that SQL is meant for, and we can use the Quarry service to run this query directly on Wikipedia's servers.
I've included the query below in a code cell, but it was run here. We will download the results in a TSV file, then load it into a pandas dataframe for processing.
Step17: Filtering articles by number of edits
We can filter the number of articles in the various death by year categories by the total edit count. But what will be our threshold? What are we looking for? I've chosen 7 different thresholds (over 10, 50, 100, 250, 500, 750, and 1,000 edits). The results these different thresholds produce give rise to different interpretations of the same question. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
%matplotlib inline
matplotlib.style.use('seaborn-darkgrid')
import pywikibot
site = pywikibot.Site('en', 'wikipedia')
def yearly_death_counts(startyear,endyear):
years = np.arange(startyear,endyear+1) # add 1 to endyear because np.arange doesn't include the stop
deaths_per_year = {}
for year in years:
deaths_per_year[year] = 0
for year in years:
yearstr = 'Category:' + str(year) + "_deaths"
deathcat = pywikibot.Page(site, yearstr)
deathcat_o = site.categoryinfo(deathcat)
deaths_per_year[year] = deathcat_o['pages']
yearly_articles_df = pd.DataFrame.from_dict(deaths_per_year, orient='index')
yearly_articles_df.columns = ['articles in category']
yearly_articles_df = yearly_articles_df.sort_index()
return yearly_articles_df
yearly_articles_df = yearly_death_counts(2000,2016)
yearly_articles_df
ax = yearly_articles_df.plot(kind='bar',figsize=[10,4])
ax.legend_.remove()
ax.set_ylabel("Number of articles")
ax.set_title(Articles in the "[year] deaths" category in the English Wikipedia)
Explanation: Exploring deaths of notable people by year in Wikipedia
By R. Stuart Geiger, last updated 2016-12-28
Dual-licensed under CC-BY-SA 4.0 and the MIT License.
How many articles are in the "[year] deaths" categories in the English Wikipedia?
The first thing I tried was just counting up the number of articles in each of the "[year] deaths" categories, from 2000-2016.
End of explanation
yearly_articles_df = yearly_death_counts(1800,2016)
ax = yearly_articles_df.plot(kind='line',figsize=[10,4])
ax.legend_.remove()
ax.set_ylabel("Number of articles")
ax.set_title(Articles in the "[year] deaths" category in the English Wikipedia)
Explanation: Interpreting total article counts
One of the first things that we see in this graph is that the data is far from uniform, and has a distinct trend. This should make us suspicious. There are about 4,945 articles in the "2000 deaths" category, and the number steadily rises each year to 7,486 articles in the "2010 deaths" category. Is there any compelling reason we have to believe that the number of notable people in the world would steadily increase by a few percent each year from 2000 to 2010, then plateau? Or is it more of an artifact of what Wikipedia's volunteer editors choose to work on?
What if we look at this over a much longer timescale, like 1800-2016?
End of explanation
!pip install mwviews
from mwviews.api import PageviewsClient
def yearly_views(title,year):
p = PageviewsClient(2)
startdate = str(year) + "010100"
enddate = str(year) + "123123"
d = p.article_views('en.wikipedia', title, granularity='monthly', start=startdate, end=enddate)
total = 0
for month in d.values():
for titlecount in month.values():
if titlecount is not None:
total += titlecount
return total
yearly_views("Prince_(musician)", 2016)
yearly_views("Prince_(musician)", 2015)
yearly_views("Prince_(musician)", 2014)
Explanation: We can see the two big jumps in the 20th century, likely reflecting the events around World War I and II. This makes sense, as those time periods were certainly sharp increases in the total number of deaths, as well as the number of notable deaths. Remember: we have already assumed that Wikipedia's biographical articles doesn't represent all of humanity -- in fact, we are counting on it, so we can distinguish celebrity deaths.
However, for the purposes of our question, is it safe to assume that having a Wikipedia article means being a celebrity? When I hear people talk about so many celebrities dying in 2016, people seem to mean a lower number than the ~7,000 people with Wikipedia articles who died in 2010-2016. The number is maybe two orders of magnitude lower, somewhere closer to 70 than 7,000. So is there a way we can filter Wikipedia articles?
To get at this, I first thought of using the pageview data that Wikimedia collects. There is a nice API about how many times every article in every language version of Wikipedia is viewed each hour. I hadn't played around with that API, so I wanted to try it out.
Pageviews for articles in the "2016 Deaths" category
The mwviews python package has support for hourly, daily, and monthly granularity, but not annual. So I wrote a function that gets the pageview counts for a given article for an entire year. But, as we will see, the data in the historical pageview API only goes back to mid-2015.
End of explanation
year = 2016
yearstr = 'Category:' + str(year) + "_deaths"
deathcat = pywikibot.Page(site, yearstr)
pageviews_2016 = {}
for page in site.categorymembers(deathcat):
if page.title().find("List_of") is -1 and page.title().find("Category:") is -1:
try:
page_yearly_views = yearly_views(page.title(),year)
except Exception as e:
page_yearly_views = 0
pageviews_2016[page.title()] = page_yearly_views
pageviews_df = pd.DataFrame.from_dict(pageviews_2016,orient='index')
pageviews_df = pageviews_df.sort_values(0, ascending=False)
pageviews_df.head(25)
pageviews_df.to_csv("enwiki_pageviews_2016.csv")
Explanation: Querying the pageview API for all the articles in the "2016 deaths" category
I was wanting to get 2016 pageview data for 2016 deaths, 2015 pageview data for 2015 deaths, and so on. But there isn't full historical data for the pageview API. However, we can take a detour and do some interesting exploration with only the 2016 dataset.
This code iterates through the category for "2016 deaths" and for each page, queries the pageview API to get the number of total pageviews in 2016. It takes a few minutes to run. This throws some errors for a few articles (in pink boxes below), which we will ignore.
End of explanation
articles = []
for index,row in pageviews_df.head(6).iterrows():
articles.append(index)
from mwviews.api import PageviewsClient
p = PageviewsClient(10)
startdate = "2016010100"
enddate = "2016123123"
counts_dict = p.article_views('en.wikipedia', articles, granularity='daily', start=startdate, end=enddate)
counts_df = pd.DataFrame.from_dict(counts_dict, orient='index')
counts_df = counts_df.fillna(0)
counts_df.to_csv("deaths-enwiki-2016.csv")
Explanation: Getting the daily pageview counts for 6 most viewed articles in "2016 deaths" (includes the "Deaths in 2016" article)
End of explanation
articles = []
for index,row in pageviews_df.head(6).iterrows():
articles.append(index)
counts_dict = p.article_views('en.wikipedia', articles, granularity='daily', start=startdate, end=enddate)
counts_df = pd.DataFrame.from_dict(counts_dict, orient='index')
counts_df = counts_df.fillna(0)
matplotlib.style.use('seaborn-darkgrid')
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 18}
matplotlib.rc('font', **font)
plt.figure(figsize=[14,7.2])
for title in counts_df:
fig = counts_df[title].plot(legend=True, linewidth=2)
fig.set_ylabel('Views per day')
plt.legend(loc='best')
Explanation: Plotting pageviews per day of top 5 articles
End of explanation
sql_query =
select cl_to, cl_from, count(rev_id) as edits, page_title
from (select * from categorylinks where cl_to LIKE '20___deaths') as d
inner join revision on cl_from = rev_page
inner join page on rev_page = page_id
where page_namespace = 0 and cl_to NOT LIKE '200s_deaths' and page_title NOT LIKE 'List_of%'
group by cl_from
!wget https://quarry.wmflabs.org/run/139193/output/0/tsv?download=true -O deaths.tsv
deaths_df = pd.read_csv("deaths.tsv", sep='\t')
deaths_df.columns = ['year', 'page_id', 'edits', 'title']
deaths_df.head(15)
Explanation: Querying edit counts for articles in the "[year] deaths" categories using SQL/Quarry
To get data about the number of times each article the "[year] deaths" categories has been edited, we could use the API, but it would take a long time. There are over 100,000 articles in the 2000-2016 categories, and that would require a new API call for each one. This is the kind of query that SQL is meant for, and we can use the Quarry service to run this query directly on Wikipedia's servers.
I've included the query below in a code cell, but it was run here. We will download the results in a TSV file, then load it into a pandas dataframe for processing.
End of explanation
deaths_over10 = deaths_df[deaths_df.edits>10]
deaths_over50 = deaths_df[deaths_df.edits>50]
deaths_over100 = deaths_df[deaths_df.edits>100]
deaths_over250 = deaths_df[deaths_df.edits>250]
deaths_over500 = deaths_df[deaths_df.edits>500]
deaths_over750 = deaths_df[deaths_df.edits>750]
deaths_over1000 = deaths_df[deaths_df.edits>1000]
deaths_over10 = deaths_over10[['year','edits']]
deaths_over50 = deaths_over50[['year','edits']]
deaths_over100 = deaths_over100[['year','edits']]
deaths_over250 = deaths_over250[['year','edits']]
deaths_over500 = deaths_over500[['year','edits']]
deaths_over750 = deaths_over750[['year','edits']]
deaths_over1000 = deaths_over1000[['year','edits']]
matplotlib.style.use('seaborn-darkgrid')
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 10}
matplotlib.rc('font', **font)
ax = deaths_over10.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >10 edits in "[year] deaths" category)
ax = deaths_over50.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >50 edits in "[year] deaths" category)
ax = deaths_over100.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >100 edits in "[year] deaths" category)
ax = deaths_over250.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >250 edits in "[year] deaths" category)
ax = deaths_over500.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >500 edits in "[year] deaths" category)
ax = deaths_over750.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >750 edits in "[year] deaths" category)
ax = deaths_over1000.groupby(['year']).agg(['count']).plot(kind='barh')
ax.legend_.remove()
ax.set_title(Number of articles with >1,000 edits in "[year] deaths" category)
Explanation: Filtering articles by number of edits
We can filter the number of articles in the various death by year categories by the total edit count. But what will be our threshold? What are we looking for? I've chosen 7 different thresholds (over 10, 50, 100, 250, 500, 750, and 1,000 edits). The results these different thresholds produce give rise to different interpretations of the same question.
End of explanation |
10,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TOC trends October 2016 (part 2)
This notebook continues the work described here, where my latest trends code was modified and tested. My aim here is to use the code to generate trends results for three key periods of interest (see e-mail from Heleen 19/10/2016 at 10
Step1: 2. 1990 to 2012
Step2: 3. 1990 to 2004
Step3: 4. 1998 to 2012
Step4: 5. All data
Step5: 6. Basic checking
6.1. Boxplots
As a very basic check, let's create boxplots showing the long-term mean for each parameter at each site (i.e. each datapoint is the mean of all the annual means for a particular parameter at a single site). This should help identify any really extreme values that need further checking and cleaning. Note the following
Step6: 6.2. Map visualisation
As a further check, I'd like to build an updated map visualisation incorporating all of the results produced above. This requires some merging of the results files created above. I've also manually exported the basic station properties for all the sites associated with the 13 RESA2 projects chosen for this analysis. This file can be found here
Step7: I have uploaded all the trends plots to our web-hosting platform in the following folder
Step8: The workflow for creating the map is as follows
Step9: Now add an "include" column based on the criteria in Heleen's e-mail (received 24/10/2016 at 11
Step10: Heleen also wants a version in "wide" format, where each row includes all the data for a single station. I'm going to remove the data_period column, because column headings like Al_1990-2012_1991-2010 are confusing. | Python Code:
# Standard imports used throughout this notebook
import imp, os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
%matplotlib inline

# Import custom functions
# Connect to db
resa2_basic_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template'
r'\useful_resa2_code.py')
resa2_basic = imp.load_source('useful_resa2_code', resa2_basic_path)
engine, conn = resa2_basic.connect_to_resa2()
# Import code for trends analysis
resa2_trends_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Python\icpw\toc_trends_analysis.py')
resa2_trends = imp.load_source('toc_trends_analysis', resa2_trends_path)
# User input
# Specify projects of interest
proj_list = ['ICPW_TOCTRENDS_2015_CA_ATL',
'ICPW_TOCTRENDS_2015_CA_DO',
'ICPW_TOCTRENDS_2015_CA_ICPW',
'ICPW_TOCTRENDS_2015_CA_NF',
'ICPW_TOCTRENDS_2015_CA_QU',
'ICPW_TOCTRENDS_2015_CZ',
'ICPW_TOCTRENDS_2015_Cz2',
'ICPW_TOCTRENDS_2015_FI',
'ICPW_TOCTRENDS_2015_NO',
'ICPW_TOCTRENDS_2015_SE',
'ICPW_TOCTRENDS_2015_UK',
'ICPW_TOCTRENDS_2015_US_LTM']
# Specify results folder
res_fold = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results')
Explanation: TOC trends October 2016 (part 2)
This notebook continues the work described here, where my latest trends code was modified and tested. My aim here is to use the code to generate trends results for three key periods of interest (see e-mail from Heleen 19/10/2016 at 10:24):
1990-2012
1990-2004
1998-2012
as well as creating a fourth set of results using all of the data available for each site. This latter set of results will hopefully help to identify any further obvious data issues (strange values etc.).
Update 27/10/2016: A few additional modifications have been made to the trends code, so that it now includes the number of non-missing values in the first and last 5 years. See e-mail from Heleen received 25/10/2016 at 15:56 for details. The code now performs the following additional calculations:
If start and/or end years are specified, the output includes the columns n_start and n_end, which specify the number of non-null values within 5 years of the start and end years, respectively. <br><br>
If start and/or end years are not specified, the n_start and n_end columns record the number of non-null values within 5 years of the start and end of the data series.
These changes do not affect the plotting code, so I haven't re-generated the plots, but I have replaced the various output spreadsheets.
1. Import functions and specify user input
End of explanation
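To make the n_start and n_end definitions above concrete, here is a minimal sketch of how such counts could be produced with pandas for a single annual series. The real logic lives in toc_trends_analysis.py (not shown here), and the series, the missing years and the exact 5-year window below are purely illustrative assumptions.
# Sketch only: the real n_start/n_end calculation is in toc_trends_analysis.py.
# Build a made-up annual series with a few missing years
yrs = np.arange(1990, 2013)
vals = pd.Series(np.random.normal(size=len(yrs)), index=yrs)
vals.loc[[1991, 1992, 2011]] = np.nan    # pretend these years are missing

st_yr, end_yr = 1990, 2012

# Count non-null values within 5 years of the start and end of the period
n_start = vals.loc[st_yr:st_yr + 4].notnull().sum()
n_end = vals.loc[end_yr - 4:end_yr].notnull().sum()

n_start, n_end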
# Run analysis
# Specify period of interest
st_yr, end_yr = 1990, 2012
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=st_yr, end_yr=end_yr,
plot=False, fold=False)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
nd_df.to_csv(nd_csv, index=False)
Explanation: 2. 1990 to 2012
End of explanation
# Run analysis
# Specify period of interest
st_yr, end_yr = 1990, 2004
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=st_yr, end_yr=end_yr,
plot=False, fold=False)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
nd_df.to_csv(nd_csv, index=False)
Explanation: 3. 1990 to 2004
End of explanation
# Run analysis
# Specify period of interest
st_yr, end_yr = 1998, 2012
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=st_yr, end_yr=end_yr,
plot=False, fold=False)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
nd_df.to_csv(nd_csv, index=False)
Explanation: 4. 1998 to 2012
End of explanation
# Run analysis
# Specify period of interest
st_yr, end_yr = None, None
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_all_years')
res_csv = os.path.join(res_fold, 'res_all_years.csv')
dup_csv = os.path.join(res_fold, 'dup_all_years.csv')
nd_csv = os.path.join(res_fold, 'nd_all_years.csv')
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list, engine,
st_yr=st_yr, end_yr=end_yr,
plot=False, fold=False)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
nd_df.to_csv(nd_csv, index=False)
Explanation: 5. All data
End of explanation
# Set up plot
fig = plt.figure(figsize=(20,10))
sn.set(style="ticks", palette="muted",
color_codes=True, font_scale=2)
# Horizontal boxplots
ax = sn.boxplot(x="mean", y="par_id", data=res_df,
whis=np.inf, color="c")
# Add "raw" data points for each observation, with some "jitter"
# to make them visible
sn.stripplot(x="mean", y="par_id", data=res_df, jitter=True,
size=3, color=".3", linewidth=0)
# Remove axis lines
sn.despine(trim=True)
Explanation: 6. Basic checking
6.1. Boxplots
As a very basic check, let's create boxplots showing the long-term mean for each parameter at each site (i.e. each datapoint is the mean of all the annual means for a particular parameter at a single site). This should help identify any really extreme values that need further checking and cleaning. Note the following:
All values are in $\mu eq/l$, except for Al and TOC, which have units of $\mu g/l$ and $mgC/l$, respectively. <br><br>
The "whiskers" on the boxplots extend from the minimum to the maximum values in each dataset (i.e. they show the full data range, not a percentile interval or a multiple of the IQR).
End of explanation
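The boxplots are good for eyeballing outliers, but it can also help to pull the most extreme sites out directly for manual checking. The sketch below lists, for each parameter, the five stations with the largest long-term means in res_df (which at this point holds the results from the "all data" run). The cut-off of five, and ranking on the raw mean rather than e.g. a robust z-score, are arbitrary choices.
# List the five stations with the largest long-term means for each parameter.
# The cut-off of five is arbitrary - just a convenient starting point for
# manually checking suspicious values
top_n = 5

ext_df = (res_df.sort_values(['par_id', 'mean'], ascending=[True, False])
                .groupby('par_id')
                .head(top_n))

ext_df[['station_id', 'par_id', 'non_missing', 'mean', 'median', 'std_dev']]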
# Read results files and concatenate
# Container for data
df_list = []
# Loop over periods
for per in ['1990-2012', '1990-2004', '1998-2012', 'all_years']:
res_path = os.path.join(res_fold, 'res_%s.csv' % per)
df = pd.read_csv(res_path)
# Change 'period' col to 'data_period' and add 'analysis_period'
df['data_period'] = df['period']
del df['period']
df['analysis_period'] = per
df_list.append(df)
# Concat
df = pd.concat(df_list, axis=0)
# Read station data
stn_path = os.path.join(res_fold, 'trends_sites_oct_2016.xlsx')
stn_df = pd.read_excel(stn_path, sheetname='data')
# Join
df = pd.merge(df, stn_df, how='left', on='station_id')
# Re-order columns
df = df[['station_id', 'station_name', 'station_code', 'nfc_code','country',
'lat', 'lon', 'analysis_period', 'data_period', 'par_id',
'non_missing', 'n_start', 'n_end', 'mean', 'median', 'std_dev',
'mk_stat', 'norm_mk_stat', 'mk_p_val', 'trend', 'sen_slp']]
df.head()
Explanation: 6.2. Map visualisation
As a further check, I'd like to build an updated map visualisation incorporating all of the results produced above. This requires some merging of the results files created above. I've also manually exported the basic station properties for all the sites associated with the 13 RESA2 projects chosen for this analysis. This file can be found here:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Results\trends_sites_oct_2016.xlsx
Note the information in the readme sheet, which explains that there are 431 sites with data in the selected projects. This is less than in the previous analysis and perhaps less than Heleen was expecting (?). Do we need to review the list of projects under consideration?
End of explanation
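As a quick cross-check on the 431 figure, the station count can also be pulled straight from RESA2, using the same projects and projects_stations tables that are queried further down in this notebook. Note that this counts every station linked to the selected projects, including any with no data at all, so it will not necessarily match the Excel export exactly.
# Cross-check: count the distinct stations linked to the selected projects.
# Stations with no data are still counted here, so the total may differ
# slightly from the number of sites with data in the Excel file
sql = ('SELECT COUNT(DISTINCT station_id) AS n_stations '
       'FROM resa2.projects_stations '
       'WHERE project_id IN (SELECT project_id '
       'FROM resa2.projects '
       'WHERE project_name IN %s)' % str(tuple(proj_list)))

pd.read_sql_query(sql, engine)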
def assign_colour(row):
if row['trend'] == 'increasing':
return 'small_red'
elif row['trend'] == 'decreasing':
return 'small_green'
else:
return 'small_yellow'
def build_path(row):
base = r'http://77.104.141.195/~icpwater/wp-content/trends_plots/'
# Get row properties
an_per = row['analysis_period']
stn = row['station_id']
par = row['par_id']
da_per = row['data_period']
    # Make path. Use '/' separators explicitly rather than os.path.join, which
    # would insert '\' on Windows and break the URLs
    full_path = '%strends_plots_%s/%s_%s_%s.png' % (base, an_per, stn, par, da_per)
return full_path
# Add symbol column
df['symbol'] = df.apply(assign_colour, axis=1)
# Build path to plots
df['link'] = df.apply(build_path, axis=1)
# Filter results
df = df.query('(non_missing != 0) and (non_missing != 1)')
# Save
out_path = os.path.join(res_fold, 'data_vis_all.csv')
df.to_csv(out_path, index=False, encoding='utf-8')
df.head()
Explanation: I have uploaded all the trends plots to our web-hosting platform in the following folder:
http://77.104.141.195/~icpwater/wp-content/trends_plots
In order to display these on my map, I need to build a column containing direct links to each of these files.
I also need to add a column defining colours for the three trend types.
Finally, I'm going to drop rows where non_missing = 0 or 1, as this implies there's not enough data to calculate any summary statistics.
End of explanation
# Read results files and concatenate
# Container for data
df_list = []
# Loop over periods
for per in ['1990-2012', '1990-2004', '1998-2012', 'all_years']:
res_path = os.path.join(res_fold, 'res_%s.csv' % per)
df = pd.read_csv(res_path)
# Change 'period' col to 'data_period' and add 'analysis_period'
df['data_period'] = df['period']
del df['period']
df['analysis_period'] = per
df_list.append(df)
# Concat
df = pd.concat(df_list, axis=0)
# Read station data
stn_path = os.path.join(res_fold, 'trends_sites_oct_2016.xlsx')
stn_df = pd.read_excel(stn_path, sheetname='data')
# Join
df = pd.merge(df, stn_df, how='left', on='station_id')
# Read projects table
sql = ('SELECT project_id, project_name '
'FROM resa2.projects '
'WHERE project_name in %s' % str(tuple(proj_list)))
proj_df = pd.read_sql_query(sql, engine)
# Get associated stations
sql = ('SELECT station_id, project_id '
'FROM resa2.projects_stations '
'WHERE project_id in %s' % str(tuple(proj_df['project_id'].values)))
proj_stn_df = pd.read_sql_query(sql, engine)
# Join proj details
proj_df = pd.merge(proj_stn_df, proj_df, how='left', on ='project_id')
# Join to results
df = pd.merge(df, proj_df, how='left', on='station_id')
# Re-order columns
df = df[['project_id', 'project_name', 'country', 'station_id',
'station_code', 'station_name', 'nfc_code', 'type',
'lat', 'lon', 'analysis_period', 'data_period', 'par_id',
'non_missing', 'n_start', 'n_end', 'mean', 'median',
'std_dev', 'mk_stat', 'norm_mk_stat', 'mk_p_val', 'trend',
'sen_slp']]
df.head()
Explanation: The workflow for creating the map is as follows:
Run the trends code and the notebook cells above to create a CSV file that will form the basis of the Google Fusion Table (data_vis_all.csv). <br><br>
Open a blank Excel workbook and choose Data > From text to import the CSV. Be sure to set the encoding to 65001: Unicode (utf-8) and set the column data types explicitly, otherwise Excel will truncate some of the NFC site codes and the special characters in the station names won't reproduce properly. (NB: there are still some problems with special characters in the station names, because in many cases RESA2 is storing names that are already corrupted. I don't have time to fix this now - it's another database issue to add to the list. The workflow described here should faithfully reproduce whatever's in the database, which is the best I can do at present). Check the file looks OK (data_vis_all.xlsx). <br><br>
Upload all the plots to SiteGround in the wp-content/trends_plots folder. This is done using the FileZilla FTP client. <br><br>
Create a new Fusion Table and import the data. Make sure to check the box to make the table downloadable, and make sure it's in a public folder on Google Drive. Next, check the column types are correct. In particular, lat and lon need to be set to define the locations and the link column needs to be set to Text > Link. Then switch to map view, click Change feature styles and style the map based on the entries in the symbol column. Turn on the Terrain option in map view if you want to. <br><br>
Click Change info window and modify how the pop-up information box is displayed. Entering something like this in the Custom tab is a good start:
<center><h2>{par_id} at {station_name}, {country}</h2></center>
<center><h2>{analysis_period}</h2></center>
<center><table>
<tr>
<td><b>ICPW ID:</b></td>
<td>{station_id}</td>
</tr>
<tr>
<td><b>ICPW code:</b></td>
<td>{station_code}</td>
</tr>
<tr>
<td><b>NFC code:</b></td>
<td>{nfc_code}</td>
</tr>
<tr>
<td><b>Data period:</b></td>
<td>{data_period}</td>
</tr>
<tr>
<td><b>Number of years with data:</b></td>
<td>{non_missing}</td>
</tr>
<tr>
<td><b>Mean:</b></td>
<td>{mean}</td>
</tr>
<tr>
<td><b>Median:</b></td>
<td>{median}</td>
</tr>
<tr>
<td><b>Standard deviation:</b></td>
<td>{std_dev}</td>
</tr>
<tr>
<td><b>Normalised Mann-Kendall statistic:</b></td>
<td>{norm_mk_stat}</td>
</tr>
<tr>
<td><b>Mann-Kendall p-value:</b></td>
<td>{mk_p_val}</td>
</tr>
<tr>
<td><b>Trend:</b></td>
<td>{trend}</td>
</tr>
<tr>
<td><b>Theil-Sen slope:</b></td>
<td>{sen_slp}</td>
</tr>
</table></center>
<center><img src={link} height="250"></center>
Follow the instructions in this Word document:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Python\Fusion tables tips.docx
which describes adding the table to the Fusion Tables Layer Wizard and then modifying the subsequent JavaScript to add e.g. filter boxes, legends etc. Save the resulting code as an .html file and upload it to a suitable public location at SiteGround. <br><br>
Link to your finished map by embedding the public path to your HTML file as an iframe in your webpage.
As of 21/10/2016, the finished page is here.
7. Data restructuring
Heleen would like the output in a particular format - see the e-mail received 19/10/2016 at 10:24 for details. The code below reads the results files and restructures them.
End of explanation
def include(row):
if ((row['analysis_period'] == '1990-2012') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 15)):
return 'yes'
elif ((row['analysis_period'] == '1990-2004') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
elif ((row['analysis_period'] == '1998-2012') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
else:
return 'no'
df['include'] = df.apply(include, axis=1)
# Save output
out_path = os.path.join(res_fold, 'toc_trends_long_format.csv')
df.to_csv(out_path, index=False, encoding='utf-8')
df.head()
Explanation: Now add an "include" column based on the criteria in Heleen's e-mail (received 24/10/2016 at 11:23) and save the result.
Updated 27/10/2016: The refined criteria are actually in the e-mail from Heleen received 25/10/2016 at 15:56.
End of explanation
del df['data_period']
# Melt to "long" format
melt_df = pd.melt(df,
id_vars=['project_id', 'project_name', 'country',
'station_id', 'station_code', 'station_name',
'nfc_code', 'type', 'lat', 'lon', 'analysis_period',
'par_id', 'include'],
var_name='stat')
# Get only values where include='yes'
melt_df = melt_df.query('include == "yes"')
del melt_df['include']
melt_df.head()
# Build multi-index on everything except "value"
melt_df.set_index(['project_id', 'project_name', 'country',
'station_id', 'station_code', 'station_name',
'nfc_code', 'type', 'lat', 'lon', 'par_id',
'analysis_period', 'stat'], inplace=True)
melt_df.head()
# Unstack levels of interest to columns
wide_df = melt_df.unstack(level=['par_id', 'analysis_period', 'stat'])
# Drop unwanted "value" level in index
wide_df.columns = wide_df.columns.droplevel(0)
# Replace multi-index with separate components concatenated with '_'
wide_df.columns = ["_".join(item) for item in wide_df.columns]
# Reset multiindex on rows
wide_df = wide_df.reset_index()
# Save output
out_path = os.path.join(res_fold, 'toc_trends_wide_format.csv')
wide_df.to_csv(out_path, index=False, encoding='utf-8')
wide_df.head()
Explanation: Heleen also wants a version in "wide" format, where each row includes all the data for a single station. I'm going to remove the data_period column, because column headings like Al_1990-2012_1991-2010 are confusing.
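As a quick illustration of the naming scheme produced by the "_".join step above, a multi-index column such as ('TOC', '1990-2012', 'mean') collapses to a single flat name:
"_".join(('TOC', '1990-2012', 'mean'))   # -> 'TOC_1990-2012_mean'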
End of explanation |
10,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Collection - Crawling Flight Crash Data
This is the first step in our project. The code below shows a crawler (written using BeautifulSoup, the old school way) that gets raw HTML data from this site, extracts the data from the HTML tables, and writes it to a MongoDB instance running on the same machine.
The entire data pipeline is shown below
Step1: Declaring the important 'Global Variables'.
Basically the configuration of our crawl. The start_year and end_year of the crawl can be changed to suit your needs
Step2: Connecting to the Mongo DB client running on the same machine.
Must change if the Mongo DB is running on a separate machine. Check MongoDB docs
Step3: Helper function to convert month from text to a number [1-12]
Step4: Helper function that takes <i>url (string)</i> as input and returns <i>BeautifulSoup Object</i> of the url
Step5: Helper function that pushes a <i> Beautiful Soup Object (HTML table in this case) </i> to a <i>Mongo DB collection</i>
Open this crash record in your browser, and have a look at the HTML source code for reference.
The function parses each value in table_ according to the format of its key (i.e. Date/location/Aircraft Type/others)
The string.encode('utf-8') is necessary, as the website uses the windows-1252 character set, which causes some characters to get messed up if the encoding is not explicitly changed.
This is what the HTML table looks like
Step6: Crawler- The Core
<B><U>MAIN IDEA</U> | Python Code:
__author__ = 'shivam_gaur'
import requests
from bs4 import BeautifulSoup
import re
import os
import pymongo
from pymongo import MongoClient
import datetime
Explanation: Data Collection - Crawling Flight Crash Data
This is the first step in our project. The code below shows a crawler (written using BeautifulSoup, the old school way) that gets raw HTML data from this site, extracts the data from the HTML tables, and writes it to a MongoDB instance running on the same machine.
The entire data pipeline is shown below:
<img src="data_pipeline.PNG">
Import the required libraries
End of explanation
# The URL
rooturl = "http://www.planecrashinfo.com"
url = "http://www.planecrashinfo.com/database.htm"
#change start_year to 1920 to crawl the entire dataset
start_year = 2014
end_year = 2016
year_range = range(start_year,end_year+1,1)
newurl=''
Explanation: Declaring the important 'Global Variables'.
Basically the configuration of our crawl. The start_year and end_year of the crawl can be changed to suit your needs
End of explanation
# Connecting to Mongo instance
client = MongoClient()
# specify the name of the db in brackets
db = client['aircrashdb']
# specify the name of the collection in brackets
collection = db['crawled_data']
Explanation: Connecting to the Mongo DB client running on the same machine.
Must change if the Mongo DB is running on a separate machine. Check MongoDB docs
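For a remote instance, the client would instead be given a host and port (or a full MongoDB connection URI), for example:
# hypothetical remote instance - replace the address with your own server
client = MongoClient('mongodb://192.0.2.10:27017/')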
End of explanation
def getMonth(month):
Months = ['january','february','march','april','may','june','july','august','september','october','november','december']
month = month.lower()
for i,value in enumerate(Months):
if value == month:
return i+1
return 0 # if it is not a valid month string
Explanation: Helper function to convert month from text to a number [1-12]
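For example, with the helper defined above:
getMonth('January')    # -> 1
getMonth('december')   # -> 12 (matching is case-insensitive)
getMonth('Sept')       # -> 0 (abbreviations are not valid month strings)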
End of explanation
def makeBeautifulSoupObject(url):
# Use a `Session` instance to customize how `requests` handles making HTTP requests.
session = requests.Session()
# `mount` a custom adapter that retries failed connections for HTTP and HTTPS requests, in this case- 5 times
session.mount("http://", requests.adapters.HTTPAdapter(max_retries=5))
session.mount("https://", requests.adapters.HTTPAdapter(max_retries=5))
source_code = session.get(url=url)
plain_text = source_code.text.encode('utf8')
soup = BeautifulSoup(plain_text, "lxml")
return soup
Explanation: Helper function that takes <i>url (string)</i> as input and returns <i>BeautifulSoup Object</i> of the url
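For example, using the year-index URL pattern described later in this notebook:
soup = makeBeautifulSoupObject('http://www.planecrashinfo.com/2014/2014.htm')
print(soup.title)   # the page's <title> element, if the request succeeded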
End of explanation
def push_record_to_mongo(table_):
record = {}
table=BeautifulSoup(str(table_[0]))
for tr in table.find_all("tr")[1:]:
tds = tr.find_all("td")
# encoding the 'value' string to utf-8 and removing any non-breaking space (HTML Character)
tmp_str = tds[1].string.encode('utf-8').replace(" ", "")
value = str(tmp_str) # this is the value- In Column #2 of the HTML table
key = tds[0].string # this is the key- In Column #1 of the HTML table
if key == "Date:":
dat = str(value).replace(',','').split(' ')
date = datetime.datetime(int(dat[2]),getMonth(dat[0]),int(dat[1]))
record["date"] = date
elif key == "Time:":
if not value == '?':
time = re.sub("[^0-9]", "",value)
record["time"] = time
else:
record["time"] = "NULL"
elif key == "Location:":
if not value == '?':
record["loc"] = str(value)
else:
record["loc"] = "NULL"
elif key == "Operator:":
if not value == '?':
record["op"] = str(value)
else:
record["op"] = "NULL"
elif key == "Flight#:":
if not value == '?':
record["flight"] = str(value)
else:
record["flight"] = "NULL"
elif key == "Route:":
if not value == '?':
record["route"] = str(value)
else:
record["route"] = "NULL"
elif key == "Registration:":
if not value == '?':
record["reg"] = str(value)
else:
record["reg"] = "NULL"
elif key == "cn / ln:":
if not value == '?':
record["cnln"] = str(value)
else:
record["cnln"] = "NULL"
elif key == "Aboard:":
if not value == '?' :
s = ' '.join(value.split())
aboard_ = s.replace('(','').replace(')','').split(' ')
if aboard_[0] != '?':
record["aboard_total"] = aboard_[0]
else:
record["aboard_total"] = 'NULL'
passengers = aboard_[1].replace("passengers:","")
if passengers != '?':
record["aboard_passengers"] = passengers
else:
record["aboard_passengers"] = 'NULL'
crew = aboard_[2].replace("crew:","")
if crew != '?':
record["aboard_crew"] = crew
else:
record["aboard_crew"] = 'NULL'
else:
record["aboard_total"] = 'NULL'
record["aboard_passengers"] = 'NULL'
record["aboard_crew"] = 'NULL'
elif key == "Fatalities:":
if not value == '?':
s = ' '.join(value.split())
fatalities_ = s.replace('(','').replace(')','').split(' ')
if fatalities_[0] != '?':
record["fatalities_total"] = fatalities_[0]
else:
record["fatalities_total"] = 'NULL'
passengers = fatalities_[1].replace("passengers:","")
if passengers != '?':
record["fatalities_passengers"] = passengers
else:
record["fatalities_passengers"] = 'NULL'
crew = fatalities_[2].replace("crew:","")
if crew != '?':
record["fatalities_crew"] = crew
else:
record["fatalities_crew"] = 'NULL'
else:
record["aboard_total"] = 'NULL'
record["aboard_passengers"] = 'NULL'
record["aboard_crew"] = 'NULL'
elif key == "Ground:":
if not value == '?':
record["ground"] = str(value)
else:
record["ground"] = "NULL"
elif key == "Summary:":
if not value == '?':
record["summary"] = str(value)
else:
record["summary"] = "NULL"
else:
st1 = ''.join(tds[0].string.split()).lower()
if not value == '?':
record[st1] = str(value)
else:
record[st1] = "NULL"
collection.insert_one(record)
Explanation: Helper function that pushes a <i> Beautiful Soup Object (HTML table in this case) </i> to a <i>Mongo DB collection</i>
Open this crash record in your browser, and have a look at the HTML source code for reference.
The function parses each value in table_ according to the format of its key (i.e. Date/location/Aircraft Type/others)
The string.encode('utf-8') is necessary, as the website uses the windows-1252 character set, which causes some characters to get messed up if the encoding is not explicitly changed.
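A hypothetical alternative (not used in the original crawler) would be to normalise the encoding on the response object itself before parsing:
resp = requests.get(accident_url)
resp.encoding = 'windows-1252'   # tell requests which charset the site actually serves
soup = BeautifulSoup(resp.text, 'lxml')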
This is what the HTML table looks like:
<img src='record_example.PNG'>
End of explanation
program_start_time = datetime.datetime.utcnow() # record the start time so the total crawl runtime can be reported at the end
for i in year_range:
year_start = datetime.datetime.utcnow()
# appending the path (year) to the url hostname
newurl = rooturl + "/" + str(i) + "/" + str(i) + ".htm"
soup = makeBeautifulSoupObject(newurl)
tables = soup.find_all('table')
print (newurl)
for table in tables:
#finding the no. of records for the given year
number_of_rows = len(table.findAll(lambda tag: tag.name == 'tr' and tag.findParent('table') == table))
row_range = range(1,number_of_rows,1)
for j in row_range:
# appending the row number to sub-path of the url, and building the final url that will be used for sending http request
accident_url = newurl.replace(".htm","") + "-" + str(j) + ".htm"
web_record = makeBeautifulSoupObject(accident_url)
# removing all the boilerplate html code except the data table
table_ = web_record.find_all('table')
push_record_to_mongo(table_)
print("Time to crawl year " + str(i) + "-" + str(datetime.datetime.utcnow()-year_start))
program_end_time = datetime.datetime.utcnow()
print ("_____________________________________")
print ("Total program time - " + str(program_end_time-program_start_time))
Explanation: Crawler- The Core
<B><U>MAIN IDEA</U>:</B> Leveraging the pattern in the url of the website.
The hostname of the url remains the same for all the years - i.e. http://<-hostname-> .
The path for each year comes after the hostname, i.e. http://<-hostname->/<-year->, where year is a 4 digit year from 1920 to 2016.
The sub-path that actually points us to the record page is http://<-hostname->/<-year->/<-year->-<-record_number->.htm, where record_number is a number between 1 and the number of crashes that took place in the corresponding year:
http://www.planecrashinfo.com/<-year->/<-year->-<-record_number->.htm
We will <b>iterate through all the years</b> specified at the beginning of this notebook, and send an appropriate HTTP request by building a url, leveraging the url pattern described above.
The code can be parallelized using the IPython.parallel library; this is not done here for the sake of simplicity.
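If parallelism were wanted, a hedged sketch using the standard-library concurrent.futures module instead (crawl_year below is a hypothetical wrapper around the per-year logic above) could look like:
from concurrent.futures import ThreadPoolExecutor
def crawl_year(year):
    # hypothetical wrapper: same per-year logic as the main loop above
    year_url = rooturl + "/" + str(year) + "/" + str(year) + ".htm"
    soup = makeBeautifulSoupObject(year_url)
    for table in soup.find_all('table'):
        n_rows = len(table.findAll(lambda tag: tag.name == 'tr' and tag.findParent('table') == table))
        for j in range(1, n_rows):
            record_url = year_url.replace(".htm", "") + "-" + str(j) + ".htm"
            push_record_to_mongo(makeBeautifulSoupObject(record_url).find_all('table'))
# the crawl is network-bound, so a small thread pool is enough
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(crawl_year, year_range))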
End of explanation |
10,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: STACKING DECORATORS
Let's look at decorators again. They're related to what we call "function composition" in that the decorator "eats" what's defined just below it, and returns a proxy with the same name, likewise a callable.
Here's a decorator that tells the identity function under it what letter to tack on (concatenate) to the end of string s, the argument.
The decorator itself takes an argument. The callable it then returns, the adder function, is ready to do the work of finally "eating" ident and extending what it does by the one letter.
Step2: Now let's stack up some decorators, showing how each swallows the result below. Work from the bottom up...
Step4: Now let's bring Compose into the mix, a decorator class makes our proxies able to compose with one another by means of multiplication. Even powering has been implemented. We're free to make our target functions composable, in addition to controlling what letters to add. | Python Code:
def plus(char):
returns a prepped adder to eat the target, and to
build a little lambda that does the job.
def adder(f):
return lambda s: f(s) + char
return adder
@plus('R')
def ident(s):
return s
ident('X') # do the job!
Explanation: STACKING DECORATORS
Let's look at decorators again. They're related to what we call "function composition" in that the decorator "eats" what's defined just below it, and returns a proxy with the same name, likewise a callable.
Here's a decorator that tells the identity function under it what letter to tack on (concatenate) to the end of string s, the argument.
The decorator itself takes an argument. The callable it then returns, the adder function, is ready to do the work of finally "eating" ident and extending what it does by the one letter.
End of explanation
# append from the bottom up, successive wrappings
@plus('W')
@plus('A')
@plus('R')
def ident(s):
return s
print("ident('X') :", ident('X'))
print("ident('WAR'):", ident('WAR'))
Explanation: Now let's stack up some decorators, showing how each swallows the result below. Work from the bottom up...
End of explanation
class Compose:
    make function composable with multiply
also make self powerable e.g. f ** 3 == f * f * f
From: https://repl.it/@kurner
def __init__(self, f):
self.func = f
def __mul__(self, other):
return Compose(lambda x: self(other(x)))
def __pow__(self, n):
if n == 0:
return Compose(lambda x: x) # identity function
if n == 1:
return self
if n > 1:
me = self
for _ in range(n-1):
me *= self # me times me times me...
return me
def __call__(self, x): # callable instances
return self.func(x)
@Compose
@plus('W')
@plus('A')
@plus('R')
def ident(s):
return s
H = ident ** 3
H('X')
@Compose
@plus('T')
@plus('Y')
@plus('P')
def F(s):
return s
@Compose
@plus('N')
@plus('O')
@plus('H')
def G(s):
return s
H = F * G * G * F
H('')
@plus('EXPERIMENT!')
@plus('ANOTHER ')
@plus('DO ')
@plus('LETS ')
def ident(s): return s
ident('')
Explanation: Now let's bring Compose into the mix, a decorator class makes our proxies able to compose with one another by means of multiplication. Even powering has been implemented. We're free to make our target functions composable, in addition to controlling what letters to add.
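As a quick sanity check (using the F and G defined above), composition and powering behave exactly like nested calls:
assert (F * G)('') == F(G(''))    # both give 'HONPYT'
assert (F ** 2)('') == F(F(''))   # both give 'PYTPYT'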
End of explanation |
10,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Image Segmentation with Convolutional Neural Networks (CNNs)
Image segmentation
Here, we focus on using Convolutional Neural Networks or CNNs for segmenting images. Specifically, we use Python and Keras (with TensorFlow as backend) to implement a CNN capable of segmenting lungs in CT scan images with ~94% accuracy.
Convolutional neural networks
CNNs are a special kind of neural network inspired by the brain’s visual cortex. So it should come as no surprise that they excel at visual tasks. CNNs have layers. Each layer learns higher-level features from the previous layer. This layered architecture is analogous to how, in the visual cortex, higher-level neurons react to higher-level patterns that are combinations of lower-level patterns generated by lower-level neurons. Also, unlike traditional Artificial Neural Networks or ANNs, which typically consist of fully connected layers, CNNs consist of partially connected layers. In fact, in CNNs, neurons in one layer typically only connect to a few neighboring neurons from the previous layer. This partially connected architecture is analogous to how so-called cortical neurons in the visual cortex only react to stimuli in their receptive fields, which overlap to cover the entire visual field. Partial connectivity has computational benefits as well, since, with fewer connections, fewer weights need to be learned during training. This allows CNNs to handle larger images than traditional ANNs.
Import libraries and initialize Keras
Step1: Importing, downsampling, and visualizing data
Step2: Split data into training and validation sets
Step3: Create CNN model
Step4: Train CNN model | Python Code:
import os
import numpy as np
np.random.seed(123)
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
%matplotlib inline
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, BatchNormalization, UpSampling2D
from keras.utils import np_utils
from skimage.io import imread
from sklearn.model_selection import train_test_split
# set channels first notation
K.set_image_dim_ordering('th')
Explanation: Deep Image Segmentation with Convolutional Neural Networks (CNNs)
Image segmentation
Here, we focus on using Convolutional Neural Networks or CNNs for segmenting images. Specifically, we use Python and Keras (with TensorFlow as backend) to implement a CNN capable of segmenting lungs in CT scan images with ~94% accuracy.
Convolutional neural networks
CNNs are a special kind of neural network inspired by the brain’s visual cortex. So it should come as no surprise that they excel at visual tasks. CNNs have layers. Each layer learns higher-level features from the previous layer. This layered architecture is analogous to how, in the visual cortex, higher-level neurons react to higher-level patterns that are combinations of lower-level patterns generated by lower-level neurons. Also, unlike traditional Artificial Neural Networks or ANNs, which typically consist of fully connected layers, CNNs consist of partially connected layers. In fact, in CNNs, neurons in one layer typically only connect to a few neighboring neurons from the previous layer. This partially connected architecture is analogous to how so-called cortical neurons in the visual cortex only react to stimuli in their receptive fields, which overlap to cover the entire visual field. Partial connectivity has computational benefits as well, since, with fewer connections, fewer weights need to be learned during training. This allows CNNs to handle larger images than traditional ANNs.
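To make the "fewer weights" point concrete, a rough back-of-the-envelope comparison (assuming a hypothetical 64x64 single-channel input, not the dataset loaded below) is:
# fully connected: every input pixel connects to every output unit
dense_weights = (64 * 64) * (64 * 64)   # 16,777,216 weights
# 3x3 convolution with one input and one output channel: the same 9 weights are shared across all positions
conv_weights = 3 * 3 * 1 * 1            # 9 weights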
Import libraries and initialize Keras
End of explanation
# Get paths to all images and masks.
all_image_paths = glob('E:\\data\\lungs\\2d_images\\*.tif')
all_mask_paths = glob('E:\\data\\lungs\\2d_masks\\*.tif')
print(len(all_image_paths), 'image paths found')
print(len(all_mask_paths), 'mask paths found')
# Define function to read in and downsample an image.
def read_image (path, sampling=1): return np.expand_dims(imread(path)[::sampling, ::sampling],0)
# Import and downsample all images and masks.
all_images = np.stack([read_image(path, 4) for path in all_image_paths], 0)
all_masks = np.stack([read_image(path, 4) for path in all_mask_paths], 0) / 255.0
print('Image resolution is', all_images[1].shape)
print('Mask resolution is', all_masks[1].shape)
# Visualize an example CT image and manual segmentation.
example_no = 1
fig, ax = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(10,5))
ax[0].imshow(all_images[example_no, 0], cmap='Blues')
ax[0].set_title('CT image', fontsize=18)
ax[0].tick_params(labelsize=16)
ax[1].imshow(all_masks[example_no, 0], cmap='Blues')
ax[1].set_title('Manual segmentation', fontsize=18)
ax[1].tick_params(labelsize=16)
Explanation: Importing, downsampling, and visualizing data
End of explanation
X_train, X_test, y_train, y_test = train_test_split(all_images, all_masks, test_size=0.1)
print('Training input is', X_train.shape)
print('Training output is {}, min is {}, max is {}'.format(y_train.shape, y_train.min(), y_train.max()))
print('Testing set is', X_test.shape)
Explanation: Split data into training and validation sets
End of explanation
# Create a sequential model, i.e. a linear stack of layers.
model = Sequential()
# Add a 2D convolution layer.
model.add(
Conv2D(
filters=32,
kernel_size=(3, 3),
activation='relu',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a 2D convolution layer.
model.add(
Conv2D(filters=64,
kernel_size=(3, 3),
activation='sigmoid',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a max pooling layer.
model.add(
MaxPooling2D(
pool_size=(2, 2),
padding='same'
)
)
# Add a dense layer.
model.add(
Dense(
64,
activation='relu'
)
)
# Add a 2D convolution layer.
model.add(
Conv2D(
filters=1,
kernel_size=(3, 3),
activation='sigmoid',
input_shape=all_images.shape[1:],
padding='same'
)
)
# Add a 2D upsampling layer.
model.add(
UpSampling2D(
size=(2,2)
)
)
model.compile(
loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy','mse']
)
print(model.summary())
Explanation: Create CNN model
End of explanation
history = model.fit(X_train, y_train, validation_split=0.10, epochs=10, batch_size=10)
test_no = 7
fig, ax = plt.subplots(nrows=1, ncols=3, sharex='col', sharey='row', figsize=(15,5))
ax[0].imshow(X_test[test_no,0], cmap='Blues')
ax[0].set_title('CT image', fontsize=18)
ax[0].tick_params(labelsize=16)
ax[1].imshow(y_test[test_no,0], cmap='Blues')
ax[1].set_title('Manual segmentation', fontsize=18)
ax[1].tick_params(labelsize=16)
ax[2].imshow(model.predict(X_test)[test_no,0], cmap='Blues')
ax[2].set_title('CNN segmentation', fontsize=18)
ax[2].tick_params(labelsize=16)
Explanation: Train CNN model
End of explanation |
10,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
layer = tf.layers.batch_normalization(layer, training=is_training)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training=is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training=is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: True})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels, is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False
})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(layer, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because the those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use the our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
return tf.cond(is_training, batch_norm_training, batch_norm_inference)
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
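For example, a hedged sketch of that replacement inside fully_connected (the batch-normalization code around it would stay the same) might be:
    # explicit linear combination in place of tf.layers.dense
    in_units = prev_layer.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal([in_units, num_units], stddev=0.05))
    bias = tf.Variable(tf.zeros([num_units]))
    layer = tf.nn.relu(tf.add(tf.matmul(prev_layer, weights), bias))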
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because the those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(conv_layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use the our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(conv_layer, pop_mean, pop_variance, beta, gamma, epsilon)
return tf.cond(is_training, batch_norm_training, batch_norm_inference)
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: True})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels, is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]], is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
10,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=None, use_bias=False)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training:False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training:False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training:False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
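The least obvious part of that TODO is the population statistics: tf.layers.batch_normalization registers its moving-average updates in the UPDATE_OPS collection, so nothing runs them unless you force it. The train function above does this; stripped down to the essential pattern (TF1-style, the same calls used above), it is just:

with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    # the moving averages of mean/variance are updated on every training step
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)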
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=None, use_bias=False)
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def b_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0])
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def b_infering():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
bnorm_layer = tf.cond(is_training, b_training, b_infering)
return tf.nn.relu(bnorm_layer)
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
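As the note above suggests, tf.layers.dense itself could be replaced by explicit variables. A minimal sketch of that replacement (the name dense_raw and the stddev value are illustrative only, not part of the original exercise):

def dense_raw(prev_layer, num_units):
    # Explicit weight matrix instead of tf.layers.dense; no bias term, since
    # batch normalization's beta offset takes over that role.
    in_units = prev_layer.get_shape().as_list()[-1]
    weights = tf.Variable(tf.truncated_normal([in_units, num_units], stddev=0.05))
    return tf.matmul(prev_layer, weights)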
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3
def b_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def b_infering():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
bnorm_layer = tf.cond(is_training, b_training, b_infering)
return tf.nn.relu(bnorm_layer)
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys,
is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training:False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys,
is_training:False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training:False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training:False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training:False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
10,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple pipeline using hypergroup to perform community detection and network analysis
A social network of a karate club was studied by Wayne W. Zachary [1] for a period of three years from 1970 to 1972. The network captures 34 members of a karate club, documenting 78 pairwise links between members who interacted outside the club. During the study a conflict arose between the administrator "John A" and instructor "Mr. Hi" (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. Based on the collected data, Zachary correctly assigned all but one member of the club to the groups they actually joined after the split.
[1] W. Zachary, An information flow model for conflict and fission in small groups, Journal of Anthropological Research 33, 452-473 (1977)
Data Preparation
Import packages
Step1: Connect to Cloud Analytic Services in SAS Viya
Step2: Load the action set for hypergroup
Step3: Load data into CAS
Data set used from https
Step4: Hypergroup doesn't support numeric source and target columns - so make sure to cast them as varchars.
Step5: Data Exploration
Get to know your data (what are variables?)
Step6: Graph rendering utility
Step7: Execute community and hypergroup detection
Step8: Note
Step9: How many hypergroups and communities do we have?
Step10: Basic community analysis
What are the 2 biggest communities?
Step11: Note
Step12: What edges do we have?
Step13: Render the network graph
Step14: Analyze node centrality
How important is a user in the network?
Step15: Between-ness centrality quantifies the number of times a node acts as a bridge along the shortest path(s) between two other nodes. As such it describes the importance of a node in a network.
Step16: Filter communities
Only filter community 2. | Python Code:
import swat
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
# Also import networkx used for rendering a network
import networkx as nx
%matplotlib inline
Explanation: A simple pipeline using hypergroup to perform community detection and network analysis
A social network of a karate club was studied by Wayne W. Zachary [1] for a period of three years from 1970 to 1972. The network captures 34 members of a karate club, documenting 78 pairwise links between members who interacted outside the club. During the study a conflict arose between the administrator "John A" and instructor "Mr. Hi" (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. Based on the collected data, Zachary correctly assigned all but one member of the club to the groups they actually joined after the split.
[1] W. Zachary, An information flow model for conflict and fission in small groups, Journal of Anthropological Research 33, 452-473 (1977)
Data Preparation
Import packages: SAS Wrapper for Analytic Transfer and open source libraries
End of explanation
s = swat.CAS('http://cas.mycompany.com:8888') # REST API
Explanation: Connect to Cloud Analytic Services in SAS Viya
End of explanation
s.loadactionset('hypergroup')
Explanation: Load the action set for hypergroup
End of explanation
df = pd.DataFrame.from_records([[2,1],[3,1],[3,2],[4,1],[4,2],[4,3],[5,1],[6,1],[7,1],[7,5],[7,6],[8,1],[8,2],[8,3],[8,4],[9,1],[9,3],[10,3],[11,1],[11,5],[11,6],[12,1],[13,1],[13,4],[14,1],[14,2],[14,3],[14,4],[17,6],[17,7],[18,1],[18,2],[20,1],[20,2],[22,1],[22,2],[26,24],[26,25],[28,3],[28,24],[28,25],[29,3],[30,24],[30,27],[31,2],[31,9],[32,1],[32,25],[32,26],[32,29],[33,3],[33,9],[33,15],[33,16],[33,19],[33,21],[33,23],[33,24],[33,30],[33,31],[33,32],[34,9],[34,10],[34,14],[34,15],[34,16],[34,19],[34,20],[34,21],[34,23],[34,24],[34,27],[34,28],[34,29],[34,30],[34,31],[34,32],[34,33]],
columns=['FROM','TO'])
df['SOURCE'] = df['FROM'].astype(str)
df['TARGET'] = df['TO'].astype(str)
df.head()
Explanation: Load data into CAS
Data set used from https://en.wikipedia.org/wiki/Zachary%27s_karate_club.
End of explanation
if s.tableexists('karate').exists:
s.CASTable('KARATE').droptable()
dataset = s.upload(df,
importoptions=dict(filetype='csv',
vars=[dict(type='double'),
dict(type='double'),
dict(type='varchar'),
dict(type='varchar')]),
casout=dict(name='KARATE', promote=True)).casTable
Explanation: Hypergroup doesn't support numeric source and target columns - so make sure to cast them as varchars.
End of explanation
dataset.head(5)
dataset.summary()
Explanation: Data Exploration
Get to know your data (what are variables?)
End of explanation
def renderNetworkGraph(filterCommunity=-1, size=18, sizeVar='_HypGrp_',
colorVar='', sizeMultipler=500, nodes_table='nodes',
edges_table='edges'):
''' Build an array of node positions and related colors based on community '''
nodes = s.CASTable(nodes_table)
if filterCommunity >= 0:
nodes = nodes.query('_Community_ EQ %F' % filterCommunity)
nodes = nodes.to_frame()
nodePos = {}
nodeColor = {}
nodeSize = {}
communities = []
i = 0
for nodeId in nodes._Value_:
nodePos[nodeId] = (nodes._AllXCoord_[i], nodes._AllYCoord_[i])
if colorVar:
nodeColor[nodeId] = nodes[colorVar][i]
if nodes[colorVar][i] not in communities:
communities.append(nodes[colorVar][i])
nodeSize[nodeId] = max(nodes[sizeVar][i],0.1)*sizeMultipler
i += 1
communities.sort()
# Build a list of source-target tuples
edges = s.CASTable(edges_table)
if filterCommunity >= 0:
edges = edges.query('_SCommunity_ EQ %F AND _TCommunity_ EQ %F' %
(filterCommunity, filterCommunity))
edges = edges.to_frame()
edgeTuples = []
for i, p in enumerate(edges._Source_):
edgeTuples.append( (edges._Source_[i], edges._Target_[i]) )
# Add nodes and edges to the graph
plt.figure(figsize=(size,size))
graph = nx.DiGraph()
graph.add_edges_from(edgeTuples)
# Size mapping
getNodeSize=[nodeSize[v] for v in graph]
# Color mapping
jet = cm = plt.get_cmap('jet')
getNodeColor=None
if colorVar:
getNodeColor=[nodeColor[v] for v in graph]
cNorm = colors.Normalize(vmin=min(communities), vmax=max(communities))
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)
# Using a figure here to work-around the fact that networkx doesn't
# produce a labelled legend
f = plt.figure(1)
ax = f.add_subplot(1,1,1)
for community in communities:
ax.plot([0],[0], color=scalarMap.to_rgba(community),
label='Community %s' % '{:2.0f}'.format(community), linewidth=10)
# Render the graph
nx.draw_networkx_nodes(graph, nodePos, node_size=getNodeSize,
node_color=getNodeColor, cmap=jet)
nx.draw_networkx_edges(graph, nodePos, width=1, alpha=0.5)
nx.draw_networkx_labels(graph, nodePos, font_size=11, font_family='sans-serif')
if len(communities) > 0:
plt.legend(loc='upper left', prop={'size':11})
plt.title('Zachary Karate Club social network', fontsize=30)
plt.axis('off')
plt.show()
Explanation: Graph rendering utility
End of explanation
# Create output table objects
edges = s.CASTable('edges', replace=True)
nodes = s.CASTable('nodes', replace=True)
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
edges = edges,
vertices = nodes
)
renderNetworkGraph(size=10, sizeMultipler=2000)
Explanation: Execute community and hypergroup detection
End of explanation
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
community = True,
edges = edges,
vertices = nodes
)
Explanation: Note: Network of the Zachary Karate Club. Distribution by degree of the node. Node 1 stands for the instructor, node 34 for the president
End of explanation
nodes.distinct()
nodes.summary()
Explanation: How many hypergroups and communities do we have?
End of explanation
topKOut = s.CASTable('topKOut', replace=True)
nodes[['_Community_']].topk(
aggregator = 'N',
topK = 4,
casOut = topKOut
)
topKOut = topKOut.sort_values('_Rank_').head(10)
topKOut.columns
nCommunities = len(topKOut)
ind = np.arange(nCommunities) # the x locations for the groups
plt.figure(figsize=(8,4))
p1 = plt.bar(ind + 0.2, topKOut._Score_, 0.5, color='orange', alpha=0.75)
plt.ylabel('Vertices', fontsize=12)
plt.xlabel('Community', fontsize=12)
plt.title('Number of nodes for the top %s communities' % '{:2.0f}'.format(nCommunities))
plt.xticks(ind + 0.2, topKOut._Fmtvar_)
plt.show()
Explanation: Basic community analysis
What are the 2 biggest communities?
End of explanation
nodes.query('_Community_ EQ 1').head(5)
Explanation: Note: This shows that the biggest communities have up to 18 vertices.
What nodes belong to community 4?
End of explanation
edges.head(5)
Explanation: What edges do we have?
End of explanation
renderNetworkGraph(size=10, colorVar='_Community_', sizeMultipler=2000)
Explanation: Render the network graph
End of explanation
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
community = True,
centrality = True,
mergeCommSmallest = True,
allGraphs = True,
graphPartition = True,
scaleCentralities = 'central1', # Returns centrality values closer to 1 in the center
edges = edges,
vertices = nodes
)
nodes.head()
Explanation: Analyze node centrality
How important is a user in the network?
End of explanation
renderNetworkGraph(size=10, colorVar='_Community_', sizeVar='_Betweenness_')
Explanation: Betweenness centrality quantifies the number of times a node acts as a bridge along the shortest path(s) between two other nodes. As such it describes the importance of a node in a network.
End of explanation
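As an optional sanity check (not part of the original pipeline), the same quantity can be computed locally with networkx on the edge table and compared against the _Betweenness_ column CAS returned. The scaling differs because of scaleCentralities='central1', so only the ranking is expected to agree:

edge_df = s.CASTable('edges').to_frame()
g = nx.Graph(list(zip(edge_df._Source_, edge_df._Target_)))
nx_betweenness = nx.betweenness_centrality(g)
# The five most central members according to networkx
print(sorted(nx_betweenness.items(), key=lambda kv: -kv[1])[:5])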
renderNetworkGraph(1, size=10, sizeVar='_CentroidAngle_', sizeMultipler=5)
s.close()
Explanation: Filter communities
Only filter community 2.
End of explanation |
10,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 14 (or so)
Step1: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
Step2: So great, we have 702 of them. Now let's import them.
Step3: In class we had the texts variable. For the homework can just do speeches_df['content'] to get the same sort of list of stuff.
Take a look at the contents of the first 5 speeches
Step4: Doing our analysis
Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.
Be sure to include English-language stopwords
Step5: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
Step6: Now let's push all of that into a dataframe with nicely named columns.
Step7: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
Step8: What is the index of the speech thank is the most thankful, a.k.a. includes the word 'thank' the most times?
Step9: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
Step10: Now what if I'm using a TfidfVectorizer?
Step11: What's the content of the speeches? Here's a way to get them
Step12: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you thnik might be interesting.
Step13: Enough of this garbage, let's cluster
Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Step14: Which one do you think works the best?
The last two seem to make more sense than the first one, telling from its cluster three. The last two are more human-readable. However human-readability ends with that distinction -- I can't tell which from the last two would be better, based on the top terms per cluster.
Harry Potter time
I have a scraped collection of Harry Potter fanfiction at https | Python Code:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz
Explanation: Homework 14 (or so): TF-IDF text analysis and clustering
Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat.
No, just kidding, we're professionals now.
Investigating the Congressional Record
The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe?
Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here.
End of explanation
# glob finds files matching a certain filename pattern
import glob
# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]
len(paths)
Explanation: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
End of explanation
# pandas is used below to build the speeches DataFrame (import added here since it
# is not shown elsewhere in this notebook)
import pandas as pd

speeches = []
for path in paths:
with open(path) as speech_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': speech_file.read()
}
speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()
Explanation: So great, we have 702 of them. Now let's import them.
End of explanation
for item in speeches_df['content'].head(5):
print("++++++++++++++++++++NEW SPEECH+++++++++++++++++++++")
print(item)
print(" ")
Explanation: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Take a look at the contents of the first 5 speeches
End of explanation
from sklearn.feature_extraction.text import CountVectorizer

c_vectorizer = CountVectorizer(stop_words='english')
x = c_vectorizer.fit_transform(speeches_df['content'])
x
df = pd.DataFrame(x.toarray(), columns=c_vectorizer.get_feature_names())
df
Explanation: Doing our analysis
Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns.
Be sure to include English-language stopwords
End of explanation
#http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
c2_vectorizer = CountVectorizer(stop_words='english', max_features=100)
y = c2_vectorizer.fit_transform(speeches_df['content'])
y
Explanation: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
End of explanation
new_df = pd.DataFrame(y.toarray(), columns=c2_vectorizer.get_feature_names())
#new_df
Explanation: Now let's push all of that into a dataframe with nicely named columns.
End of explanation
#http://stackoverflow.com/questions/15943769/how-to-get-row-count-of-pandas-dataframe
total_speeches = len(new_df.index)
print("In total there are", total_speeches, "speeches.")
wo_chairman = new_df[new_df['chairman']==0]['chairman'].count()
print(wo_chairman, "speeches don't mention 'chairman'")
wo_mr_chairman = new_df[(new_df['chairman']==0) & (new_df['mr']==0)]['chairman'].count()
print(wo_mr_chairman, "speeches mention neither 'chairman' nor 'mr'")
Explanation: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
End of explanation
#http://stackoverflow.com/questions/18199288/getting-the-integer-index-of-a-pandas-dataframe-row-fulfilling-a-condition
print("The speech with the most 'thank's has the index", np.where(new_df['thank']==new_df['thank'].max()))
Explanation: What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
End of explanation
china_trade_speeches = (new_df['china'] + new_df['trade']).sort_values(ascending = False).head(3)
china_trade_speeches
Explanation: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
End of explanation
import re
from nltk.stem.porter import PorterStemmer   # assumed source of PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

porter_stemmer = PorterStemmer()
def stem_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
words = [porter_stemmer.stem(word) for word in words]
return words
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stem_tokenizer, use_idf=False, norm='l1', max_features=100)
X = tfidf_vectorizer.fit_transform(speeches_df['content'])
t_df = pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
china_trade_speeches_v2 = (t_df['china'] + t_df['trade']).sort_values(ascending = False).head(3)
china_trade_speeches_v2
Explanation: Now what if I'm using a TfidfVectorizer?
End of explanation
# index 0 is the first speech, which was the first one imported.
paths[0]
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[345]}
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[336]}
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[402]}
Explanation: What's the content of the speeches? Here's a way to get them:
End of explanation
new_df.columns
election_speeches = (new_df['discrimination'] + new_df['rights']).sort_values(ascending = False).head(3)
election_speeches
Explanation: Now search for something else! Another two terms that might show up, like elections and chaos? Whatever you think might be interesting.
End of explanation
def new_stem_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
#With PorterStemmer implemented as above, the text was pretty crippled and hard to judge which made more sense.
#that's why I have commented that line out for now
#words = [porter_stemmer.stem(word) for word in words]
return words
vectorizer_types = [
{'name': 'CVectorizer', 'definition': CountVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100)},
{'name': 'TFVectorizer', 'definition': TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100, use_idf=False)},
{'name': 'TFVIDFVectorizer', 'definition': TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100, use_idf=True)}
]
from sklearn.cluster import KMeans

for vectorizer in vectorizer_types:
X = vectorizer['definition'].fit_transform(speeches_df['content'])
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("++++++++ Top terms per cluster -- using a", vectorizer['name'])
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer['definition'].get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :7]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
Explanation: Enough of this garbage, let's cluster
Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
End of explanation
!curl -O https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip
import zipfile
import glob
# Unpack the downloaded archive (assumed to contain an hp/ folder, matching the glob below)
with zipfile.ZipFile('hp.zip') as z:
    z.extractall()
potter_paths = glob.glob('hp/*')
potter_paths[:5]
potter = []
for path in potter_paths:
with open(path) as potter_file:
potter_text = {
'pathname': path,
'filename': path.split('/')[-1],
'content': potter_file.read()
}
potter.append(potter_text)
potter_df = pd.DataFrame(potter)
potter_df.head()
vectorizer = TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, use_idf=True)
X = vectorizer.fit_transform(potter_df['content'])
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :7]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
Explanation: Which one do you think works the best?
The last two seem to make more sense than the first one, judging from its cluster three. The last two are more human-readable. However, human-readability ends with that distinction -- I can't tell which of the last two would be better, based on the top terms per cluster.
Harry Potter time
I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip.
I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis?
End of explanation |
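One way to move beyond eyeballing the top terms per cluster for the congressional-speech comparison above (an optional sketch, not part of the homework) is to score each vectorizer's clustering, for example with a silhouette coefficient, where higher is better:

from sklearn.metrics import silhouette_score

for vectorizer in vectorizer_types:
    X = vectorizer['definition'].fit_transform(speeches_df['content'])
    cluster_labels = KMeans(n_clusters=8).fit_predict(X)
    print(vectorizer['name'], round(silhouette_score(X, cluster_labels), 3))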
10,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedded Operator Splitting (EOS) Methods
This example shows how to use the Embedded Operator Splitting (EOS) Methods described in Rein (2019). The idea is to embed one operator splitting method inside another. The inner operator splitting method solves the Keplerian motion, whereas the outer solves the planet-planet interactions. The accuracy and speed of the EOS methods are comparable to standard Wisdom-Holman type methods. However, the main advantage of the EOS methods is that they do not require a Kepler solver. This significantly simplifies the implementation. And in certain cases this can lead to a speed-up by a factor of 2-3x.
Step1: We first create a function to set up our initial conditions of two Jupiter-mass planets with moderate eccentricities. We also create a function to run the simulation and periodically measure the relative energy error. The function then runs the simulation again, this time only measuring the runtime. This way we don't include the time required to calculate the energy error in our run time measurements.
Step2: Standard methods
Let us first run a few standard methods. WH is the Wisdom-Holman method, WHC is the Wisdom-Holman method with symplectic correctors, and SABA(8,6,4) is a high order variant of the WH method. All methods except the LEAPFROG method use a Kepler solver. We run each method for 20 different timesteps, ranging from $10^{-4}$ to $0.3$ orbital periods of the innermost planet.
Step3: We then plot the relative energy error as a function of the timestep on the left panel, and the relative energy error as a function of the run time on the right panel. In the right panel, methods further to the bottom left are more efficient.
Step4: EOS with $\Phi_0=\Phi_1=LF$
We now run several EOS methods where we set both the inner and outer operator splitting method to the standard second order leapfrog method. Our resulting EOS method will therefore also be a second order method. We vary the number of steps $n$ taken by $\Phi_1$.
Step5: We can see in the following plot that for $n=1$, we recover the LEAPFROG method. For $n=16$ both the accuracy and efficiency of our EOS method is very similar to the standard WH method.
Step6: An extra factor of $\epsilon$
We now create EOS methods which are comparable to the Wisdom-Holman method with symplectic correctors. For the same timestep, the error is smaller by a factor of the mass ratio of the planet to the star, $\epsilon$. We set $\Phi_1$ to the fourth order LF4 method and use $n=2$. For $\Phi_0$ we try out LF4, LF(4,2) and PMLF4.
Step7: We can see in the following plot that the EOS methods using LF(4,2) and PMLF4 do approach the accuracy and efficiency of the Wisdom-Holman method with symplectic correctors for small enough timesteps. To achieve a better accuracy for larger timesteps, we could increase the order of $\Phi_1$ or the number of steps $n$. Note that the EOS method with $\Phi_0=LF4$ is a true 4th order method, whereas the methods with LF(4,2) and PMLF4 have generalized order (4,2).
Step8: High order methods
Next, we will construct arbitrarily high order methods using LF, LF4, LF6, and LF8 for both $\Phi_0$ and $\Phi_1$.
Step9: The following plots show that the methods are indeed 2nd, 4th, 6th, and 8th order methods.
Step10: Modified potentials
We can use operator splitting methods which make use of derivatives of the acceleration, or the so called modified potential. For the same order, these methods can have fewer function evaluations, and thus better performance. Let us compare using the sixth order methods LF6 (nine function evaluations) and PMLF6 (three modified function evaluations) for $\Phi_0$. We keep using LF6 for $\Phi_1$.
Step11: In the following plot, we see that the method using PMLF6 is indeed about a factor of 2 faster than the one using LF6.
Step12: High order methods for perturbed systems
We can do better than above by making use of high order methods for $\Phi_0$ which were specifically designed for perturbed systems such as LF(8,6,4) and PLF(7,6,4).
Step13: We can see in the following plot that these methods indeed perform very well. With $\Phi_0=LF(8,6,4)$, $\Phi_1=LF8$ and $n=1$ we achieve an accuracy and efficiency comparable to SABA(8,6,4). In contrast to SABA(8,6,4) we do not require a Kepler solver. | Python Code:
import rebound
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import time
linestyles = ["--","-","-.",":"]
labels = {"LF": "LF", "LF4": "LF4", "LF6": "LF6", "LF8": "LF8", "LF4_2": "LF(4,2)", "LF8_6_4": "LF(8,6,4)", "PLF7_6_4": "PLF(7,6,4)", "PMLF4": "PMLF4", "PMLF6": "PMLF6"}
Explanation: Embedded Operator Splitting (EOS) Methods
This example shows how to use the Embedded Operator Splitting (EOS) Methods described in Rein (2019). The idea is to embed one operator splitting method inside another. The inner operator splitting method solves the Keplerian motion, whereas the outer solves the planet-planet interactions. The accuracy and speed of the EOS methods are comparable to standard Wisdom-Holman type methods. However, the main advantage of the EOS methods is that they do not require a Kepler solver. This significantly simplifies the implementation. And in certain cases this can lead to a speed-up by a factor of 2-3x.
End of explanation
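Schematically, the Hamiltonian is split as $H = H_{\rm Kepler} + H_{\rm interaction}$, and the outer method $\Phi_0$ composes the two flows, e.g. a leapfrog step $e^{\frac{dt}{2}\hat{B}}\, e^{dt\hat{A}}\, e^{\frac{dt}{2}\hat{B}}$ with $\hat{A}$ the Keplerian part and $\hat{B}$ the interactions. In a Wisdom-Holman method $e^{dt\hat{A}}$ is evaluated with a Kepler solver; in an EOS method it is approximated by $n$ substeps of a second operator splitting method $\Phi_1$ applied to the Keplerian motion itself. (This is a paraphrase of the idea above; see Rein (2019) for the precise definitions.)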
def initial_conditions():
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=1e-3,a=1,e=0.05,f=0.)
sim.add(m=1e-3,a=1.6,e=0.05,f=1.)
sim.move_to_com()
return sim
def run(sim):
simc = sim.copy() # Use later for timing
tmax = 100.
# First run to measure energy error
Emax = 0
E0 = sim.calculate_energy()
while sim.t<tmax:
sim.integrate(sim.t+1.23456, exact_finish_time=0)
E1 = sim.calculate_energy()
Emax = np.max([Emax,np.abs((E0-E1)/E0)])
    # Second run to measure run time
start = time.time()
simc.integrate(tmax,exact_finish_time=0)
end = time.time()
return [Emax, end-start]
Explanation: We first create a function to set up our initial conditions of two Jupiter-mass planets with moderate eccentricities. We also create a function to run the simulation and periodically measure the relative energy error. The function then runs the simulation again, this time only measuring the runtime. This way we don't include the time required to calculate the energy error in our run time measurements.
End of explanation
dts = 2.*np.pi*np.logspace(-4,-0.5,20)
methods = ["LEAPFROG", "WH", "WHC", "SABA(8,6,4)"]
results = np.zeros((len(dts), len(methods), 2))
for i, dt in enumerate(dts):
for m, method in enumerate(methods):
sim = initial_conditions()
sim.dt = dt
sim.integrator = method
sim.ri_whfast.safe_mode = 0
sim.ri_saba.safe_mode = 0
results[i,m] = run(sim)
Explanation: Standard methods
Let us first run a few standard methods. WH is the Wisdom-Holman method, WHC is the Wisdom-Holman method with symplectic correctors, and SABA(8,6,4) is a high order variant of the WH method. All methods except the LEAPFROG method use a Kepler solver. We run each method for 20 different timesteps, ranging from $10^{-4}$ to $0.3$ orbital periods of the innermost planet.
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=5);
Explanation: We then plot the relative energy error as a function of the timestep on the left panel, and the relative energy error as a function of the run time on the right panel. In the right panel, methods further to the bottom left are more efficient.
End of explanation
ns = [1,2,4,8,16]
results_lf = np.zeros((len(dts), len(ns), 2))
for i, dt in enumerate(dts):
for j, n in enumerate(ns):
sim = initial_conditions()
sim.dt = dt
sim.integrator = "eos"
sim.ri_eos.phi0 = "lf"
sim.ri_eos.phi1 = "lf"
sim.ri_eos.n = n
sim.ri_eos.safe_mode = 0
results_lf[i,j] = run(sim)
Explanation: EOS with $\Phi_0=\Phi_1=LF$
We now run several EOS methods where we set both the inner and outer operator splitting method to the standard second order leapfrog method. Our resulting EOS method will therefore also be a second order method. We vary the number of steps $n$ taken by $\Phi_1$.
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
colors = plt.cm.viridis(np.linspace(0,1,len(ns)))
for j, n in enumerate(ns):
label = "$\Phi_0=\Phi_1=LF$, $n=%d$"%n
ax[0].plot(dts/np.pi/2.,results_lf[:,j,0],label=label,color=colors[j])
ax[1].plot(results_lf[:,j,1],results_lf[:,j,0],label=label,color=colors[j])
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=4);
Explanation: We can see in the following plot that for $n=1$, we recover the LEAPFROG method. For $n=16$ both the accuracy and efficiency of our EOS method is very similar to the standard WH method.
End of explanation
phi0s = ["LF4", "LF4_2", "PMLF4"]
results_4 = np.zeros((len(dts), len(phi0s), 2))
for i, dt in enumerate(dts):
for j, phi0 in enumerate(phi0s):
sim = initial_conditions()
sim.dt = dt
sim.integrator = "eos"
sim.ri_eos.phi0 = phi0
sim.ri_eos.phi1 = "LF4"
sim.ri_eos.n = 2
sim.ri_eos.safe_mode = 0
results_4[i,j] = run(sim)
Explanation: An extra factor of $\epsilon$
We now create EOS methods which are comparable to the Wisdom-Holman method with symplectic correctors. For the same timestep, the error is smaller by a factor of the mass ratio of the planet to the star, $\epsilon$. We set $\Phi_1$ to the fourth order LF4 method and use $n=2$. For $\Phi_0$ we try out LF4, LF(4,2) and PMLF4.
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
for j, phi0 in enumerate(phi0s):
label = "$\Phi_0=%s, \Phi_1=LF4, n=2$" % labels[phi0]
ax[0].plot(dts/np.pi/2.,results_4[:,j,0],label=label)
ax[1].plot(results_4[:,j,1],results_4[:,j,0],label=label)
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=4);
Explanation: We can see in the following plot that the EOS methods using LF(4,2) and PMLF4 do approach the accuracy and efficiency of the Wisdom-Holman method with symplectic correctors for small enough timesteps. To achieve a better accuracy for larger timesteps, we could increase the order of $\Phi_1$ or the number of steps $n$. Note that the EOS method with $\Phi_0=LF4$ is a true 4th order method, whereas the methods with LF(4,2) and PMLF4 have generalized order (4,2).
End of explanation
phis = ["LF", "LF4", "LF6", "LF8"]
results_2468 = np.zeros((len(dts), len(phis), 2))
for i, dt in enumerate(dts):
for j, phi in enumerate(phis):
sim = initial_conditions()
sim.dt = dt
sim.integrator = "eos"
sim.ri_eos.phi0 = phi
sim.ri_eos.phi1 = phi
sim.ri_eos.n = 1
sim.ri_eos.safe_mode = 0
results_2468[i,j] = run(sim)
Explanation: High order methods
Next, we will construct arbitrarily high order methods using LF, LF4, LF6, and LF8 for both $\Phi_0$ and $\Phi_1$.
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
for j, phi in enumerate(phis):
label = "$\Phi_0=\Phi_1=%s, n=1$" % labels[phi]
ax[0].plot(dts/np.pi/2.,results_2468[:,j,0],label=label)
ax[1].plot(results_2468[:,j,1],results_2468[:,j,0],label=label)
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=4);
Explanation: The following plots show that the methods are indeed 2nd, 4th, 6th, and 8th order methods.
End of explanation
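A rough numeric check of those orders (an optional sketch, not in the original notebook): fit the slope of log(error) against log(dt), ignoring points that have dropped to the round-off floor. The fitted slopes should come out near 2, 4, 6 and 8, although the highest order methods leave the clean power-law regime quickly:

for j, phi in enumerate(phis):
    mask = results_2468[:, j, 0] > 1e-13   # crude cut to avoid the round-off floor
    slope, _ = np.polyfit(np.log(dts[mask]), np.log(results_2468[mask, j, 0]), 1)
    print(labels[phi], 'measured order ~ {:.1f}'.format(slope))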
phis_m = ["LF6", "PMLF6"]
results_m = np.zeros((len(dts), len(phis_m), 2))
for i, dt in enumerate(dts):
for j, phi in enumerate(phis_m):
sim = initial_conditions()
sim.dt = dt
sim.integrator = "eos"
sim.ri_eos.phi0 = phi
sim.ri_eos.phi1 = "LF6"
sim.ri_eos.n = 1
sim.ri_eos.safe_mode = 0
results_m[i,j] = run(sim)
Explanation: Modified potentials
We can use operator splitting methods which make use of derivatives of the acceleration, or the so called modified potential. For the same order, these methods can have fewer function evaluations, and thus better performance. Let us compare using the sixth order methods LF6 (nine function evaluations) and PMLF6 (three modified function evaluations) for $\Phi_0$. We keep using LF6 for $\Phi_1$.
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
for j, phi in enumerate(phis_m):
label = "$\Phi_0=%s, \Phi_1=LF6, n=1$" % labels[phi]
ax[0].plot(dts/np.pi/2.,results_m[:,j,0],label=label)
ax[1].plot(results_m[:,j,1],results_m[:,j,0],label=label)
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=4);
Explanation: In the following plot, we see that the method using PMLF6 is indeed about a factor of 2 faster than the one using LF6.
End of explanation
phi0s_8 = ["LF8", "LF8_6_4", "PLF7_6_4"]
results_8 = np.zeros((len(dts), len(phi0s_8), 2))
for i, dt in enumerate(dts):
for j, phi0 in enumerate(phi0s_8):
sim = initial_conditions()
sim.dt = dt
sim.integrator = "eos"
sim.ri_eos.phi0 = phi0
sim.ri_eos.phi1 = "LF8"
sim.ri_eos.n = 1
sim.ri_eos.safe_mode = 0
results_8[i,j] = run(sim)
Explanation: High order methods for perturbed systems
We can do better than above by making use of high order methods for $\Phi_0$ which were specifically designed for perturbed systems such as LF(8,6,4) and PLF(7,6,4).
End of explanation
fig,ax = plt.subplots(1,2,figsize=(8,3),sharey=True)
plt.tight_layout()
for _ax in ax:
_ax.set_xscale("log")
_ax.set_yscale("log")
ax[0].set_xlabel("timestep");
ax[1].set_xlabel("runtime");
ax[0].set_ylabel("error")
for j, phi0 in enumerate(phi0s_8):
label = "$\Phi_0=%s, \Phi_1=LF8, n=1$" % labels[phi0]
ax[0].plot(dts/np.pi/2.,results_8[:,j,0],label=label)
ax[1].plot(results_8[:,j,1],results_8[:,j,0],label=label)
for m, method in enumerate(methods):
ax[0].plot(dts/np.pi/2.,results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[1].plot(results[:,m,1],results[:,m,0],label=method,color="black",ls=linestyles[m])
ax[0].legend(loc='upper center', bbox_to_anchor=(1.1, -0.2), ncol=4);
Explanation: We can see in the following plot that these methods indeed perform very well. With $\Phi_0=LF(8,6,4)$, $\Phi_1=LF8$ and $n=1$ we achieve an accuracy and efficiency comparable to SABA(8,6,4). In contrast to SABA(8,6,4) we do not require a Kepler solver.
End of explanation |
10,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following code will take the CLI commands produced in 01-JJA-L2V-Configuration-Files notebook
You need to install aws cli
http
Step1: This function will format the AWS CLI commands so we can pass them to the cluster using boto3
Step2: To load the commands into EMR
Here we create steps based on the three steps in the pipeline
Step3: If we are adding multiple runs of the pipeline
Step4: To run the steps into EMR using boto3 | Python Code:
from load_config import params_to_cli
llr, emb, pred,evaluation = params_to_cli("CONFIGS/ex1-ml-1m-config.yml", "CONFIGS/ex4-du04d100w10l80n10d30p1q1-1000-081417-params.yml")
llr
evaluation
Explanation: The following code will take the CLI commands produced in 01-JJA-L2V-Configuration-Files notebook
You need to install aws cli
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
You need to run aws configure
Let's import the functions defined before for loading parameters
End of explanation
def create_steps(llr=None, emb=None, pred=None, evaluation=None, name=''):
if llr != None:
Steps=[
{
'Name': name + '-LLR',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (llr).split(),
}
},
{
'Name': name + '-EMB',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (emb).split(),
}
},
{
'Name': name + '-PRED',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (pred).split(),
}
},
{
'Name': name + '-EVAL',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (evaluation).split(),
}
}
]
else:
Steps=[
{
'Name': name + '-EMB',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (emb).split(),
}
},
{
'Name': name + '-PRED',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (pred).split(),
}
},
{
'Name': name + '-EVAL',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': (evaluation).split(),
}
}
]
return Steps
Explanation: This function will format the AWS CLI commands so we can pass them to the cluster using boto3
End of explanation
# ex2 = create_steps(llr=llr, emb=emb, pred=pred, evaluation=evaluation, name='EXP3')
ex3 = create_steps(llr=llr, emb=emb, pred=pred, evaluation=evaluation, name='EXP3')
# ex4 = create_steps(emb=emb348, pred=pred348, name='EXP4')
# ex5 = create_steps(emb=emb349, pred=pred349, name='EXP5')
Explanation: To load the commands into EMR
Here we create steps based on the three steps in the pipeline
End of explanation
# steps = ex2 + ex3 + ex4 + ex5
steps = ex3
steps
Explanation: If we are adding multiple runs of the pipeline
End of explanation
import boto3
client = boto3.client('emr')
cluster_id = 'j-2JGJ9RIFQ4VRK'
response = client.add_job_flow_steps(
JobFlowId = cluster_id,
Steps= steps
)
response
Explanation: To run the steps into EMR using boto3
End of explanation |
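A possible follow-up, sketched here as an assumption rather than part of the original notebook: add_job_flow_steps returns the IDs of the new steps, so the same boto3 EMR client can poll them until they reach a terminal state. cluster_id and response come from the cell above; the 60-second interval is arbitrary.
# Hedged sketch: poll the submitted steps until they finish
import time

step_ids = response['StepIds']  # step IDs returned by add_job_flow_steps above
while True:
    states = [client.describe_step(ClusterId=cluster_id, StepId=sid)['Step']['Status']['State']
              for sid in step_ids]
    print(states)
    if all(s in ('COMPLETED', 'FAILED', 'CANCELLED', 'INTERRUPTED') for s in states):
        break
    time.sleep(60)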
10,479 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to vectorize some data using | Problem:
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
corpus = [
'We are looking for Java developer',
'Frontend developer with knowledge in SQL and Jscript',
'And this is the third one.',
'Is this the first document?',
]
vectorizer = CountVectorizer(stop_words="english", binary=True, lowercase=False,
vocabulary=['Jscript', '.Net', 'TypeScript', 'NodeJS', 'Angular', 'Mongo',
'CSS',
'Python', 'PHP', 'Photoshop', 'Oracle', 'Linux', 'C++', "Java", 'TeamCity',
'Frontend', 'Backend', 'Full stack', 'UI Design', 'Web', 'Integration',
'Database design', 'UX'])
X = vectorizer.fit_transform(corpus).toarray()
X = 1 - X
feature_names = vectorizer.get_feature_names_out() |
10,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the trajectory of a single patient
Import Python libraries
We first need to import some tools for working with data in Python.
- NumPy is for working with numbers
- Pandas is for analysing data
- MatPlotLib is for making plots
- Sqlite3 to connect to the database
Step2: Connect to the database
We can use the sqlite3 library to connect to the MIMIC database
Once the connection is established, we'll run a simple SQL query.
Step4: Load the chartevents data
The chartevents table contains data charted at the patient bedside. It includes variables such as heart rate, respiratory rate, temperature, and so on.
We'll begin by loading the chartevents data for a single patient.
Step5: Review the patient's heart rate
We can select individual columns using the column name.
For example, if we want to select just the label column, we write ce.LABEL or alternatively ce['LABEL']
Step6: In a similar way, we can select rows from data using indexes.
For example, to select rows where the label is equal to 'Heart Rate', we would create an index using [ce.LABEL=='Heart Rate']
Step7: Plot 1
Step8: Task 1
What is happening to this patient's heart rate?
Plot respiratory rate over time for the patient.
Is there anything unusual about the patient's respiratory rate?
Step9: Plot 2
Step10: Task 2
Based on the data, does it look like the alarms would have triggered for this patient?
Plot 3
Step12: Task 3
How is the patient's consciousness changing over time?
Stop here...
Plot 4
Step14: To provide necessary context to this plot, it would help to include patient input data. This provides the necessary context to determine a patient's fluid balance - a key indicator in patient health.
Step15: Note that the column headers are different
Step16: As the plot shows, the patient's intake tends to be above their output (as one would expect!) - but there are periods where they are almost one to one. One of the biggest challenges of working with ICU data is that context is everything - let's look at a treatment (lasix) that we know will affect this graph.
Step17: Exercise 2
Plot the alarms for the mean arterial pressure ('Arterial Blood Pressure mean')
HINT
Step18: Plot 3
Step20: Plot 5 | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sqlite3
%matplotlib inline
Explanation: Exploring the trajectory of a single patient
Import Python libraries
We first need to import some tools for working with data in Python.
- NumPy is for working with numbers
- Pandas is for analysing data
- MatPlotLib is for making plots
- Sqlite3 to connect to the database
End of explanation
# Connect to the MIMIC database
conn = sqlite3.connect('data/mimicdata.sqlite')
# Create our test query
test_query = """
SELECT subject_id, hadm_id, admittime, dischtime, admission_type, diagnosis
FROM admissions
"""
# Run the query and assign the results to a variable
test = pd.read_sql_query(test_query,conn)
# Display the first few rows
test.head()
Explanation: Connect to the database
We can use the sqlite3 library to connect to the MIMIC database
Once the connection is established, we'll run a simple SQL query.
End of explanation
query = """
SELECT de.icustay_id
, (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS
, di.label
, de.value
, de.valuenum
, de.uom
FROM chartevents de
INNER join d_items di
ON de.itemid = di.itemid
INNER join icustays ie
ON de.icustay_id = ie.icustay_id
WHERE de.icustay_id = 252522
ORDER BY charttime;
"""
ce = pd.read_sql_query(query,conn)
# OPTION 2: load chartevents from a CSV file
# ce = pd.read_csv('data/example_chartevents.csv', index_col='HOURSSINCEADMISSION')
# Preview the data
# Use 'head' to limit the number of rows returned
ce.head()
Explanation: Load the chartevents data
The chartevents table contains data charted at the patient bedside. It includes variables such as heart rate, respiratory rate, temperature, and so on.
We'll begin by loading the chartevents data for a single patient.
End of explanation
# Select a single column
ce['LABEL']
Explanation: Review the patient's heart rate
We can select individual columns using the column name.
For example, if we want to select just the label column, we write ce.LABEL or alternatively ce['LABEL']
End of explanation
# Select just the heart rate rows using an index
ce[ce.LABEL=='Heart Rate']
Explanation: In a similar way, we can select rows from data using indexes.
For example, to select rows where the label is equal to 'Heart Rate', we would create an index using [ce.LABEL=='Heart Rate']
End of explanation
# Which time stamps have a corresponding heart rate measurement?
print ce.index[ce.LABEL=='Heart Rate']
# Set x equal to the times
x_hr = ce.HOURS[ce.LABEL=='Heart Rate']
# Set y equal to the heart rates
y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate']
# Plot time against heart rate
plt.figure(figsize=(14, 6))
plt.plot(x_hr,y_hr)
plt.xlabel('Time',fontsize=16)
plt.ylabel('Heart rate',fontsize=16)
plt.title('Heart rate over time from admission to the intensive care unit')
Explanation: Plot 1: How did the patients heart rate change over time?
Using the methods described above to select our data of interest, we can create our x and y axis values to create a time series plot of heart rate.
End of explanation
# Exercise 1 here
Explanation: Task 1
What is happening to this patient's heart rate?
Plot respiratory rate over time for the patient.
Is there anything unusual about the patient's respiratory rate?
End of explanation
plt.figure(figsize=(14, 6))
plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k+', markersize=10, linewidth=4)
plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - High'],
ce.VALUENUM[ce.LABEL=='Resp Alarm - High'],
'm--')
plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - Low'],
ce.VALUENUM[ce.LABEL=='Resp Alarm - Low'],
'm--')
plt.xlabel('Time',fontsize=16)
plt.ylabel('Respiratory rate',fontsize=16)
plt.title('Respiratory rate over time from admission, with upper and lower alarm thresholds')
plt.ylim(0,55)
Explanation: Plot 2: Did the patient's vital signs breach any alarm thresholds?
Alarm systems in the intensive care unit are commonly based on high and low thresholds defined by the carer.
False alarms are often a problem and so thresholds may be set arbitrarily to reduce alarms.
As a result, alarm settings carry limited information.
End of explanation
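An optional, hedged aside (not part of the original notebook): the same question can be checked numerically by aligning each respiratory-rate reading with the most recently charted alarm limits. This sketch assumes the ce dataframe loaded above and a pandas version that provides merge_asof.
# Hedged sketch: count respiratory-rate readings outside the charted alarm limits
rr = ce[ce.LABEL=='Respiratory Rate'][['HOURS', 'VALUENUM']].sort_values('HOURS')
hi = ce[ce.LABEL=='Resp Alarm - High'][['HOURS', 'VALUENUM']].sort_values('HOURS')
lo = ce[ce.LABEL=='Resp Alarm - Low'][['HOURS', 'VALUENUM']].sort_values('HOURS')
# Attach the most recent high/low alarm settings to each respiratory-rate measurement
rr = pd.merge_asof(rr, hi, on='HOURS', suffixes=('', '_HIGH'))
rr = pd.merge_asof(rr, lo, on='HOURS', suffixes=('', '_LOW'))
breaches = rr[(rr.VALUENUM > rr.VALUENUM_HIGH) | (rr.VALUENUM < rr.VALUENUM_LOW)]
print('{} of {} measurements fall outside the charted alarm limits'.format(len(breaches), len(rr)))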
# Display the first few rows of the GCS eye response data
ce[ce.LABEL=='GCS - Eye Opening'].head()
# Prepare the size of the figure
plt.figure(figsize=(18, 10))
# Set x equal to the times
x_hr = ce.HOURS[ce.LABEL=='Heart Rate']
# Set y equal to the heart rates
y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate']
plt.plot(x_hr,y_hr)
plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k', markersize=6)
# Add a text label to the y-axis
plt.text(-20,155,'GCS - Eye Opening',fontsize=14)
plt.text(-20,150,'GCS - Motor Response',fontsize=14)
plt.text(-20,145,'GCS - Verbal Response',fontsize=14)
# Iterate over list of GCS labels, plotting around 1 in 10 to avoid overlap
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Eye Opening'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Eye Opening'].values[i],155),fontsize=14)
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Motor Response'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Motor Response'].values[i],150),fontsize=14)
for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Verbal Response'].values):
if np.mod(i,6)==0 and i < 65:
plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Verbal Response'].values[i],145),fontsize=14)
plt.title('Vital signs and Glasgow Coma Scale over time from admission',fontsize=16)
plt.xlabel('Time (hours)',fontsize=16)
plt.ylabel('Heart rate or GCS',fontsize=16)
plt.ylim(10,165)
Explanation: Task 2
Based on the data, does it look like the alarms would have triggered for this patient?
Plot 3: What is patient's level of consciousness?
Glasgow Coma Scale (GCS) is a measure of consciousness.
It is commonly used for monitoring patients in the intensive care unit.
It consists of three components: eye response; verbal response; motor response.
End of explanation
# OPTION 1: load outputs from the patient
query = """
select de.icustay_id
, (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS
, di.label
, de.value
, de.valueuom
from outputevents de
inner join icustays ie
on de.icustay_id = ie.icustay_id
inner join d_items di
on de.itemid = di.itemid
where de.subject_id = 40080
order by charttime;
"""
oe = pd.read_sql_query(query,conn)
oe.head()
plt.figure(figsize=(14, 6))
plt.title('Fluid output over time')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
plt.xlim(0,72)
plt.ylim(0,10)
plt.legend()
Explanation: Task 3
How is the patient's consciousness changing over time?
Stop here...
Plot 4: What other data do we have on the patient?
Using Pandas 'read_csv function' again, we'll now load the outputevents data - this table contains all information about patient outputs (urine output, drains, dialysis).
End of explanation
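Another optional, hedged aside (not part of the original notebook) for Task 3: the charted GCS components can also be tabulated against time with a pandas pivot, complementing the annotated plot above.
# Hedged sketch: table of GCS components over time, using the ce dataframe from above
gcs = ce[ce.LABEL.str.startswith('GCS')]
print(gcs.pivot_table(index='HOURS', columns='LABEL', values='VALUE', aggfunc='first').head(10))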
# OPTION 1: load inputs given to the patient (usually intravenously) using the database connection
query = """
select de.icustay_id
, (strftime('%s',de.starttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_START
, (strftime('%s',de.endtime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_END
, de.linkorderid
, di.label
, de.amount
, de.amountuom
, de.rate
, de.rateuom
from inputevents_mv de
inner join icustays ie
on de.icustay_id = ie.icustay_id
inner join d_items di
on de.itemid = di.itemid
where de.subject_id = 40080
order by endtime;
"""
ie = pd.read_sql_query(query,conn)
# # OPTION 2: load ioevents using the CSV file with endtime as the index
# ioe = pd.read_csv('inputevents.csv'
# ,header=None
# ,names=['subject_id','itemid','label','starttime','endtime','amount','amountuom','rate','rateuom']
# ,parse_dates=True)
ie.head()
Explanation: To provide necessary context to this plot, it would help to include patient input data. This provides the necessary context to determine a patient's fluid balance - a key indicator in patient health.
End of explanation
ie['LABEL'].unique()
plt.figure(figsize=(14, 10))
# Plot the cumulative input against the cumulative output
plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'],
ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000,
'go', markersize=8, label='Intake volume, L')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
plt.title('Fluid balance over time',fontsize=16)
plt.xlabel('Hours',fontsize=16)
plt.ylabel('Volume (litres)',fontsize=16)
# plt.ylim(0,38)
plt.legend()
Explanation: Note that the column headers are different: we have "HOURS_START" and "HOURS_END". This is because inputs are administered over a fixed period of time.
End of explanation
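A small, hedged aside (not in the original notebook): because each input spans an interval, the two columns can be combined directly to summarize how long the infusions ran.
# Hedged sketch: distribution of infusion durations in hours, from the ie dataframe above
print((ie.HOURS_END - ie.HOURS_START).describe())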
plt.figure(figsize=(14, 10))
# Plot the cumulative input against the cumulative output
plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'],
ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000,
'go', markersize=8, label='Intake volume, L')
plt.plot(oe.HOURS,
oe.VALUE.cumsum()/1000,
'ro', markersize=8, label='Output volume, L')
# example on getting two columns from a dataframe: ie[['HOURS_START','HOURS_END']].head()
for i, idx in enumerate(ie.index[ie.LABEL=='Furosemide (Lasix)']):
plt.plot([ie.HOURS_START[ie.LABEL=='Furosemide (Lasix)'][idx],
ie.HOURS_END[ie.LABEL=='Furosemide (Lasix)'][idx]],
[ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx],
ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx]],
'b-',linewidth=4)
plt.title('Fluid balance over time',fontsize=16)
plt.xlabel('Hours',fontsize=16)
plt.ylabel('Volume (litres)',fontsize=16)
# plt.ylim(0,38)
plt.legend()
ie['LABEL'].unique()
Explanation: As the plot shows, the patient's intake tends to be above their output (as one would expect!) - but there are periods where they are almost one to one. One of the biggest challenges of working with ICU data is that context is everything - let's look at a treatment (lasix) that we know will affect this graph.
End of explanation
# Exercise 2 here
Explanation: Exercise 2
Plot the alarms for the mean arterial pressure ('Arterial Blood Pressure mean')
HINT: you can use ce.LABEL.unique() to find a list of variable names
Were the alarm thresholds breached?
End of explanation
plt.figure(figsize=(14, 10))
plt.plot(ce.index[ce.LABEL=='Heart Rate'],
ce.VALUENUM[ce.LABEL=='Heart Rate'],
'rx', markersize=8, label='HR')
plt.plot(ce.index[ce.LABEL=='O2 saturation pulseoxymetry'],
ce.VALUENUM[ce.LABEL=='O2 saturation pulseoxymetry'],
'g.', markersize=8, label='O2')
plt.plot(ce.index[ce.LABEL=='Arterial Blood Pressure mean'],
ce.VALUENUM[ce.LABEL=='Arterial Blood Pressure mean'],
'bv', markersize=8, label='MAP')
plt.plot(ce.index[ce.LABEL=='Respiratory Rate'],
ce.VALUENUM[ce.LABEL=='Respiratory Rate'],
'k+', markersize=8, label='RR')
plt.title('Vital signs over time from admission')
plt.ylim(0,130)
plt.legend()
Explanation: Plot 3: Were the patient's other vital signs stable?
End of explanation
# OPTION 1: load labevents data using the database connection
query = """
SELECT de.subject_id
, de.charttime
, di.label, de.value, de.valuenum
, de.uom
FROM labevents de
INNER JOIN d_labitems di
ON de.itemid = di.itemid
where de.subject_id = 40080
"""
le = pd.read_sql_query(query,conn)
# OPTION 2: load labevents from the CSV file
# le = pd.read_csv('data/example_labevents.csv', index_col='HOURSSINCEADMISSION')
# preview the labevents data
le.head()
# preview the ioevents data
le[le.LABEL=='HEMOGLOBIN']
plt.figure(figsize=(14, 10))
plt.plot(le.index[le.LABEL=='HEMATOCRIT'],
le.VALUENUM[le.LABEL=='HEMATOCRIT'],
'go', markersize=6, label='Haematocrit')
plt.plot(le.index[le.LABEL=='HEMOGLOBIN'],
le.VALUENUM[le.LABEL=='HEMOGLOBIN'],
'bv', markersize=8, label='Hemoglobin')
plt.title('Laboratory measurements over time from admission')
plt.ylim(0,38)
plt.legend()
Explanation: Plot 5: Laboratory measurements
Using Pandas 'read_csv function' again, we'll now load the labevents data.
This data corresponds to measurements made in a laboratory - usually on a sample of patient blood.
End of explanation |
10,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cluster Analysis
This notebook prototypes the cluster analysis visualizers that I'm currently putting together.
NOTE
Step1: Elbow Method
This method runs multiple clustering instances and computes the average silhouette score for each K. Model selection works by selecting the K that is the "elbow" of a curve that looks like an arm.
Step2: 8 Blobs Dataset
This series shows the use of different metrics with a dataset that does contain centers
Step3: Datasets without Centers
Step4: Silhouette Score
Visualizer using the silhouette score metric
Step6: Intercluster Distance Map | Python Code:
import sys
sys.path.append("../..")
import numpy as np
import yellowbrick as yb
import matplotlib.pyplot as plt
from functools import partial
from sklearn.datasets import make_blobs as sk_make_blobs
from sklearn.datasets import make_circles, make_moons
# Helpers for easy dataset creation
N_SAMPLES = 1000
N_FEATURES = 12
SHUFFLE = True
# Make blobs partial
make_blobs = partial(sk_make_blobs, n_samples=N_SAMPLES, n_features=N_FEATURES, shuffle=SHUFFLE)
Explanation: Cluster Analysis
This notebook prototypes the cluster analysis visualizers that I'm currently putting together.
NOTE: Currently I'm using the sklearn make_blobs function to create test datasets with specific numbers of clusters. However, in order to add this to the documentation, we should add a real dataset.
End of explanation
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer
Explanation: Elbow Method
This method runs multiple clustering instances and computes the average silhouette score for each K. Model selection works by selecting the K that is the "elbow" of a curve that looks like an arm.
End of explanation
## Make 8 blobs dataset
X, y = make_blobs(centers=8)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.show()
visualizer = KElbowVisualizer(KMeans(), k=(4,12), metric="silhouette")
visualizer.fit(X)
visualizer.show()
visualizer = KElbowVisualizer(KMeans(), k=(4,12), metric="calinski_harabaz")
visualizer.fit(X)
visualizer.show()
Explanation: 8 Blobs Dataset
This series shows the use of different metrics with a dataset that does contain centers
End of explanation
## Make cicles dataset
X, y = make_circles(n_samples=N_SAMPLES)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.show()
## Make moons dataset
X, y = make_moons(n_samples=N_SAMPLES)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.show()
Explanation: Datasets without Centers
End of explanation
from yellowbrick.cluster import SilhouetteVisualizer
## Make 8 blobs dataset
X, y = make_blobs(centers=8)
visualizer = SilhouetteVisualizer(KMeans(6))
visualizer.fit(X)
visualizer.show()
Explanation: Silhouette Score
Visualizer using the silhouette score metric
End of explanation
def prop_to_size(prop, mi=0, ma=5, power=0.5):
"""Scale a property to be used as a size."""
prop = np.asarray(prop)
return mi + (ma - mi)*(((prop - prop.min()) / (prop.max() - prop.min()))**power)
from sklearn.manifold import MDS
## Make 12 blobs dataset
X, y = make_blobs(centers=12)
## Fit KMeans model on dataset
model = KMeans(9).fit(X)
from matplotlib.lines import Line2D
def intercluster_distance(model, ax=None):
# Create the figure if an axes isn't passed in
if ax is None:
fig, ax = plt.subplots(figsize=(9,6))
else:
fig = plt.gcf()
## Get centers
## TODO: is this how sklearn stores centers in all models?
C = model.cluster_centers_
## Compute the sizes of the clusters
scores = np.bincount(model.predict(X))
size = prop_to_size(scores, 400, 25000)
## Use MDS to plot centers
Cm = MDS().fit_transform(C)
ax.scatter(Cm[:,0], Cm[:,1], s=size, c='#2e719344', edgecolor='#2e719399', linewidth=1)
## Annotate the clustes with their labels
for i, pt in enumerate(Cm):
ax.text(s=str(i), x=pt[0], y=pt[1], va="center", ha="center", fontweight='bold', size=13)
## Set the title
ax.set_title("Intercluster Distance Map (via Multidimensional Scaling)")
# Create origin grid
ax.set_xticks([0])
ax.set_yticks([0])
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_xlabel("PC2")
ax.set_ylabel("PC1")
# Create a regular legend with target "size" descriptor
# handles = tuple([
# Line2D([0], [0], color="none", marker="o", markersize=i, markerfacecolor='none', markeredgecolor="#999999", markeredgewidth=1, markevery=i)
# for i in [3,9,18]
# ])
# ax.legend([handles], ['membership',], loc='best')
# Create the size legend on an inner axes
lax = fig.add_axes([.9, 0.25, 0.3333, 0.5], frameon=False, facecolor="none")
make_size_legend(scores, size, lax)
return ax
intercluster_distance(model)
from matplotlib.patches import Circle
def make_size_legend(scores, areas, ax=None):
# Create the figure if an axes isn't passed in
if ax is None:
_, ax = plt.subplots()
## Compute the sizes of the clusters
radii = np.sqrt(areas / np.pi)
scaled = np.interp(radii, (radii.min(), radii.max()), (.1, 1))
print(areas, radii)  # debug output: the raw areas and the radii derived from them
# Compute the locations of the 25th, 50th, and 75th percentiles of the score
indices = np.array([
np.where(scores==np.percentile(scores, p, interpolation='nearest'))[0][0]
for p in (25, 50, 75)
])
# Draw circles with their various sizes
for idx in indices:
center = (-0.30, 1-scaled[idx])
c = Circle(center, scaled[idx], facecolor="none", edgecolor="#2e7193", linewidth=1.5, linestyle="--", label="bob")
ax.add_patch(c)
ax.annotate(
scores[idx], (-0.30, 1-(2*scaled[idx])), xytext=(1, 1-(2*scaled[idx])),
arrowprops=dict(arrowstyle="wedge", color="#2e7193"), va='center', ha='center',
)
# Draw size legend title
ax.text(s="membership", x=0, y=1.2, va='center', ha='center')
ax.set_xlim(-1.4,1.4)
ax.set_ylim(-1.4,1.4)
ax.set_xticks([])
ax.set_yticks([])
for name in ax.spines:
ax.spines[name].set_visible(False)
ax.grid(False)
return ax
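# Hypothetical standalone check (not in the original notebook): exercise make_size_legend
# directly with membership counts from the fitted KMeans model above. Note that
# intercluster_distance() defined earlier calls this function, so it must be defined
# before that call is re-run.
scores = np.bincount(model.predict(X))
areas = prop_to_size(scores, 400, 25000)
make_size_legend(scores, areas)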
Explanation: Intercluster Distance Map
End of explanation |
10,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare Datasets
Once the datasets are obtained, they must be aligned and cropped to the same region. In this notebook, we crop the Planet scene and ground truth data to the aoi.
The sections are
Step1: Datasets
Train Scene
Step2: Test Scene
Step3: AOI and Ground Truth
These datasets are created in identify-datasets notebook
Step5: Crop Ground Truth Data to AOI
Step6: Train Ground Truth Data
Step7: Test Ground Truth Data
Step8: Crop Train Image to AOI
Step9: Copy over the image metadata
Step13: Visualize Cropped Image
Step14: <a id='visualize'></a>
Visualize Ground Truth Data over Image
To ensure accurate alignment between the planet scene and the ground truth data, we will visualize them overlaid in a geographic reference system.
Define Layer for cropped Planet scene
First we project the cropped Planet scene to WGS84 for showing on the map. Then we adjust the scene for display and save as an 8-bit jpeg. Finally, we define the image layer using the projected image bounds.
Leaflet appears to support local files if they are jpg (src)
Step15: Define layer for ground truth data
Step16: Awesome! The data looks nicely registered to the imagery and the crop outlines don't appear to have changed much over the years.
Crop Test Image to AOI
Repeat the above procedures for the test image.
Step17: Visualize cropped image | Python Code:
from collections import namedtuple
import copy
import json
import os
import pathlib
import shutil
import subprocess
import tempfile
import ipyleaflet as ipyl
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import rasterio
from shapely.geometry import shape, mapping
%matplotlib inline
Explanation: Prepare Datasets
Once the datasets are obtained, they must be aligned and cropped to the same region. In this notebook, we crop the Planet scene and ground truth data to the aoi.
The sections are:
- Crop Ground Truth Data to AOI
- Crop Image to AOI
- Visualize Ground Truth Data over Image
Note: there are quite a few cells that are dedicated to defining utility functions that have broad applicability and can be lifted from this notebook. They are indicated by a line at the top of the cell that starts with
```
Utility functions:
```
End of explanation
train_scene_id = '20160831_180231_0e0e'
# define and, if necessary, create train data directory
train_dir = os.path.join('data', 'train')
pathlib.Path(train_dir).mkdir(parents=True, exist_ok=True)
# define train scene
train_scene = os.path.join(train_dir, train_scene_id + '_3B_AnalyticMS.tif')
train_scene_metadata = os.path.join(train_dir,
train_scene_id + '_3B_AnalyticMS_metadata.xml')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $train_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $train_dir \
--asset-type analytic,analytic_xml \
--string-in id $train_scene_id
Explanation: Datasets
Train Scene
End of explanation
test_scene_id = '20160831_180257_0e26'
# define and, if necessary, create test data directory
test_dir = os.path.join('data', 'test')
pathlib.Path(test_dir).mkdir(parents=True, exist_ok=True)
# define test scene
test_scene = os.path.join(test_dir, test_scene_id + '_3B_AnalyticMS.tif')
test_scene_metadata = os.path.join(test_dir,
test_scene_id + '_3B_AnalyticMS_metadata.xml')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $test_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $test_dir \
--asset-type analytic,analytic_xml \
--string-in id $test_scene_id
Explanation: Test Scene
End of explanation
predata_dir = 'pre-data'
test_aoi_filename = os.path.join(predata_dir, 'aoi-test.geojson')
assert os.path.isfile(test_aoi_filename)
train_aoi_filename = os.path.join(predata_dir, 'aoi-train.geojson')
assert os.path.isfile(train_aoi_filename)
ground_truth_filename = os.path.join(predata_dir, 'ground-truth.geojson')
assert os.path.isfile(ground_truth_filename)
Explanation: AOI and Ground Truth
These datasets are created in identify-datasets notebook
End of explanation
# Utility functions: cropping polygons
# Uses shapely for geospatial operations
def crop_polygons_to_aoi(polygons, aoi):
"""Crops polygons to the aoi.

Polygons within aoi are copied. For Polygons that intersect aoi boundary, the
intersection geometry is saved. If the intersection is a MultiPolygon, it is
stored as multiple Polygons.

:param dict aoi: geojson polygon describing the crop feature
:param list polygons: geojson polygons to be cropped
"""
aoi_shp = shape(aoi['geometry'])
cropped_features = []
for f in polygons:
shp = shape(f['geometry'])
assert shp.type == 'Polygon'
if shp.within(aoi_shp):
cropped_features.append(copy.deepcopy(f))
elif shp.intersects(aoi_shp):
# 'cut' features at the aoi boundary by the aoi
cropped_shp = shp.intersection(aoi_shp)
try:
# try to iterate, which only works for MultiPolygon
for s in cropped_shp:
new_f = copy.deepcopy(f)
new_f['geometry'] = mapping(s)
cropped_features.append(new_f)
except TypeError:
# Polygon is not iterable
new_f = copy.deepcopy(f)
new_f['geometry'] = mapping(cropped_shp)
cropped_features.append(new_f)
return cropped_features
# Utility functions: loading and saving geojson
def save_geojson(features, filename):
with open(filename, 'w') as f:
f.write(json.dumps(features))
def load_geojson(filename):
with open(filename, 'r') as f:
return json.load(f)
ground_truth_data = load_geojson(ground_truth_filename)
Explanation: Crop Ground Truth Data to AOI
End of explanation
train_aoi = load_geojson(train_aoi_filename)
train_ground_truth_data = crop_polygons_to_aoi(ground_truth_data, train_aoi)
print(len(train_ground_truth_data))
train_ground_truth_filename = os.path.join(predata_dir, 'ground-truth-train.geojson')
save_geojson(train_ground_truth_data, train_ground_truth_filename)
Explanation: Train Ground Truth Data
End of explanation
test_aoi = load_geojson(test_aoi_filename)
test_ground_truth_data = crop_polygons_to_aoi(ground_truth_data, test_aoi)
print(len(test_ground_truth_data))
test_ground_truth_filename = os.path.join(predata_dir, 'ground-truth-test.geojson')
save_geojson(test_ground_truth_data, test_ground_truth_filename)
Explanation: Test Ground Truth Data
End of explanation
# Utility functions: crop and project an image
def _gdalwarp_crop_options(crop_filename):
return ['-cutline', crop_filename, '-crop_to_cutline']
def _gdalwarp_project_options(src_proj, dst_proj):
return ['-s_srs', src_proj, '-t_srs', dst_proj]
def _gdalwarp(input_filename, output_filename, options):
commands = ['gdalwarp'] + options + \
['-overwrite',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
# lossless compression of an image
def _compress(input_filename, output_filename):
commands = ['gdal_translate',
'-co', 'compress=LZW',
'-co', 'predictor=2',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
# uses Rasterio to get image srs if dst_srs is specified
def warp(input_filename,
output_filename,
crop_filename=None,
dst_srs=None,
overwrite=True,
compress=False):
options = []
if crop_filename is not None:
options += _gdalwarp_crop_options(crop_filename)
if dst_srs is not None:
src_srs = rasterio.open(input_filename).crs['init']
options += _gdalwarp_project_options(src_srs, dst_srs)
# check to see if output file exists, if it does, do not warp
if os.path.isfile(output_filename) and not overwrite:
print('{} already exists. Aborting warp of {}.'.format(output_filename, input_filename))
elif compress:
with tempfile.NamedTemporaryFile(suffix='.vrt') as vrt_file:
options += ['-of', 'vrt']
_gdalwarp(input_filename, vrt_file.name, options)
_compress(vrt_file.name, output_filename)
else:
_gdalwarp(input_filename, output_filename, options)
train_scene_cropped = os.path.join(predata_dir, 'train_scene_cropped.tif')
warp(train_scene, train_scene_cropped, crop_filename=train_aoi_filename, overwrite=False, compress=True)
Explanation: Crop Train Image to AOI
End of explanation
train_scene_cropped_metadata = os.path.join(predata_dir, 'train_scene_cropped_metadata.xml')
shutil.copyfile(train_scene_metadata, train_scene_cropped_metadata)
Explanation: Copy over the image metadata
End of explanation
# Utility functions: loading an image
NamedBands = namedtuple('NamedBands', 'b, g, r, nir')
def load_masked_bands(filename):
"""Loads a 4-band BGRNir Planet Image file as a list of masked bands.

The masked bands share the same mask, so editing one band mask will
edit them all.
"""
with rasterio.open(filename) as src:
b, g, r, nir = src.read()
mask = src.read_masks(1) == 0 # 0 value means the pixel is masked
bands = NamedBands(b=b, g=g, r=r, nir=nir)
return NamedBands(*[np.ma.array(b, mask=mask)
for b in bands])
print(load_masked_bands(train_scene_cropped).b.shape)
# Utility functions: displaying an image
def _linear_scale(ndarray, old_min, old_max, new_min, new_max):
"""Linear scale from old_min to new_min, old_max to new_max.

Values below min/max are allowed in input and output.
Min/Max values are two data points that are used in the linear scaling.
"""
#https://en.wikipedia.org/wiki/Normalization_(image_processing)
return (ndarray - old_min)*(new_max - new_min)/(old_max - old_min) + new_min
# print(linear_scale(np.array([1,2,10,100,256,2560, 2660]), 2, 2560, 0, 256))
def _mask_to_alpha(bands):
band = np.atleast_3d(bands)[...,0]
alpha = np.zeros_like(band)
alpha[~band.mask] = 1
return alpha
def _add_alpha_mask(bands):
return np.dstack([bands, _mask_to_alpha(bands)])
def bands_to_display(bands, alpha=True):
"""Converts a list of bands to a 3-band rgb, normalized array for display."""
rgb_bands = np.dstack(bands[:3])
old_min = np.percentile(rgb_bands, 2)
old_max = np.percentile(rgb_bands, 98)
new_min = 0
new_max = 1
scaled = _linear_scale(rgb_bands.astype(np.double),
old_min, old_max, new_min, new_max)
bands = np.clip(scaled, new_min, new_max)
if alpha is True:
bands = _add_alpha_mask(bands)
return bands
plt.figure()
bands = load_masked_bands(train_scene_cropped)
plt.imshow(bands_to_display([bands.r, bands.g, bands.b]))
Explanation: Visualize Cropped Image
End of explanation
# Utility functions: creating an image layer for display on a map
def _save_display_image(src_filename, dst_filename):
# convert to rgb and scale to 8-bit
bands = load_masked_bands(src_filename)
img = bands_to_display([bands.r, bands.g, bands.b])
# save as jpeg
if(os.path.isfile(dst_filename)): os.remove(dst_filename)
matplotlib.image.imsave(dst_filename, img)
def create_image_layer(filename):
with tempfile.NamedTemporaryFile(suffix='.tif') as temp_file:
projected_filename = temp_file.name
# project to wgs84
dst_srs = 'epsg:4326' #WGS84
warp(filename, projected_filename, dst_srs=dst_srs)
# save as jpeg
display_image = os.path.join('data', 'display.jpg')
_save_display_image(projected_filename, display_image)
# determine image layer bounds
(minx, miny, maxx, maxy) = rasterio.open(projected_filename).bounds
sw = [miny, minx]
ne = [maxy, maxx]
# Create image layer
return ipyl.ImageOverlay(url=display_image, bounds=[sw, ne])
Explanation: <a id='visualize'></a>
Visualize Ground Truth Data over Image
To ensure accurate alignment between the planet scene and the ground truth data, we will visualize them overlaid in a geographic reference system.
Define Layer for cropped Planet scene
First we project the cropped Planet scene to WGS84 for showing on the map. Then we adjust the scene for display and save as an 8-bit jpeg. Finally, we define the image layer using the projected image bounds.
Leaflet appears to support local files if they are jpg (src)
End of explanation
def create_feature_layer(features):
# Assign colors to classes
# Class descriptions can be found in datasets-identify notebook
agg_classes = ['G', 'R', 'F', 'P', 'T', 'D', 'C', 'V']
# colors determined using [colorbrewer2.org](http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3)
colors = ['#ffffd9','#edf8b1','#c7e9b4','#7fcdbb','#41b6c4','#1d91c0','#225ea8','#0c2c84']
class_colors = dict((a,c) for a,c in zip(agg_classes, colors))
def get_color(cls):
return class_colors[cls]
feature_collection = {
"type": "FeatureCollection",
"features": features
}
for f in feature_collection['features']:
feature_color = get_color(f['properties']['CLASS1'])
f['properties']['style'] = {
'color': feature_color,
'weight': 1,
'fillColor': feature_color,
'fillOpacity': 0.1}
return ipyl.GeoJSON(data=feature_collection)
zoom = 13
center = [38.30933576918588, -121.55410766601564] # lat/lon
map_tiles = ipyl.TileLayer(url='http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png')
data_map = ipyl.Map(
center=center,
zoom=zoom,
default_tiles = map_tiles
)
data_map.add_layer(create_image_layer(train_scene_cropped))
data_map.add_layer(create_feature_layer(train_ground_truth_data))
# display
data_map
Explanation: Define layer for ground truth data
End of explanation
test_scene_cropped = os.path.join(predata_dir, 'test_scene_cropped.tif')
warp(test_scene, test_scene_cropped, crop_filename=test_aoi_filename, overwrite=False, compress=True)
test_scene_cropped_metadata = os.path.join(predata_dir, 'test_scene_cropped_metadata.xml')
shutil.copyfile(test_scene_metadata, test_scene_cropped_metadata)
Explanation: Awesome! The data looks nicely registered to the imagery and the crop outlines don't appear to have changed much over the years.
Crop Test Image to AOI
Repeat the above procedures for the test image.
End of explanation
plt.figure()
bands = load_masked_bands(test_scene_cropped)
plt.imshow(bands_to_display([bands.r, bands.g, bands.b]))
Explanation: Visualize cropped image
End of explanation |
10,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 09
Step1: Define the placeholders and variables for the CNN model
Step2: Define helper functions for the convolution and maxpool layers
Step3: The CNN model is defined all within the following method
Step4: Here's the cost function to train the classifier.
Step5: Let's train the classifier on our data | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import cifar_tools
import tensorflow as tf
learning_rate = 0.001
names, data, labels = \
cifar_tools.read_data('./cifar-10-batches-py')
Explanation: Ch 09: Concept 03
Convolution Neural Network
Load data from CIFAR-10.
End of explanation
x = tf.placeholder(tf.float32, [None, 24 * 24])
y = tf.placeholder(tf.float32, [None, len(names)])
W1 = tf.Variable(tf.random_normal([5, 5, 1, 64]))
b1 = tf.Variable(tf.random_normal([64]))
W2 = tf.Variable(tf.random_normal([5, 5, 64, 64]))
b2 = tf.Variable(tf.random_normal([64]))
W3 = tf.Variable(tf.random_normal([6*6*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))
W_out = tf.Variable(tf.random_normal([1024, len(names)]))
b_out = tf.Variable(tf.random_normal([len(names)]))
Explanation: Define the placeholders and variables for the CNN model:
End of explanation
def conv_layer(x, W, b):
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
conv_with_b = tf.nn.bias_add(conv, b)
conv_out = tf.nn.relu(conv_with_b)
return conv_out
def maxpool_layer(conv, k=2):
return tf.nn.max_pool(conv, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')
Explanation: Define helper functions for the convolution and maxpool layers:
End of explanation
def model():
x_reshaped = tf.reshape(x, shape=[-1, 24, 24, 1])
conv_out1 = conv_layer(x_reshaped, W1, b1)
maxpool_out1 = maxpool_layer(conv_out1)
norm1 = tf.nn.lrn(maxpool_out1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
conv_out2 = conv_layer(norm1, W2, b2)
norm2 = tf.nn.lrn(conv_out2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
maxpool_out2 = maxpool_layer(norm2)
maxpool_reshaped = tf.reshape(maxpool_out2, [-1, W3.get_shape().as_list()[0]])
local = tf.add(tf.matmul(maxpool_reshaped, W3), b3)
local_out = tf.nn.relu(local)
out = tf.add(tf.matmul(local_out, W_out), b_out)
return out
Explanation: The CNN model is defined all within the following method:
End of explanation
model_op = model()
cost = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=model_op, labels=y)
)
train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Here's the cost function to train the classifier.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
onehot_labels = tf.one_hot(labels, len(names), on_value=1., off_value=0., axis=-1)
onehot_vals = sess.run(onehot_labels)
batch_size = len(data) // 200
print('batch size', batch_size)
for j in range(0, 1000):
avg_accuracy_val = 0.
batch_count = 0.
for i in range(0, len(data), batch_size):
batch_data = data[i:i+batch_size, :]
batch_onehot_vals = onehot_vals[i:i+batch_size, :]
_, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals})
avg_accuracy_val += accuracy_val
batch_count += 1.
avg_accuracy_val /= batch_count
print('Epoch {}. Avg accuracy {}'.format(j, avg_accuracy_val))
Explanation: Let's train the classifier on our data:
End of explanation |
10,484 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How can I get get the indices of the largest value in a multi-dimensional NumPy array `a`? | Problem:
import numpy as np
a = np.array([[10,50,30],[60,20,40]])
result = np.unravel_index(a.argmax(), a.shape) |
10,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following
Step1: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
Step2: Next, we create a table called traffic_realtime and set up the schema.
Step3: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied
Step5: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
Exercise. Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.
Step6: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
Step7: Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.
Exercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should
- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance
- call prediction on your model for this realtime instance and save the result as a variable called response
- parse the json of response to print the predicted taxifare cost | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
Explanation: Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following:
- A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)
- A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
- A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)
- A persistent store to keep the processed data (in our case this is BigQuery)
These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below.
Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.
<img src='../assets/taxi_streaming_data.png' width='80%'>
In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of trips_last_5min data as an additional feature. This is our proxy for real-time traffic.
End of explanation
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
Explanation: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
End of explanation
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
Explanation: Next, we create a table called traffic_realtime and set up the schema.
End of explanation
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
Explanation: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied:
- Read from PubSub
- Window the messages
- Count number of messages in the window
- Format the count for BigQuery
- Write results to BigQuery
TODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution.
For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.
In a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
python3 ./taxicab_traffic/streaming_count.py \
--input_topic taxi_rides \
--runner=DataflowRunner \
--project=$PROJECT_ID \
--temp_location=gs://$BUCKET/dataflow_streaming
Once you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console.
Explore the data in the table
After a few moments, you should also see new data written to your BigQuery table as well.
Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
End of explanation
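For reference only, here is a hedged sketch of the kind of windowing transform the Dataflow TODO above describes, using the Apache Beam Python SDK. This is not the official lab solution; the authoritative code lives in ./taxicab_traffic/streaming_count.py and the linked solution notebook, and messages below stands in for the PCollection of Pub/Sub messages.
# Hedged sketch: a 5-minute sliding window recomputed every 15 seconds
import apache_beam as beam
from apache_beam.transforms import window

def add_sliding_window(messages):
    # `messages` is assumed to be the PCollection read from the taxi_rides topic
    return messages | 'window' >> beam.WindowInto(
        window.SlidingWindows(size=5 * 60, period=15))  # both arguments are in seconds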
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string =
TODO: Your code goes here
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
Explanation: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
Exercise. Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.
End of explanation
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
Explanation: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
End of explanation
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=endpoint)
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, client_options=client_options)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
Explanation: Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.
Exercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should
- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance
- call prediction on your model for this realtime instance and save the result as a variable called response
- parse the json of response to print the predicted taxifare cost
End of explanation |
10,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ansible is
configuration manager
simple
extensible via modules
written in python
broad community
many external tools
playbook repository
used by openstack, openshift & tons of projects
# Configuration Manager
Explain infrastructure as code
# Advantages
No agents
Step1: ansible.cfg
It's the main configuration file. While almost everything else in Ansible is YAML, ansible.cfg uses the .ini format, e.g.
```
[stanza]
key = value
```
Let's check the content of a sample ansible.cfg
Step2: Inventories
a simple inventory file contains a static list of nodes to contact.
Generally, an inventory can be static or dynamic, as we will see in the following lessons.
Step3: Environment variables
N.B. ansible environment variables are not related with process environment
You defined your host groups in the environment, eg
Step4: Exercise
Dump env_name tied to the staging inventory.
which is the expected output?
what ties the "staging" inventory file to group_vars/staging?
Step5: Exercise | Python Code:
cd /notebooks/exercise-00/
# Let's check our ansible directory
!tree
Explanation: Ansible is
configuration manager
simple
extensible via modules
written in python
broad community
many external tools
playbook repository
used by openstack, openshift & tons of projects
# Configuration Manager
Explain infrastructure as code
# Advantages
No agents: Ansible copies Python and all deployment scripts/modules to the target machine via SSH and executes them remotely. Some modules, though, require specific Python libraries to be present on the target hosts.
Jobs are executed in parallel, but you can configure for serialization using different strategies for speed up, rollout or other purposes: (link)
Authentication can be passwordless (ssh/pki, kerberos) or with password.
Automation jobs (Playbooks) are described via YAML - a very concise and simple language. You can validate and lint files with yamllint and ansible-lint.
```
this_is:
a: yaml
file:
- with dict
- a list
```
Passwords are supported, but SSH keys with ssh-agent are one of the best ways to use Ansible. Though if you want to use Kerberos, that's good too.
You have a lot of options! Root logins are not required: you can log in as any user and then su or sudo to any other user.
End of explanation
!cat ansible.cfg
Explanation: ansible.cfg
It's the main configuration file. While all other Ansible files are in YAML, ansible.cfg is in .ini format, e.g.
```
[stanza]
key = value
```
Let's check the content of a sample ansible.cfg:
there's a lot of stuff in there
there will be more ;)
for now let's check only the uncommented ones.
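As an aside (not part of the original lesson), the same stanza/key structure can also be inspected from Python with the standard-library configparser, assuming Python 3 and that ansible.cfg sits in the current directory:
```
# Hedged sketch: read ansible.cfg as a plain .ini file from Python.
import configparser

cfg = configparser.ConfigParser(allow_no_value=True)
cfg.read('ansible.cfg')

for stanza in cfg.sections():
    print('[{}]'.format(stanza))
    for key, value in cfg.items(stanza):
        print('  {} = {}'.format(key, value))
```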
End of explanation
!cat inventory
# You can have many inventory files
!cat staging
Explanation: Inventories
a simple inventory file contains a static list of nodes to contact.
Generally, an inventory can be static or dynamic, as we will see in the following lessons.
End of explanation
# group_vars - a directory containing environment files for various host groups.
!tree group_vars
# I set env_name in two different files
!grep env_name -r group_vars/
!cat group_vars/staging
# The debug module (-m debug) shows variables' content or dumps messages.
# by default uses the inventory set into ansible.cfg, thus writing
!ansible all -m debug -a 'var=env_name'
Explanation: Environment variables
N.B. ansible environment variables are not related with process environment
You define your host groups in the environment, e.g.:
course
ansible
staging
Ansible defines two default groups: all and ungrouped.
You can assign variables to all hosts using the all group.
End of explanation
# Solution
!ansible all -i staging -m debug -a 'var=env_name'
# Use this cell for the exercise
Explanation: Exercise
Dump env_name tied to the staging inventory.
which is the expected output?
what ties the "staging" inventory file to group_vars/staging?
End of explanation
#
# Read the inventory and try to predict the output of
#
!ansible course -i staging -m debug -a 'var=proxy_env'
Explanation: Exercise
End of explanation |
10,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms
Step8: I worked with Hunter, Jessica, and Brett. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def solve_euler(derivs, y0, x):
Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
h = x[1]-x[0]
y = np.zeros_like(x)
y[0] = y0
for i in range(len(x)-1):
y[i+1] = y[i] + h*derivs(y[i],x[i])
return y
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
def solve_midpoint(derivs, y0, x):
Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
h = x[1]-x[0]
y = np.zeros_like(x)
y[0] = y0
for i in range(len(x)-1):
y[i+1] = y[i] + h*derivs(y[i]+(h/2)*derivs(y[i],x[i]),x[i]+h/2)
return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
def solve_exact(x):
compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
def function(point):
return .25*np.exp(2*point) - .5*point - .25
y = np.array([function(i) for i in x])
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
End of explanation
f, (ax1, ax2) = plt.subplots(2, 1,figsize=(12,8))
ax1.plot(x,y1,label = 'Euler',color='red');
ax1.plot(x,y2,label = 'Midpoint',color='blue');
ax1.plot(x,y3,label = 'Exact',color='black');
ax1.plot(x,y4,label = 'ODE',color='green');
ax1.set_title('Four Differential Methods')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax2.plot(x,(abs(y1-y3)),label = 'Euler Difference',color='red');
ax2.plot(x,(abs(y2-y3)),label = 'Midpoint Difference',color='blue');
ax2.plot(x,(abs(newy4-y3)),label = 'ODE Difference',color='green');
ax2.set_title('Error for Each Differential Method');
ax2.set_ylim(-.1,1);
ax2.set_xlabel('x')
ax2.set_ylabel('y')
plt.tight_layout()
ax2.legend();
ax1.legend();
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a single figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
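The plotting cell above assumes that x, y1, y2, y3, y4 and newy4 were computed in an earlier cell that is not shown here. A minimal sketch of that setup, with names chosen only to match the plotting code, could look like this:
```
# Minimal sketch of the assumed setup for the plotting cell (not shown in the original).
def derivs(y, x):
    return x + 2*y

x = np.linspace(0, 1, 11)           # N=11 points on [0,1], so h=0.1
y1 = solve_euler(derivs, 0, x)      # Euler
y2 = solve_midpoint(derivs, 0, x)   # Midpoint
y3 = solve_exact(x)                 # exact solution
y4 = odeint(derivs, 0, x)           # odeint returns shape (len(x), 1)
newy4 = y4.flatten()                # flattened odeint result used in the error plot
```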
End of explanation
assert True # leave this for grading the plots
Explanation: I worked with Hunter, Jessica, and Brett.
End of explanation |
10,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TODO
Step1: Coefficient to account for the helium contribution to the gas mass (see Notes)
Step2: Coefficient used to rescale the molecular gas mass
Step3: Path for the figures that go into the paper
Step4: Path to the saved models
Step5: A wrapper that saves the model so it does not have to be recomputed over and over. Used for plot_2f_vs_1f.
Step6: For the major axis
Step7: For the case of an infinitely thin disk
Step8: Two other mechanisms from http
Step9: Hunter et al (1998), 'competition with shear' according to Leroy
Step10: Rotation curve of a thin disk
Step11: Function that prints statistics for the photometries, with the total disk mass $M_d = 2\pi h^2 \Sigma(0)$ added as well (just note that the pc-per-arcsec conversion has to be applied there)
Step12: Prefix for bad photometries so that they are not used at the end.
Step13: Function that returns the total density profile for the two-disk model when needed
Step14: Cycle over line styles so that curves stay distinguishable where many of them share one figure and the colors blend.
Step15: Paint the background as a "zebra" with some period.
Step16: Compare with the Romeo & Falstad (2013) estimate https
Step17: Two-component version
Step18: Comparison function against observations
Step19: Function that corrects the central surface brightness for inclination (bringing the disk to face-on view). Taken from http
Step20: Function for analyzing the influence of parameters. It takes the standard parameters, turns them into lists, runs the model for every entry in the list, and then measures the mean and std. Several parameters can be varied at once. | Python Code:
%run ../../utils/load_notebook.py
from instabilities import *
import numpy as np
Explanation: TODO: make this module properly importable
End of explanation
He_coeff = 1.36
Explanation: Coefficient to account for the helium contribution to the gas mass (see Notes):
End of explanation
X_CO = 1.9
Explanation: Coefficient used to rescale the molecular gas mass:
End of explanation
paper_imgs_dir = r'C:\Users\root\Dropbox\RotationCurves\PhD\paper2\imgs\\'
Explanation: Path for the figures that go into the paper:
End of explanation
models_path = '..\\..\\notebooks\\2f\\test_short\\models\\'
Explanation: Path to the saved models:
End of explanation
def save_model(path):
'''Wrapper for plot_2f_vs_1f that saves the model parameters to the given .npy file path'''
def real_decorator(function):
def wrapper(*args, **kwargs):
dictionary = kwargs.copy()
rr = zip(*kwargs['total_gas_data'])[0]
dictionary.pop('ax', None)
for i in dictionary.items():
if callable(i[1]):
if i[0] == 'epicycl':
dictionary[i[0]] = [i[1](kwargs['gas_approx'], r, kwargs['scale']) for r in rr]
else:
try:
dictionary[i[0]] = [i[1](r) for r in rr]
except ValueError:
dictionary[i[0]] = i[1](rr)
np.save(path, dictionary)
return function(*args, **kwargs)
return wrapper
return real_decorator
def calc_scale(D):
'''Scale in kpc/arcsec from the distance in Mpc'''
return (D*1000*2*np.pi)/(360*60*60.)
def flat_end(argument):
'''Decorator that continues the function at the level of its last value'''
def real_decorator(function):
def wrapper(*args, **kwargs):
if args[0] < argument:
return function(*args, **kwargs)
else:
return function(argument, *args[1:], **kwargs)
return wrapper
return real_decorator
Explanation: A wrapper that saves the model so it does not have to be recomputed over and over. Used for plot_2f_vs_1f.
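A hypothetical usage sketch (the file name is made up): wrapping plot_2f_vs_1f once gives a version that both draws the figure and dumps the evaluated keyword arguments to disk.
```
# Hypothetical usage sketch; 'example_galaxy.npy' is an arbitrary file name.
saved_plot_2f_vs_1f = save_model(models_path + 'example_galaxy.npy')(plot_2f_vs_1f)
# Calling saved_plot_2f_vs_1f(ax=..., total_gas_data=..., epicycl=..., ...) behaves like
# plot_2f_vs_1f but additionally writes the evaluated parameters to the .npy file.
```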
End of explanation
# sig_maj_lim=None
# spl_maj=None
# @flat_end(sig_maj_lim)
# def sig_R_maj_minmin(r, spl_maj=spl_maj):
# return spl_maj(r).item()
# @flat_end(sig_maj_lim)
# def sig_R_maj_min(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_max(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(0.5*sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmax(r, spl_maj=spl_maj):
# return spl_maj(r)*sqrt(2)/sin_i
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmaxtrue(r, spl_maj=spl_maj):
# return spl_maj(r)/sin_i/sqrt(sigPhi_to_sigR_real(r))
# sig_min_lim=None
# spl_min=None
# @flat_end(sig_min_lim)
# def sig_R_minor_minmin(r, spl_min=spl_min):
# return spl_min(r).item()
# @flat_end(sig_min_lim)
# def sig_R_minor_min(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_max(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_maxmax(r, spl_min=spl_min):
# return spl_min(r)/sin_i
# TODO: move to proper place
def plot_data_lim(ax, data_lim):
'''Vertical line marking the end of the data'''
ax.axvline(x=data_lim, ls='-.', color='black', alpha=0.5)
def plot_disc_scale(scale, ax, text=None):
'''Marks the disk scale length'''
ax.plot([scale, scale], [0., 0.05], '-', lw=6., color='black')
if text:
ax.annotate(text, xy=(scale, 0.025), xytext=(scale, 0.065), textcoords='data', arrowprops=dict(arrowstyle="->"))
def plot_Q_levels(ax, Qs, style='--', color='grey', alpha=0.4):
'''Helper to draw horizontal lines at various $Q^{-1}$ levels:'''
for Q in Qs:
ax.axhline(y=1./Q, ls=style, color=color, alpha=alpha)
def plot_2f_vs_1f(ax=None, total_gas_data=None, epicycl=None, gas_approx=None, sound_vel=None, scale=None, sigma_max=None, sigma_min=None, star_density_max=None,
star_density_min=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None, verbose=False, **kwargs):
'''Comparison plot of the 2F vs 1F criterion for different photometries and sig_R values,
where the total gas is fed in; the result is NOT corrected for axisymmetric perturbations.'''
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_max,
star_density=star_density_min, verbose=verbose))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_min,
star_density=star_density_max, verbose=verbose))
rr = zip(*total_gas_data)[0]
ax.fill_between(rr, invQeff_min, invQeff_max, color=color, alpha=alpha, label=label)
ax.plot(rr, invQeff_min, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQeff_max, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQg, 'v-', color='b')
ax.set_ylim(0., 1.5)
ax.set_xlim(0., data_lim+50.)
plot_data_lim(ax, data_lim)
for h, annot in disk_scales:
plot_disc_scale(h, ax, annot)
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.legend()
Explanation: For the major axis: $\sigma^2_{maj} = \sigma^2_{\varphi}\sin^2 i + \sigma^2_{z}\cos^2 i$, which gives the approximate bounds
$$\sigma_{maj} < \frac{\sigma_{maj}}{\sqrt{\sin^2 i + 0.49\cos^2 i}}< \sigma_R = \frac{\sigma_{maj}}{\sqrt{f\sin^2 i + \alpha^2\cos^2 i}} ~< \frac{\sigma_{maj}}{\sqrt{0.5\sin^2 i + 0.09\cos^2 i}} < \frac{\sqrt{2}\sigma_{maj}}{\sin i} \;\left(\text{or } \frac{\sigma_{maj}}{\sqrt{f}\sin i}\right),$$
or a more accurate estimate can be obtained by constructing $f$ (currently $0.5 < f < 1$).
For the minor axis: $\sigma^2_{min} = \sigma^2_{R}\sin^2 i + \sigma^2_{z}\cos^2 i$ and the bounds
$$\sigma_{min} < \frac{\sigma_{min}}{\sqrt{\sin^2 i + 0.49\cos^2 i}} < \sigma_R = \frac{\sigma_{min}}{\sqrt{\sin^2 i + \alpha^2\cos^2 i}} ~< \frac{\sigma_{min}}{\sqrt{\sin^2 i + 0.09\cos^2 i}} < \frac{\sigma_{min}}{\sin i}$$
Accordingly, we have 5 estimates from maj and 4 estimates from min.
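A small numeric illustration of these bounds (a sketch only; the dispersions and the inclination below are made-up values, not measurements of any galaxy in the sample):
```
# Hedged illustration of the sigma_R bounds above; all numbers are assumed.
import numpy as np

sig_maj, sig_min = 80., 70.                      # observed LOS dispersions, km/s
i = np.radians(50.)                              # assumed inclination
sin_i, cos_i = np.sin(i), np.cos(i)

sig_R_from_maj = (sig_maj / np.sqrt(sin_i**2 + 0.49*cos_i**2),
                  sig_maj / np.sqrt(0.5*sin_i**2 + 0.09*cos_i**2))
sig_R_from_min = (sig_min / np.sqrt(sin_i**2 + 0.49*cos_i**2),
                  sig_min / np.sqrt(sin_i**2 + 0.09*cos_i**2))
print('sigma_R from maj: {:.1f}..{:.1f} km/s'.format(*sig_R_from_maj))
print('sigma_R from min: {:.1f}..{:.1f} km/s'.format(*sig_R_from_min))
```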
End of explanation
def epicyclicFreq_real(poly_gas, R, resolution):
'''Direct calculation of the epicyclic frequency at radius R for a spline or a polynomial'''
try:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.deriv()(R) / poly_gas(R)) / (R * resolution )
except:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.derivative()(R) / poly_gas(R)) / (R * resolution )
Explanation: For the case of an infinitely thin disk: $$\kappa=\frac{3}{R}\frac{d\Phi}{dR}+\frac{d^2\Phi}{dR^2}$$
where $\Phi$ is the gravitational potential; it is not actually needed, however, because there is a simpler formula: $$\kappa=\sqrt{2}\frac{\vartheta_c}{R}\sqrt{1+\frac{R}{\vartheta_c}\frac{d\vartheta_c}{dR}}$$
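A hedged usage sketch: fit a toy rotation curve with a polynomial and evaluate the epicyclic frequency at one radius. The data points and the distance are invented, and sqrt is assumed to be available through the star import at the top of the notebook.
```
# Hedged usage sketch; the rotation-curve points and D = 10 Mpc are assumed.
import numpy as np

r_obs = np.array([10., 30., 60., 100., 150.])     # arcsec
v_obs = np.array([120., 180., 200., 205., 207.])  # km/s
poly_gas = np.poly1d(np.polyfit(r_obs, v_obs, deg=3))

scale = calc_scale(10.)                           # kpc/arcsec for D = 10 Mpc
print(epicyclicFreq_real(poly_gas, 50., scale))   # kappa at R = 50 arcsec
```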
End of explanation
def Sigma_crit_S04(gas_dens, r_gas, star_surf_dens):
return 6.1 * gas_dens / (gas_dens + star_surf_dens(r_gas))
Explanation: Two other mechanisms from http://iopscience.iop.org/article/10.1088/0004-6256/148/4/69/pdf:
Schaye (2004), 'cold gas phase':
$$\Sigma_g > 6.1 f_g^{0.3} Z^{-0.3} I^{0.23}$$
or, for a constant metallicity of 0.1 $Z_{sun}$ and an interstellar flux of ionizing photons of 10^6 cm−2 s−1:
$$\Sigma_g > 6.1 \frac{\Sigma_g}{\Sigma_g + \Sigma_s}$$
End of explanation
def oort_a(r, gas_vel):
try:
return 0.5 * (gas_vel(r)/r - gas_vel.deriv()(r))
except:
return 0.5 * (gas_vel(r)/r - gas_vel.derivative()(r))
def Sigma_crit_A(r, gas_vel, alpha, sound_vel):
G = 4.32
return alpha * (sound_vel*oort_a(r, gas_vel)) / (np.pi*G)
Explanation: Hunter et al (1998), 'competition with shear' according to Leroy:
$$\Sigma_A = \alpha_A\frac{\sigma_g A}{\pi G}$$
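A quick hedged sketch of how the shear criterion could be evaluated with the helpers above (the fitted rotation curve and every number, including alpha, are assumed):
```
# Hedged usage sketch; the fitted rotation curve and alpha are assumed values.
import numpy as np

gas_vel = np.poly1d(np.polyfit([10., 50., 100., 150.], [150., 195., 205., 207.], deg=3))
print(oort_a(60., gas_vel))                                  # Oort A at R = 60
print(Sigma_crit_A(60., gas_vel, alpha=2.5, sound_vel=11.))  # critical surface density
```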
End of explanation
from scipy.special import i0, i1, k0, k1
def disc_vel(r, Sigma0, h, scale, Sigma0_2=None, h_2=None):
G = 4.3
bessels = i0(0.5*r/h)*k0(0.5*r/h) - i1(0.5*r/h)*k1(0.5*r/h)
if h_2 is None:
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels)
else: # two-disk model
bessels2 = i0(0.5*r/h_2)*k0(0.5*r/h_2) - i1(0.5*r/h_2)*k1(0.5*r/h_2)
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels + 2*np.pi*G*Sigma0_2*r*scale * 0.5*r/h_2 * bessels2)
Explanation: Rotation curve of a thin disk:
$$\frac{v^2}{r} = 2\pi G \Sigma_0 \frac{r}{2h} \left[I_0(\frac{r}{2h})K_0(\frac{r}{2h}) - I_1(\frac{r}{2h})K_1(\frac{r}{2h})\right]$$
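A hedged usage sketch of disc_vel for a single exponential disk; the central surface density, scale length and distance below are all assumed numbers.
```
# Hedged usage sketch; Sigma0, h and the distance are assumed values.
import numpy as np

r = np.linspace(1., 150., 50)                                  # arcsec
v_disk = disc_vel(r, Sigma0=500., h=20., scale=calc_scale(10.))
print(v_disk[:5])                                              # km/s
```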
End of explanation
from tabulate import tabulate
import pandas as pd
def show_all_photometry_table(all_photometry, scale):
'''scale in kpc/arcsec'''
copy = [list(l) for l in all_photometry]
# everything below exists because two-disk models occur and have to be handled differently
for entry in copy:
if type(entry[5]) == tuple:
entry[5] = (round(entry[5][0], 2), round(entry[5][1], 2))
else:
entry[5] = round(entry[5], 2)
for entry in copy:
if type(entry[4]) == tuple:
entry[4] = (round(entry[4][0], 2), round(entry[4][1], 2))
else:
entry[4] = round(entry[4], 2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(2*math.pi*entry[5][0]**2 * entry[-1][0](0) * (scale * 1000.)**2 +
2*math.pi*entry[5][1]**2 * entry[-1][1](0) * (scale * 1000.)**2)
else:
entry.append(2*math.pi*entry[5]**2 * entry[-1](0) * (scale * 1000.)**2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(entry[7][0](0) + entry[7][1](0))
else:
entry.append(entry[7](0))
df = pd.DataFrame(data=copy, columns=['Name', 'r_eff', 'mu_eff', 'n', 'mu0_d', 'h_disc', 'M/L', 'surf', 'M_d/M_sun', 'Sigma_0'])
df['M/L'] = df['M/L'].apply(lambda l: '%2.2f'%l)
# df['Sigma_0'] = df['surf'].map(lambda l:l(0))
df['Sigma_0'] = df['Sigma_0'].apply(lambda l: '%2.0f' % l)
# df['M_d/M_sun'] = 2*math.pi*df['h_disc']**2 * df['surf'].map(lambda l:l(0)) * (scale * 1000.)**2
df['M_d/M_sun'] = df['M_d/M_sun'].apply(lambda l: '%.2E.' % l)
df.drop('surf', axis=1, inplace=True)
print tabulate(df, headers='keys', tablefmt='psql', floatfmt=".2f")
Explanation: Function that prints statistics for the photometries, with the total disk mass $M_d = 2\pi h^2 \Sigma(0)$ added as well (just note that the pc-per-arcsec conversion has to be applied there):
End of explanation
BAD_MODEL_PREFIX = 'b:'
Explanation: Prefix for bad photometries so that they are not used at the end.
End of explanation
def tot_dens(dens):
if type(dens) == tuple:
star_density = lambda l: dens[0](l) + dens[1](l)
else:
star_density = lambda l: dens(l)
return star_density
Explanation: Function that returns the total density profile for the two-disk model when needed:
End of explanation
from itertools import cycle
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
Explanation: Cycle over line styles so that curves stay distinguishable where many of them share one figure and the colors blend.
End of explanation
def foreground_zebra(ax, step, alpha):
for i in range(int(ax.get_xlim()[1])+1):
if i%2 == 0:
ax.axvspan(i*step, (i+1)*step, color='grey', alpha=alpha)
Explanation: Paint the background as a "zebra" with some period.
End of explanation
from math import pi
def romeo_Qinv(r=None, epicycl=None, sound_vel=None, sound_vel_CO=6., sound_vel_HI=11., sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, verbose=False, thin=True, He_corr=False):
'''Returns the Q approximation from Romeo&Falstad(2013), i.e. the three-component Romeo&Wiegert(2011) version, and the most unstable component.'''
G = 4.32
kappa = epicycl
if not He_corr:
CO_density = He_coeff*CO_density
HI_density = He_coeff*HI_density
if sound_vel is not None:
sound_vel_CO=sound_vel
sound_vel_HI=sound_vel
Q_star = kappa*sigma_R/(pi*G*star_density) # not 3.36
Q_CO = kappa*sound_vel_CO/(pi*G*CO_density)
Q_HI = kappa*sound_vel_HI/(pi*G*HI_density)
if not thin:
T_CO, T_HI = 1.5, 1.5
if alpha > 0 and alpha <= 0.5:
T_star = 1. + 0.6*alpha**2
else:
T_star = 0.8 + 0.7*alpha
else:
T_CO, T_HI, T_star = 1., 1., 1.
dispersions = [sigma_R, sound_vel_HI, sound_vel_CO]
QTs = [Q_star*T_star, Q_HI*T_HI, Q_CO*T_CO]
components = ['star', 'HI', 'H2']
mindex = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(mindex)
print 'min component: {}'.format(components[mindex])
sig_m = dispersions[mindex]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
if verbose:
print 'Ws/TQs={:5.3f} WHI/TQHI={:5.3f} WCO/TQCO={:5.3f}'.format(W_i(sig_m, dispersions[0])/QTs[0], W_i(sig_m, dispersions[1])/QTs[1], W_i(sig_m, dispersions[2])/QTs[2])
print 'Ws={:5.3f} WHI={:5.3f} WCO={:5.3f}'.format(W_i(sig_m, dispersions[0]), W_i(sig_m, dispersions[1]), W_i(sig_m, dispersions[2]))
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1] + W_i(sig_m, dispersions[2])/QTs[2], components[mindex]
Explanation: Let us compare with the Romeo & Falstad (2013) estimate https://ui.adsabs.harvard.edu/#abs/2013MNRAS.433.1389R/abstract:
$$Q_N^{-1} = \sum_{i=1}^{N}\frac{W_i}{Q_iT_i}$$ where
$$Q_i = \frac{\kappa\sigma_{R,i}}{\pi G\Sigma_i}$$
$$T_i= \begin{cases} 1 + 0.6(\frac{\sigma_z}{\sigma_R})^2_i, & \mbox{if } 0.0 \le \frac{\sigma_z}{\sigma_R} \le 0.5,
\ 0.8 + 0.7(\frac{\sigma_z}{\sigma_R})_i, & \mbox{if } 0.5 \le \frac{\sigma_z}{\sigma_R} \le 1.0 \end{cases}$$
$$W_i = \frac{2\sigma_{R,m}\sigma_{R,i}}{\sigma_{R,m}^2 + \sigma_{R,i}^2},$$
$$m:\ index\ of\ min(T_iQ_i)$$
In the most developed model there are 3 components: HI, CO and stars. Their model assumes $(\sigma_z/\sigma_R)_{CO} = (\sigma_z/\sigma_R)_{HI} = 1$, i.e. $T_{CO} = T_{HI} = 1.5$. I will take the sound speed in both media equal to 11 km/s. The stars in my two limiting cases have $\alpha$ equal to 0.3 and 0.7, hence $T_{0.3} = 1.05;\ T_{0.7}=1.29$.
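A hedged numeric illustration of the approximation; every input value below is invented, purely to show the call, and is not data for any real galaxy.
```
# Hedged illustration; all inputs are made-up numbers.
invQ, worst = romeo_Qinv(epicycl=30., sigma_R=60., star_density=100.,
                         HI_density=8., CO_density=4., alpha=0.5,
                         thin=False, He_corr=False, verbose=True)
print('Q_RF13^-1 = {:.3f}, most unstable component: {}'.format(invQ, worst))
```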
End of explanation
def romeo_Qinv2(r=None, epicycl=None, sound_vel=None, sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, verbose=False, thin=True, He_corr=False, **kwargs):
'''Returns the two-component Q approximation from Romeo&Falstad(2011).'''
G = 4.32
kappa = epicycl
if not He_corr:
CO_density_ = He_coeff*CO_density
HI_density_ = He_coeff*HI_density
else:
CO_density_ = CO_density
HI_density_ = HI_density
gas = CO_density_ + HI_density_
Q_star = kappa*sigma_R/(pi*G*star_density) # not 3.36
Q_g = kappa*sound_vel/(pi*G*gas)
if not thin:
T_g = 1.5
if alpha > 0 and alpha <= 0.5:
T_star = 1. + 0.6*alpha**2
else:
T_star = 0.8 + 0.7*alpha
else:
T_g, T_star = 1., 1.
dispersions = [sigma_R, sound_vel]
QTs = [Q_star*T_star, Q_g*T_g]
components = ['star', 'gas']
mindex = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(mindex)
print 'min component: {}'.format(components[mindex])
sig_m = dispersions[mindex]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
if verbose:
print 'Ws/TQs={:5.3f} Wg/TQg={:5.3f}'.format(W_i(sig_m, dispersions[0])/QTs[0], W_i(sig_m, dispersions[1])/QTs[1])
print 'Ws={:5.3f} Wg={:5.3f}'.format(W_i(sig_m, dispersions[0]), W_i(sig_m, dispersions[1]))
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1], components[mindex]
Explanation: Two-component version:
End of explanation
def plot_RF13_vs_2F(r_g_dens=None, HI_gas_dens=None, CO_gas_dens=None, epicycl=None, sound_vel_CO=6., sound_vel_HI=11., sound_vel=None, sigma_R_max=None, sigma_R_min=None,
star_density=None, alpha_max=None, alpha_min=None, thin=True, verbose=False, scale=None, gas_approx=None):
'''Gas densities are passed in NOT corrected for helium.'''
fig = plt.figure(figsize=[20, 5])
ax = plt.subplot(131)
totgas = zip(r_g_dens, [He_coeff*(l[0]+l[1]) for l in zip(HI_gas_dens, CO_gas_dens)])[1:]
if sound_vel is not None:
sound_vel_CO=sound_vel
sound_vel_HI=sound_vel
if verbose:
print 'sig_R_max case:'
romeo_min = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Qinv(r=r, epicycl=epicycl(gas_approx, r, scale), sound_vel_CO=sound_vel_CO, sound_vel_HI=sound_vel_HI, sigma_R=sigma_R_max(r),
star_density=star_density(r), HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_min, verbose=verbose, He_corr=True, thin=thin)
romeo_min.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker='o', color=color)
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_max,
star_density=star_density))
if verbose:
print 'sig_R_min case:'
romeo_max = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Qinv(r=r, epicycl=epicycl(gas_approx, r, scale), sound_vel_CO=sound_vel_CO, sound_vel_HI=sound_vel_HI, sigma_R=sigma_R_min(r),
star_density=star_density(r), HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_max, verbose=verbose, He_corr=True, thin=thin)
romeo_max.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker = 's', color=color)
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_min,
star_density=star_density))
ax.plot(r_g_dens[1:], invQeff_min, '-', alpha=0.5, color='r')
ax.plot(r_g_dens[1:], invQeff_max, '-', alpha=0.5, color='r')
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.set_xlim(0)
ax.set_ylim(0)
ax.legend([matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='g', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='b', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='m', mec='none', marker='o')],
['star', 'HI', 'H2'], numpoints=1, markerscale=1, loc='upper right') #add custom legend
ax.set_title('RF13: major component')
ax = plt.subplot(132)
ax.plot(romeo_min[1:], invQeff_min, 'o')
ax.plot(romeo_max[1:], invQeff_max, 'o', color='m', alpha=0.5)
ax.set_xlabel('Romeo')
ax.set_ylabel('2F')
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
ax.plot(ax.get_xlim(), ax.get_ylim(), '--')
ax = plt.subplot(133)
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_min[1:], invQeff_min)], 'o-')
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_max[1:], invQeff_max)], 'o-', color='m', alpha=0.5)
ax.set_xlabel('R')
ax.set_ylabel('[2F]/[Romeo]');
Explanation: Comparison function against the observations:
End of explanation
def mu_face_on(mu0d, cos_i):
return mu0d + 2.5*np.log10(1./cos_i)
Explanation: Function that corrects the central surface brightness for inclination (bringing the disk to face-on view). Taken from http://www.astronet.ru/db/msg/1166765/node20.html, (61).
End of explanation
def plot_param_depend(ax=None, N=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None, max_range=False, **kwargs):
params = kwargs.copy()
for p in params.keys():
if p == 'total_gas_data':
depth = lambda L: isinstance(L, list) and max(map(depth, L))+1 #depth of nested lists
if depth(params[p]) == 1:
params[p] = [params[p]]*N
elif type(params[p]) is not list:
params[p] = [params[p]]*N
result = []
for i in range(N):
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_max'][i],
star_density=params['star_density_min'][i]))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_min'][i],
star_density=params['star_density_max'][i]))
result.append((invQeff_min, invQeff_max))
rr = zip(*params['total_gas_data'][0])[0]
qmins = []
qmaxs = []
for ind, rrr in enumerate(rr):
qmin = [result[l][0][ind] for l in range(len(result))]
qmax = [result[l][1][ind] for l in range(len(result))]
if max_range:
qmins.append(((np.min(qmin)+np.max(qmin))/2., (np.max(qmin)-np.min(qmin))/2.))
qmaxs.append(((np.min(qmax)+np.max(qmax))/2., (np.max(qmax)-np.min(qmax))/2.))
else:
qmins.append((np.mean(qmin), np.std(qmin)))
qmaxs.append((np.mean(qmax), np.std(qmax)))
ax.errorbar(rr, zip(*qmins)[0], fmt='o-', yerr=zip(*qmins)[1], elinewidth=6, alpha=0.3);
ax.errorbar(rr, zip(*qmaxs)[0], fmt='o-', yerr=zip(*qmaxs)[1])
ax.axhline(y=1., ls='-', color='grey')
ax.set_ylim(0.)
ax.set_xlim(0.)
plot_data_lim(ax, data_lim)
plot_Q_levels(ax, [1., 1.5, 2., 3.]);
Explanation: Function for analyzing the influence of parameters. It takes the standard parameters, turns them into lists, runs the model for every entry in the list, and then measures the mean and std. Several parameters can be varied at once.
End of explanation |
10,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users
Step1: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell
Step2: Load in the Wikipedia dataset
Step3: For this assignment, let us assign a unique ID to each document.
Step4: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
Step6: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https
Step7: The conversion should take a few minutes to complete.
Step8: Checkpoint
Step9: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
Step10: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
Step11: We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
Step12: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go into. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
Step13: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
Step14: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
Step15: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
Step16: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer
Step17: Since it's the dot product again, we batch it with a matrix operation
Step18: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following
Step19: Checkpoint.
Step20: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
Step21: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
Step22: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
Step23: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
Step24: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
Step25: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
Step26: There are four other documents that belong to the same bin. Which documents are they?
Step27: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
Step28: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this
Step30: With this output in mind, implement the logic for nearby bin search
Step31: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
Step32: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
Step33: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
Step34: Let's try it out with Obama
Step35: To identify the documents, it's helpful to join this table with the Wikipedia table
Step36: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius
Step37: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables
Step38: Some observations
Step39: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
Step40: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter | Python Code:
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import norm
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
# !conda upgrade -y scipy
Explanation: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell:
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/')
Explanation: Load in the Wikipedia dataset
End of explanation
wiki = wiki.add_row_number()
wiki
Explanation: For this assignment, let us assign a unique ID to each document.
End of explanation
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
Explanation: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
End of explanation
def sframe_to_scipy(column):
Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
mapping : a dictionary where mapping[j] is the word whose values are in column j.
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
Explanation: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics%29) that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
End of explanation
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
Explanation: The conversion should take a few minutes to complete.
End of explanation
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
Explanation: Checkpoint: The following code block should return 'Check passed correctly', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, it will return Error.
End of explanation
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
Explanation: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
End of explanation
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
Explanation: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
End of explanation
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
Explanation: We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
End of explanation
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
Explanation: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go into. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
End of explanation
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
Explanation: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
End of explanation
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
Explanation: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
End of explanation
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
Explanation: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
End of explanation
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
Explanation: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
By the rules of binary number representation, we just need to compute the dot product between the document vector and the vector consisting of powers of 2:
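As a tiny sanity check of the encoding, here is a 4-bit example (not part of the original assignment):
```
# 4-bit illustration of the same trick: [True, False, True, True] -> 11
import numpy as np
bits = np.array([True, False, True, True])
powers = 1 << np.arange(3, -1, -1)     # array([8, 4, 2, 1])
print(bits.dot(powers))                # 1*8 + 0*4 + 1*2 + 1*1 = 11
```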
End of explanation
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
Explanation: Since it's the dot product again, we batch it with a matrix operation:
End of explanation
def train_lsh(data, num_vector=16, seed=None):
dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = [] # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
table[bin_index].append(data_index) # YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
Explanation: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following:
Get the integer bin index for the document.
Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
Add the document id to the end of the list.
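As an aside, the same bin-building loop can also be written with collections.defaultdict, which creates the empty lists on demand. This is only an equivalent sketch, not the graded solution:
```
# Equivalent sketch using defaultdict (an aside, not the graded solution).
from collections import defaultdict

def build_table(bin_indices):
    table = defaultdict(list)
    for data_index, bin_index in enumerate(bin_indices):
        table[bin_index].append(data_index)
    return dict(table)
```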
End of explanation
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
Explanation: Checkpoint.
End of explanation
wiki[wiki['name'] == 'Barack Obama']
Explanation: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
End of explanation
for data_list in model['table']:
if 35817 in model['table'][data_list]:
print data_list
Explanation: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
End of explanation
wiki[wiki['name'] == 'Joe Biden']
Explanation: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
End of explanation
for data_list in model['table']:
if 24478 in model['table'][data_list]:
print data_list
print np.array(model['bin_index_bits'][24478], dtype=int) # list of 0/1's
print model['bin_indices'][24478] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][24478]
Explanation: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
End of explanation
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
Explanation: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
End of explanation
model['table'][model['bin_indices'][35817]]
Explanation: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
End of explanation
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
Explanation: There are four other documents that belong to the same bin. Which documents are they?
End of explanation
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
Explanation: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
End of explanation
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
Explanation: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
1. Let L be the bit representation of the bin that contains the query documents.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
To obtain candidate bins that differ from the query bin by some number of bits, we use itertools.combinations, which produces all subsets of a given size from a list. See this documentation for details.
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
(0, 1, 3)
indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.
End of explanation
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = ~alternate_bits[i] # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
candidate_set = candidate_set.union(table[nearby_bin]) # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
Explanation: With this output in mind, implement the logic for nearby bin search:
End of explanation
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
Explanation: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
End of explanation
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
Explanation: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
End of explanation
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
Explanation: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)
Explanation: Let's try it out with Obama:
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
Explanation: To identify the documents, it's helpful to join this table with the Wikipedia table:
End of explanation
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
print "avg: ",average_distance_from_query
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
Explanation: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
End of explanation
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
End of explanation
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
Explanation: Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
Quiz Question. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden? 2
Quiz Question. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better? 7
Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entire dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics:
Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
Average cosine distance of the neighbors from the query
Then we run LSH multiple times with different search radii.
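As a concrete illustration of the precision@10 metric described above (with made-up toy IDs, not real document IDs): if 3 of the 10 LSH neighbors appear among the true nearest neighbors, precision@10 is 0.3.
toy_ground_truth = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])      # pretend: (part of) the true nearest neighbors
toy_lsh_result = set([1, 2, 3, 11, 12, 13, 14, 15, 16, 17])  # pretend: the 10 ids returned by LSH
print len(toy_lsh_result & toy_ground_truth) / 10.0          # 0.3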
End of explanation
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
End of explanation
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20. We fix the search radius to 3.
Allow a few minutes for the following cell to complete.
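For a rough sense of why more random vectors shrink the candidate pool (a back-of-the-envelope aside, not part of the assignment, assuming corpus is the document matrix loaded earlier): each random vector adds one bit to the bin index, so the number of bins grows as 2^num_vector and each bin holds fewer documents on average.
num_documents = corpus.shape[0]
for nv in [5, 10, 20]:
    print 'num_vector = %2d -> %8d bins, ~%.1f documents per bin on average' % (nv, 2 ** nv, float(num_documents) / 2 ** nv)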
End of explanation |
10,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Visualizations with TensorFlow Data Validaiton
Learning Objectives
Install TFDV
Compute and visualize statistics
Infer a schema
Check evaluation data for errors
Check for evaluation anomalies and fix it
Check for drift and skew
Freeze the schema
Introduction
This notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
We'll use data from the Taxi Trips dataset released by the City of Chicago.
Note
Step1: Restart the kernel (Kernel > Restart kernel > Restart).
Re-run the above cell and proceed further.
Note
Step2: Load the Files
We will download our dataset from Google Cloud Storage.
Step3: Check the version
Step4: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
Step5: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data
Step6: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
Step7: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that the mean and median for trip_miles are different for the training versus the evaluation datasets. Will that cause problems?
Wow, the max tips is very different for the training versus the evaluation datasets. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the trip_seconds feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
Step8: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point
Step9: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for company in our evaluation data, that we didn't have in our training data. We also have a new value for payment_type. These should be considered anomalies, but what we decide to do about them depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point
Step10: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the tips feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
Step11: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert INT values to FLOATs, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
Step12: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
Step13: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when a data source that provides some feature values is modified between training and serving time, or when there is different logic for generating features between training and serving.
Step14: In this example we do see some drift, but it is well below the threshold that we've set.
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. | Python Code:
!pip install pyarrow==5.0.0
!pip install numpy==1.19.2
!pip install tensorflow-data-validation
Explanation: Advanced Visualizations with TensorFlow Data Validation
Learning Objectives
Install TFDV
Compute and visualize statistics
Infer a schema
Check evaluation data for errors
Check for evaluation anomalies and fix them
Check for drift and skew
Freeze the schema
Introduction
This notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
We'll use data from the Taxi Trips dataset released by the City of Chicago.
Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.
Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI.
Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about ML fairness.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
Install Libraries
End of explanation
import pandas as pd
import tensorflow_data_validation as tfdv
import sys
import warnings
warnings.filterwarnings('ignore')
print('Installing TensorFlow Data Validation')
!pip install -q tensorflow_data_validation[visualization]
Explanation: Restart the kernel (Kernel > Restart kernel > Restart).
Re-run the above cell and proceed further.
Note: Please ignore any incompatibility warnings and errors.
Install TFDV
This will pull in all the dependencies, which will take a minute. Please ignore the warnings or errors regarding incompatible dependency versions.
End of explanation
import os
import tempfile, urllib, zipfile
# Set up some globals for our file paths
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv')
EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv')
SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv')
# Download the zip file from GCP and unzip it
zip, headers = urllib.request.urlretrieve('https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/chicago_data.zip')
zipfile.ZipFile(zip).extractall(BASE_DIR)
zipfile.ZipFile(zip).close()
print("Here's what we downloaded:")
!ls -R {os.path.join(BASE_DIR, 'data')}
Explanation: Load the Files
We will download our dataset from Google Cloud Storage.
End of explanation
import tensorflow_data_validation as tfdv
print('TFDV version: {}'.format(tfdv.version.__version__))
Explanation: Check the version
End of explanation
# TODO
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
Explanation: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
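As an aside (not used further in this notebook), the same kind of statistics can also be computed from TFRecord files of serialized tf.train.Example protos. A minimal sketch follows; the TFRecord path is a hypothetical placeholder, since the dataset downloaded above only contains CSV files, so the call itself is left commented out.
# Hypothetical path -- the downloaded dataset only ships CSVs, so this file does not actually exist.
TRAIN_TFRECORD = os.path.join(DATA_DIR, 'train', 'data.tfrecord')
# train_stats_tfr = tfdv.generate_statistics_from_tfrecord(data_location=TRAIN_TFRECORD)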
End of explanation
# TODO
tfdv.visualize_statistics(train_stats)
Explanation: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data:
Notice that numeric features and categorical features are visualized separately, and that charts are displayed showing the distributions for each feature.
Notice that features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature.
Notice that there are no examples with values for pickup_census_tract. This is an opportunity for dimensionality reduction!
Try clicking "expand" above the charts to change the display
Try hovering over bars in the charts to display bucket ranges and counts
Try switching between the log and linear scales, and notice how the log scale reveals much more detail about the payment_type categorical feature
Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages
End of explanation
# Infers schema from the input statistics.
# TODO
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
Explanation: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
End of explanation
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
Explanation: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that the mean and median for trip_miles are different for the training versus the evaluation datasets. Will that cause problems?
Wow, the max tips is very different for the training versus the evaluation datasets. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the trip_seconds feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
End of explanation
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
# TODO
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
Explanation: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point: What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?
End of explanation
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
# TODO
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
Explanation: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for company in our evaluation data, that we didn't have in our training data. We also have a new value for payment_type. These should be considered anomalies, but what we decide to do about them depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point: How would our evaluation results be affected if we did not fix these problems?
Unless we change our evaluation dataset we can't fix everything, but we can fix things in the schema that we're comfortable accepting. That includes relaxing our view of what is and what is not an anomaly for particular features, as well as updating our schema to include missing values for categorical features. TFDV has enabled us to discover what we need to fix.
Let's make those fixes now, and then review one more time.
End of explanation
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the tips feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
End of explanation
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: We'll deal with the tips feature below. We also have an INT value in our trip seconds, where our schema expected a FLOAT. By making us aware of that difference, TFDV helps uncover inconsistencies in the way the data is generated for training and serving. It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert INT values to FLOATs, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
End of explanation
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that 'tips' feature is not in SERVING environment.
tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(
serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
Explanation: Now we just have the tips feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
End of explanation
# Add skew comparator for 'payment_type' feature.
payment_type = tfdv.get_feature(schema, 'payment_type')
payment_type.skew_comparator.infinity_norm.threshold = 0.01
# Add drift comparator for 'company' feature.
company=tfdv.get_feature(schema, 'company')
company.drift_comparator.infinity_norm.threshold = 0.001
# TODO
skew_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
tfdv.display_anomalies(skew_anomalies)
Explanation: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when:
A data source that provides some feature values is modified between training and serving time
There is different logic for generating features between training and serving. For example, if you apply some transformation only in one of the two code paths.
Distribution Skew
Distribution skew occurs when the distribution of the training dataset is significantly different from the distribution of the serving dataset. One of the key causes for distribution skew is using different code or different data sources to generate the training dataset. Another reason is a faulty sampling mechanism that chooses a non-representative subsample of the serving data to train on.
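For intuition, the L-infinity distance used by the drift and skew comparators above is simply the largest absolute difference between the normalized value distributions of a feature in the two datasets being compared. A toy example with made-up category frequencies (not taken from the taxi data):
import numpy as np
baseline = np.array([0.60, 0.30, 0.10])    # made-up relative frequencies in the baseline dataset
comparison = np.array([0.55, 0.30, 0.15])  # made-up relative frequencies in the comparison dataset
print('L-infinity distance: {:.2f}'.format(np.abs(baseline - comparison).max()))  # 0.05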
End of explanation
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
file_io.recursive_create_dir(OUTPUT_DIR)
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: In this example we do see some drift, but it is well below the threshold that we've set.
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
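To reuse the frozen schema later (for example in a separate validation job), it can be read back from the same file; a short sketch using TFDV's text-format loader:
# Reload the frozen schema written above and display it again.
reloaded_schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(reloaded_schema)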
End of explanation |
10,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This module helps solve systems of linear equations. There are several ways of doing this. The first is to just pass the coefficients as a list of lists. Say we want to solve the system of equations
Step1: Clearly, the solution set $(2, -3)$ satisfies the two equations above.
If a system of equations that has no unique solution is given, a warning is printed and None is returned.
Step2: Additionally, the coefficients of the equation can be read from a text file, where expressions are evaluated before they are read. For example, consider the following system of equations | Python Code:
import linear_solver as ls
xs = ls.solve_linear_system(
[[1, -1, 5],
[1, 1, -1]])
print(xs)
Explanation: This module helps solve systems of linear equations. There are several ways of doing this. The first is to just pass the coefficients as a list of lists. Say we want to solve the system of equations:
$$
\begin{array}{c}
x - y = 5 \\
x + y = -1
\end{array}
$$
This is done with a simple call to linear_solver.solve_linear_system(), like so
End of explanation
xs = ls.solve_linear_system(
[[1, 1, 0],
[2, 2, 0]])
print(xs)
xs = ls.solve_linear_system(
[[1, 1, 0],
[2, 2, 1]])
print(xs)
Explanation: Clearly, the solution set $(2, -3)$ satisfies the two equations above.
If a system of equations that has no unique solution is given, a warning is printed and None is returned.
End of explanation
sol = ls.solve_linear_system('coefficients.txt')
for i, row in enumerate(sol):
print('m_{0} = {1:.2f}'.format(i, row[0,0]))
Explanation: Additionally, the coefficients of the equation can be read from a text file, where expressions are evaluated before they are read. For example, consider the following system of equations:
$$
\begin{array}{c}
22m_1 + 22m_2 - m_3 = 0 \\
(0.1)(22)m_1 + (0.9)(22)m_2 - 0.6m_3 = 0 \\
\frac{22}{0.68} m_1 + \frac{22}{0.78} m_2 = (500)(3.785)
\end{array}
$$
We can put these coefficients into a text file, 'coefficients.txt', which has the contents
<pre>
# contents of coefficients.txt
22 22 -1 0
0.1*22 0.9*22 -0.6 0
22/0.68 22/0.78 0 500*3.785
</pre>
and then pass that file to the solver function.
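As a sanity check (not part of this module), the same three-equation system can be solved directly with NumPy and compared against the result above; the expressions from coefficients.txt are written out explicitly here:
import numpy as np
A = np.array([[22, 22, -1],
              [0.1 * 22, 0.9 * 22, -0.6],
              [22 / 0.68, 22 / 0.78, 0]])
b = np.array([0, 0, 500 * 3.785])
print(np.linalg.solve(A, b))  # should agree with the m_0, m_1, m_2 values printed above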
End of explanation |
10,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ROP Exam Analysis for NIRS and Pulse Ox
Finalized notebook to combine Masimo and NIRS Data into one iPython Notebook.
Select ROP Subject Number and Input Times
Step1: Baseline Average Calculation
From first point of data collection to the first eye drops
Step2: First Eye Drop Avg Every 10 Sec For 5 Minutes
Step3: Second Eye Drop Avg Every 10 Sec For 5 Minutes
Step4: Third Eye Drop Avg Every 10 Sec For 5 Minutes
Step5: Average Every 10 Sec During ROP Exam for first 4 minutes
Step6: Average Every 5 Mins Hour 1-2 After ROP Exam
Step7: Average Every 15 Mins Hour 2-3 After ROP Exam
Step8: Average Every 30 Mins Hour 3-4 After ROP Exam
Step9: Average Every Hour 4-24 Hours Post ROP Exam
Step10: Mild, Moderate, and Severe Desaturation Events | Python Code:
from ROP import *
#Takes a little bit, wait a while.
#ROP Number syntax: ###
#Eye Drop syntax: HH MM HH MM HH MM
#Exam Syntax: HH MM HH MM
Explanation: ROP Exam Analysis for NIRS and Pulse Ox
Finalized notebook to combine Masimo and NIRS Data into one iPython Notebook.
Select ROP Subject Number and Input Times
End of explanation
print 'Baseline Averages\n', 'NIRS :\t', avg0NIRS, '\nPI :\t',avg0PI, '\nSpO2 :\t',avg0O2,'\nPR :\t',avg0PR,
Explanation: Baseline Average Calculation
From first point of data collection to the first eye drops
End of explanation
print resultdrops1
Explanation: First Eye Drop Avg Every 10 Sec For 5 Minutes
End of explanation
print resultdrops2
Explanation: Second Eye Drop Avg Every 10 Sec For 5 Minutes
End of explanation
print resultdrops3
Explanation: Third Eye Drop Avg Every 10 Sec For 5 Minutes
End of explanation
print result1
Explanation: Average Every 10 Sec During ROP Exam for first 4 minutes
End of explanation
print result2
Explanation: Average Every 5 Mins Hour 1-2 After ROP Exam
End of explanation
print result3
Explanation: Average Every 15 Mins Hour 2-3 After ROP Exam
End of explanation
print result4
Explanation: Average Every 30 Mins Hour 3-4 After ROP Exam
End of explanation
print result5
Explanation: Average Every Hour 4-24 Hours Post ROP Exam
End of explanation
print "Desat Counts for X mins\n"
print "Pre Mild Desat (85-89) Count: %s\t" %above, "for %s min" %((a_len*2)/60.)
print "Pre Mod Desat (81-84) Count: %s\t" %middle, "for %s min" %((m_len*2)/60.)
print "Pre Sev Desat (=< 80) Count: %s\t" %below, "for %s min\n" %((b_len*2)/60.)
print "Post Mild Desat (85-89) Count: %s\t" %above2, "for %s min" %((a_len2*2)/60.)
print "Post Mod Desat (81-84) Count: %s\t" %middle2, "for %s min" %((m_len2*2)/60.)
print "Post Sev Desat (=< 80) Count: %s\t" %below2, "for %s min\n" %((b_len2*2)/60.)
print "Data Recording Time!"
print '*' * 10
print "Pre-Exam Data Recording Length\t", X - Y # start of exam - first data point
print "Post-Exam Data Recording Length\t", Q - Z #last data point - end of exam
print "Total Data Recording Length\t", Q - Y #last data point - first data point
Explanation: Mild, Moderate, and Severe Desaturation Events
End of explanation |
10,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting topographic arrowmaps of evoked data
Load evoked data and plot arrowmaps along with the topomap for selected time
points. An arrowmap is based upon the Hosaka-Cohen transformation and
represents an estimation of the current flow underneath the MEG sensors.
They are a poor man's MNE.
See [1]_ for details.
References
.. [1] D. Cohen, H. Hosaka
"Part II magnetic field produced by a current dipole",
Journal of electrocardiology, Volume 9, Number 4, pp. 409-417, 1976.
DOI: 10.1016/S0022-0736(76)80041-6
Step1: Plot magnetometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity
Step2: Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity
Step3: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
field, data from the Vectorview (info_from) can be projected to the high
density CTF 272 system (info_to) for visualization
Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity | Python Code:
# Authors: Sheraz Khan <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.datasets.brainstorm import bst_raw
from mne import read_evokeds
from mne.viz import plot_arrowmap
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked data
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
evoked_mag = evoked.copy().pick_types(meg='mag')
evoked_grad = evoked.copy().pick_types(meg='grad')
Explanation: Plotting topographic arrowmaps of evoked data
Load evoked data and plot arrowmaps along with the topomap for selected time
points. An arrowmap is based upon the Hosaka-Cohen transformation and
represents an estimation of the current flow underneath the MEG sensors.
They are a poor man's MNE.
See [1]_ for details.
References
.. [1] D. Cohen, H. Hosaka
"Part II magnetic field produced by a current dipole",
Journal of electrocardiology, Volume 9, Number 4, pp. 409-417, 1976.
DOI: 10.1016/S0022-0736(76)80041-6
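Roughly speaking, the Hosaka-Cohen idea is that the estimated current flow runs perpendicular to the spatial gradient of the field component normal to the sensor plane. The following tiny sketch of that rotated-gradient construction is only an illustration of the concept on a synthetic field, not MNE's actual implementation:
yy, xx = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50), indexing='ij')
bz = xx * np.exp(-(xx ** 2 + yy ** 2))  # synthetic, dipole-like field pattern
d_bz_dy, d_bz_dx = np.gradient(bz)      # gradients along the two grid axes
arrow_u, arrow_v = d_bz_dy, -d_bz_dx    # 90-degree rotation: arrows follow the iso-field lines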
End of explanation
max_time_idx = np.abs(evoked_mag.data).mean(axis=0).argmax()
plot_arrowmap(evoked_mag.data[:, max_time_idx], evoked_mag.info)
# Since planar gradiometers take gradients along latitude and longitude,
# they need to be projected onto the flattened manifold spanned by the magnetometers
# or radial gradiometers before taking the gradients in the 2D Cartesian
# coordinate system for visualization on the 2D topoplot. You can use the
# ``info_from`` and ``info_to`` parameters to interpolate from
# gradiometer data to magnetometer data.
Explanation: Plot magnetometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=evoked_mag.info)
Explanation: Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
path = bst_raw.data_path()
raw_fname = path + '/MEG/bst_raw/' \
'subj001_somatosensory_20111109_01_AUX-f.ds'
raw_ctf = mne.io.read_raw_ctf(raw_fname)
raw_ctf_info = mne.pick_info(
raw_ctf.info, mne.pick_types(raw_ctf.info, meg=True, ref_meg=False))
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=raw_ctf_info, scale=6e-10)
Explanation: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
field, data from the Vectorview (info_from) can be projected to the high
density CTF 272 system (info_to) for visualization
Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation |
10,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introductory tutorial
pydov provides machine access to the data that can be visualized with the DOV viewer.
All the pydov functionalities rely on the existing DOV webservices. An in-depth overview of the available services and endpoints is provided on the accessing DOV data page. To retrieve data, pydov uses a combination of the available WFS services and the XML representation of the core DOV data.
As pydov relies on the XML data returned by the existing DOV webservices, downloading DOV data with pydov is governed by the same disclaimer that applies to the other DOV services. Be sure to consult it when using pydov!
pydov interfaces data and services hosted by the Flemish government. Therefore, some syntax of the API as well as the descriptions provided by the backend are in Dutch.
Use case
Step1: pydov
Step2: If you would like some more information or metadata about the data you can retrieve, you can query the search object. Since pydov interfaces services and metadata from Flemish government agencies, the descriptions are in Dutch
Step3: The different fields that are available for objects of the 'Hydrogeologische Stratigrafie' datatype can be requested with the get_fields() method
Step4: You can get more information of a field by requesting it from the fields dictionary
Step5: The fields pkey_interpretatie and pkey_boring are important identifiers. In this case pkey_interpretatie is the unique identifier of this interpretation and is also the permanent url where the data can be consulted (~https
Step6: Query the data with pydov
Attributes
The data can be queried on attributes, location or both. To query on attributes, the OGC filter functions from OWSLib are used
Step7: If you are for example interested in all the hydrostratigraphic interpretations in the city of Leuven, you compose the query like below (mind that the values are in Dutch)
Step8: This yielded 38 interpretations from 38 or fewer boreholes. It can be fewer than 38 boreholes because multiple interpretations can be made of a single borehole.
If you would like to narrow the search down to for example interpretations deeper than 200 meters, you can combine features in the search using the logical operators And, Or provided by OWSLib
Step9: Mind the difference between the attributes diepte_tot_m and diepte_laag_.... The former is defined in the WFS service and can be used as an attribute in the query. The latter attributes are defined in the linked XML document, from which the information is only available after it has been gathered from the DOV webservice. Such XML-based attributes cannot be used in the initial query and should instead be used in a subsequent filtering of the Pandas DataFrame.
More information on querying attribute properties is given in the docs. Worth mentioning is querying with lists: pydov extends the default OGC filter expressions with a new expression, PropertyInList, that allows you to use lists (of strings) in search queries.
One last goodie is the possibility to join searches using common attributes, for example the pkey_boring field, denoting the borehole. As such, you can get the boreholes for which a hydrostratigraphic interpretation is available, and also query the lithological description of that borehole. Like below
Step10: Location
One can also query on location, using the location objects and spatial filters from the pydov.util.location module. For example, to request all hydrostratigraphic interpretations in a given bounding box
Step11: Alternatively, you can define a Point or a GML document for the spatial query as is described in the docs. For example, if you are interested in a site you can define the point with a search radius of for example 500 meters like this
Step12: Groundwater head data
Querying the groundwater head data follows the same workflow as mentioned above for the interpretation of borehole data with the instantiation of a search object and the subsequent query with selection on attribute or location properties.
Step13: For example query all data in a bounding box from screens that are situated in the phreatic aquifer
Step14: One important difference is the presence of time-related data, more specifically the attributes datum and tijdstip. These can be combined into a datetime object that can be used in the subsequent manipulation of the Pandas DataFrame. Make sure to remove the records without a valid datum and fill the empty tijdstip fields with a default timestamp (!)
Step15: More examples of timeseries processing and analysis are available in the notebooks of pydov.
Data cache
Notice the cc in the progress bar while loading the data? It means the data was loaded from your local cache instead of being downloaded, as it was already part of an earlier data request. See the caching documentation for more in-depth information about the default directory, how to change and/or clean it, and even how to create a custom cache format.
Putting it all together | Python Code:
%matplotlib inline
import inspect, sys
import pydov
import pandas as pd
Explanation: Introductory tutorial
pydov provides machine access to the data that can be visualized with the DOV viewer.
All the pydov functionalities rely on the existing DOV webservices. An in-depth overview of the available services and endpoints is provided on the accessing DOV data page. To retrieve data, pydov uses a combination of the available WFS services and the XML representation of the core DOV data.
As pydov relies on the XML data returned by the existing DOV webservices, downloading DOV data with pydov is governed by the same disclaimer that applies to the other DOV services. Be sure to consult it when using pydov!
pydov interfaces data and services hosted by the Flemish government. Therefore, some syntax of the API as well as the descriptions provided by the backend are in Dutch.
Use case: gather data for a hydrogeological model
End of explanation
from pydov.search.interpretaties import HydrogeologischeStratigrafieSearch
hs = HydrogeologischeStratigrafieSearch()
Explanation: pydov: general info
To get started with pydov you should first determine which information you want to search for. DOV provides a lot of different datasets about soil, subsoil and groundwater of Flanders, some of which can be queried using pydov. Supported datasets are listed in the quickstart.
In this case, to start with a hydrogeological model, we are interested in the hydrostratigraphic interpretation of the borehole data and the groundwater level. These datasets can be found with the following search objects:
- Hydrostratigraphic interpretation
- Groundwater level
Indeed, each of the datasets can be queried using a search object for the specific dataset. While the search objects are different, the workflow is the same for each dataset. Relevant classes can be imported from the pydov.search package, for example if we’d like to query the dataset with hydrostratigraphic interpretations of borehole data:
End of explanation
hs.get_description()
Explanation: If you would like some more information or metadata about the data you can retrieve, you can query the search object. Since pydov interfaces services and metadata from Flemish government agencies, the descriptions are in Dutch:
End of explanation
fields = hs.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
Explanation: The different fields that are available for objects of the 'Hydrogeologische Stratigrafie' datatype can be requested with the get_fields() method:
End of explanation
fields['pkey_interpretatie']
Explanation: You can get more information of a field by requesting it from the fields dictionary:
name: name of the field
definition: definition of this field
cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
notnull: whether the field is mandatory or not
type: datatype of the values of this field
query: whether you can use this field in an attribute query
End of explanation
fields['aquifer']['values']
Explanation: The fields pkey_interpretatie and pkey_boring are important identifiers. In this case pkey_interpretatie is the unique identifier of this interpretation and is also the permanent url where the data can be consulted (~https://www.dov.vlaanderen.be/data/interpretatie/...). You can retrieve an XML representation by appending '.xml' to the URL, or a JSON equivalent by appending '.json'.
The pkey_boring is the identifier of the borehole from which this interpretation was made. As mentioned before, it is also the permanent url (~https://www.dov.vlaanderen.be/data/boring/...).
Optionally, if the values of a field have a specific domain the possible values are listed as values:
End of explanation
# list available query methods
methods = [i for i,j in inspect.getmembers(sys.modules['owslib.fes'],
inspect.isclass)
if 'Property' in i]
print(*methods, sep = "\n")
Explanation: Query the data with pydov
Attributes
The data can be queried on attributes, location or both. To query on attributes, the OGC filter functions from OWSLib are used:
End of explanation
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(
propertyname='gemeente',
literal='Leuven')
dfhs = hs.search(query=query)
dfhs.head()
Explanation: If you are for example interested in all the hydrostratigraphic interpretations in the city of Leuven, you compose the query like below (mind that the values are in Dutch):
End of explanation
from owslib.fes import And
from owslib.fes import PropertyIsGreaterThan
query = And([
PropertyIsEqualTo(
propertyname='gemeente',
literal='Leuven'),
PropertyIsGreaterThan(
propertyname='diepte_tot_m',
literal='200')
])
dfhs = hs.search(query=query)
dfhs.head()
Explanation: This yielded 38 interpretations from 38 or fewer boreholes. It can be fewer than 38 boreholes because multiple interpretations can be made of a single borehole.
If you would like to narrow the search down to for example interpretations deeper than 200 meters, you can combine features in the search using the logical operators And, Or provided by OWSLib:
End of explanation
from pydov.util.query import Join
from pydov.search.interpretaties import LithologischeBeschrijvingenSearch
ls = LithologischeBeschrijvingenSearch()
dfls = ls.search(query=Join(dfhs, 'pkey_boring'))
df_joined = pd.merge(dfhs, dfls.loc[:, ['pkey_boring','diepte_laag_van', 'diepte_laag_tot', 'beschrijving']],
how='left',
left_on=['pkey_boring','diepte_laag_van', 'diepte_laag_tot'],
right_on = ['pkey_boring','diepte_laag_van', 'diepte_laag_tot']
)
df_joined.head()
Explanation: Mind the difference between the attributes diepte_tot_m and diepte_laag_.... The former is defined in the WFS service and can be used as an attribute in the query. The latter attributes are defined in the linked XML document, from which the information is only available after it has been gathered from the DOV webservice. Such XML-based attributes cannot be used in the initial query and should instead be used in a subsequent filtering of the Pandas DataFrame.
More information on querying attribute properties is given in the docs. Worth mentioning is querying with lists: pydov extends the default OGC filter expressions with a new expression, PropertyInList, that allows you to use lists (of strings) in search queries.
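A short sketch of what such a list-based query could look like; this block is not part of the original tutorial and assumes the PropertyInList(propertyname, list) signature from the pydov docs, with the municipality names used purely as example values:
from pydov.util.query import PropertyInList
# Search interpretations in any of several municipalities at once (example values).
query = PropertyInList('gemeente', ['Leuven', 'Herent', 'Bertem'])
df_multi = hs.search(query=query)
df_multi.head()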
One last goodie is the possibility to join searches using common attributes, for example the pkey_boring field, denoting the borehole. As such, you can get the boreholes for which a hydrostratigraphic interpretation is available, and also query the lithological description of that borehole. Like below:
End of explanation
from pydov.util.location import Within, Box
location = Within(Box(170000, 171000, 172000, 173000))
df = hs.search(location=location)
df.head()
Explanation: Location
One can also query on location, using the location objects and spatial filters from the pydov.util.location module. For example, to request all hydrostratigraphic interpretations in a given bounding box:
End of explanation
from pydov.util.location import WithinDistance, Point
location = WithinDistance(
Point(171500, 172500),
500,
distance_unit='meter'
)
df = hs.search(location=location)
df.head()
Explanation: Alternatively, you can define a Point or a GML document for the spatial query as is described in the docs. For example, if you are interested in a site you can define the point with a search radius of for example 500 meters like this:
End of explanation
from pydov.search.grondwaterfilter import GrondwaterFilterSearch
gws = GrondwaterFilterSearch()
fields = gws.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
Explanation: Groundwater head data
Querying the groundwater head data follows the same workflow as mentioned above for the interpretation of borehole data with the instantiation of a search object and the subsequent query with selection on attribute or location properties.
End of explanation
query = PropertyIsEqualTo(
propertyname='regime',
literal='freatisch')
location = Within(Box(170000, 171000, 173000, 174000))
df = gws.search(
query=query,
location=location)
df.head()
Explanation: For example query all data in a bounding box from screens that are situated in the phreatic aquifer:
End of explanation
import pandas as pd
df.reset_index(inplace=True)
df = df.loc[~df.datum.isna()]
df['tijdstip'] = df.tijdstip.fillna('00:00:00')
df['tijd'] = pd.to_datetime(df.datum.astype(str) + ' ' + df.tijdstip.astype(str))
df.tijd.head()
Explanation: One important difference is the presence of time-related data, more specifically the attributes datum and tijdstip. These can be combined into a datetime.datetime object that can be used in the subsequent manipulation of the Pandas DataFrame. Make sure to remove the records without a valid datum and to fill the empty tijdstip fields with a default timestamp (!)
End of explanation
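# As an added illustration (not part of the original notebook) of what the combined tijd
# column enables, the sketch below plots a head time series per filter. It assumes the
# GrondwaterFilterSearch DataFrame contains 'pkey_filter' and 'peil_mtaw' columns; check
# gws.get_fields() if the names differ in your pydov version.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 4))
for pkey, grp in df.groupby('pkey_filter'):
    grp = grp.sort_values('tijd')
    ax.plot(grp['tijd'], grp['peil_mtaw'], marker='.', label=pkey.split('/')[-1])
ax.set_xlabel('time')
ax.set_ylabel('groundwater head (mTAW)')
ax.legend(fontsize='small')
plt.show()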
# imports
import pandas as pd
import pydov
from pydov.util.location import WithinDistance, Point
from pydov.util.query import Join
from pydov.search.interpretaties import LithologischeBeschrijvingenSearch
from pydov.search.interpretaties import HydrogeologischeStratigrafieSearch
from pydov.search.grondwaterfilter import GrondwaterFilterSearch
from owslib.fes import PropertyIsEqualTo
# define search objects
hs = HydrogeologischeStratigrafieSearch()
ls = LithologischeBeschrijvingenSearch()
gws = GrondwaterFilterSearch()
# search hydrostratigraphic interpretations based on location
location = WithinDistance(
Point(171500, 172500),
500,
distance_unit='meter'
)
dfhs = hs.search(location=location)
# join the lithological descriptions
dfls = ls.search(query=Join(dfhs, 'pkey_boring'))
df_joined = pd.merge(dfhs, dfls.loc[:, ['pkey_boring','diepte_laag_van', 'diepte_laag_tot', 'beschrijving']],
how='left',
left_on=['pkey_boring','diepte_laag_van', 'diepte_laag_tot'],
right_on = ['pkey_boring','diepte_laag_van', 'diepte_laag_tot']
)
# search the groundwater head data of the phreatic aquifers in the neighbourhood
query = PropertyIsEqualTo(
propertyname='regime',
literal='freatisch')
dfgw = gws.search(query=query,
location=location)
# create datetime objects for further processing
dfgw.reset_index(inplace=True)
dfgw = dfgw.loc[~dfgw.datum.isna()]
dfgw['tijdstip'] = dfgw.tijdstip.fillna('00:00:00')
dfgw['tijd'] = pd.to_datetime(dfgw.datum.astype(str) + ' ' + dfgw.tijdstip.astype(str))
df_joined.head()
dfgw.head()
Explanation: More examples of time series processing and analysis are available in the notebooks of pydov.
Data cache
Notice the cc in the progress bar while loading the data? It means the data was loaded from your local cache instead of being downloaded, as it was already part of an earlier data request. See the caching documentation for more in-depth information about the default directory, how to change and/or clean it, and even how to create a custom cache format.
Putting it all together
End of explanation |
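# Related to the cache remark above, here is a minimal, hedged sketch of pointing pydov's
# cache to a custom directory and cleaning it. It is an added example: it assumes the
# GzipTextFileCache class and the module-level pydov.cache object described in the caching
# documentation; check the docs for the exact names and parameters in your version.
import datetime
import tempfile

import pydov
import pydov.util.caching

# switch to a gzipped text cache in a custom directory with a one-week maximum age
pydov.cache = pydov.util.caching.GzipTextFileCache(
    cachedir=tempfile.mkdtemp(prefix='pydov_cache_'),
    max_age=datetime.timedelta(weeks=1))

# drop stale entries from the cache directory
pydov.cache.clean()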
10,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wilkinson Power Divider
In this notebook we create a Wilkinson power divider, which splits an input signal into two equal-phase output signals. Theoretical results about this circuit are presented in reference [1]. Here we will reproduce the ideal circuit illustrated below and discussed in reference [2]. In this example, the circuit is designed to operate at 1 GHz.
<img src="wilkinson_power_divider.png">
[1] P. Hallbjörner, Microw. Opt. Technol. Lett. 38, 99 (2003).
[2] Microwaves 101
Step1: The circuit setup can be checked by visualising the circuit graph (this requires the python package networkx to be available).
Step2: Let's look at the scattering parameters of the circuit
Step3: Currents and Voltages
It is possible to calculate currents and voltages at the Circuit's internal ports. However, if you try it with this specific example, one obtains
Step4: This situation is "normal", in the sense that the voltage and current calculation methods do not support the case where more than 2 ports are connected together, which is the case in this example, as we have defined the connection list
Step5: But there is hope! It is possible to calculate the internal voltages and currents of the circuit using intermediate splitting Networks. In our case, one needs three "T" Networks and to make only pairwise connections
Step6: The resulting graph is a bit more crowded
Step7: But the results are the same
Step8: And this time one can calculate internal voltages and currents | Python Code:
# standard imports
import numpy as np
import matplotlib.pyplot as plt
import skrf as rf
rf.stylely()
# frequency band
freq = rf.Frequency(start=0, stop=2, npoints=501, unit='GHz')
# characteristic impedance of the ports
Z0_ports = 50
# resistor
R = 100
line_resistor = rf.media.DefinedGammaZ0(frequency=freq, Z0=R)
resistor = line_resistor.resistor(R, name='resistor')
# branches
Z0_branches = np.sqrt(2)*Z0_ports
beta = freq.w/rf.c
line_branches = rf.media.DefinedGammaZ0(frequency=freq, Z0=Z0_branches, gamma=0+beta*1j)
d = line_branches.theta_2_d(90, deg=True) # @ 90°(lambda/4)@ 1 GHz is ~ 75 mm
branch1 = line_branches.line(d, unit='m', name='branch1')
branch2 = line_branches.line(d, unit='m', name='branch2')
# ports
port1 = rf.Circuit.Port(freq, name='port1', z0=50)
port2 = rf.Circuit.Port(freq, name='port2', z0=50)
port3 = rf.Circuit.Port(freq, name='port3', z0=50)
# Connection setup
# Note that the order of appearance of the ports in the setup is important
connections = [
[(port1, 0), (branch1, 0), (branch2, 0)],
[(port2, 0), (branch1, 1), (resistor, 0)],
[(port3, 0), (branch2, 1), (resistor, 1)]
]
# Building the circuit
C = rf.Circuit(connections)
Explanation: Wilkinson Power Divider
In this notebook we create a Wilkinson power divider, which splits an input signal into two equal-phase output signals. Theoretical results about this circuit are presented in reference [1]. Here we will reproduce the ideal circuit illustrated below and discussed in reference [2]. In this example, the circuit is designed to operate at 1 GHz.
<img src="wilkinson_power_divider.png">
[1] P. Hallbjörner, Microw. Opt. Technol. Lett. 38, 99 (2003).
[2] Microwaves 101: "Wilkinson Power Splitters"
End of explanation
C.plot_graph(network_labels=True, edge_labels=True, port_labels=True, port_fontize=2)
Explanation: The circuit setup can be checked by visualising the circuit graph (this requires the python package networkx to be available).
End of explanation
fig, (ax1,ax2) = plt.subplots(2, 1, sharex=True)
C.network.plot_s_db(ax=ax1, m=0, n=0, lw=2) # S11
C.network.plot_s_db(ax=ax1, m=1, n=1, lw=2) # S22
ax1.set_ylim(-90, 0)
C.network.plot_s_db(ax=ax2, m=1, n=0, lw=2) # S21
C.network.plot_s_db(ax=ax2, m=2, n=0, ls='--', lw=2) # S31
ax2.set_ylim(-4, 0)
fig.suptitle('Ideal Wilkinson Divider @ 1 GHz')
Explanation: Let's look at the scattering parameters of the circuit:
End of explanation
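# Added check (not part of the original notebook): put numbers on these curves at the 1 GHz
# design frequency. The input match S11 should be very deep, S21 and S31 close to -3 dB, and,
# characteristically for a Wilkinson divider, the isolation S32 between the two output ports
# should also be very high. It reuses the freq and C objects defined above.
idx = np.argmin(np.abs(freq.f - 1e9))  # frequency point closest to 1 GHz
s_db = C.network.s_db[idx]             # S-matrix magnitudes in dB at that point
print('S11 = {:.1f} dB'.format(s_db[0, 0]))   # input match
print('S21 = {:.2f} dB'.format(s_db[1, 0]))   # split towards port 2 (~ -3 dB)
print('S31 = {:.2f} dB'.format(s_db[2, 0]))   # split towards port 3 (~ -3 dB)
print('S32 = {:.1f} dB'.format(s_db[2, 1]))   # isolation between the output ports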
power = [1,0,0]
phase = [0,0,0]
C.voltages(power, phase) # or C2.currents(power, phase)
Explanation: Currents and Voltages
It is possible to calculate currents and voltages at the Circuit's internal ports. However, if you try it with this specific example, one obtains:
End of explanation
C.voltages_external(power, phase) # or C.currents_external(power, phase)
Explanation: This situation is "normal", in the sense that the voltage and current calculation methods do not support the case where more than 2 ports are connected together, which is the case in this example, as we have defined the connection list:
connections = [
[(port1, 0), (branch1, 0), (branch2, 0)],
[(port2, 0), (branch1, 1), (resistor, 0)],
[(port3, 0), (branch2, 1), (resistor, 1)]
]
However, note that the voltage and current calculations at the external ports work:
End of explanation
tee1 = line_branches.tee(name='tee1')
tee2 = line_branches.tee(name='tee2')
tee3 = line_branches.tee(name='tee3')
cnx = [
[(port1, 0), (tee1, 0)],
[(tee1, 1), (branch1, 0)],
[(tee1, 2), (branch2, 0)],
[(branch1, 1), (tee2, 0)],
[(branch2, 1), (tee3, 0)],
[(tee2, 2), (resistor, 0)],
[(tee3, 2), (resistor, 1)],
[(tee3, 1), (port3, 0)],
[(tee2, 1), (port2, 0)],
]
C2 = rf.Circuit(cnx)
Explanation: But there is hope! It is possible to calculate the internal voltages and currents of the circuit using intermediate splitting Networks. In our case, one needs three "T" Networks and to make only pairwise connections:
End of explanation
C2.plot_graph(network_labels=True, edge_labels=True, port_labels=True, port_fontize=2)
Explanation: The resulting graph is a bit more crowded:
End of explanation
C.network == C2.network
fig, (ax1,ax2) = plt.subplots(2, 1, sharex=True)
C2.network.plot_s_db(ax=ax1, m=0, n=0, lw=2) # S11
C2.network.plot_s_db(ax=ax1, m=1, n=1, lw=2) # S22
ax1.set_ylim(-90, 0)
C2.network.plot_s_db(ax=ax2, m=1, n=0, lw=2) # S21
C2.network.plot_s_db(ax=ax2, m=2, n=0, ls='--', lw=2) # S31
ax2.set_ylim(-4, 0)
fig.suptitle('Ideal Wilkinson Divider (2nd way) @ 1 GHz')
Explanation: But the results are the same:
End of explanation
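# Added, purely illustrative check of the equivalence claimed above: the largest numerical
# difference between the two scattering matrices over the whole frequency band.
print(np.max(np.abs(C.network.s - C2.network.s)))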
C2.voltages(power, phase) # or C2.currents(power, phase)
Explanation: And this time one can calculate internal voltages and currents:
End of explanation |
10,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: TensorFlow execution
Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.
$\begin{bmatrix}
1. & 1. & 1. \
1. & 1. & 1. \
\end{bmatrix} +
\begin{bmatrix}
1. & 2. & 3. \
4. & 5. & 6. \
\end{bmatrix} =
\begin{bmatrix}
2. & 3. & 4. \
5. & 6. & 7. \
\end{bmatrix}$
Step2: GitHub
For a full discussion of interactions between Colab and GitHub, see Using Colab with GitHub. As a brief summary
Step3: Want to use a new library? pip install it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the importing libraries example notebook.
Step4: Forms
Forms can be used to parameterize code. See the forms example notebook for more details. | Python Code:
# Hi. Can it be saved?
Explanation: <a href="https://colab.research.google.com/github/jiaqi-w/CoreNLPExampleCode/blob/master/Hello%2C_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Getting Started
Overview of Colaboratory
Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage
Importing libraries and installing dependencies
Using Google Cloud BigQuery
Forms, Charts, Markdown, & Widgets
TensorFlow with GPU
TensorFlow with TPU
Machine Learning Crash Course: Intro to Pandas & First Steps with TensorFlow
Using Colab with GitHub
<img height="60px" src="https://colab.research.google.com/img/colab_favicon.ico" align="left" hspace="20px" vspace="5px">
<h1>Welcome to Colaboratory!</h1>
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our FAQ for more info.
Highlighted Features
Seedbank
Looking for Colab notebooks to learn from? Check out Seedbank, a place to discover interactive machine learning examples.
End of explanation
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
Explanation: TensorFlow execution
Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.
$\begin{bmatrix}
1. & 1. & 1. \
1. & 1. & 1. \
\end{bmatrix} +
\begin{bmatrix}
1. & 2. & 3. \
4. & 5. & 6. \
\end{bmatrix} =
\begin{bmatrix}
2. & 3. & 4. \
5. & 6. & 7. \
\end{bmatrix}$
End of explanation
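# Side note added here (not part of the original notebook): the cell above uses the
# TensorFlow 1.x Session API. On a runtime with TensorFlow 2.x, eager execution lets the
# same sum be evaluated directly; a minimal sketch:
import tensorflow as tf

input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
print((input1 + input2).numpy())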
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
Explanation: GitHub
For a full discussion of interactions between Colab and GitHub, see Using Colab with GitHub. As a brief summary:
To save a copy of your Colab notebook to Github, select File → Save a copy to GitHub…
To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.
For example to load this notebook in Colab: https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb use the following Colab URL: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb
To open a github notebook in one click, we recommend installing the Open in Colab Chrome Extension.
Visualization
Colaboratory includes widely used libraries like matplotlib, simplifying visualization.
End of explanation
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
Explanation: Want to use a new library? pip install it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the importing libraries example notebook.
End of explanation
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
Explanation: Forms
Forms can be used to parameterize code. See the forms example notebook for more details.
End of explanation |
10,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 1.3. Studying oscillations with a physical pendulum
Purpose of the work
Step1: Determining the free-fall acceleration and the rod length with the physical pendulum
Let $l$ denote the distance from the suspension point to the end of the pendulum, $t$ the time in which the pendulum completed $n=20$ oscillations, and $a$ the distance from the suspension point to the pendulum's centre of mass. Then the oscillation period $T=\frac{t}{n}$ can be computed and the quantity $aT^2$ plotted against $a^2$.
Step2: Errors. It is known that $ \Delta a = \Delta l = 0.001~m$, hence
$ \Delta a^2 = a^2 \cdot \varepsilon a^2 = a^2 \cdot 2 \varepsilon a = a^2 \cdot 2 \frac{\Delta a}{a} = 2a \Delta a $.
The stopwatch error is $ \Delta t = 0.5~s$. Then
$ \Delta aT^2 = aT^2 \cdot ( \varepsilon a + 2 \varepsilon \frac{t}{n} ) = aT^2 \cdot ( \frac{\Delta a}{a} + 2 \varepsilon t ) = aT^2 \cdot( \frac{\Delta a}{a} + 2 \frac{\Delta t}{t} ) $.
Step3: As stated in the theoretical introduction, $ T = 2 \pi \sqrt{\frac{a^2 + \frac{l^2}{12}}{ag}} $, and therefore
$$ aT^2 = \frac{4\pi^2}{g}a^2 + \frac{4\pi^2l^2}{12g}.$$
We have $ k = \frac{4\pi^2}{g} $ and $ b = \frac{4\pi^2l^2}{12g} $, from which
$$ g = \frac{4 \pi^2}{k}, \quad l = \frac{1}{\pi} \sqrt{3gb}. $$
Errors. Hence $ \Delta g = g \varepsilon \frac{4\pi^2}{k} = g \frac{\Delta k}{k} $, and, as is known, $ \Delta k = \frac{k_1-k_2}{2} $. Further, $ \Delta l = l \cdot \frac{1}{2} \varepsilon(gb) = \frac{1}{2} l \cdot (\frac{\Delta g}{g} + \frac{b_2-b_1}{2b}) $.
Step4: Thus we obtained $g = 9.92 \pm 0.66~\text{m/s}^2 $ and $ l = 1.01 \pm 0.05~\text{m}$.
Comparing the reduced length of the physical pendulum with the length of a simple (mathematical) pendulum
Now let $l$ denote the distance from the suspension point to the centre of mass of the simple pendulum, and $t$ the time in which the pendulum completed $n$ oscillations. Then the period of the simple pendulum is found from $T =\frac{t}{n}$, and $T^2$ can be plotted against $l$.
In each measurement the initial amplitude was $25~\text{cm}$, and the time and number of oscillations were measured until the amplitude had decreased roughly by a factor of $3 \approx e$, i.e. to slightly more than $9~\text{cm}$.
Step5: Note that at $l=0.6~\text{m}$ the period is $T=1.576912 \approx 1.5700~\text{s}$, which was reached at $a = 0.2~\text{m}$ for the physical pendulum. The reduced length of the physical pendulum in this case is computed from
$$ l_{pr} = a + \frac{l^2}{12a}, $$
where $ l \approx 1.009~\text{m}$ is the length of the physical pendulum found in the previous section.
Погрешности. Знаем, что $\Delta l_{пр} = \Delta a + \frac{l^2}{12a} \cdot (2\frac{\Delta l}{l} + \frac{\Delta a}{a})$ | Python Code:
import numpy as np
import scipy as ps
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Lab 1.3. Studying oscillations with a physical pendulum
Purpose of the work: to study the physical and the simple (mathematical) pendulum as oscillatory systems, and to measure the dependence of the oscillation period of the physical pendulum on its moment of inertia.
Equipment: a physical pendulum (a uniform steel rod), a support prism, a simple pendulum, an oscillation counter, a ruler, a stopwatch.
Theoretical introduction
Definitions. Suppose the pendulum is suspended so that its length is $a$, and that it is deflected by an angle $\phi$. It has an angular velocity $\omega = \frac{\delta \phi}{\delta t} $, which is the same for all points of the pendulum because the pendulum is a perfectly rigid body (i.e. the distances between the particles of the body do not change in time). The speed of the $i$-th particle is then $v_i = \omega r_i$, where $r_i$ is the distance from the rotation axis to the particle. Recall that the moment of inertia is the quantity $ I = \sum_{i=1}^{N} m_i r_i^2$, where $N$ is the number of particles of the body being considered. The kinetic energy of the pendulum is then
$$ \sum_{i=1}^{N} \frac{m_i v_i^2}{2} = \frac{I\omega^2}{2}. $$
Derivation of the equation of motion. We can now write the total energy of the pendulum at the moment when it is deflected by an angle $\phi$. It equals $E=\frac{I\omega^2}{2} + Mga\cdot(1-\cos\phi)$, where $M = \sum_{i=1}^{N}m_i$ is the mass of the pendulum. Clearly this quantity is constant and does not depend on time.
Setting its time derivative to zero yields the equation $\phi'' + \omega_0^2 \sin\phi = 0 $, where $\omega_0 = \sqrt{\frac{Mga}{I}}$ is the angular frequency of the pendulum's oscillations. For small angles (less than 1 radian) the Taylor expansion gives $ \sin\phi \approx \phi$, so the equation simplifies and has a solution of the form $ \phi(t) = A\sin(\omega_0 t + \alpha)$, where $A$ is the amplitude. Expanding the sine to the second term instead gives $\phi'' + \omega_0^2 \phi \cdot(1 - \frac{\phi^2}{6}) = 0 $. One can then regard $ \omega_0 $ as depending on $\phi$, with $ \omega = \omega_0(\phi) \approx \omega_0 \cdot (1 - \frac{\phi^2}{12})$.
Moment of inertia. Next, we find the moment of inertia of a rod about an axis passing through one of its ends, perpendicular to the rod. It is
$$ \int_0^l x^2 \mbox{d}m = \rho \int_0^l x^2 \mbox{d}x = \rho \frac{l^3}{3} = \frac{Ml^2}{3}, $$
where $l$ is the length of the pendulum, $M$ its mass, and $\rho = \frac{M}{l}$ the linear density of the rod. The moment of inertia of the pendulum about its centre of mass is then $ 2 \cdot \frac{M}{2} \cdot \frac{l^2}{4} \cdot \frac{1}{3} = \frac{Ml^2}{12} $.
Theorem (Huygens–Steiner). The moment of inertia $I$ about an arbitrary axis equals the sum of the moment of inertia $I_0$ about the parallel axis passing through the centre of mass of the body and the product of the body's mass $m$ and the squared distance $a$ between the axes: $$I = I_0 + ma^2.$$
Proof. Let $I_0 = \sum m_i \rho_i^2$, $I = \sum m_i r_i^2$ and $r_i = a + \rho_i$. We know that $ \sum m_i \rho_i = 0 $ as a vector sum (by the definition of the centre of mass), hence
$$I = \sum m_i r_i^2 = \sum m_i a^2 + \sum m_i \rho_i^2 + 2 a \sum m_i \rho_i = I_0 + ma^2. $$
Damping. Suppose the system loses energy to friction, and that the friction is such that the amplitude decreases uniformly: over equal time intervals the amplitude $A(t)$ decreases by the same factor. Such a dependence $A(t)$ can be written as $A(t)=A_0 e^{-\gamma t}$, where $\gamma$ has the meaning of the inverse of the time over which the amplitude decays by a factor of $e$ and is called the damping coefficient. The damped oscillation of the quantity under study is a combination of a slow decay of the amplitude and harmonic oscillations: $\phi(t)=A_0 e^{-\gamma t} \sin(\omega t + \alpha) $. It is easy to verify that (for $\gamma < \omega_0$) this is a solution of the differential equation $ \phi'' + 2\gamma \phi' + \omega_0^2 \phi = 0 $. Here $\omega_0$ is the natural frequency of the system without damping, and $\omega^2 = \omega^2_0 - \gamma^2$ is the frequency of free oscillations with damping. If the damping is small, i.e. $\gamma \ll \omega_0$, the difference between $\omega$ and $\omega_0$ can be neglected: $\omega \approx \omega_0$.
Quality factor. The quality factor is defined as $ Q = \frac{\omega_0}{2\gamma} = \pi \frac{\tau_e}{T} $, where $\tau_e=\frac{1}{\gamma}$ is the time over which the oscillation amplitude $A$ decays by a factor of $e$. The higher the quality factor $Q$ of an oscillatory system, the more oscillations it can perform before they decay significantly — for example, the number of oscillations before an $e$-fold decay is $n_e = \frac{Q}{\pi}$.
Task
End of explanation
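# Added illustration (not part of the original lab report): integrate the damped-oscillation
# equation phi'' + 2*gamma*phi' + omega0**2 * phi = 0 from the introduction with scipy and
# show the exponential decay of the amplitude. The values of omega0 and gamma are arbitrary
# illustrative assumptions.
from scipy.integrate import odeint

omega0 = 2 * np.pi / 1.57   # angular frequency for a ~1.57 s period (assumed)
gamma = 0.05                # small damping coefficient (assumed)

def rhs(state, t):
    phi, dphi = state
    return [dphi, -2 * gamma * dphi - omega0 ** 2 * phi]

t = np.linspace(0, 60, 2000)
phi = odeint(rhs, [0.25, 0.0], t)[:, 0]   # initial amplitude 0.25 rad, zero initial velocity

plt.figure(figsize=(10, 4))
plt.plot(t, phi, label=r'$\phi(t)$')
plt.plot(t, 0.25 * np.exp(-gamma * t), '--', label=r'$A_0 e^{-\gamma t}$')
plt.xlabel(r'$t$, s')
plt.legend()
plt.show()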
table_1 = pd.read_excel('lab-1-3.xlsx', 'Table1')
table_1.head(len(table_1))
Explanation: Determining the free-fall acceleration and the rod length with the physical pendulum
Let $l$ denote the distance from the suspension point to the end of the pendulum, $t$ the time in which the pendulum completed $n=20$ oscillations, and $a$ the distance from the suspension point to the pendulum's centre of mass. Then the oscillation period $T=\frac{t}{n}$ can be computed and the quantity $aT^2$ plotted against $a^2$.
End of explanation
x = table_1.values[:, 5] # a^2
y = table_1.values[:, 6] # a T^2
a = table_1.values[:, 4] # a
dx = 2 * a * 0.001
t = table_1.values[:, 1] # t
dy = y * (0.001 / a + 2 * 0.5 / t)
k, b = np.polyfit(x, y, deg=1)
k1, b1 = np.polyfit(x, y + dy * np.linspace(-1, 1, len(dy)), deg=1)
k2, b2 = np.polyfit(x, y - dy * np.linspace(-1, 1, len(dy)), deg=1)
plt.figure(figsize=(12, 8))
plt.grid(linestyle='--')
plt.title('$aT^2$ versus $a^2$', fontweight='bold')
plt.xlabel('$a^2, \quad m^2$')
plt.ylabel('$aT^2, \quad m \cdot s^2$')
plt.scatter(x, y)
plt.plot(x, k * x + b)
plt.plot(x, k1 * x + b1, '--')
plt.plot(x, k2 * x + b2, '--')
plt.errorbar(x, y, xerr=dx, yerr=dy, fmt='o')
plt.show()
Explanation: Errors. It is known that $ \Delta a = \Delta l = 0.001~m$, hence
$ \Delta a^2 = a^2 \cdot \varepsilon a^2 = a^2 \cdot 2 \varepsilon a = a^2 \cdot 2 \frac{\Delta a}{a} = 2a \Delta a $.
The stopwatch error is $ \Delta t = 0.5~s$. Then
$ \Delta aT^2 = aT^2 \cdot ( \varepsilon a + 2 \varepsilon \frac{t}{n} ) = aT^2 \cdot ( \frac{\Delta a}{a} + 2 \varepsilon t ) = aT^2 \cdot( \frac{\Delta a}{a} + 2 \frac{\Delta t}{t} ) $.
End of explanation
g = 4 * np.pi ** 2 / k
l = np.sqrt(3 * g * b) / np.pi
dk = (k1 - k2) / 2
db = (b2 - b1) / 2
dg = g * dk / k
dl = 0.5 * l * (dg / g + db / b)
print(g, dg)
print(l, dl)
Explanation: As stated in the theoretical introduction, $ T = 2 \pi \sqrt{\frac{a^2 + \frac{l^2}{12}}{ag}} $, and therefore
$$ aT^2 = \frac{4\pi^2}{g}a^2 + \frac{4\pi^2l^2}{12g}.$$
We have $ k = \frac{4\pi^2}{g} $ and $ b = \frac{4\pi^2l^2}{12g} $, from which
$$ g = \frac{4 \pi^2}{k}, \quad l = \frac{1}{\pi} \sqrt{3gb}. $$
Errors. Hence $ \Delta g = g \varepsilon \frac{4\pi^2}{k} = g \frac{\Delta k}{k} $, and, as is known, $ \Delta k = \frac{k_1-k_2}{2} $. Further, $ \Delta l = l \cdot \frac{1}{2} \varepsilon(gb) = \frac{1}{2} l \cdot (\frac{\Delta g}{g} + \frac{b_2-b_1}{2b}) $.
End of explanation
table_2 = pd.read_excel('lab-1-3.xlsx', 'Table2')
table_2.head(len(table_2))
Explanation: Thus we obtained $g = 9.92 \pm 0.66~\text{m/s}^2 $ and $ l = 1.01 \pm 0.05~\text{m}$.
Comparing the reduced length of the physical pendulum with the length of a simple (mathematical) pendulum
Now let $l$ denote the distance from the suspension point to the centre of mass of the simple pendulum, and $t$ the time in which the pendulum completed $n$ oscillations. Then the period of the simple pendulum is found from $T =\frac{t}{n}$, and $T^2$ can be plotted against $l$.
In each measurement the initial amplitude was $25~\text{cm}$, and the time and number of oscillations were measured until the amplitude had decreased roughly by a factor of $3 \approx e$, i.e. to slightly more than $9~\text{cm}$.
End of explanation
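# Added sketch (not part of the original report): since the amplitude in each run decayed by
# roughly a factor of e, the number of recorded oscillations directly estimates the quality
# factor via Q = pi * n_e from the introduction. It assumes table_2 has a column 'n' with the
# oscillation count; adapt the name to the actual column in the spreadsheet.
Q = np.pi * table_2['n']
print(Q.describe())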
a = 0.2
l_pr = a + l ** 2 / (12 * a)
dl_pr = 0.001 + l ** 2 / (12 * a ) * (2 * dl / l + 0.001 / a)
print(l_pr, dl_pr)
Explanation: Note that at $l=0.6~\text{m}$ the period is $T=1.576912 \approx 1.5700~\text{s}$, which was reached at $a = 0.2~\text{m}$ for the physical pendulum. The reduced length of the physical pendulum in this case is computed from
$$ l_{pr} = a + \frac{l^2}{12a}, $$
where $ l \approx 1.009~\text{m}$ is the length of the physical pendulum found in the previous section.
Errors. We know that $\Delta l_{pr} = \Delta a + \frac{l^2}{12a} \cdot (2\frac{\Delta l}{l} + \frac{\Delta a}{a})$
End of explanation |
10,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MRC Gain
This notebook illustrates the gains obtained when using MRC in a SIMO system. It reproduces the results found here.
Initializations
First we set the Python path and import some libraries.
Step1: Now we set the simulation parameters
Step2: Simulation
Now we simulate the MRC gain.
Step3: Plotting
Now we can finally plot the SNR gain obtained with MRC.
Step4: Error rate with Maximal Ratio Combining (MRC)
Now let's simulate an actual transmission with MRC. We simulate a BPSK transmission with MRC over a Rayleigh channel.
First let's reset the variables in the workspace to guarantee we are not using anything from previous cells
Step5: Now lets make some initialization setting the Python path.
Step7: Now we define a function to simulate for the given transmission parameters.
Step8: Now we can finally perform the simulation for varying sets of transmission parameters.
Step9: Now we plot the results. | Python Code:
%matplotlib inline
import numpy as np
from pyphysim.util.conversion import linear2dB
from pyphysim.util.misc import randn_c
Explanation: MRC Gain
This notebook illustrates the gains obtained when using MRC in a SIMO system. It reproduces the results found here.
Initializations
First we set the Python path and import some libraries.
End of explanation
all_N = np.arange(1,21) # Number of receive antennas
rep_max = 10000 # Number of iterations
Explanation: Now we set the simulation parameters
End of explanation
SNR_gain_dB = np.empty(all_N.size)
for index in range(all_N.size):
all_gains = np.empty(rep_max)
for rep in range(rep_max):
N = all_N[index]
# Generate the random channel matrix (here a column vector)
H = randn_c(N, 1)
all_gains[rep] = (H.T.conj() @ H)[0,0].real
SNR_gain_dB[index] = linear2dB(np.mean(all_gains))
SNR_gain_dB_theory = linear2dB(all_N)
Explanation: Simulation
Now we simulate the MRC gain.
End of explanation
from matplotlib import pyplot as plt
plt.plot(SNR_gain_dB,'--bs', label='Simulated')
plt.plot(SNR_gain_dB_theory,'--mo', label='Theory')
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: Plotting
Now we can finally plot the SNR gain obtained with MRC.
End of explanation
# Reset the variables in the workspace
%reset -f
Explanation: Error rate with Maximal Ratio Combining (MRC)
Now let's simulate an actual transmission with MRC. We simulate a BPSK transmission with MRC over a Rayleigh channel.
First let's reset the variables in the workspace to guarantee we are not using anything from previous cells
End of explanation
# Add parent folder to path and import the required modules
import numpy as np
import sys
sys.path.append('../')
from pyphysim.util.conversion import dB2Linear
from pyphysim.util.misc import randn_c, count_bit_errors
from pyphysim.progressbar import ProgressbarText,ProgressbarText2
Explanation: Now let's do some initialization, setting the Python path.
End of explanation
def simulate_MRC(SNR, N, NSymbs, num_reps):
Simulate the BPSK transmission with MRC with the given parameters
Params
------
SNR : double
The desired SNR value (in dB)
N : int
The number of receive antennas (the number of transmit antennas is always 1).
NSymbs : int
The number of transmitted symbols at each iteration
num_reps : int
The number of iterations.
bit_errors = 0.0
num_bits = NSymbs * num_reps
for rep in range(num_reps):
# Dependent Variables
noise_var = 1.0 / dB2Linear(SNR)
# Generates random data with 0 and 1
input_data = np.random.randint(0, 2, NSymbs)
# Modulate the data with BPSK
symbols = 1 - 2 * input_data
# Generate the complex channel
h = randn_c(N, 1)
# Pass the data through the channel
received_data = h * symbols + (np.sqrt(noise_var) * randn_c(1, NSymbs)) # This will use numpy broadcasting
# Apply the MRC
improved_received_data = np.dot(h.transpose().conjugate(), received_data)
# Decode the received data
decoded_data = np.zeros(NSymbs, dtype=int)
improved_received_data = np.squeeze(improved_received_data)
decoded_data[improved_received_data < 0] = 1
# Count the number of bit errors
bit_errors += count_bit_errors(input_data, decoded_data)
# Calculate the BER
BER = float(bit_errors) / num_bits
return BER
Explanation: Now we define a function to simulate for the given transmission parameters.
End of explanation
# Transmission parameters
NSymbs = 200 # Number of simulated symbols
NBits = NSymbs
all_SNR = np.linspace(0, 35, 14)
num_reps = 30000
# Number of SNR points
num_points = all_SNR.size
BER_NRx1 = np.zeros(num_points)
BER_NRx2 = np.zeros(num_points)
pbar = ProgressbarText2(num_points, message="Simulating")
for index in range(num_points):
pbar.progress(index)
SNR = all_SNR[index]
BER_NRx1[index] = simulate_MRC(SNR, 1, NSymbs, num_reps)
BER_NRx2[index] = simulate_MRC(SNR, 2, NSymbs, num_reps)
pbar.progress(num_points)
Explanation: Now we can finally perform the simulation for varying sets of transmission parameters.
End of explanation
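# Added reference curves (not part of the original notebook): the closed-form BER of BPSK over
# flat Rayleigh fading with L-branch MRC is Pb = [(1-mu)/2]^L * sum_{k=0}^{L-1} C(L-1+k, k) *
# [(1+mu)/2]^k, with mu = sqrt(snr / (1 + snr)) and snr the average SNR per branch. Computing
# it here lets the simulated points be compared against theory; the arrays below could be added
# to the plot in the next cell with two extra ax.semilogy calls.
from scipy.special import comb

def theoretical_ber_mrc(SNR_dB, L):
    snr = dB2Linear(SNR_dB)              # average SNR per receive branch
    mu = np.sqrt(snr / (1.0 + snr))
    terms = [comb(L - 1 + k, k) * ((1 + mu) / 2) ** k for k in range(L)]
    return ((1 - mu) / 2) ** L * np.sum(terms, axis=0)

theory_NRx1 = theoretical_ber_mrc(all_SNR, 1)
theory_NRx2 = theoretical_ber_mrc(all_SNR, 2)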
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(8,6))
ax.semilogy(all_SNR, BER_NRx1, '-ms')
ax.semilogy(all_SNR, BER_NRx2,'-ks')
ax.legend(['N=1', 'N=2'])
ax.set_xlabel("SNR (dB)")
ax.set_ylabel("BER")
ax.grid()
# fig.show()
Explanation: Now we plot the results.
End of explanation |
10,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions
Step2: Helper Functions
Determine data types
Step4: Impute with GLRM
Step6: Embed with GLRM
Step7: Import data
Step8: Split into to train and validation (before doing data prep!!!)
Step9: Impute numeric missing using GLRM matrix completion
Training data
Step10: Validation data
Step11: Test data
Step12: Embed categorical vars using GLRM
Training data
Step13: Validation data
Step14: Test data
Step15: Merge imputed and embedded frames
Step16: Redefine numerics and explore
Step17: Train model on imputed, embedded features
Step18: Train GLM on imputed, embedded inputs | Python Code:
import h2o
from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(max_mem_size='12G') # give h2o as much memory as possible
h2o.no_progress() # turn off h2o progress bars
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
Explanation: License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Kaggle House Prices with GLRM Matrix Factorization Example
Imports and inits
End of explanation
def get_type_lists(frame, rejects=['Id', 'SalePrice']):
Creates lists of numeric and categorical variables.
:param frame: The frame from which to determine types.
:param rejects: Variable names not to be included in returned lists.
:return: Tuple of lists for numeric and categorical variables in the frame.
nums, cats = [], []
for key, val in frame.types.items():
if key not in rejects:
if val == 'enum':
cats.append(key)
else:
nums.append(key)
print('Numeric =', nums)
print()
print('Categorical =', cats)
return nums, cats
Explanation: Helper Functions
Determine data types
End of explanation
def glrm_num_impute(role, frame):
Helper function for imputing numeric variables using GLRM.
:param role: Role of frame to be imputed.
:param frame: H2OFrame to be imputed.
:return: H2OFrame of imputed numeric features.
# count missing values in training data numeric columns
print(role + ' missing:\n', [cnt for cnt in frame.nacnt() if cnt != 0.0])
# initialize GLRM
matrix_complete_glrm = H2OGeneralizedLowRankEstimator(
k=10, # create 10 features
transform='STANDARDIZE', # <- seems very important
gamma_x=0.001, # regularization on values in X
gamma_y=0.05, # regularization on values in Y
impute_original=True)
# train GLRM
matrix_complete_glrm.train(training_frame=frame, x=original_nums)
# plot iteration history to ensure convergence
matrix_complete_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')
# impute numeric inputs by multiplying the calculated X and Y factors for the missing values
num_impute = matrix_complete_glrm.predict(frame)
# count missing values in imputed set
print('imputed ' + role + ' missing:\n', [cnt for cnt in num_impute.nacnt() if cnt != 0.0])
return num_impute
Explanation: Impute with GLRM
End of explanation
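# Added toy illustration (pure numpy, independent of H2O and of its actual solver): the idea
# behind glrm_num_impute is to factor the data matrix into a low-rank product X.dot(Y) using
# only the observed cells, and then read imputed values off the reconstruction.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, np.nan],
              [3.0, 6.0, 9.0]])
mask = ~np.isnan(A)

rank = 1
rng = np.random.RandomState(0)
X = rng.rand(A.shape[0], rank)
Y = rng.rand(rank, A.shape[1])

for _ in range(2000):                       # crude gradient steps on the observed cells only
    R = np.where(mask, X.dot(Y) - A, 0.0)   # residual, zeroed where the value is missing
    X_grad = R.dot(Y.T)
    Y_grad = X.T.dot(R)
    X -= 0.01 * X_grad
    Y -= 0.01 * Y_grad

print(X.dot(Y)[1, 2])  # imputed value for the missing cell (close to 6 for this rank-1 matrix)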
def glrm_cat_embed(frame):
Helper function for embedding categorical variables using GLRM.
:param frame: H2OFrame to be embedded.
:return: H2OFrame of embedded categorical features.
# initialize GLRM
cat_embed_glrm = H2OGeneralizedLowRankEstimator(
k=50,
transform='STANDARDIZE',
loss='Quadratic',
regularization_x='Quadratic',
regularization_y='L1',
gamma_x=0.25,
gamma_y=0.5)
# train GLRM
cat_embed_glrm.train(training_frame=frame, x=cats)
# plot iteration history to ensure convergence
cat_embed_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')
# extract the embedded features (the X representation frame of the GLRM)
cat_embed = h2o.get_frame(cat_embed_glrm._model_json['output']['representation_name'])
return cat_embed
Explanation: Embed with GLRM
End of explanation
train = h2o.import_file('../../03_regression/data/train.csv')
test = h2o.import_file('../../03_regression/data/test.csv')
# bug fix - from Keston
dummy_col = np.random.rand(test.shape[0])
test = test.cbind(h2o.H2OFrame(dummy_col))
cols = test.columns
cols[-1] = 'SalePrice'
test.columns = cols
print(train.shape)
print(test.shape)
original_nums, cats = get_type_lists(train)
Explanation: Import data
End of explanation
train, valid = train.split_frame([0.7], seed=12345)
print(train.shape)
print(valid.shape)
Explanation: Split into train and validation (before doing data prep!!!)
End of explanation
train_num_impute = glrm_num_impute('training', train)
train_num_impute.head()
Explanation: Impute numeric missing using GLRM matrix completion
Training data
End of explanation
valid_num_impute = glrm_num_impute('validation', valid)
Explanation: Validation data
End of explanation
test_num_impute = glrm_num_impute('test', test)
Explanation: Test data
End of explanation
train_cat_embed = glrm_cat_embed(train)
Explanation: Embed categorical vars using GLRM
Training data
End of explanation
valid_cat_embed = glrm_cat_embed(valid)
Explanation: Validation data
End of explanation
test_cat_embed = glrm_cat_embed(test)
Explanation: Test data
End of explanation
imputed_embedded_train = train[['Id', 'SalePrice']].cbind(train_num_impute).cbind(train_cat_embed)
imputed_embedded_valid = valid[['Id', 'SalePrice']].cbind(valid_num_impute).cbind(valid_cat_embed)
imputed_embedded_test = test[['Id', 'SalePrice']].cbind(test_num_impute).cbind(test_cat_embed)
Explanation: Merge imputed and embedded frames
End of explanation
imputed_embedded_nums, cats = get_type_lists(imputed_embedded_train)
print('Imputed and encoded numeric training data:')
imputed_embedded_train.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric validation data:')
imputed_embedded_valid.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric test data:')
imputed_embedded_test.describe()
Explanation: Redefine numerics and explore
End of explanation
h2o.show_progress() # turn on progress bars
# Check log transform - looks good
%matplotlib inline
imputed_embedded_train['SalePrice'].log().as_data_frame().hist()
# Execute log transform
imputed_embedded_train['SalePrice'] = imputed_embedded_train['SalePrice'].log()
imputed_embedded_valid['SalePrice'] = imputed_embedded_valid['SalePrice'].log()
print(imputed_embedded_train[0:3, 'SalePrice'])
Explanation: Train model on imputed, embedded features
End of explanation
alpha_opts = [0.01, 0.25, 0.5, 0.99] # always keep some L2
hyper_parameters = {"alpha":alpha_opts}
# initialize grid search
grid = H2OGridSearch(
H2OGeneralizedLinearEstimator(
family="gaussian",
lambda_search=True,
seed=12345),
hyper_params=hyper_parameters)
# train grid
grid.train(y='SalePrice',
x=imputed_embedded_nums,
training_frame=imputed_embedded_train,
validation_frame=imputed_embedded_valid)
# show grid search results
print(grid.show())
best = grid.get_grid()[0]
print(best)
# plot top frame values
yhat_frame = imputed_embedded_valid.cbind(best.predict(imputed_embedded_valid))
print(yhat_frame[0:10, ['SalePrice', 'predict']])
# plot sorted predictions
yhat_frame_df = yhat_frame[['SalePrice', 'predict']].as_data_frame()
yhat_frame_df.sort_values(by='predict', inplace=True)
yhat_frame_df.reset_index(inplace=True, drop=True)
_ = yhat_frame_df.plot(title='Ranked Predictions Plot')
# Shutdown H2O - this will erase all your unsaved frames and models in H2O
h2o.cluster().shutdown(prompt=True)
Explanation: Train GLM on imputed, embedded inputs
End of explanation |
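# Added reminder (not part of the original notebook): the model was trained on log(SalePrice),
# so predictions must be mapped back to the price scale with exp() before any reporting or
# Kaggle submission. A minimal sketch, to be run before the h2o.cluster().shutdown() call above.
preds_log = best.predict(imputed_embedded_test)
preds = preds_log.exp()                                  # back to SalePrice units
submission = imputed_embedded_test[['Id']].cbind(preds)
submission.columns = ['Id', 'SalePrice']
# h2o.export_file(submission, 'submission.csv', force=True)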