Unnamed: 0 (int64) | text_prompt (string) | code_prompt (string)
---|---|---|
14,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hodgkin-Huxley IK Model
This example shows how the Hodgkin-Huxley potassium current (IK) toy model can be used.
This model recreates an experiment where a sequence of voltages is applied to a giant axon from a squid, and the resulting potassium current is measured. For information on the science behind it, see the original 1952 paper.
Step1: We can get an example set of parameters using the suggested_parameters() method
Step2: The voltage protocol used in the model has a fixed duration, which we can see using suggested_duration()
Step3: And it can also provide a suggested sequence of sampling times
Step4: Using the suggested parameters and times, we can run a simulation
Step5: This gives us all we need to create a plot of current versus time
Step6: The voltage protocol used to generate this data consists of 12 segments, of 100ms each.
Each segment starts with 90ms at the holding potential, followed by a 10ms step to an increasing step potential.
During this step, a current is elicited, while the signal at the holding potential is almost zero.
A common way to represent this data is to show only the data during the step, and to fold the steps over each other. This can be done using the fold() method
Step7: This recreates Figure 3 in the original paper.
Now we will add some noise to generate some fake "experimental" data and try to recover the original parameters.
Step8: We can then compare the true and fitted model output | Python Code:
import pints
import pints.toy
import matplotlib.pyplot as plt
import numpy as np
model = pints.toy.HodgkinHuxleyIKModel()
Explanation: Hodgkin-Huxley IK Model
This example shows how the Hodgkin-Huxley potassium current (IK) toy model can be used.
This model recreates an experiment where a sequence of voltages is applied to a giant axon from a squid, and the resulting potassium current is measured. For information on the science behind it, see the original 1952 paper.
End of explanation
x_true = np.array(model.suggested_parameters())
x_true
Explanation: We can get an example set of parameters using the suggested_parameters() method:
End of explanation
model.suggested_duration()
Explanation: The voltage protocol used in the model has a fixed duration, which we can see using suggested_duration():
End of explanation
times = model.suggested_times()
Explanation: And it can also provide a suggested sequence of sampling times:
End of explanation
values = model.simulate(x_true, times)
Explanation: Using the suggested parameters and times, we can run a simulation:
End of explanation
plt.figure()
plt.plot(times, values)
plt.show()
Explanation: This gives us all we need to create a plot of current versus time:
End of explanation
plt.figure()
for t, v in model.fold(times, values):
plt.plot(t, v)
plt.show()
Explanation: The voltage protocol used to generate this data consists of 12 segments, of 100ms each.
Each segment starts with 90ms at the holding potential, followed by a 10ms step to an increasing step potential.
During this step, a current is elicited, while the signal at the holding potential is almost zero.
A common way to represent this data is to show only the data during the step, and to fold the steps over each other. This can be done using the fold() method:
End of explanation
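(Added illustration, not part of the original example.) Assuming the suggested times are in milliseconds, uniformly spaced, and follow the 12 x 100 ms protocol described above, the folding can also be done by hand with NumPy:
dt = times[1] - times[0]                    # sampling interval (assumed uniform)
n_per_segment = int(round(100 / dt))        # samples per 100 ms segment
n_step = int(round(10 / dt))                # samples in each 10 ms voltage step
segments = values[:12 * n_per_segment].reshape(12, n_per_segment)
folded_by_hand = segments[:, -n_step:]      # keep only the data recorded during the step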
# Add noise
values += np.random.normal(0, 40, values.shape)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Select a score function
score = pints.SumOfSquaresError(problem)
# Select some boundaries above and below the true values
lower = [x / 1.5 for x in x_true]
upper = [x * 1.5 for x in x_true]
boundaries = pints.RectangularBoundaries(lower, upper)
# Perform an optimization with boundaries and hints
x0 = x_true * 0.98
optimiser = pints.Optimisation(score, x0, boundaries=boundaries, method=pints.CMAES)
optimiser.set_max_unchanged_iterations(100)
optimiser.set_log_to_screen(True)
found_parameters, found_score = optimiser.run()
# Compare parameters with original
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(x_true[k]))
Explanation: This recreates Figure 3 in the original paper.
Now we will add some noise to generate some fake "experimental" data and try to recover the original parameters.
End of explanation
# Evaluate model at found parameters
found_values = problem.evaluate(found_parameters)
# Show quality of fit
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
for t, v in model.fold(times, values):
plt.plot(t, v, c='b', label='Noisy data')
for t, v in model.fold(times, found_values):
plt.plot(t, v, c='r', label='Fit')
plt.show()
Explanation: We can then compare the true and fitted model output
End of explanation |
14,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
胶囊网络(CapsNets)
基于论文:Dynamic Routing Between Capsules,作者:Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017)。
部分启发来自于Huadong Liao的实现CapsNet-TensorFlow
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/extra_capsnets-cn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
警告:这是本书第一版的代码。请访问 https://github.com/ageron/handson-ml2 获取第二版代码,其中包含使用最新库版本的最新笔记本。特别是,第一版基于TensorFlow 1,而第二版使用TensorFlow 2,使用起来更加简单。
简介
观看 视频来理解胶囊网络背后的关键想法(大家可能看不到,因为youtube被墙了):
Step1: 你或许也需要观看视频,其展示了这个notebook的难点(大家可能看不到,因为youtube被墙了):
Step2: Imports
同时支持 Python 2 和 Python 3:
Step3: 为了绘制好看的图:
Step4: 我们会用到 NumPy 和 TensorFlow:
Step5: 可重复性
为了能够在不重新启动Jupyter Notebook Kernel的情况下重新运行本notebook,我们需要重置默认的计算图。
Step6: 设置随机种子,以便于本notebook总是可以输出相同的输出:
Step7: 装载MNIST
是的,我知道,又是MNIST。但我们希望这个极具威力的想法可以工作在更大的数据集上,时间会说明一切。(译注:因为是Hinton吗,因为他老是对;-)?)
Step8: 让我们看一下这些手写数字图像是什么样的:
Step9: 以及相应的标签:
Step10: 现在让我们建立一个胶囊网络来区分这些图像。这里有一个其总体的架构,享受一下ASCII字符的艺术吧! ;-)
注意:为了可读性,我摒弃了两种箭头:标签 → 掩盖,以及 输入的图像 → 重新构造损失。
损 失
↑
┌─────────┴─────────┐
标 签 → 边 际 损 失 重 新 构 造 损 失
↑ ↑
模 长 解 码 器
↑ ↑
数 字 胶 囊 们 ────遮 盖─────┘
↖↑↗ ↖↑↗ ↖↑↗
主 胶 囊 们
↑
输 入 的 图 像
我们打算从底层开始构建该计算图,然后逐步上移,左侧优先。让我们开始!
输入图像
让我们通过为输入图像创建一个占位符作为起步,该输入图像具有28×28个像素,1个颜色通道=灰度。
Step11: 主胶囊
第一层由32个特征映射组成,每个特征映射为6$\times$6个胶囊,其中每个胶囊输出8维的激活向量:
Step12: 为了计算它们的输出,我们首先应用两个常规的卷积层:
Step13: 注意:由于我们使用一个尺寸为9的核,并且没有使用填充(出于某种原因,这就是"valid"的含义),该图像每经历一个卷积层就会缩减 $9-1=8$ 个像素(从 $28\times 28$ 到 $20 \times 20$,再从 $20\times 20$ 到 $12\times 12$),并且由于在第二个卷积层中使用了大小为2的步幅,那么该图像的大小就被除以2。这就是为什么我们最后会得到 $6\times 6$ 的特征映射(feature map)。
接着,我们重塑该输出以获得一组8D向量,用来表示主胶囊的输出。conv2的输出是一个数组,包含对于每个实例都有32×8=256个特征映射(feature map),其中每个特征映射为6×6。所以该输出的形状为 (batch size, 6, 6, 256)。我们想要把256分到32个8维向量中,可以通过使用重塑 (batch size, 6, 6, 32, 8)来达到目的。然而,由于首个胶囊层会被完全连接到下一个胶囊层,那么我们就可以简单地把它扁平成6×6的网格。这意味着我们只需要把它重塑成 (batch size, 6×6×32, 8) 即可。
Step14: 现在我们需要压缩这些向量。让我们来定义squash()函数,基于论文中的公式(1):
$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$
该squash()函数将会压缩所有的向量到给定的数组中,沿给定轴(默认情况为最后一个轴)。
当心,这里有一个很讨厌的bug在等着你:当 $\|\mathbf{s}\|=0$时,$\|\mathbf{s}\|$ 为 undefined,这让我们不能直接使用 tf.norm(),否则会在训练过程中失败:如果一个向量为0,那么梯度就会是 nan,所以当优化器更新变量时,这些变量也会变为 nan,从那个时刻起,你就止步在 nan 那里了。解决的方法是手工实现norm,在计算的时候加上一个很小的值 epsilon:$\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$
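A tiny added demonstration of that pitfall (not in the original notebook): the gradient of tf.norm() at the zero vector evaluates to NaN, while the epsilon-protected norm stays finite.
s_zero = tf.constant([[0., 0., 0.]])
grad_unsafe = tf.gradients(tf.norm(s_zero, axis=-1), [s_zero])
grad_safe = tf.gradients(tf.sqrt(tf.reduce_sum(tf.square(s_zero), axis=-1) + 1e-7), [s_zero])
with tf.Session() as sess:
    print(sess.run(grad_unsafe))   # typically [array([[nan, nan, nan]], dtype=float32)]
    print(sess.run(grad_safe))     # [array([[0., 0., 0.]], dtype=float32)]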
Step15: 现在让我们应用这个函数以获得每个主胶囊$\mathbf{u}_i$的输出:
Step16: 太棒了!我们有了首个胶囊层的输出了。不是很难,对吗?然后,计算下一层才是真正乐趣的开始(译注:好戏刚刚开始)。
数字胶囊们
要计算数字胶囊们的输出,我们必须首先计算预测的输出向量(每个对应一个主胶囊/数字胶囊的对)。接着,我们就可以通过协议算法来运行路由。
计算预测输出向量
该数字胶囊层包含10个胶囊(每个代表一个数字),每个胶囊16维:
Step17: 对于在第一层里的每个胶囊 $i$,我们会在第二层中预测出每个胶囊 $j$ 的输出。为此,我们需要一个变换矩阵 $\mathbf{W}{i,j}$(每一对就是胶囊($i$, $j$) 中的一个),接着我们就可以计算预测的输出$\hat{\mathbf{u}}{j|i} = \mathbf{W}{i,j} \, \mathbf{u}_i$(论文中的公式(2)的右半部分)。由于我们想要将8维向量变形为16维向量,因此每个变换向量$\mathbf{W}{i,j}$必须具备(16, 8)形状。
要为每对胶囊 ($i$, $j$) 计算 $\hat{\mathbf{u}}_{j|i}$,我们会利用 tf.matmul() 函数的一个特点:你可能知道它可以让你进行两个矩阵相乘,但你可能不知道它可以让你进行更高维度的数组相乘。它将这些数组视作为数组矩阵,并且它会执行每项的矩阵相乘。例如,设有两个4D数组,每个包含2×3网格的矩阵。第一个包含矩阵为:$\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$,第二个包含矩阵为:$\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$。如果你使用 tf.matmul函数 对这两个4D数组进行相乘,你就会得到:
$
\pmatrix{
\mathbf{A} & \mathbf{B} & \mathbf{C} \\
\mathbf{D} & \mathbf{E} & \mathbf{F}
} \times
\pmatrix{
\mathbf{G} & \mathbf{H} & \mathbf{I} \\
\mathbf{J} & \mathbf{K} & \mathbf{L}
} = \pmatrix{
\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\
\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}
}
$
我们可以把这个函数用来计算每对胶囊 ($i$, $j$) 的 $\hat{\mathbf{u}}_{j|i}$,就像这样(回忆一下,有 6×6×32=1152 个胶囊在第一层,还有10个在第二层):
$
\pmatrix{
\mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\
\mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}
} \times
\pmatrix{
\mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\
\mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}
}
=
\pmatrix{
\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\
\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}
}
$
第一个数组的形状为 (1152, 10, 16, 8),第二个数组的形状为 (1152, 10, 8, 1)。注意到第二个数组必须包含10个对于向量$\mathbf{u}_1$ 到 $\mathbf{u}_{1152}$ 的完全拷贝。为了要创建这样的数组,我们将使用好用的 tf.tile() 函数,它可以让你创建包含很多基数组拷贝的数组,并且根据你想要的进行平铺。
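If tf.tile() is unfamiliar, its NumPy counterpart gives a quick added illustration of the idea:
base = np.array([[1, 2],
                 [3, 4]])
print(np.tile(base, [3, 1]))   # the 2x2 block repeated 3 times along the first axis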
哦,稍等!我们还忘了一个维度:batch size(批量/批次的大小)。假设我们要给胶囊网络提供50张图片,那么该网络需要同时作出这50张图片的预测。所以第一个数组的形状为 (50, 1152, 10, 16, 8),而第二个数组的形状为 (50, 1152, 10, 8, 1)。第一层的胶囊实际上已经对于所有的50张图像作出预测,所以第二个数组没有问题,但对于第一个数组,我们需要使用 tf.tile() 让其具有50个拷贝的变换矩阵。
好了,让我们开始,创建一个可训练的变量,形状为 (1, 1152, 10, 16, 8) 可以用来持有所有的变换矩阵。第一个维度的大小为1,可以让这个数组更容易的平铺。我们使用标准差为0.1的常规分布,随机初始化这个变量。
Step18: 现在我们可以通过每个实例重复一次W来创建第一个数组:
Step19: 就是这样!现在转到第二个数组。如前所述,我们需要创建一个数组,形状为 (batch size, 1152, 10, 8, 1),包含第一层胶囊的输出,重复10次(一次一个数字,在第三个维度,即axis=2)。 caps1_output 数组的形状为 (batch size, 1152, 8),所以我们首先需要展开两次来获得形状 (batch size, 1152, 1, 8, 1) 的数组,接着在第三维度重复它10次。
Step20: 让我们检查以下第一个数组的形状:
Step21: 很好,现在第二个:
Step22: 好!现在,为了要获得所有的预测好的输出向量 $\hat{\mathbf{u}}_{j|i}$,我们只需要将这两个数组使用tf.malmul()函数进行相乘,就像前面解释的那样:
Step23: 让我们检查一下形状:
Step24: 非常好,对于在该批次(我们还不知道批次的大小,使用 "?" 替代)中的每个实例以及对于每对第一和第二层的胶囊(1152×10),我们都有一个16D预测的输出列向量 (16×1)。我们已经准备好应用 根据协议算法的路由 了!
根据协议的路由
首先,让我们初始化原始的路由权重 $b_{i,j}$ 到0
Step25: 我们马上将会看到为什么我们需要最后两维大小为1的维度。
第一轮
首先,让我们应用 softmax 函数来计算路由权重,$\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (论文中的公式(3)):
Step26: 现在让我们为每个第二层胶囊计算其预测输出向量的加权,$\mathbf{s}j = \sum\limits{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (论文公式(2)的左半部分):
Step27: 这里有几个重要的细节需要注意:
* 要执行元素级别矩阵相乘(也称为Hadamard积,记作$\circ$),我们需要使用tf.multiply() 函数。它要求 routing_weights 和 caps2_predicted 具有相同的秩,这就是为什么前面我们在 routing_weights 上添加了两个额外的维度。
* routing_weights的形状为 (batch size, 1152, 10, 1, 1) 而 caps2_predicted 的形状为 (batch size, 1152, 10, 16, 1)。由于它们在第四个维度上不匹配(1 vs 16),tf.multiply() 自动地在 routing_weights 该维度上 广播 了16次。如果你不熟悉广播,这里有一个简单的例子,也许可以帮上忙:
$ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $
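The same broadcasting behaviour is easy to check with plain NumPy (added illustration):
a = np.array([[1., 2., 3.],
              [4., 5., 6.]])
b = np.array([[10., 100., 1000.]])
print(a * b)   # b is broadcast over both rows: [[10., 200., 3000.], [40., 500., 6000.]]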
最后,让我们应用squash函数到在协议算法的第一次迭代迭代结束时获取第二层胶囊的输出上,$\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$:
Step28: 好!我们对于每个实例有了10个16D输出向量,就像我们期待的那样。
第二轮
首先,让我们衡量一下,每个预测向量 $\hat{\mathbf{u}}_{j|i}$ 对于实际输出向量 $\mathbf{v}_j$ 之间到底有多接近,这是通过它们的标量乘积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$来完成的。
快速数学上的提示:如果 $\vec{a}$ and $\vec{b}$ 是长度相等的向量,并且 $\mathbf{a}$ 和 $\mathbf{b}$ 是相应的列向量(如,只有一列的矩阵),那么 $\mathbf{a}^T \mathbf{b}$ (即 $\mathbf{a}$的转置和 $\mathbf{b}$的矩阵相乘)为一个1×1的矩阵,包含两个向量$\vec{a}\cdot\vec{b}$的标量积。在机器学习中,我们通常将向量表示为列向量,所以当我们探讨关于计算标量积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$的时候,其实意味着计算 ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$。
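As a quick added check of that identity with NumPy column vectors:
a = np.array([[1.], [2.], [3.]])
b = np.array([[4.], [5.], [6.]])
print(np.matmul(a.T, b))             # [[32.]] -- a 1x1 matrix
print(np.dot(a.ravel(), b.ravel()))  # 32.0 -- the same scalar product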
由于我们需要对每个实例和每个第一和第二层的胶囊对$(i, j)$,计算标量积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ ,我们将再次利用tf.matmul()可以同时计算多个矩阵相乘的特点。这就要求使用 tf.tile()来使得所有维度都匹配(除了倒数第二个),就像我们之前所作的那样。所以让我们查看caps2_predicted的形状,因为它持有对每个实例和每个胶囊对的所有预测输出向量$\hat{\mathbf{u}}_{j|i}$。
Step29: 现在让我们查看 caps2_output_round_1 的形状,它有10个输出向量,每个16D,对应每个实例:
Step30: 为了让这些形状相匹配,我们只需要在第二个维度平铺 caps2_output_round_1 1152次(一次一个主胶囊):
Step31: 现在我们已经准备好可以调用 tf.matmul()(注意还需要告知它在第一个数组中的矩阵进行转置,让${\hat{\mathbf{u}}_{j|i}}^T$ 来替代 $\hat{\mathbf{u}}_{j|i}$):
Step32: 我们现在可以通过对于刚计算的标量积$\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$进行简单相加,来进行原始路由权重 $b_{i,j}$ 的更新:$b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (参见论文过程1中第7步)
Step33: 第二轮的其余部分和第一轮相同:
Step34: 我们可以继续更多轮,只需要重复第二轮中相同的步骤,但为了保持简洁,我们就到这里:
Step35: 静态还是动态循环?
在上面的代码中,我们在TensorFlow计算图中为协调算法的每一轮路由创建了不同的操作。换句话说,它是一个静态循环。
当然,与其拷贝/粘贴这些代码几次,通常在python中,我们可以写一个 for 循环,但这不会改变这样一个事实,那就是在计算图中最后对于每个路由迭代都会有不同的操作。这其实是可接受的,因为我们通常不会具有超过5次路由迭代,所以计算图不会成长得太大。
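For example, the rounds above could be unrolled with a small Python for loop over the same graph operations (an added sketch, not part of the original notebook; it assumes 3 routing iterations and reuses the tensors and helper functions defined in the accompanying code):
n_routing_iterations = 3
raw_w = raw_weights
for r in range(n_routing_iterations):
    routing_w = tf.nn.softmax(raw_w, dim=2)
    weighted_preds = tf.multiply(routing_w, caps2_predicted)
    weighted_sum_r = tf.reduce_sum(weighted_preds, axis=1, keep_dims=True)
    caps2_output_sketch = squash(weighted_sum_r, axis=-2)
    if r < n_routing_iterations - 1:   # no agreement update is needed after the last round
        tiled_output = tf.tile(caps2_output_sketch, [1, caps1_n_caps, 1, 1, 1])
        agreement_r = tf.matmul(caps2_predicted, tiled_output, transpose_a=True)
        raw_w = tf.add(raw_w, agreement_r)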
然而,你可能更倾向于在TensorFlow计算图自身实现路由循环,而不是使用Python的for循环。为了要做到这点,将需要使用TensorFlow的 tf.while_loop() 函数。这种方式,所有的路由循环都可以重用在该计算图中的相同的操作,这被称为动态循环。
例如,这里是如何构建一个小循环用来计算1到100的平方和:
Step36: 如你所见, tf.while_loop() 函数期望的循环条件和循环体由两个函数来提供。这些函数仅会被TensorFlow调用一次,在构建计算图阶段,不 在执行计算图的时候。 tf.while_loop() 函数将由 condition() 和 loop_body() 创建的计算图碎片同一些用来创建循环的额外操作缝制在一起。
还注意到在训练的过程中,TensorFlow将自动地通过循环处理反向传播,因此你不需要担心这些事情。
当然,我们也可以一行代码搞定!;)
Step37: 开个玩笑,抛开缩减计算图的大小不说,使用动态循环而不是静态循环能够帮助减少很多的GPU RAM的使用(如果你使用GPU的话)。事实上,如果但调用 tf.while_loop() 函数时,你设置了 swap_memory=True ,TensorFlow会在每个循环的迭代上自动检查GPU RAM使用情况,并且它会照顾到在GPU和CPU之间swapping内存时的需求。既然CPU的内存便宜量又大,相对GPU RAM而言,这就很有意义了。
估算的分类概率(模长)
输出向量的模长代表了分类的概率,所以我们就可以使用tf.norm()来计算它们,但由于我们在讨论squash函数时看到的那样,可能会有风险,所以我们创建了自己的 safe_norm() 函数来进行替代:
Step38: 要预测每个实例的分类,我们只需要选择那个具有最高估算概率的就可以了。要做到这点,让我们通过使用 tf.argmax() 来达到我们的目的:
Step39: 让我们检查一下 y_proba_argmax 的形状:
Step40: 这正好是我们想要的:对于每一个实例,我们现在有了最长的输出向量的索引。让我们用 tf.squeeze() 来移除后两个大小为1的维度。这就给出了该胶囊网络对于每个实例的预测分类:
Step41: 好了,我们现在准备好开始定义训练操作,从损失开始。
标签
首先,我们将需要一个对于标签的占位符:
Step42: 边际损失
论文使用了一个特殊的边际损失,来使得在每个图像中侦测多于两个以上的数字成为可能:
$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$
$T_k$ 等于1,如果分类$k$的数字出现,否则为0.
在论文中,$m^{+} = 0.9$, $m^{-} = 0.1$,并且$\lambda = 0.5$
注意在视频15:47秒处有个错误:应该是最大化操作,而不是norms,被平方。不好意思。
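A small added worked example with made-up magnitudes: if digit $k$ is present ($T_k = 1$) and $\|\mathbf{v}_k\| = 0.95$, then $L_k = \max(0, 0.9 - 0.95)^2 = 0$; if digit $k$ is absent ($T_k = 0$) and $\|\mathbf{v}_k\| = 0.3$, then $L_k = 0.5 \cdot \max(0, 0.3 - 0.1)^2 = 0.02$.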
Step43: 既然 y 将包含数字分类,从0到9,要对于每个实例和每个分类获取 $T_k$ ,我们只需要使用 tf.one_hot() 函数即可:
Step44: 一个小例子应该可以说明这到底做了什么:
Step45: 现在让我们对于每个输出胶囊和每个实例计算输出向量。首先,让我们验证 caps2_output 形状:
Step46: 这些16D向量位于第二到最后的维度,因此让我们在 axis=-2 使用 safe_norm() 函数:
Step47: 现在让我们计算 $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$,并且重塑其结果以获得一个简单的具有形状(batch size, 10)的矩阵:
Step48: 接下来让我们计算 $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ 并且重塑成(batch size, 10):
Step49: 我们准备好为每个实例和每个数字计算损失:
Step50: 现在我们可以把对于每个实例的数字损失进行相加($L_0 + L_1 + \cdots + L_9$),并且在所有的实例中计算均值。这给予我们最后的边际损失:
Step51: 重新构造
现在让我们添加一个解码器网络,其位于胶囊网络之上。它是一个常规的3层全连接神经网络,其将基于胶囊网络的输出,学习重新构建输入图像。这将强制胶囊网络保留所有需要重新构造数字的信息,贯穿整个网络。该约束正则化了模型:它减少了训练数据集过拟合的风险,并且它有助于泛化到新的数字。
遮盖
论文中提及了在训练的过程中,与其发送所有的胶囊网络的输出到解码器网络,不如仅发送与目标数字对应的胶囊输出向量。所有其余输出向量必须被遮盖掉。在推断的时候,我们必须遮盖所有输出向量,除了最长的那个。即,预测的数字相关的那个。你可以查看论文中的图2(视频中的18:15):所有的输出向量都被遮盖掉了,除了那个重新构造目标的输出向量。
我们需要一个占位符来告诉TensorFlow,是否我们想要遮盖这些输出向量,根据标签 (True) 或 预测 (False, 默认):
Step52: 现在让我们使用 tf.cond() 来定义重新构造的目标,如果 mask_with_labels 为 True 就是标签 y,否则就是 y_pred。
Step53: 注意到 tf.cond() 函数期望的是通过函数传递而来的if-True 和 if-False张量:这些函数会在计算图构造阶段(而非执行阶段)被仅调用一次,和tf.while_loop()类似。这可以允许TensorFlow添加必要操作,以此处理if-True 和 if-False 张量的条件评估。然而,在这里,张量 y 和 y_pred 已经在我们调用 tf.cond() 时被创建,不幸地是TensorFlow会认为 y 和 y_pred 是 reconstruction_targets 张量的依赖项。虽然,reconstruction_targets 张量最终是会计算出正确值,但是:
1. 无论何时,我们评估某个依赖于 reconstruction_targets 的张量,y_pred 张量也会被评估(即便 mask_with_layers 为 True)。这不是什么大问题,因为,在训练阶段计算y_pred 张量不会添加额外的开销,而且不管怎么样我们都需要它来计算边际损失。并且在测试中,如果我们做的是分类,我们就不需要重新构造,所以reconstruction_grpha根本不会被评估。
2. 我们总是需要为y占位符递送一个值(即使mask_with_layers为False)。这就有点讨厌了,当然我们可以传递一个空数组,因为TensorFlow无论如何都不会用到它(就是当检查依赖项的时候还不知道)。
现在我们有了重新构建的目标,让我们创建重新构建的遮盖。对于目标类型它应该为1.0,对于其他类型应该为0.0。为此我们就可以使用tf.one_hot()函数:
Step54: 让我们检查一下 reconstruction_mask的形状:
Step55: 和 caps2_output 的形状比对一下:
Step56: 嗯,它的形状是 (batch size, 1, 10, 16, 1)。我们想要将它和 reconstruction_mask 进行相乘,但 reconstruction_mask的形状是(batch size, 10)。我们必须对此进行reshape成 (batch size, 1, 10, 1, 1) 来满足相乘的要求:
Step57: 最终我们可以应用 遮盖 了!
Step58: 最后还有一个重塑操作被用来扁平化解码器的输入:
Step59: 这给予我们一个形状是 (batch size, 160) 的数组:
Step60: 解码器
现在让我们来构建该解码器。它非常简单:两个密集(全连接)ReLU 层紧跟这一个密集输出sigmoid层:
Step61: 重新构造的损失
现在让我们计算重新构造的损失。它不过是输入图像和重新构造过的图像的平方差。
Step62: 最终损失
最终损失为边际损失和重新构造损失(使用放大因子0.0005确保边际损失在训练过程中处于支配地位)的和:
Step63: 最后润色
精度
为了衡量模型的精度,我们需要计算实例被正确分类的数量。为此,我们可以简单地比较y和y_pred,并将比较结果的布尔值转换成float32(0.0代表False,1.0代表True),并且计算所有实例的均值:
Step64: 训练操作
论文中提到作者使用Adam优化器,使用了TensorFlow的默认参数:
Step65: 初始化和Saver
让我们来添加变量初始器,还要加一个 Saver:
Step66: 还有... 我们已经完成了构造阶段!花点时间可以庆祝🎉一下。
Step67: 我们在训练结束后,在验证集上达到了99.32%的精度,只用了5个epoches,看上去不错。现在让我们将模型运用到测试集上。
评估
Step68: 我们在测试集上达到了99.21%的精度。相当棒!
预测
现在让我们进行一些预测!首先从测试集确定一些图片,接着开始一个session,恢复已经训练好的模型,评估cap2_output来获得胶囊网络的输出向量,decoder_output来重新构造,用y_pred来获得类型预测:
Step69: 注意:我们传递的y使用了一个空的数组,不过TensorFlow并不会用到它,前面已经解释过了。
现在让我们把这些图片和它们的标签绘制出来,同时绘制出来的还有相应的重新构造和预测:
Step70: 预测都正确,而且重新构造的图片看上去很棒。阿弥陀佛!
理解输出向量
让我们调整一下输出向量,对它们的姿态参数表示进行查看。
首先让我们检查cap2_output_value NumPy数组的形状:
Step71: 让我们创建一个函数,该函数在所有的输出向量里对于每个 16(维度)姿态参数进行调整。每个调整过的输出向量将和原来的输出向量相同,除了它的 姿态参数 中的一个会加上一个-0.5到0.5之间变动的值。默认的会有11个步数(-0.5, -0.4, ..., +0.4, +0.5)。这个函数会返回一个数组,其形状为(调整过的姿态参数=16, 步数=11, batch size=5, 1, 10, 16, 1):
Step72: 让我们计算所有的调整过的输出向量并且重塑结果到 (parameters×steps×instances, 1, 10, 16, 1) 以便于我们能够传递该数组到解码器中:
Step73: 现在让我们递送这些调整过的输出向量到解码器并且获得重新构造,它会产生:
Step74: 让我们重塑解码器的输出以便于我们能够在输出维度,调整步数,和实例之上进行迭代:
Step75: 最后,让我们绘制所有的重新构造,对于前三个输出维度,对于每个调整中的步数(列)和每个数字(行): | Python Code:
from IPython.display import IFrame
IFrame(src="https://www.youtube.com/embed/pPN8d0E3900", width=560, height=315, frameborder=0, allowfullscreen=True)
Explanation: 胶囊网络(CapsNets)
基于论文:Dynamic Routing Between Capsules,作者:Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017)。
部分启发来自于Huadong Liao的实现CapsNet-TensorFlow
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/extra_capsnets-cn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
警告:这是本书第一版的代码。请访问 https://github.com/ageron/handson-ml2 获取第二版代码,其中包含使用最新库版本的最新笔记本。特别是,第一版基于TensorFlow 1,而第二版使用TensorFlow 2,使用起来更加简单。
简介
观看 视频来理解胶囊网络背后的关键想法(大家可能看不到,因为youtube被墙了):
End of explanation
IFrame(src="https://www.youtube.com/embed/2Kawrd5szHE", width=560, height=315, frameborder=0, allowfullscreen=True)
Explanation: 你或许也需要观看视频,其展示了这个notebook的难点(大家可能看不到,因为youtube被墙了):
End of explanation
from __future__ import division, print_function, unicode_literals
Explanation: Imports
同时支持 Python 2 和 Python 3:
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
Explanation: 为了绘制好看的图:
End of explanation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import numpy as np
import tensorflow as tf
Explanation: 我们会用到 NumPy 和 TensorFlow:
End of explanation
tf.reset_default_graph()
Explanation: 可重复性
为了能够在不重新启动Jupyter Notebook Kernel的情况下重新运行本notebook,我们需要重置默认的计算图。
End of explanation
np.random.seed(42)
tf.set_random_seed(42)
Explanation: 设置随机种子,以便于本notebook总是可以输出相同的输出:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
Explanation: 装载MNIST
是的,我知道,又是MNIST。但我们希望这个极具威力的想法可以工作在更大的数据集上,时间会说明一切。(译注:因为是Hinton吗,因为他老是对;-)?)
End of explanation
n_samples = 5
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
sample_image = mnist.train.images[index].reshape(28, 28)
plt.imshow(sample_image, cmap="binary")
plt.axis("off")
plt.show()
Explanation: 让我们看一下这些手写数字图像是什么样的:
End of explanation
mnist.train.labels[:n_samples]
Explanation: 以及相应的标签:
End of explanation
X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
Explanation: 现在让我们建立一个胶囊网络来区分这些图像。这里有一个其总体的架构,享受一下ASCII字符的艺术吧! ;-)
注意:为了可读性,我摒弃了两种箭头:标签 → 掩盖,以及 输入的图像 → 重新构造损失。
损 失
↑
┌─────────┴─────────┐
标 签 → 边 际 损 失 重 新 构 造 损 失
↑ ↑
模 长 解 码 器
↑ ↑
数 字 胶 囊 们 ────遮 盖─────┘
↖↑↗ ↖↑↗ ↖↑↗
主 胶 囊 们
↑
输 入 的 图 像
我们打算从底层开始构建该计算图,然后逐步上移,左侧优先。让我们开始!
输入图像
让我们通过为输入图像创建一个占位符作为起步,该输入图像具有28×28个像素,1个颜色通道=灰度。
End of explanation
caps1_n_maps = 32
caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 主胶囊们
caps1_n_dims = 8
Explanation: 主胶囊
第一层由32个特征映射组成,每个特征映射为6$\times$6个胶囊,其中每个胶囊输出8维的激活向量:
End of explanation
conv1_params = {
"filters": 256,
"kernel_size": 9,
"strides": 1,
"padding": "valid",
"activation": tf.nn.relu,
}
conv2_params = {
"filters": caps1_n_maps * caps1_n_dims, # 256 个卷积滤波器
"kernel_size": 9,
"strides": 2,
"padding": "valid",
"activation": tf.nn.relu
}
conv1 = tf.layers.conv2d(X, name="conv1", **conv1_params)
conv2 = tf.layers.conv2d(conv1, name="conv2", **conv2_params)
Explanation: 为了计算它们的输出,我们首先应用两个常规的卷积层:
End of explanation
caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],
name="caps1_raw")
Explanation: 注意:由于我们使用一个尺寸为9的核,并且没有使用填充(出于某种原因,这就是"valid"的含义),该图像每经历一个卷积层就会缩减 $9-1=8$ 个像素(从 $28\times 28$ 到 $20 \times 20$,再从 $20\times 20$ 到 $12\times 12$),并且由于在第二个卷积层中使用了大小为2的步幅,那么该图像的大小就被除以2。这就是为什么我们最后会得到 $6\times 6$ 的特征映射(feature map)。
接着,我们重塑该输出以获得一组8D向量,用来表示主胶囊的输出。conv2的输出是一个数组,包含对于每个实例都有32×8=256个特征映射(feature map),其中每个特征映射为6×6。所以该输出的形状为 (batch size, 6, 6, 256)。我们想要把256分到32个8维向量中,可以通过使用重塑 (batch size, 6, 6, 32, 8)来达到目的。然而,由于首个胶囊层会被完全连接到下一个胶囊层,那么我们就可以简单地把它扁平成6×6的网格。这意味着我们只需要把它重塑成 (batch size, 6×6×32, 8) 即可。
End of explanation
def squash(s, axis=-1, epsilon=1e-7, name=None):
with tf.name_scope(name, default_name="squash"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=True)
safe_norm = tf.sqrt(squared_norm + epsilon)
squash_factor = squared_norm / (1. + squared_norm)
unit_vector = s / safe_norm
return squash_factor * unit_vector
Explanation: 现在我们需要压缩这些向量。让我们来定义squash()函数,基于论文中的公式(1):
$\operatorname{squash}(\mathbf{s}) = \dfrac{\|\mathbf{s}\|^2}{1 + \|\mathbf{s}\|^2} \dfrac{\mathbf{s}}{\|\mathbf{s}\|}$
该squash()函数将会压缩所有的向量到给定的数组中,沿给定轴(默认情况为最后一个轴)。
当心,这里有一个很讨厌的bug在等着你:当 $\|\mathbf{s}\|=0$时,$\|\mathbf{s}\|$ 为 undefined,这让我们不能直接使用 tf.norm(),否则会在训练过程中失败:如果一个向量为0,那么梯度就会是 nan,所以当优化器更新变量时,这些变量也会变为 nan,从那个时刻起,你就止步在 nan 那里了。解决的方法是手工实现norm,在计算的时候加上一个很小的值 epsilon:$\|\mathbf{s}\| \approx \sqrt{\sum\limits_i{{s_i}^2}\,\,+ \epsilon}$
End of explanation
caps1_output = squash(caps1_raw, name="caps1_output")
Explanation: 现在让我们应用这个函数以获得每个主胶囊$\mathbf{u}_i$的输出:
End of explanation
caps2_n_caps = 10
caps2_n_dims = 16
Explanation: 太棒了!我们有了首个胶囊层的输出了。不是很难,对吗?然后,计算下一层才是真正乐趣的开始(译注:好戏刚刚开始)。
数字胶囊们
要计算数字胶囊们的输出,我们必须首先计算预测的输出向量(每个对应一个主胶囊/数字胶囊的对)。接着,我们就可以通过协议算法来运行路由。
计算预测输出向量
该数字胶囊层包含10个胶囊(每个代表一个数字),每个胶囊16维:
End of explanation
init_sigma = 0.1
W_init = tf.random_normal(
shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),
stddev=init_sigma, dtype=tf.float32, name="W_init")
W = tf.Variable(W_init, name="W")
Explanation: 对于在第一层里的每个胶囊 $i$,我们会在第二层中预测出每个胶囊 $j$ 的输出。为此,我们需要一个变换矩阵 $\mathbf{W}_{i,j}$(每一对就是胶囊($i$, $j$) 中的一个),接着我们就可以计算预测的输出$\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{i,j} \, \mathbf{u}_i$(论文中的公式(2)的右半部分)。由于我们想要将8维向量变形为16维向量,因此每个变换向量$\mathbf{W}_{i,j}$必须具备(16, 8)形状。
要为每对胶囊 ($i$, $j$) 计算 $\hat{\mathbf{u}}_{j|i}$,我们会利用 tf.matmul() 函数的一个特点:你可能知道它可以让你进行两个矩阵相乘,但你可能不知道它可以让你进行更高维度的数组相乘。它将这些数组视作为数组矩阵,并且它会执行每项的矩阵相乘。例如,设有两个4D数组,每个包含2×3网格的矩阵。第一个包含矩阵为:$\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}, \mathbf{E}, \mathbf{F}$,第二个包含矩阵为:$\mathbf{G}, \mathbf{H}, \mathbf{I}, \mathbf{J}, \mathbf{K}, \mathbf{L}$。如果你使用 tf.matmul函数 对这两个4D数组进行相乘,你就会得到:
$
\pmatrix{
\mathbf{A} & \mathbf{B} & \mathbf{C} \\
\mathbf{D} & \mathbf{E} & \mathbf{F}
} \times
\pmatrix{
\mathbf{G} & \mathbf{H} & \mathbf{I} \\
\mathbf{J} & \mathbf{K} & \mathbf{L}
} = \pmatrix{
\mathbf{AG} & \mathbf{BH} & \mathbf{CI} \\
\mathbf{DJ} & \mathbf{EK} & \mathbf{FL}
}
$
我们可以把这个函数用来计算每对胶囊 ($i$, $j$) 的 $\hat{\mathbf{u}}_{j|i}$,就像这样(回忆一下,有 6×6×32=1152 个胶囊在第一层,还有10个在第二层):
$
\pmatrix{
\mathbf{W}_{1,1} & \mathbf{W}_{1,2} & \cdots & \mathbf{W}_{1,10} \\
\mathbf{W}_{2,1} & \mathbf{W}_{2,2} & \cdots & \mathbf{W}_{2,10} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{W}_{1152,1} & \mathbf{W}_{1152,2} & \cdots & \mathbf{W}_{1152,10}
} \times
\pmatrix{
\mathbf{u}_1 & \mathbf{u}_1 & \cdots & \mathbf{u}_1 \\
\mathbf{u}_2 & \mathbf{u}_2 & \cdots & \mathbf{u}_2 \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{u}_{1152} & \mathbf{u}_{1152} & \cdots & \mathbf{u}_{1152}
}
=
\pmatrix{
\hat{\mathbf{u}}_{1|1} & \hat{\mathbf{u}}_{2|1} & \cdots & \hat{\mathbf{u}}_{10|1} \\
\hat{\mathbf{u}}_{1|2} & \hat{\mathbf{u}}_{2|2} & \cdots & \hat{\mathbf{u}}_{10|2} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{\mathbf{u}}_{1|1152} & \hat{\mathbf{u}}_{2|1152} & \cdots & \hat{\mathbf{u}}_{10|1152}
}
$
第一个数组的形状为 (1152, 10, 16, 8),第二个数组的形状为 (1152, 10, 8, 1)。注意到第二个数组必须包含10个对于向量$\mathbf{u}_1$ 到 $\mathbf{u}_{1152}$ 的完全拷贝。为了要创建这样的数组,我们将使用好用的 tf.tile() 函数,它可以让你创建包含很多基数组拷贝的数组,并且根据你想要的进行平铺。
哦,稍等!我们还忘了一个维度:batch size(批量/批次的大小)。假设我们要给胶囊网络提供50张图片,那么该网络需要同时作出这50张图片的预测。所以第一个数组的形状为 (50, 1152, 10, 16, 8),而第二个数组的形状为 (50, 1152, 10, 8, 1)。第一层的胶囊实际上已经对于所有的50张图像作出预测,所以第二个数组没有问题,但对于第一个数组,我们需要使用 tf.tile() 让其具有50个拷贝的变换矩阵。
好了,让我们开始,创建一个可训练的变量,形状为 (1, 1152, 10, 16, 8) 可以用来持有所有的变换矩阵。第一个维度的大小为1,可以让这个数组更容易的平铺。我们使用标准差为0.1的常规分布,随机初始化这个变量。
End of explanation
batch_size = tf.shape(X)[0]
W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name="W_tiled")
Explanation: 现在我们可以通过每个实例重复一次W来创建第一个数组:
End of explanation
caps1_output_expanded = tf.expand_dims(caps1_output, -1,
name="caps1_output_expanded")
caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,
name="caps1_output_tile")
caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],
name="caps1_output_tiled")
Explanation: 就是这样!现在转到第二个数组。如前所述,我们需要创建一个数组,形状为 (batch size, 1152, 10, 8, 1),包含第一层胶囊的输出,重复10次(一次一个数字,在第三个维度,即axis=2)。 caps1_output 数组的形状为 (batch size, 1152, 8),所以我们首先需要展开两次来获得形状 (batch size, 1152, 1, 8, 1) 的数组,接着在第三维度重复它10次。
End of explanation
W_tiled
Explanation: 让我们检查以下第一个数组的形状:
End of explanation
caps1_output_tiled
Explanation: 很好,现在第二个:
End of explanation
caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,
name="caps2_predicted")
Explanation: 好!现在,为了要获得所有的预测好的输出向量 $\hat{\mathbf{u}}_{j|i}$,我们只需要将这两个数组使用tf.malmul()函数进行相乘,就像前面解释的那样:
End of explanation
caps2_predicted
Explanation: 让我们检查一下形状:
End of explanation
raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],
dtype=np.float32, name="raw_weights")
Explanation: 非常好,对于在该批次(我们还不知道批次的大小,使用 "?" 替代)中的每个实例以及对于每对第一和第二层的胶囊(1152×10),我们都有一个16D预测的输出列向量 (16×1)。我们已经准备好应用 根据协议算法的路由 了!
根据协议的路由
首先,让我们初始化原始的路由权重 $b_{i,j}$ 到0:
End of explanation
routing_weights = tf.nn.softmax(raw_weights, dim=2, name="routing_weights")
Explanation: 我们马上将会看到为什么我们需要最后两维大小为1的维度。
第一轮
首先,让我们应用 softmax 函数来计算路由权重,$\mathbf{c}_{i} = \operatorname{softmax}(\mathbf{b}_i)$ (论文中的公式(3)):
End of explanation
weighted_predictions = tf.multiply(routing_weights, caps2_predicted,
name="weighted_predictions")
weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,
name="weighted_sum")
Explanation: 现在让我们为每个第二层胶囊计算其预测输出向量的加权,$\mathbf{s}_j = \sum\limits_{i}{c_{i,j}\hat{\mathbf{u}}_{j|i}}$ (论文公式(2)的左半部分):
End of explanation
caps2_output_round_1 = squash(weighted_sum, axis=-2,
name="caps2_output_round_1")
caps2_output_round_1
Explanation: 这里有几个重要的细节需要注意:
* 要执行元素级别矩阵相乘(也称为Hadamard积,记作$\circ$),我们需要使用tf.multiply() 函数。它要求 routing_weights 和 caps2_predicted 具有相同的秩,这就是为什么前面我们在 routing_weights 上添加了两个额外的维度。
* routing_weights的形状为 (batch size, 1152, 10, 1, 1) 而 caps2_predicted 的形状为 (batch size, 1152, 10, 16, 1)。由于它们在第四个维度上不匹配(1 vs 16),tf.multiply() 自动地在 routing_weights 该维度上 广播 了16次。如果你不熟悉广播,这里有一个简单的例子,也许可以帮上忙:
$ \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000} = \pmatrix{1 & 2 & 3 \\ 4 & 5 & 6} \circ \pmatrix{10 & 100 & 1000 \\ 10 & 100 & 1000} = \pmatrix{10 & 200 & 3000 \\ 40 & 500 & 6000} $
最后,让我们应用squash函数到在协议算法的第一次迭代迭代结束时获取第二层胶囊的输出上,$\mathbf{v}_j = \operatorname{squash}(\mathbf{s}_j)$:
End of explanation
caps2_predicted
Explanation: 好!我们对于每个实例有了10个16D输出向量,就像我们期待的那样。
第二轮
首先,让我们衡量一下,每个预测向量 $\hat{\mathbf{u}}_{j|i}$ 对于实际输出向量 $\mathbf{v}_j$ 之间到底有多接近,这是通过它们的标量乘积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$来完成的。
快速数学上的提示:如果 $\vec{a}$ and $\vec{b}$ 是长度相等的向量,并且 $\mathbf{a}$ 和 $\mathbf{b}$ 是相应的列向量(如,只有一列的矩阵),那么 $\mathbf{a}^T \mathbf{b}$ (即 $\mathbf{a}$的转置和 $\mathbf{b}$的矩阵相乘)为一个1×1的矩阵,包含两个向量$\vec{a}\cdot\vec{b}$的标量积。在机器学习中,我们通常将向量表示为列向量,所以当我们探讨关于计算标量积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$的时候,其实意味着计算 ${\hat{\mathbf{u}}_{j|i}}^T \mathbf{v}_j$。
由于我们需要对每个实例和每个第一和第二层的胶囊对$(i, j)$,计算标量积 $\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ ,我们将再次利用tf.matmul()可以同时计算多个矩阵相乘的特点。这就要求使用 tf.tile()来使得所有维度都匹配(除了倒数第二个),就像我们之前所作的那样。所以让我们查看caps2_predicted的形状,因为它持有对每个实例和每个胶囊对的所有预测输出向量$\hat{\mathbf{u}}_{j|i}$。
End of explanation
caps2_output_round_1
Explanation: 现在让我们查看 caps2_output_round_1 的形状,它有10个输出向量,每个16D,对应每个实例:
End of explanation
caps2_output_round_1_tiled = tf.tile(
caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],
name="caps2_output_round_1_tiled")
Explanation: 为了让这些形状相匹配,我们只需要在第二个维度平铺 caps2_output_round_1 1152次(一次一个主胶囊):
End of explanation
agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,
transpose_a=True, name="agreement")
Explanation: 现在我们已经准备好可以调用 tf.matmul()(注意还需要告知它在第一个数组中的矩阵进行转置,让${\hat{\mathbf{u}}_{j|i}}^T$ 来替代 $\hat{\mathbf{u}}_{j|i}$):
End of explanation
raw_weights_round_2 = tf.add(raw_weights, agreement,
name="raw_weights_round_2")
Explanation: 我们现在可以通过对于刚计算的标量积$\hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$进行简单相加,来进行原始路由权重 $b_{i,j}$ 的更新:$b_{i,j} \gets b_{i,j} + \hat{\mathbf{u}}_{j|i} \cdot \mathbf{v}_j$ (参见论文过程1中第7步)
End of explanation
routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,
dim=2,
name="routing_weights_round_2")
weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,
caps2_predicted,
name="weighted_predictions_round_2")
weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,
axis=1, keep_dims=True,
name="weighted_sum_round_2")
caps2_output_round_2 = squash(weighted_sum_round_2,
axis=-2,
name="caps2_output_round_2")
Explanation: 第二轮的其余部分和第一轮相同:
End of explanation
caps2_output = caps2_output_round_2
Explanation: 我们可以继续更多轮,只需要重复第二轮中相同的步骤,但为了保持简洁,我们就到这里:
End of explanation
def condition(input, counter):
return tf.less(counter, 100)
def loop_body(input, counter):
output = tf.add(input, tf.square(counter))
return output, tf.add(counter, 1)
with tf.name_scope("compute_sum_of_squares"):
counter = tf.constant(1)
sum_of_squares = tf.constant(0)
result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])
with tf.Session() as sess:
print(sess.run(result))
Explanation: 静态还是动态循环?
在上面的代码中,我们在TensorFlow计算图中为协调算法的每一轮路由创建了不同的操作。换句话说,它是一个静态循环。
当然,与其拷贝/粘贴这些代码几次,通常在python中,我们可以写一个 for 循环,但这不会改变这样一个事实,那就是在计算图中最后对于每个路由迭代都会有不同的操作。这其实是可接受的,因为我们通常不会具有超过5次路由迭代,所以计算图不会成长得太大。
然而,你可能更倾向于在TensorFlow计算图自身实现路由循环,而不是使用Python的for循环。为了要做到这点,将需要使用TensorFlow的 tf.while_loop() 函数。这种方式,所有的路由循环都可以重用在该计算图中的相同的操作,这被称为动态循环。
例如,这里是如何构建一个小循环用来计算1到100的平方和:
End of explanation
sum([i**2 for i in range(1, 100 + 1)])
Explanation: 如你所见, tf.while_loop() 函数期望的循环条件和循环体由两个函数来提供。这些函数仅会被TensorFlow调用一次,在构建计算图阶段,不 在执行计算图的时候。 tf.while_loop() 函数将由 condition() 和 loop_body() 创建的计算图碎片同一些用来创建循环的额外操作缝制在一起。
还注意到在训练的过程中,TensorFlow将自动地通过循环处理反向传播,因此你不需要担心这些事情。
当然,我们也可以一行代码搞定!;)
End of explanation
def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):
with tf.name_scope(name, default_name="safe_norm"):
squared_norm = tf.reduce_sum(tf.square(s), axis=axis,
keep_dims=keep_dims)
return tf.sqrt(squared_norm + epsilon)
y_proba = safe_norm(caps2_output, axis=-2, name="y_proba")
Explanation: 开个玩笑,抛开缩减计算图的大小不说,使用动态循环而不是静态循环能够帮助减少很多的GPU RAM的使用(如果你使用GPU的话)。事实上,如果但调用 tf.while_loop() 函数时,你设置了 swap_memory=True ,TensorFlow会在每个循环的迭代上自动检查GPU RAM使用情况,并且它会照顾到在GPU和CPU之间swapping内存时的需求。既然CPU的内存便宜量又大,相对GPU RAM而言,这就很有意义了。
估算的分类概率(模长)
输出向量的模长代表了分类的概率,所以我们就可以使用tf.norm()来计算它们,但由于我们在讨论squash函数时看到的那样,可能会有风险,所以我们创建了自己的 safe_norm() 函数来进行替代:
End of explanation
y_proba_argmax = tf.argmax(y_proba, axis=2, name="y_proba")
Explanation: 要预测每个实例的分类,我们只需要选择那个具有最高估算概率的就可以了。要做到这点,让我们通过使用 tf.argmax() 来达到我们的目的:
End of explanation
y_proba_argmax
Explanation: 让我们检查一下 y_proba_argmax 的形状:
End of explanation
y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name="y_pred")
y_pred
Explanation: 这正好是我们想要的:对于每一个实例,我们现在有了最长的输出向量的索引。让我们用 tf.squeeze() 来移除后两个大小为1的维度。这就给出了该胶囊网络对于每个实例的预测分类:
End of explanation
y = tf.placeholder(shape=[None], dtype=tf.int64, name="y")
Explanation: 好了,我们现在准备好开始定义训练操作,从损失开始。
标签
首先,我们将需要一个对于标签的占位符:
End of explanation
m_plus = 0.9
m_minus = 0.1
lambda_ = 0.5
Explanation: 边际损失
论文使用了一个特殊的边际损失,来使得在每个图像中侦测多于两个以上的数字成为可能:
$ L_k = T_k \max(0, m^{+} - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^{-})^2$
$T_k$ 等于1,如果分类$k$的数字出现,否则为0.
在论文中,$m^{+} = 0.9$, $m^{-} = 0.1$,并且$\lambda = 0.5$
注意在视频15:47秒处有个错误:应该是最大化操作,而不是norms,被平方。不好意思。
End of explanation
T = tf.one_hot(y, depth=caps2_n_caps, name="T")
Explanation: 既然 y 将包含数字分类,从0到9,要对于每个实例和每个分类获取 $T_k$ ,我们只需要使用 tf.one_hot() 函数即可:
End of explanation
with tf.Session():
print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))
Explanation: 一个小例子应该可以说明这到底做了什么:
End of explanation
caps2_output
Explanation: 现在让我们对于每个输出胶囊和每个实例计算输出向量。首先,让我们验证 caps2_output 形状:
End of explanation
caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,
name="caps2_output_norm")
Explanation: 这些16D向量位于第二到最后的维度,因此让我们在 axis=-2 使用 safe_norm() 函数:
End of explanation
present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),
name="present_error_raw")
present_error = tf.reshape(present_error_raw, shape=(-1, 10),
name="present_error")
Explanation: 现在让我们计算 $\max(0, m^{+} - \|\mathbf{v}_k\|)^2$,并且重塑其结果以获得一个简单的具有形状(batch size, 10)的矩阵:
End of explanation
absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),
name="absent_error_raw")
absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),
name="absent_error")
Explanation: 接下来让我们计算 $\max(0, \|\mathbf{v}_k\| - m^{-})^2$ 并且重塑成(batch size, 10):
End of explanation
L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,
name="L")
Explanation: 我们准备好为每个实例和每个数字计算损失:
End of explanation
margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name="margin_loss")
Explanation: 现在我们可以把对于每个实例的数字损失进行相加($L_0 + L_1 + \cdots + L_9$),并且在所有的实例中计算均值。这给予我们最后的边际损失:
End of explanation
mask_with_labels = tf.placeholder_with_default(False, shape=(),
name="mask_with_labels")
Explanation: 重新构造
现在让我们添加一个解码器网络,其位于胶囊网络之上。它是一个常规的3层全连接神经网络,其将基于胶囊网络的输出,学习重新构建输入图像。这将强制胶囊网络保留所有需要重新构造数字的信息,贯穿整个网络。该约束正则化了模型:它减少了训练数据集过拟合的风险,并且它有助于泛化到新的数字。
遮盖
论文中提及了在训练的过程中,与其发送所有的胶囊网络的输出到解码器网络,不如仅发送与目标数字对应的胶囊输出向量。所有其余输出向量必须被遮盖掉。在推断的时候,我们必须遮盖所有输出向量,除了最长的那个。即,预测的数字相关的那个。你可以查看论文中的图2(视频中的18:15):所有的输出向量都被遮盖掉了,除了那个重新构造目标的输出向量。
我们需要一个占位符来告诉TensorFlow,是否我们想要遮盖这些输出向量,根据标签 (True) 或 预测 (False, 默认):
End of explanation
reconstruction_targets = tf.cond(mask_with_labels, # 条件
lambda: y, # if True
lambda: y_pred, # if False
name="reconstruction_targets")
Explanation: 现在让我们使用 tf.cond() 来定义重新构造的目标,如果 mask_with_labels 为 True 就是标签 y,否则就是 y_pred。
End of explanation
reconstruction_mask = tf.one_hot(reconstruction_targets,
depth=caps2_n_caps,
name="reconstruction_mask")
Explanation: 注意到 tf.cond() 函数期望的是通过函数传递而来的if-True 和 if-False张量:这些函数会在计算图构造阶段(而非执行阶段)被仅调用一次,和tf.while_loop()类似。这可以允许TensorFlow添加必要操作,以此处理if-True 和 if-False 张量的条件评估。然而,在这里,张量 y 和 y_pred 已经在我们调用 tf.cond() 时被创建,不幸地是TensorFlow会认为 y 和 y_pred 是 reconstruction_targets 张量的依赖项。虽然,reconstruction_targets 张量最终是会计算出正确值,但是:
1. 无论何时,我们评估某个依赖于 reconstruction_targets 的张量,y_pred 张量也会被评估(即便 mask_with_layers 为 True)。这不是什么大问题,因为,在训练阶段计算y_pred 张量不会添加额外的开销,而且不管怎么样我们都需要它来计算边际损失。并且在测试中,如果我们做的是分类,我们就不需要重新构造,所以reconstruction_grpha根本不会被评估。
2. 我们总是需要为y占位符递送一个值(即使mask_with_layers为False)。这就有点讨厌了,当然我们可以传递一个空数组,因为TensorFlow无论如何都不会用到它(就是当检查依赖项的时候还不知道)。
现在我们有了重新构建的目标,让我们创建重新构建的遮盖。对于目标类型它应该为1.0,对于其他类型应该为0.0。为此我们就可以使用tf.one_hot()函数:
End of explanation
reconstruction_mask
Explanation: 让我们检查一下 reconstruction_mask的形状:
End of explanation
caps2_output
Explanation: 和 caps2_output 的形状比对一下:
End of explanation
reconstruction_mask_reshaped = tf.reshape(
reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],
name="reconstruction_mask_reshaped")
Explanation: 嗯,它的形状是 (batch size, 1, 10, 16, 1)。我们想要将它和 reconstruction_mask 进行相乘,但 reconstruction_mask的形状是(batch size, 10)。我们必须对此进行reshape成 (batch size, 1, 10, 1, 1) 来满足相乘的要求:
End of explanation
caps2_output_masked = tf.multiply(
caps2_output, reconstruction_mask_reshaped,
name="caps2_output_masked")
caps2_output_masked
Explanation: 最终我们可以应用 遮盖 了!
End of explanation
decoder_input = tf.reshape(caps2_output_masked,
[-1, caps2_n_caps * caps2_n_dims],
name="decoder_input")
Explanation: 最后还有一个重塑操作被用来扁平化解码器的输入:
End of explanation
decoder_input
Explanation: 这给予我们一个形状是 (batch size, 160) 的数组:
End of explanation
n_hidden1 = 512
n_hidden2 = 1024
n_output = 28 * 28
with tf.name_scope("decoder"):
hidden1 = tf.layers.dense(decoder_input, n_hidden1,
activation=tf.nn.relu,
name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2,
activation=tf.nn.relu,
name="hidden2")
decoder_output = tf.layers.dense(hidden2, n_output,
activation=tf.nn.sigmoid,
name="decoder_output")
Explanation: 解码器
现在让我们来构建该解码器。它非常简单:两个密集(全连接)ReLU 层紧跟这一个密集输出sigmoid层:
End of explanation
X_flat = tf.reshape(X, [-1, n_output], name="X_flat")
squared_difference = tf.square(X_flat - decoder_output,
name="squared_difference")
reconstruction_loss = tf.reduce_mean(squared_difference,
name="reconstruction_loss")
Explanation: 重新构造的损失
现在让我们计算重新构造的损失。它不过是输入图像和重新构造过的图像的平方差。
End of explanation
alpha = 0.0005
loss = tf.add(margin_loss, alpha * reconstruction_loss, name="loss")
Explanation: 最终损失
最终损失为边际损失和重新构造损失(使用放大因子0.0005确保边际损失在训练过程中处于支配地位)的和:
End of explanation
correct = tf.equal(y, y_pred, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
Explanation: 最后润色
精度
为了衡量模型的精度,我们需要计算实例被正确分类的数量。为此,我们可以简单地比较y和y_pred,并将比较结果的布尔值转换成float32(0.0代表False,1.0代表True),并且计算所有实例的均值:
End of explanation
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss, name="training_op")
Explanation: 训练操作
论文中提到作者使用Adam优化器,使用了TensorFlow的默认参数:
End of explanation
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: 初始化和Saver
让我们来添加变量初始器,还要加一个 Saver:
End of explanation
n_epochs = 10
batch_size = 50
restore_checkpoint = True
n_iterations_per_epoch = mnist.train.num_examples // batch_size
n_iterations_validation = mnist.validation.num_examples // batch_size
best_loss_val = np.infty
checkpoint_path = "./my_capsule_network"
with tf.Session() as sess:
if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):
saver.restore(sess, checkpoint_path)
else:
init.run()
for epoch in range(n_epochs):
for iteration in range(1, n_iterations_per_epoch + 1):
X_batch, y_batch = mnist.train.next_batch(batch_size)
# 运行训练操作并且评估损失:
_, loss_train = sess.run(
[training_op, loss],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch,
mask_with_labels: True})
print("\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}".format(
iteration, n_iterations_per_epoch,
iteration * 100 / n_iterations_per_epoch,
loss_train),
end="")
# 在每个epoch之后,
# 衡量验证损失和精度:
loss_vals = []
acc_vals = []
for iteration in range(1, n_iterations_validation + 1):
X_batch, y_batch = mnist.validation.next_batch(batch_size)
loss_val, acc_val = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_vals.append(loss_val)
acc_vals.append(acc_val)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_validation,
iteration * 100 / n_iterations_validation),
end=" " * 10)
loss_val = np.mean(loss_vals)
acc_val = np.mean(acc_vals)
print("\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}".format(
epoch + 1, acc_val * 100, loss_val,
" (improved)" if loss_val < best_loss_val else ""))
# 如果有进步就保存模型:
if loss_val < best_loss_val:
save_path = saver.save(sess, checkpoint_path)
best_loss_val = loss_val
Explanation: 还有... 我们已经完成了构造阶段!花点时间可以庆祝🎉一下。:)
训练
训练我们的胶囊网络是非常标准的。为了简化,我们不需要作任何花哨的超参调整、丢弃等,我们只是一遍又一遍运行训练操作,显示损失,并且在每个epoch结束的时候,根据验证集衡量一下精度,显示出来,并且保存模型,当然,验证损失是目前为止最低的模型才会被保存(这是一种基本的实现早停的方法,而不需要实际上打断训练的进程)。我们希望代码能够自释,但这里应该有几个细节值得注意:
* 如果某个checkpoint文件已经存在,那么它会被恢复(这可以让训练被打断,再从最新的checkpoint中进行恢复成为可能),
* 我们不要忘记在训练的时候传递mask_with_labels=True,
* 在测试的过程中,我们可以让mask_with_labels默认为False(但是我们仍然需要传递标签,因为它们在计算精度的时候会被用到),
* 通过 mnist.train.next_batch()装载的图片会被表示为类型 float32 数组,其形状为[784],但输入的占位符X期望的是一个float32数组,其形状为 [28, 28, 1],所以在我们把送到模型之前,必须把这些图像进行重塑,
* 我们在整个完整的验证集上对模型的损失和精度进行评估。为了能够看到进度和支持那些并没有太多RAM的系统,评估损失和精度的代码在一个批次上执行一次,并且最后再计算平均损失和平均精度。
警告:如果你没有GPU,训练将会非常漫长(至少几个小时)。当使用GPU,它应该对于每个epoch只需要几分钟(如,在NVidia GeForce GTX 1080Ti上只需要6分钟)。
End of explanation
n_iterations_test = mnist.test.num_examples // batch_size
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
loss_tests = []
acc_tests = []
for iteration in range(1, n_iterations_test + 1):
X_batch, y_batch = mnist.test.next_batch(batch_size)
loss_test, acc_test = sess.run(
[loss, accuracy],
feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),
y: y_batch})
loss_tests.append(loss_test)
acc_tests.append(acc_test)
print("\rEvaluating the model: {}/{} ({:.1f}%)".format(
iteration, n_iterations_test,
iteration * 100 / n_iterations_test),
end=" " * 10)
loss_test = np.mean(loss_tests)
acc_test = np.mean(acc_tests)
print("\rFinal test accuracy: {:.4f}% Loss: {:.6f}".format(
acc_test * 100, loss_test))
Explanation: 我们在训练结束后,在验证集上达到了99.32%的精度,只用了5个epoches,看上去不错。现在让我们将模型运用到测试集上。
评估
End of explanation
n_samples = 5
sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
caps2_output_value, decoder_output_value, y_pred_value = sess.run(
[caps2_output, decoder_output, y_pred],
feed_dict={X: sample_images,
y: np.array([], dtype=np.int64)})
Explanation: 我们在测试集上达到了99.21%的精度。相当棒!
预测
现在让我们进行一些预测!首先从测试集确定一些图片,接着开始一个session,恢复已经训练好的模型,评估cap2_output来获得胶囊网络的输出向量,decoder_output来重新构造,用y_pred来获得类型预测:
End of explanation
sample_images = sample_images.reshape(-1, 28, 28)
reconstructions = decoder_output_value.reshape([-1, 28, 28])
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.imshow(sample_images[index], cmap="binary")
plt.title("Label:" + str(mnist.test.labels[index]))
plt.axis("off")
plt.show()
plt.figure(figsize=(n_samples * 2, 3))
for index in range(n_samples):
plt.subplot(1, n_samples, index + 1)
plt.title("Predicted:" + str(y_pred_value[index]))
plt.imshow(reconstructions[index], cmap="binary")
plt.axis("off")
plt.show()
Explanation: 注意:我们传递的y使用了一个空的数组,不过TensorFlow并不会用到它,前面已经解释过了。
现在让我们把这些图片和它们的标签绘制出来,同时绘制出来的还有相应的重新构造和预测:
End of explanation
caps2_output_value.shape
Explanation: 预测都正确,而且重新构造的图片看上去很棒。阿弥陀佛!
理解输出向量
让我们调整一下输出向量,对它们的姿态参数表示进行查看。
首先让我们检查cap2_output_value NumPy数组的形状:
End of explanation
def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):
steps = np.linspace(min, max, n_steps) # -0.25, -0.15, ..., +0.25
pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15
tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])
tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps
output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]
return tweaks + output_vectors_expanded
Explanation: 让我们创建一个函数,该函数在所有的输出向量里对于每个 16(维度)姿态参数进行调整。每个调整过的输出向量将和原来的输出向量相同,除了它的 姿态参数 中的一个会加上一个-0.5到0.5之间变动的值。默认的会有11个步数(-0.5, -0.4, ..., +0.4, +0.5)。这个函数会返回一个数组,其形状为(调整过的姿态参数=16, 步数=11, batch size=5, 1, 10, 16, 1):
End of explanation
n_steps = 11
tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)
tweaked_vectors_reshaped = tweaked_vectors.reshape(
[-1, 1, caps2_n_caps, caps2_n_dims, 1])
Explanation: 让我们计算所有的调整过的输出向量并且重塑结果到 (parameters×steps×instances, 1, 10, 16, 1) 以便于我们能够传递该数组到解码器中:
End of explanation
tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)
with tf.Session() as sess:
saver.restore(sess, checkpoint_path)
decoder_output_value = sess.run(
decoder_output,
feed_dict={caps2_output: tweaked_vectors_reshaped,
mask_with_labels: True,
y: tweak_labels})
Explanation: 现在让我们递送这些调整过的输出向量到解码器并且获得重新构造,它会产生:
End of explanation
tweak_reconstructions = decoder_output_value.reshape(
[caps2_n_dims, n_steps, n_samples, 28, 28])
Explanation: 让我们重塑解码器的输出以便于我们能够在输出维度,调整步数,和实例之上进行迭代:
End of explanation
for dim in range(3):
print("Tweaking output dimension #{}".format(dim))
plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))
for row in range(n_samples):
for col in range(n_steps):
plt.subplot(n_samples, n_steps, row * n_steps + col + 1)
plt.imshow(tweak_reconstructions[dim, col, row], cmap="binary")
plt.axis("off")
plt.show()
Explanation: 最后,让我们绘制所有的重新构造,对于前三个输出维度,对于每个调整中的步数(列)和每个数字(行):
End of explanation |
14,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mapping federal crop insurance in the U.S.
A Jupyter notebook (Python 3) by Peter Donovan, [email protected]
Open data is not just a thing or a tool. It's a behavior, based on beliefs. This notebook is a way of sharing my methods and assumptions, and if you use the same or similar tools (such as R instead of Python, for example) you can retread these steps. I hope this notebook may also serve as a guide for me as well as others who want to do similar things.
With crop insurance, as with any data set, looking at the data is a good way of learning about its particulars if not its intentions. Some knowledge of the context or domain of the data is usually required.
For background on federal crop insurance, the following may be a start
Step1: From http://www.rma.usda.gov/data/cause we see that years 2010 through 2016 are available as zip archives in Summary of Business. With a slower connection it is better to download and extract the zip archives outside of this notebook. Each contains a text file such as colsom14.txt, which will be an example for this notebook.
Unzip the file and inspect it with a text editor. There are pipe characters separating the fields, and sometimes sequences of spaces before them or after them. There are no column headers, we'll add those next.
Step2: The column headers are supplied in a Word document (Record layout: Word) from the same web page. They differ for 2010-2014 and from 2015 forward. Format them as a python list of strings as follows, and add them to the dataframe.
Step3: There are spaces on either side of some of the fields. We can use str.strip() to remove them.
Step4: FIPS code
The state and county location codes are numeric (int64). FIPS (Federal Information Processing Standard) codes for counties are 5-digit strings. We'll pad with zeros using zfill function. This will come in handy when it comes to mapping, as we will want to merge or join our data with county boundaries using the FIPS code.
Step5: Map indemnities by county
Step6: Causes of loss
Let's look at the causes of loss. NOTE: These procedures could be duplicated to aggregate indemnities by 'Crop Name' as well.
Step7: 'Excess Moisture/Precip/Rain' and 'Drought' are by far the most common causes. Let's filter the dataframe by these two, so we can potentially see which counties had indemnities for both causes, and how much.
Step8: Now do a groupby on each dataframe by county, with sums of indemnity amounts.
Step9: Let's add two columns, a total, and a ratio of moisture to drought. | Python Code:
#some usual imports, including some options for displaying large currency amounts with commas and only 2 decimals
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.float_format', '{:,}'.format)
pd.set_option('display.precision',2)
Explanation: Mapping federal crop insurance in the U.S.
A Jupyter notebook (Python 3) by Peter Donovan, [email protected]
Open data is not just a thing or a tool. It's a behavior, based on beliefs. This notebook is a way of sharing my methods and assumptions, and if you use the same or similar tools (such as R instead of Python, for example) you can retread these steps. I hope this notebook may also serve as a guide for me as well as others who want to do similar things.
With crop insurance, as with any data set, looking at the data is a good way of learning about its particulars if not its intentions. Some knowledge of the context or domain of the data is usually required.
For background on federal crop insurance, the following may be a start:
Dennis Shields' 2015 report from the Congressional Research Service: https://fas.org/sgp/crs/misc/R40532.pdf
Environmental Working Group's material on crop insurance, which includes interactive maps showing rate of return (payouts compared to premiums) on some crops by county from 2001 through 2014: http://www.ewg.org/research/crop-insurance-lottery. The average federal subsidy for crop insurance premiums is about 60%.
The Natural Resources Defense Council has a 2013 paper on crop insurance, https://www.nrdc.org/sites/default/files/soil-matters-IP.pdf. This paper suggests that crop insurance could be reformed to reward farming that is low risk with environmental rewards.
A starting hypothesis: federally subsidized crop insurance, while it sustains the economic viability of many farm businesses, might also tend to replace soil health and function as the foundation of a viable agriculture.
To investigate the hypothesis, we'll start by compiling data.
First, we get data. Download and unzip the data file from the USDA Risk Management Agency website: http://www.rma.usda.gov/data/cause.html The complete data for each year is under the "Summary of Business with Month of Loss" header. So far I am using the 2014 through 2016 data. You can get the column headers from the same web page as a Word or pdf doc.
End of explanation
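If you prefer to fetch and extract an archive from inside the notebook instead, a sketch along these lines can be used (added; paste the actual zip link from the RMA page above into zip_url):
import io, zipfile
from urllib.request import urlopen
zip_url = ''   # e.g. the "Summary of Business with Month of Loss" zip link for 2014 from rma.usda.gov
with zipfile.ZipFile(io.BytesIO(urlopen(zip_url).read())) as zf:
    zf.extractall('/Users/Peter/Documents/atlas/RMA/')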
df = pd.read_csv('/Users/Peter/Documents/atlas/RMA/colsom14.txt',sep='|',header=None)
df.shape #this counts rows, columns in our dataframe
Explanation: From http://www.rma.usda.gov/data/cause we see that years 2010 through 2016 are available as zip archives in Summary of Business. With a slower connection it is better to download and extract the zip archives outside of this notebook. Each contains a text file such as colsom14.txt, which will be an example for this notebook.
Unzip the file and inspect it with a text editor. There are pipe characters separating the fields, and sometimes sequences of spaces before them or after them. There are no column headers, we'll add those next.
End of explanation
the_columns_2014 = ['Crop Year Identifier','State Code','State Abbreviation ','County Code','County Name','Crop Code','Crop Name','Insurance Plan Code','Insurance Plan Name Abbreviation','Coverage Category','Stage Code','Cause of Loss Code','Cause of Loss Description','Month of Loss','Month of Loss Name','Policies Earning Premium','Policies Indemnified','Net Planted Acres','Liability','Total Premium','Subsidy','Determined Acres','Indemnity Amount','Loss Ratio']
the_columns_15_16 = ['Crop Year Identifier', 'State Code', 'State Abbreviation ',
'County Code', 'County Name', 'Crop Code', 'Crop Name',
'Insurance Plan Code', 'Insurance Plan Name Abbreviation',
'Coverage Category', 'Stage Code', 'Cause of Loss Code',
'Cause of Loss Description', 'Month of Loss', 'Month of Loss Name',
'Policies Earning Premium', 'Policies Indemnified', 'Net Planted Acres',
'Net Endorsed Acres', 'Liability', 'Total Premium', 'Subsidy',
'Determined Acres', 'Indemnity Amount', 'Loss Ratio']
df.columns = the_columns_2014 #this adds our column headers
Explanation: The column headers are supplied in a Word document (Record layout: Word) from the same web page. They differ for 2010-2014 and from 2015 forward. Format them as a python list of strings as follows, and add them to the dataframe.
End of explanation
#we strip excess white space from the columns (numeric columns don't work for strip)
cols_w_spaces = ['County Name','Crop Name','Insurance Plan Name Abbreviation','Cause of Loss Description']
for item in cols_w_spaces:
df[item] = df[item].map(lambda x: x.strip())
#check the result
print(list(df.loc[1187]))
Explanation: There are spaces on either side of some of the fields. We can use str.strip() to remove them.
End of explanation
#convert to strings, pad with zeros, 2 digits for state, 3 for county
df['State Code'] = df['State Code'].map(lambda x: str(x)).apply(lambda x: x.zfill(2))
df['County Code'] = df['County Code'].map(lambda x: str(x)).apply(lambda x: x.zfill(3))
#add FIPS or id column and test
df['FIPS'] = df['State Code'] + df['County Code']
df['FIPS'][10] #to make sure we have a 5-digit string, not a number
Explanation: FIPS code
The state and county location codes are numeric (int64). FIPS (Federal Information Processing Standard) codes for counties are 5-digit strings. We'll pad with zeros using zfill function. This will come in handy when it comes to mapping, as we will want to merge or join our data with county boundaries using the FIPS code.
End of explanation
counties = df.groupby(['FIPS','County Name'])
aggregated = counties.agg({'Indemnity Amount': np.sum})
aggregated.sort_values('Indemnity Amount',ascending=False)
aggregated.reset_index(level=0, inplace=True)
aggregated.reset_index(level=0, inplace=True)
#run this twice to convert the two indexes to columns
#rename columns for convenience
aggregated.rename(columns={'County Name': 'name', 'FIPS': 'id', 'Indemnity Amount': 'indemnity'}, inplace=True)
#convert to $millions
aggregated['indemnity']=aggregated['indemnity']/1000000
#reorder columns and write to tab-separated tsv file for d3 mapping
aggregated = aggregated[['id','name','indemnity']]
aggregated.to_csv('/Users/Peter/Documents/atlas/RMA/indemnity2014.tsv', sep='\t', index=False)
Explanation: Map indemnities by county
End of explanation
df.groupby('Cause of Loss Description').agg({'Indemnity Amount':np.sum}).sort_values('Indemnity Amount',ascending=False)
causes_2014 = df.groupby('Cause of Loss Description')['Indemnity Amount'].sum()
causes_2014.sort_values(ascending=False)
#to generate a table of total indemnities by Cause of Loss, you can export a csv
causes_2014.to_csv('/Users/Peter/Documents/atlas/RMA/causes_2014.csv')
Explanation: Causes of loss
Let's look at the causes of loss. NOTE: These procedures could be duplicated to aggregate indemnities by 'Crop Name' as well.
End of explanation
rain = df[df['Cause of Loss Description']=='Excess Moisture/Precip/Rain']
drought = df[df['Cause of Loss Description']=='Drought']
print(rain.shape, drought.shape)
Explanation: 'Excess Moisture/Precip/Rain' and 'Drought' are by far the most common causes. Let's filter the dataframe by these two, so we can potentially see which counties had indemnities for both causes, and how much.
End of explanation
g_rain = rain.groupby(['FIPS','County Name']).agg({'Indemnity Amount':np.sum})
g_drought = drought.groupby(['FIPS','County Name']).agg({'Indemnity Amount':np.sum})
together=pd.concat([g_rain,g_drought],axis=1)
together.columns = ['moisture','drought']
together.head()
Explanation: Now do a groupby on each dataframe by county, with sums of indemnity amounts.
End of explanation
together['total']=together.moisture + together.drought
together['ratio']=together.moisture / together.drought
together.head(20)
mixed = together[(together.ratio < 4) & (together.ratio > .25)]
mixed.shape
mixed.reset_index(level=0, inplace=True)
mixed.reset_index(level=0, inplace=True)
#run this twice
mixed = mixed.rename(columns={'total':'indemnity'})
mixed.indemnity = mixed.indemnity/1000000
mixed.to_csv('/Users/Peter/Documents/atlas/RMA/moisture_plus_drought_2014.tsv', sep='\t', index=False)
Explanation: Let's add two columns, a total, and a ratio of moisture to drought.
End of explanation |
14,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: The latex representations of parameters are mostly used while plotting distributions... so let's just create a few dummy distributions so that we can see how they're labeled when plotting.
Step2: Default Representation
By default, whenever parameters themselves are referenced in plotting (like when calling b.plot_distribution_collection, a latex representation of the parameter name, along with the component or dataset, when applicable, is used.
Step3: Overriding Component Labels
By default, the component labels themselves are used within this latex representation. These labels can be changed internally with b.rename_component. However, sometimes it is convenient to use a different naming convention for the latex representation.
For example, let's say that we wanted to keep the python-labels as-is ('primary', 'secondary', and 'binary'), but use 'A', 'B', and 'AB' in the latex representations, respectively. These latex-representations are stored in the latex_repr parameters.
Step4: These are blank (empty string) by default, in which case the actual component labels are used while plotting.
Step5: If we set these, then the latex_repr parameters will take precedence over the component labels
Step6: Overriding Parameter Latex "Templates"
Internally each parameter has a "template" for how to represent its name in latex. Let's look at those attributes for the parameters we have been plotting here.
Step7: When plotting, the {component} portion of this string is replaced with latex_repr (if not empty) and otherwise the component label itself. Changing this template isn't technically supported (since there are no checks to make sure the string is valid), but if you insist, you can change the underlying string as follows | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
Explanation: Advanced: Parameter Latex Representation
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b = phoebe.default_binary()
b.add_distribution({'teff@primary': phoebe.gaussian_around(100),
'teff@secondary': phoebe.gaussian_around(150),
'requiv@primary': phoebe.uniform_around(0.2)})
Explanation: The latex representations of parameters are mostly used while plotting distributions... so let's just create a few dummy distributions so that we can see how they're labeled when plotting.
End of explanation
_ = b.plot_distribution_collection(show=True)
Explanation: Default Representation
By default, whenever parameters themselves are referenced in plotting (like when calling b.plot_distribution_collection), a latex representation of the parameter name, along with the component or dataset, when applicable, is used.
End of explanation
print(b.filter(qualifier='latex_repr'))
Explanation: Overriding Component Labels
By default, the component labels themselves are used within this latex representation. These labels can be changed internally with b.rename_component. However, sometimes it is convenient to use a different naming convention for the latex representation.
For example, let's say that we wanted to keep the python-labels as-is ('primary', 'secondary', and 'binary'), but use 'A', 'B', and 'AB' in the latex representations, respectively. These latex-representations are stored in the latex_repr parameters.
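If we actually wanted to change the labels themselves instead, a minimal sketch would be the following (left commented out so the labels used in the rest of this tutorial stay unchanged; it assumes PHOEBE's rename_component(old_label, new_label) signature):
# b.rename_component('primary', 'starA')
# b.rename_component('secondary', 'starB')
Here we leave the labels alone and only set the latex_repr parameters below.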
End of explanation
print(b.components)
Explanation: These are blank (empty string) by default, in which case the actual component labels are used while plotting.
End of explanation
b.set_value(qualifier='latex_repr', component='primary', value='A')
b.set_value(qualifier='latex_repr', component='secondary', value='B')
b.set_value(qualifier='latex_repr', component='binary', value='AB')
_ = b.plot_distribution_collection(show=True)
Explanation: If we set these, then the latex_repr parameters will take precedence over the component labels
End of explanation
b.get_parameter(qualifier='teff', component='primary', context='component')
print(b.get_parameter(qualifier='teff', component='primary', context='component').latexfmt)
Explanation: Overriding Parameter Latex "Templates"
Internally each parameter has a "template" for how to represent its name in latex. Let's look at those attributes for the parameters we have been plotting here.
End of explanation
b.get_parameter(qualifier='teff', component='primary', context='component')._latexfmt = 'T_{{ \mathrm{{ {component} }}}}'
_ = b.plot_distribution_collection(show=True)
Explanation: When plotting, the {component} portion of this string is replaced with latex_repr (if not empty) and otherwise the component label itself. Changing this template isn't technically supported (since there are no checks to make sure the string is valid), but if you insist, you can change the underlying string as follows:
End of explanation |
14,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This is the central location where all variables should be defined, and any relationships between them should be given. Having all definitions collected in one file is useful because other files can reference this one, so there is no need for duplication, and less room for mistakes. In particular, the relationships between variables are defined only here, so we don't need to reimplement those relationships. And if you do ever find a mistake, you only have to fix it in one place, then just re-run the other notebooks.
There are two main classes of variables
Step1: Fundamental variables
Only the most basic variables should be defined here.
Note that we will be using (quaternion) logarithmic rotors to describe the orientations of the spins, and the orientation and velocity of the binary itself. This allows us to reduce the number of constraints in the system, and only evolve the minimal number of equations. For example, the spins are constant, so only two degrees of freedom are needed. These can be expressed without ambiguities or singularities in the form of logarithmic rotors
Step2: Derived variables
Any variable that can be derived from the variables above should be put in this section.
These variables should probably be left in arbitrary form, unless a particular simplification is desired. The substitutions dictionary should map from the general names and their definitions in terms of basic variables. In numerical codes, their values can be calculated once per time step and then stored, so that the values do not have to be re-calculated every time they appear in an expression.
Various common combinations of the two masses
Step3: The system's vector basis is given by $(\hat{\ell}, \hat{n}, \hat{\lambda})$, and will be computed by the code in terms of the fundamental logarithmic rotors defined above. Here, we give all the substitutions that will be needed in the code.
Step4: Various spin components and combinations
Step5: Other functions of the angular velocity that find frequent use | Python Code:
# Make sure division of integers does not round to the nearest integer
from __future__ import division
import sys
sys.path.insert(0, '..') # Look for modules in directory above this one
# Make everything in python's symbolic math package available
from sympy import * # Make sure sympy functions are used in preference to numpy
import sympy # Make the sympy module itself available, for qualified sympy.* access
from sympy import Rational as frac # Rename for similarity to latex
from sympy import log as ln
# Print symbolic expressions nicely
init_printing()
# We'll use the numpy `array` object for vectors
from numpy import array, cross, dot
# We'll use a custom object to keep track of variables
from Utilities.PNObjects import PNCollection
PNVariables = PNCollection()
Explanation: Introduction
This is the central location where all variables should be defined, and any relationships between them should be given. Having all definitions collected in one file is useful because other files can reference this one, so there is no need for duplication, and less room for mistakes. In particular, the relationships between variables are defined only here, so we don't need to reimplement those relationships. And if you do ever find a mistake, you only have to fix it in one place, then just re-run the other notebooks.
There are two main classes of variables:
Fundamental variables
Derived variables
The distinction is only required for code output, to ensure that everything gets calculated correctly. The PN equations you write down and manipulate can be in terms of any of these variables.
The fundamental variables that go into PN equations are things like the mass, spins $\chi_1$, and $\chi_2$, orbital angular velocity $\hat{\ell}$, and unit separation vector $\hat{n}$. We also include the tidal-coupling parameters in this list. Also, note that only $M_1$ is included. This is because the total mass is always assumed to be 1, so $M_2 = 1-M_1$.
The derived variables are further distinguished by whether they will need to be recalculated at each time step or not. For example, though we define the spins fundamentally as $\chi_1$ and $\chi_2$, we can also define derived spins $S$ and $\Sigma$, which need to be recalculated if the system is precessing. On the other hand, the masses are constant and fundamentally defined by $M_1$, so $M_2$ and $\nu$ only need to be calculated from that information once.
Set up python
End of explanation
# Unit basis vectors
PNVariables.AddBasicConstants('xHat, yHat, zHat', datatype='Quaternions::Quaternion', commutative=False)
# Dimensionful quantities, just in case anybody uses them...
PNVariables.AddBasicConstants('G, c')
# Masses of objects 1 and 2.
PNVariables.AddBasicConstants('M1')
PNVariables.AddBasicConstants('M2')
# Angular speed of separation vector
PNVariables.AddBasicVariables('v')
# Initial spins expressed as spinors taking zHat onto those spins (assumed to have constant magnitudes)
PNVariables.AddBasicConstants('S_chi1', datatype='Quaternions::Quaternion', commutative=False)
PNVariables.AddBasicConstants('S_chi2', datatype='Quaternions::Quaternion', commutative=False)
# Dynamic spin directions
PNVariables.AddBasicVariables('rfrak_chi1_x, rfrak_chi1_y')
PNVariables.AddBasicVariables('rfrak_chi2_x, rfrak_chi2_y')
# Tidal deformabilities, in units where the total mass is 1
PNVariables.AddBasicConstants('lambda1, lambda2')
# Frame aligned to Orbital angular velocity vector and magnitude ("Newtonian" angular momentum)
PNVariables.AddBasicVariables('rfrak_frame_x, rfrak_frame_y, rfrak_frame_z')
Explanation: Fundamental variables
Only the most basic variables should be defined here.
Note that we will be using (quaternion) logarithmic rotors to describe the orientations of the spins, and the orientation and velocity of the binary itself. This allows us to reduce the number of constraints in the system, and only evolve the minimal number of equations. For example, the spins are constant, so only two degrees of freedom are needed. These can be expressed without ambiguities or singularities in the form of logarithmic rotors: $\mathfrak{r}_1 = \mathfrak{r}_{\chi_1 x} \hat{x} + \mathfrak{r}_{\chi_1 y} \hat{y}$, so that $\vec{\chi}_1 = S_{\chi_1}\, e^{\mathfrak{r}_1}\, \hat{z}\, e^{-\mathfrak{r}_1}\, \bar{S}_{\chi_1}$. Here, $S_{\chi_1}$ is a constant spinor with magnitude $\sqrt{\lvert \chi_1 \rvert}$ encoding the initial direction of the spin. This may look complicated, but it performs very well numerically.
We will still be able to write and manipulate the PN equations directly in terms of familiar quantities like $\vec{S}_1 \cdot \hat{\ell}$, etc., but the fundamental objects will be the rotors, which means that the substitutions made for code output will automatically be in terms of the rotors.
End of explanation
PNVariables.AddDerivedConstant('M', M1+M2)
PNVariables.AddDerivedConstant('delta', (M1-M2)/M)
PNVariables.AddDerivedConstant('nu', M1*M2/M**2)
PNVariables.AddDerivedConstant('nu__2', (M1*M2/M**2)**2)
PNVariables.AddDerivedConstant('nu__3', (M1*M2/M**2)**3)
PNVariables.AddDerivedConstant('q', M1/M2)
Explanation: Derived variables
Any variable that can be derived from the variables above should be put in this section.
These variables should probably be left in arbitrary form, unless a particular simplification is desired. The substitutions dictionary should map from the general names and their definitions in terms of basic variables. In numerical codes, their values can be calculated once per time step and then stored, so that the values do not have to be re-calculated every time they appear in an expression.
Various common combinations of the two masses:
End of explanation
# This rotor encodes all information about the frame
PNVariables.AddDerivedVariable('R', exp(rfrak_frame_x*xHat + rfrak_frame_y*yHat + rfrak_frame_z*zHat),
datatype='Quaternions::Quaternion', commutative=False)
# Unit separation vector between the compact objects
PNVariables.AddDerivedVariable('nHat', R*xHat*conjugate(R), datatype='Quaternions::Quaternion')
# Unit vector orthogonal to the other two; in the direction of velocity
PNVariables.AddDerivedVariable('lambdaHat', R*yHat*conjugate(R), datatype='Quaternions::Quaternion')
# Unit vector in direction of angular velocity
PNVariables.AddDerivedVariable('ellHat', R*zHat*conjugate(R), datatype='Quaternions::Quaternion')
# Components of the above
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('nHat_'+d, substitution_atoms=[nHat], substitution='nHat['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('lambdaHat_'+d, substitution_atoms=[lambdaHat], substitution='lambdaHat['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('ellHat_'+d, substitution_atoms=[ellHat], substitution='ellHat['+i+']')
Explanation: The system's vector basis is given by $(\hat{\ell}, \hat{n}, \hat{\lambda})$, and will be computed by the code in terms of the fundamental logarithmic rotors defined above. Here, we give all the substitutions that will be needed in the code.
End of explanation
# These rotors encode all information about the spin directions
PNVariables.AddDerivedVariable('R_S1', exp(rfrak_chi1_x*xHat + rfrak_chi1_y*yHat),
datatype='Quaternions::Quaternion', commutative=False)
PNVariables.AddDerivedVariable('R_S2', exp(rfrak_chi2_x*xHat + rfrak_chi2_y*yHat),
datatype='Quaternions::Quaternion', commutative=False)
# The spins are derived from rfrak_chi1_x, etc.
PNVariables.AddDerivedVariable('chiVec1', S_chi1*R_S1*zHat*conjugate(R_S1)*conjugate(S_chi1), datatype='Quaternions::Quaternion')
PNVariables.AddDerivedVariable('chiVec2', S_chi2*R_S2*zHat*conjugate(R_S2)*conjugate(S_chi2), datatype='Quaternions::Quaternion')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('chi1_'+d, substitution_atoms=[chiVec1], substitution='chiVec1['+i+']')
for i,d in zip(['1','2','3'],['x','y','z']):
PNVariables.AddDerivedVariable('chi2_'+d, substitution_atoms=[chiVec2], substitution='chiVec2['+i+']')
PNVariables.AddDerivedConstant('chi1chi1', substitution_atoms=[chiVec1], substitution='chiVec1.normsquared()')
PNVariables.AddDerivedConstant('chi1chi2', substitution_atoms=[chiVec1,chiVec2], substitution='chiVec1.dot(chiVec2)')
PNVariables.AddDerivedConstant('chi2chi2', substitution_atoms=[chiVec2], substitution='chiVec2.normsquared()')
PNVariables.AddDerivedVariable('chi1_n', substitution_atoms=[chiVec1,nHat], substitution='chiVec1.dot(nHat)')
PNVariables.AddDerivedVariable('chi1_lambda', substitution_atoms=[chiVec1,lambdaHat], substitution='chiVec1.dot(lambdaHat)')
PNVariables.AddDerivedVariable('chi1_ell', substitution_atoms=[chiVec1,ellHat], substitution='chiVec1.dot(ellHat)')
PNVariables.AddDerivedVariable('chi2_n', substitution_atoms=[chiVec2,nHat], substitution='chiVec2.dot(nHat)')
PNVariables.AddDerivedVariable('chi2_lambda', substitution_atoms=[chiVec2,lambdaHat], substitution='chiVec2.dot(lambdaHat)')
PNVariables.AddDerivedVariable('chi2_ell', substitution_atoms=[chiVec2,ellHat], substitution='chiVec2.dot(ellHat)')
PNVariables.AddDerivedConstant('sqrt1Mchi1chi1', sqrt(1-chi1chi1))
PNVariables.AddDerivedConstant('sqrt1Mchi2chi2', sqrt(1-chi2chi2))
PNVariables.AddDerivedVariable('S', chiVec1*M1**2 + chiVec2*M2**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S_ell', chi1_ell*M1**2 + chi2_ell*M2**2)
PNVariables.AddDerivedVariable('S_n', chi1_n*M1**2 + chi2_n*M2**2)
PNVariables.AddDerivedVariable('S_lambda', chi1_lambda*M1**2 + chi2_lambda*M2**2)
PNVariables.AddDerivedVariable('Sigma', M*(chiVec2*M2 - chiVec1*M1), datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('Sigma_ell', M*(chi2_ell*M2 - chi1_ell*M1))
PNVariables.AddDerivedVariable('Sigma_n', M*(chi2_n*M2 - chi1_n*M1))
PNVariables.AddDerivedVariable('Sigma_lambda', M*(chi2_lambda*M2 - chi1_lambda*M1))
PNVariables.AddDerivedVariable('S1', chiVec1*M1**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S1_ell', chi1_ell*M1**2)
PNVariables.AddDerivedVariable('S1_n', chi1_n*M1**2)
PNVariables.AddDerivedVariable('S1_lambda', chi1_lambda*M1**2)
PNVariables.AddDerivedVariable('S2', chiVec2*M2**2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('S2_ell', chi2_ell*M2**2)
PNVariables.AddDerivedVariable('S2_n', chi2_n*M2**2)
PNVariables.AddDerivedVariable('S2_lambda', chi2_lambda*M2**2)
PNVariables.AddDerivedVariable('chi_s', (chiVec1 + chiVec2)/2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('chi_s_ell', (chi1_ell+chi2_ell)/2)
PNVariables.AddDerivedVariable('chi_s_n', (chi1_n+chi2_n)/2)
PNVariables.AddDerivedVariable('chi_s_lambda', (chi1_lambda+chi2_lambda)/2)
PNVariables.AddDerivedVariable('chi_a', (chiVec1 - chiVec2)/2, datatype=chiVec1.datatype)
PNVariables.AddDerivedVariable('chi_a_ell', (chi1_ell-chi2_ell)/2)
PNVariables.AddDerivedVariable('chi_a_n', (chi1_n-chi2_n)/2)
PNVariables.AddDerivedVariable('chi_a_lambda', (chi1_lambda-chi2_lambda)/2)
Explanation: Various spin components and combinations:
End of explanation
PNVariables.AddDerivedVariable('x', v**2)
PNVariables.AddDerivedVariable('Omega_orb', v**3/M)
PNVariables.AddDerivedVariable('logv', log(v))
Explanation: Other functions of the angular velocity that find frequent use:
End of explanation |
14,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome
This notebook accompanies the Sunokisis Digital Classics common session on Named Entity Extraction, see https
Step1: And more precisely, we are using the following versions
Step2: Let's grab some text
To start with, we need some text from which we'll try to extract named entities using various methods and libraries.
There are several ways of doing this e.g.
Step3: With this information, we can query a CTS API and get some information about this text.
For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
Step4: But we can also query the same API and get back the text of a specific text section, for example the entire book 1.
To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
Step5: So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytains
Step6: At this point the passage is available in various formats
Step7: Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it
Step8: The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.
Or even imported as an iframe into this notebook!
Step9: Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I
Step10: Very simple baseline
Now let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.
Baseline method
Step11: Let's a havea look at the first 50 tokens that we just tagged
Step13: For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it
Step14: And now we can call it like this
Step16: We can modify slightly our function so that it prints the snippet of text where an entity is found
Step17: NER with CLTK
The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).
The current implementation (as of version 0.1.47) uses a lookup-based method.
For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
Step18: Let's have a look at the ouput, only the first 10 tokens (by using the list slicing notation)
Step19: The output looks slightly different from the one of our baseline function (the size of the tuples in the list varies).
But we can write a function to fix this, we call it reshape_cltk_output
Step20: We apply this function to CLTK's output
Step21: And the resulting output looks now ok
Step22: Now let's compare the two list of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously
Step23: But, as you can see, the two lists are not aligned.
This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,").
A solution to this is to pass to the tag_ner function the text already tokenised by whitespace.
Step24: NER with NLTK
Step25: Let's have a look at the output
Step26: Wrap up
At this point we can "compare" the output of the three different methods we used, again by using the zip function. | Python Code:
########
# NLTK #
########
import nltk
from nltk.tag import StanfordNERTagger
########
# CLTK #
########
import cltk
from cltk.tag.ner import tag_ner
##############
# MyCapytain #
##############
import MyCapytain
from MyCapytain.resolvers.cts.api import HttpCTSResolver
from MyCapytain.retrievers.cts5 import CTS
from MyCapytain.common.constants import Mimetypes
#################
# other imports #
#################
import sys
sys.path.append("/opt/nlp/pymodules/")
from idai_journals.nlp import sub_leaves
Explanation: Welcome
This notebook accompanies the Sunokisis Digital Classics common session on Named Entity Extraction, see https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I.
In this notebook we are going to experiment with three different methods for extracting named entities from a Latin text.
Library imports
External modules and libraries can be imported using import statements.
Let's import the Natural Language ToolKit (NLTK), the Classical Language ToolKit (CLTK), MyCapytain, and some local libraries that are used in this notebook.
End of explanation
print(nltk.__version__)
print(cltk.__version__)
print(MyCapytain.__version__)
Explanation: And more precisely, we are using the following versions:
End of explanation
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2"
Explanation: Let's grab some text
To start with, we need some text from which we'll try to extract named entities using various methods and libraries.
There are several ways of doing this e.g.:
1. copy and paste the text from Perseus or the Latin Library into a text document, and read it into a variable
2. load a text from one of the Latin corpora available via cltk (cfr. this blog post; a rough sketch of this option is shown right after this list)
3. or load it from Perseus by leveraging its Canonical Text Services API
Let's go for #3 :)
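For completeness, option 2 would look roughly like the sketch below (not used in this notebook; the import path and corpus name assume the pre-1.0 CLTK API matching the CLTK version imported above):
from cltk.corpus.utils.importer import CorpusImporter
corpus_importer = CorpusImporter('latin')
# downloads a local copy of the Latin Library corpus for offline use
corpus_importer.import_corpus('latin_text_latin_library')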
What's CTS?
CTS URNs stand for Canonical Text Service Uniform Resource Names.
You can think of a CTS URN like a social security number for texts (or parts of texts).
Here are some examples of CTS URNs with different levels of granularity:
- urn:cts:latinLit:phi0448 (Caesar)
- urn:cts:latinLit:phi0448.phi001 (Caesar's De Bello Gallico)
- urn:cts:latinLit:phi0448.phi001.perseus-lat2 DBG Latin edition
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1 DBG Latin edition, book 1
- urn:cts:latinLit:phi0448.phi001.perseus-lat2:1.1.1 DBG Latin edition, book 1, chapter 1, section 1
How do I find out the CTS URN of a given author or text? The Perseus Catalog is your friend! (cfr. e.g. http://catalog.perseus.org/catalog/urn:cts:latinLit:phi0448)
Querying a CTS API
The URN of the Latin edition of Caesar's De Bello Gallico is urn:cts:latinLit:phi0448.phi001.perseus-lat2.
End of explanation
# We set up a resolver which communicates with an API available in Leipzig
resolver = HttpCTSResolver(CTS("http://cts.dh.uni-leipzig.de/api/cts/"))
# We require some metadata information
textMetadata = resolver.getMetadata("urn:cts:latinLit:phi0448.phi001.perseus-lat2")
# Texts in CTS Metadata have one interesting property : its citation scheme.
# Citation are embedded objects that carries information about how a text can be quoted, what depth it has
print([citation.name for citation in textMetadata.citation])
Explanation: With this information, we can query a CTS API and get some information about this text.
For example, we can "discover" its canonical text structure, an essential information to be able to cite this text.
End of explanation
my_passage = "urn:cts:latinLit:phi0448.phi001.perseus-lat2:1"
Explanation: But we can also query the same API and get back the text of a specific text section, for example the entire book 1.
To do so, we need to append the indication of the reference scope (i.e. book 1) to the URN.
End of explanation
passage = resolver.getTextualNode(my_passage)
Explanation: So we retrieve the first book of the De Bello Gallico by passing its CTS URN (that we just stored in the variable my_passage) to the CTS API, via the resolver provided by MyCapytain:
End of explanation
de_bello_gallico_book1 = passage.export(Mimetypes.PLAINTEXT)
Explanation: At this point the passage is available in various formats: text, but also TEI XML, etc.
Thus, we need to specify that we are interested in getting the text only:
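For instance, assuming the installed MyCapytain version exposes the TEI mimetype, the very same passage could instead be exported as TEI XML:
# hedged example: export the same passage as TEI XML rather than plain text
# (Mimetypes.XML.TEI is assumed to be available in this MyCapytain version)
de_bello_gallico_book1_tei = passage.export(Mimetypes.XML.TEI)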
End of explanation
print(de_bello_gallico_book1)
Explanation: Let's check that the text is there by printing the content of the variable de_bello_gallico_book1 where we stored it:
End of explanation
from IPython.display import IFrame
IFrame('http://cts.dh.uni-leipzig.de/read/latinLit/phi0448/phi001/perseus-lat2/1', width=1000, height=350)
Explanation: The text that we have just fetched by using a programming interface (API) can also be viewed in the browser.
Or even imported as an iframe into this notebook!
End of explanation
len(de_bello_gallico_book1.split(" "))
Explanation: Let's see how many words (tokens, more properly) there are in Caesar's De Bello Gallico I:
End of explanation
"T".istitle()
"t".istitle()
# we need a list to store the tagged tokens
tagged_tokens = []
# tokenisation is done by using the string method `split(" ")`
# that splits a string upon white spaces
for n, token in enumerate(de_bello_gallico_book1.split(" ")):
if(token.istitle()):
tagged_tokens.append((token, "Entity"))
#else:
#tagged_tokens.append((token, "O"))
Explanation: Very simple baseline
Now let's write what in NLP jargon is called a baseline, that is a method for extracting named entities that can serve as a term of comparison to evaluate the accuracy of other methods.
Baseline method:
- cycle through each token of the text
- if the token starts with a capital letter it's a named entity (only one type, i.e. Entity)
End of explanation
tagged_tokens[:50]
Explanation: Let's have a look at the first 50 tokens that we just tagged:
End of explanation
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if token.istitle():
            tagged_tokens.append((token, "Entity"))
        #else:
        #    tagged_tokens.append((token, "O"))
    return tagged_tokens
Explanation: For convenience we can also wrap our baseline code into a function that we call extract_baseline. Let's define it:
End of explanation
tagged_tokens_baseline = extract_baseline(de_bello_gallico_book1)
tagged_tokens_baseline[-50:]
Explanation: And now we can call it like this:
End of explanation
def extract_baseline(input_text):
    """
    :param input_text: the text to tag (string)
    :return: a list of tuples, where tuple[0] is the token and tuple[1] is the named entity tag
    """
    # we need a list to store the tagged tokens
    tagged_tokens = []
    # tokenisation is done by using the string method `split(" ")`
    # that splits a string upon white spaces
    for n, token in enumerate(input_text.split(" ")):
        if token.istitle():
            tagged_tokens.append((token, "Entity"))
            context = input_text.split(" ")[n-5:n+5]
            print("Found entity \"%s\" in context \"%s\"" % (token, " ".join(context)))
        #else:
        #    tagged_tokens.append((token, "O"))
    return tagged_tokens
tagged_text_baseline = extract_baseline(de_bello_gallico_book1)
tagged_text_baseline[:150]
Explanation: We can modify slightly our function so that it prints the snippet of text where an entity is found:
End of explanation
%%time
tagged_text_cltk = tag_ner('latin', input_text=de_bello_gallico_book1)
Explanation: NER with CLTK
The CLTK library has some basic support for the extraction of named entities from Latin and Greek texts (see CLTK's documentation).
The current implementation (as of version 0.1.47) uses a lookup-based method.
For each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities:
- list of Latin proper nouns: https://github.com/cltk/latin_proper_names_cltk
- list of Greek proper nouns: https://github.com/cltk/greek_proper_names_cltk
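To make the lookup idea concrete, a toy version of such a tagger could look like the following (this is not CLTK's actual implementation, and the tiny name list is made up for illustration):
# toy illustration of a lookup-based tagger (not CLTK's real code)
PROPER_NAMES = {"Caesar", "Gallia", "Rhenus", "Helvetii"}
def toy_lookup_tagger(tokens):
    return [(token, "Entity") for token in tokens if token.strip(",.;") in PROPER_NAMES]
toy_lookup_tagger(de_bello_gallico_book1.split(" "))[:5]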
Let's run CLTK's tagger (it takes a moment):
End of explanation
tagged_text_cltk[:10]
Explanation: Let's have a look at the output, only the first 10 tokens (by using the list slicing notation):
End of explanation
def reshape_cltk_output(tagged_tokens):
reshaped_output = []
for tagged_token in tagged_tokens:
if(len(tagged_token)==1):
continue
#reshaped_output.append((tagged_token[0], "O"))
else:
reshaped_output.append((tagged_token[0], tagged_token[1]))
return reshaped_output
Explanation: The output looks slightly different from the one of our baseline function (the size of the tuples in the list varies).
But we can write a function to fix this, we call it reshape_cltk_output:
End of explanation
tagged_text_cltk = reshape_cltk_output(tagged_text_cltk)
Explanation: We apply this function to CLTK's output:
End of explanation
tagged_text_cltk[:20]
Explanation: And the resulting output looks now ok:
End of explanation
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
Explanation: Now let's compare the two lists of tagged tokens by using a python function called zip, which allows us to read multiple lists simultaneously:
End of explanation
tagged_text_cltk = reshape_cltk_output(tag_ner('latin', input_text=de_bello_gallico_book1.split(" ")))
list(zip(tagged_text_baseline[:20], tagged_text_cltk[:20]))
Explanation: But, as you can see, the two lists are not aligned.
This is due to how the CLTK function tokenises the text. The comma after "tres" becomes a token on its own, whereas when we tokenise by white space the comma is attached to "tres" (i.e. "tres,").
A solution to this is to pass to the tag_ner function the text already tokenised by whitespace.
End of explanation
stanford_model_italian = "/opt/nlp/stanford-tools/stanford-ner-2015-12-09/classifiers/ner-ita-nogpe-noiob_gaz_wikipedia_sloppy.ser.gz"
ner_tagger = StanfordNERTagger(stanford_model_italian)
tagged_text_nltk = ner_tagger.tag(de_bello_gallico_book1.split(" "))
Explanation: NER with NLTK
End of explanation
tagged_text_nltk[0:150]
Explanation: Let's have a look at the output
End of explanation
list(zip(tagged_text_baseline[:50], tagged_text_cltk[:50],tagged_text_nltk[:50]))
for baseline_out, cltk_out, nltk_out in zip(tagged_text_baseline[:150], tagged_text_cltk[:150], tagged_text_nltk[:150]):
print("Baseline: %s\nCLTK: %s\nNLTK: %s\n"%(baseline_out, cltk_out, nltk_out))
Explanation: Wrap up
At this point we can "compare" the output of the three different methods we used, again by using the zip function.
End of explanation |
14,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix generation
Init symbols for sympy
Step1: Lame params
Step2: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step3: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step4: Christoffel symbols
Step5: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
Step6: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
Step7: Physical coordinates
$u_i=u_{[i]} H_i$
Step8: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step9: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
Step10: Mass matrix | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
init_printing()
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
Explanation: Matrix generation
Init symbols for sympy
End of explanation
# h1 = Function("H1")
# h2 = Function("H2")
# h3 = Function("H3")
# H1 = h1(alpha1, alpha2, alpha3)
# H2 = h2(alpha1, alpha2, alpha3)
# H3 = h3(alpha1, alpha2, alpha3)
H1,H2,H3=symbols('H1,H2,H3')
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
for j in range(DIM):
dH[i,j]=Symbol('H_{{{},{}}}'.format(i+1,j+1))
dH
Explanation: Lame params
End of explanation
G_up = getMetricTensorUpLame(H1, H2, H3)
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
G_down = getMetricTensorDownLame(H1, H2, H3)
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
GK
Explanation: Christoffel symbols
End of explanation
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
B
Explanation: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
u=getUHat3D(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
E_NL
Explanation: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
End of explanation
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
row_index = i*3+j
B_P[row_index, row_index] = 1/(H[i]*H[j])
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
StrainL=simplify(E*Grad_U_P)
StrainL
%aimport geom_util
u=getUHat3D(alpha1, alpha2, alpha3)
gradup=Grad_U_P*u
E_NLp = E_NonLinear(gradup)*Grad_U_P
simplify(E_NLp)
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
simplify(StrainNL)
Explanation: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
L=zeros(12,12)
h=Symbol('h')
# p0=1/2-alpha3/h
# p1=1/2+alpha3/h
# p2=1-(2*alpha3/h)**2
P0=Function('p_0')
P1=Function('p_1')
P2=Function('p_2')
# p1=1/2+alpha3/h
# p2=1-(2*alpha3/h)**2
p0=P0(alpha3)
p1=P1(alpha3)
p2=P2(alpha3)
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
D_p_L = StrainL*L
simplify(D_p_L)
# note: `h` (defined as a Symbol above) is rebound to a plain number here, and the
# name `exp` shadows sympy.exp brought in by the star import
h = 0.5
exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8)
p02=integrate(exp, (alpha3, -h/2, h/2))
integral = expand(simplify(p02))
integral
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
Explanation: Mass matrix
End of explanation |
14,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TUTORIAL 04 - Graetz problem 2
Keywords
Step1: 3. Affine decomposition
In order to obtain an affine decomposition, we proceed as in the previous tutorial and recast the problem on a fixed, parameter independent, reference domain $\Omega$. As reference domain which choose the one characterized by $\mu_0 = 1$ which we generate through the generate_mesh notebook provided in the data folder.
As in the previous tutorial, we pull back the problem to the reference domain $\Omega$.
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_2.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the Graetz class
Step5: 4.4. Prepare reduction with a reduced basis method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: TUTORIAL 04 - Graetz problem 2
Keywords: successive constraints method
1. Introduction
This Tutorial addresses geometrical parametrization and the successive constraints method (SCM). In particular, we will solve the Graetz problem, which deals with forced heat convection in a channel $\Omega_o(\mu_0)$ divided into three parts $\Omega_o^1$, $\Omega_o^2(\mu_0)$ and $\Omega_o^3(\mu_0)$, as in the following picture:
<img src="data/graetz_2.png" width="70%"/>
Boundaries $\Gamma_{o, 1} \cup \Gamma_{o, 5} \cup \Gamma_{o, 6}$ are kept at low temperature (say, zero), while boundaries $\Gamma_{o, 2}(\mu_0) \cup \Gamma_{o, 4}(\mu_0)$ and $\Gamma_{o, 7}(\mu_0) \cup \Gamma_{o, 8}(\mu_0)$ are kept at high temperature (say, respectively $\mu_2$ and $\mu_3$). The convection is characterized by the velocity $\boldsymbol{\beta} = (x_1(1-x_1), 0)$, where $\boldsymbol{x}_o = (x_{o, 0}, x_1)$ is the coordinate vector on the parametrized domain $\Omega_o(\mu_0)$.
The problem is characterized by four parameters. The first parameter $\mu_0$ controls the shape of deformable subdomain $\Omega_2(\mu_0)$. The heat transfer between the domains can be taken into account by means of the Péclet number, which will be labeled as the parameter $\mu_1$. The ranges of the two first parameters are the following:
$$\mu_0 \in [0.1,10.0] \quad \text{and} \quad \mu_1 \in [0.01,10.0]$$
and the two additional heat parameters:
$$\mu_2 \in [0.5,1.5] \quad \text{and} \quad \mu_3 \in [0.5,1.5].$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0, \mu_1, \mu_2, \mu_3)
$$
on the parameter domain
$$
\mathbb{P}=[0.1,10.0]\times[0.01,10.0]\times[0.5,1.5]\times[0.5,1.5].
$$
In order to obtain a faster (yet, provably accurate) approximation of the problem, and avoiding any remeshing, we pursue a model reduction by means of a certified reduced basis reduced order method from a fixed reference domain.
The successive constraints method will be used to evaluate the stability factors.
2. Parametrized formulation
Let $u_o(\boldsymbol{\mu})$ be the temperature in the domain $\Omega_o(\mu_0)$.
We will directly provide a weak formulation for this problem
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u_o(\boldsymbol{\mu})\in\mathbb{V}_o(\boldsymbol{\mu})$ such that</center>
$$a_o\left(u_o(\boldsymbol{\mu}),v_o;\boldsymbol{\mu}\right)=f_o(v_o;\boldsymbol{\mu})\quad \forall v_o\in\mathbb{V}_o(\boldsymbol{\mu})$$
where
the function space $\mathbb{V}o(\boldsymbol{\mu})$ is defined as
$$
\mathbb{V}_o(\mu_0) = \left{ v \in H^1(\Omega_o(\mu_0)): v|{\Gamma_{o,1} \cup \Gamma_{o,5} \cup \Gamma_{o,6}} = 0, v|{\Gamma{o,2}(\mu_0) \cup \Gamma_{o,2}(\mu_0)} = 1 \right}
$$
Note that, as in the previous tutorial, the function space is parameter dependent due to the shape variation.
the parametrized bilinear form $a_o(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V}o(\boldsymbol{\mu}) \times \mathbb{V}_o(\boldsymbol{\mu}) \to \mathbb{R}$ is defined by
$$a_o(u_o,v_o;\boldsymbol{\mu}) = \mu_1\int{\Omega_o(\mu_0)} \nabla u_o \cdot \nabla v_o \ d\boldsymbol{x} + \int_{\Omega_o(\mu_0)} x_1(1-x_1) \partial_{x} u_o\ v_o \ d\boldsymbol{x},$$
the parametrized linear form $f_o(\cdot; \boldsymbol{\mu}): \mathbb{V}_o(\boldsymbol{\mu}) \to \mathbb{R}$ is defined by
$$f_o(v_o;\boldsymbol{\mu}) = 0.$$
The successive constraints method will be used to compute the stability factor of the bilinear form $a_o(\cdot, \cdot; \boldsymbol{\mu})$.
End of explanation
@SCM()
@PullBackFormsToReferenceDomain()
@ShapeParametrization(
("x[0]", "x[1]"), # subdomain 1
("mu[0]*(x[0] - 1) + 1", "x[1]"), # subdomain 2
)
class Graetz(EllipticCoerciveProblem):
# Default initialization of members
@generate_function_space_for_stability_factor
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.ds = Measure("ds")(subdomain_data=boundaries)
# Store the velocity expression
self.vel = Expression("x[1]*(1-x[1])", element=self.V.ufl_element())
# Customize eigen solver parameters
self._eigen_solver_parameters.update({
"bounding_box_minimum": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e-5, "linear_solver": "mumps"
},
"bounding_box_maximum": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e5, "linear_solver": "mumps"
},
"stability_factor": {
"problem_type": "gen_hermitian", "spectral_transform": "shift-and-invert",
"spectral_shift": 1.e-5, "linear_solver": "mumps"
}
})
# Return custom problem name
def name(self):
return "Graetz2"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_stability_factor
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = mu[1]
theta_a1 = 1.0
return (theta_a0, theta_a1)
elif term == "f":
theta_f0 = 1.0
return (theta_f0, )
elif term == "dirichlet_bc":
theta_bc0 = mu[2]
theta_bc1 = mu[3]
return (theta_bc0, theta_bc1)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_stability_factor
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
vel = self.vel
a0 = inner(grad(u), grad(v)) * dx
a1 = vel * u.dx(0) * v * dx
return (a0, a1)
elif term == "f":
f0 = Constant(0.0) * v * dx
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(1.0), self.boundaries, 2),
DirichletBC(self.V, Constant(0.0), self.boundaries, 3),
DirichletBC(self.V, Constant(0.0), self.boundaries, 5),
DirichletBC(self.V, Constant(1.0), self.boundaries, 6),
DirichletBC(self.V, Constant(0.0), self.boundaries, 7),
DirichletBC(self.V, Constant(0.0), self.boundaries, 8)]
bc1 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(0.0), self.boundaries, 2),
DirichletBC(self.V, Constant(1.0), self.boundaries, 3),
DirichletBC(self.V, Constant(1.0), self.boundaries, 5),
DirichletBC(self.V, Constant(0.0), self.boundaries, 6),
DirichletBC(self.V, Constant(0.0), self.boundaries, 7),
DirichletBC(self.V, Constant(0.0), self.boundaries, 8)]
return (bc0, bc1)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
Explanation: 3. Affine decomposition
In order to obtain an affine decomposition, we proceed as in the previous tutorial and recast the problem on a fixed, parameter independent, reference domain $\Omega$. As reference domain which choose the one characterized by $\mu_0 = 1$ which we generate through the generate_mesh notebook provided in the data folder.
As in the previous tutorial, we pull back the problem to the reference domain $\Omega$.
End of explanation
mesh = Mesh("data/graetz_2.xml")
subdomains = MeshFunction("size_t", mesh, "data/graetz_physical_region_2.xml")
boundaries = MeshFunction("size_t", mesh, "data/graetz_facet_region_2.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh_2.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = Graetz(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.1, 10.0), (0.01, 10.0), (0.5, 1.5), (0.5, 1.5)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the Graetz class
End of explanation
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(30, SCM=50)
reduction_method.set_tolerance(1e-5, SCM=1e-3)
Explanation: 4.4. Prepare reduction with a reduced basis method
End of explanation
lifting_mu = (1.0, 1.0, 1.0, 1.0)
problem.set_mu(lifting_mu)
reduction_method.initialize_training_set(500, SCM=250)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (10.0, 0.01, 1.0, 1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(100, SCM=100)
reduction_method.error_analysis(filename="error_analysis")
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis(filename="speedup_analysis")
Explanation: 4.8. Perform a speedup analysis
End of explanation |
14,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification Uncertainty Analysis in Bayesian Deep Learning with Dropout Variational Inference
Here is astroNN, please take a look if you are interested in astronomy or how neural network applied in astronomy
* Henry Leung - Astronomy student, University of Toronto - henrysky
* Project adviser
Step1: Train the neural network on MNIST training set
Step2: Test the neural network on random MNIST images
You can see from below that most test images are classified correctly, except the last one, where the model has a high uncertainty. As a human, you can indeed argue that this 5 is badly written and can be read as a 6 or even as a badly written 8.
Step3: Test the neural network on random MNIST images with 90 degree rotation
Since the neural network is trained on MNIST images without any data augmentation, rotated MNIST images should look 'alien' to it, and the neural network should give us a high uncertainty. And indeed the neural network tells us it is very uncertain about its predictions for the rotated images.
%matplotlib inline
%config InlineBackend.figure_format='retina'
from tensorflow.keras.datasets import mnist
from tensorflow.keras import utils
import numpy as np
import pylab as plt
from astroNN.models import MNIST_BCNN
Explanation: Classification Uncertainty Analysis in Bayesian Deep Learning with Dropout Variational Inference
Here is astroNN, please take a look if you are interested in astronomy or how neural networks are applied in astronomy
* Henry Leung - Astronomy student, University of Toronto - henrysky
* Project adviser: Jo Bovy - Professor, Department of Astronomy and Astrophysics, University of Toronto - jobovy
* Contact Henry: henrysky.leung [at] utoronto.ca
* This tutorial is created on 16/Mar/2018 with Keras 2.1.5, Tensorflow 1.6.0, Nvidia CuDNN 7.0 for CUDA 9.0 (Optional)
* Updated on 31/Jan/2020 with Tensorflow 2.1.0, Tensorflow Probability 0.9.0
* Updated again on 27/Jan/2020 with Tensorflow 2.4.0, Tensorflow Probability 0.12.0
<br>
For more resources on Bayesian Deep Learning with Dropout Variational Inference, please refer to README.md
First import everything we need
End of explanation
(x_train, y_train), (x_test, y_test) = mnist.load_data()
y_train = utils.to_categorical(y_train, 10)
y_train = y_train.astype(np.float32)
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
# Create an astroNN neural network instance and set the basic parameters
net = MNIST_BCNN()
net.task = 'classification'
net.max_epochs = 5 # Just use 5 epochs for quick result
# Train the neural network
net.train(x_train, y_train)
Explanation: Train the neural network on MNIST training set
End of explanation
test_idx = [1, 2, 3, 4, 5, 8]
pred, pred_std = net.test(x_test[test_idx])
for counter, i in enumerate(test_idx):
plt.figure(figsize=(3, 3), dpi=100)
plt.title(f'Predicted Digit {pred[counter]}, Real Answer: {y_test[i]:{1}} \n'
f'Total Uncertainty (Entropy): {(pred_std["total"][counter]):.{2}}')
plt.imshow(x_test[i])
plt.show()
plt.close('all')
plt.clf()
Explanation: Test the neural network on random MNIST images
You can see from below that most test images are classified correctly, except the last one, where the model has a high uncertainty. As a human, you can indeed argue that this 5 is badly written and can be read as a 6 or even as a badly written 8.
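For reference, the 'Total Uncertainty (Entropy)' printed in the titles above is the entropy of the Monte-Carlo-averaged class probabilities. An illustrative computation (this is not astroNN's internal code; mc_probs below is a hypothetical stand-in for the softmax outputs of the stochastic forward passes) would be:
# illustrative only: entropy of the mean predictive distribution from MC dropout
mc_probs = np.random.dirichlet(np.ones(10), size=100)  # stand-in for 100 stochastic passes over 10 classes
p_mean = np.mean(mc_probs, axis=0)
total_entropy = -np.sum(p_mean * np.log(p_mean + 1e-12))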
End of explanation
test_rot_idx = [9, 10, 11]
test_rot = x_test[test_rot_idx]
for counter, j in enumerate(test_rot):
test_rot[counter] = np.rot90(j)
pred_rot, pred_rot_std = net.test(test_rot)
for counter, i in enumerate(test_rot_idx):
plt.figure(figsize=(3, 3), dpi=100)
plt.title(f'Predicted Digit {pred_rot[counter]}, Real Answer: {y_test[i]:{1}} \n'
f'Total Uncertainty (Entropy): {(pred_rot_std["total"][counter]):.{2}}')
plt.imshow(test_rot[counter])
plt.show()
plt.close('all')
plt.clf()
Explanation: Test the neural network on random MNIST images with 90 degree rotation
Since the neural network is trained on MNIST images without any data augmentation, rotated MNIST images should look 'alien' to the neural network, and it should give us a high uncertainty. And indeed the neural network tells us it is very uncertain about the predictions for the rotated images.
End of explanation |
14,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis on Movie Reviews using LSTM RNN Model
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
Step1: Load and Read Datasets
Step2: Preprocessing Data
Step3: Padding Sequences
Step4: Training LSTM RNN Model
Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture that remembers values over arbitrary intervals. Stored values are not modified as learning proceeds. RNNs allow forward and backward connections between neurons.
An LSTM network contains LSTM units instead of, or in addition to, other network units. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components. Thus, the stored value is not iteratively modified and the gradient does not tend to vanish when trained with backpropagation through time.
keras.layers.recurrent.LSTM(units, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0)
units
Step5: Using Convolutional Neural Network (CNN) + LSTM
We add a one-dimensional CNN Conv1D() and a max pooling layer MaxPooling1D() after the Embedding layer which then feed the features to the LSTM. We use a set of 32 features with filter length of 3. The pooling layer has the standard length of 2 to halve the feature map size.
Step6: Creating Submission | Python Code:
import numpy as np
import pandas as pd
from gensim import corpora
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import SnowballStemmer
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Embedding
from keras.layers import LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
Explanation: Sentiment Analysis on Movie Reviews using LSTM RNN Model
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
End of explanation
train = pd.read_csv('train.tsv', sep='\t', header=0)
test = pd.read_csv('test.tsv', sep='\t', header=0)
train.shape, test.shape
train.head()
test.head()
raw_docs_train = train['Phrase'].values
raw_docs_test = test['Phrase'].values
sentiment_train = train['Sentiment'].values
num_labels = len(np.unique(sentiment_train))
np.unique(sentiment_train)
Explanation: Load and Read Datasets
End of explanation
stop_words = set(stopwords.words('english'))
print (stop_words)
stop_words.update(['.', ',', '"', "'", ':', ';', '(', ')', '[', ']', '{', '}'])
print (stop_words)
stemmer = SnowballStemmer('english')
print "pre-processing train docs..."
processed_docs_train = []
for index, doc in enumerate(raw_docs_train):
tokens = word_tokenize(doc)
filtered = [word for word in tokens if word not in stop_words]
stemmed = [stemmer.stem(word) for word in filtered]
processed_docs_train.append(stemmed)
if index == 0:
print ('\n')
print (doc)
print ('\n')
print (tokens)
print ('\n')
print (filtered)
print ('\n')
print (stemmed)
print "pre-processing test docs..."
processed_docs_test = []
for doc in raw_docs_test:
tokens = word_tokenize(doc)
filtered = [word for word in tokens if word not in stop_words]
stemmed = [stemmer.stem(word) for word in filtered]
processed_docs_test.append(stemmed)
processed_docs_all = np.concatenate((processed_docs_train, processed_docs_test), axis=0)
dictionary = corpora.Dictionary(processed_docs_all)
dictionary_size = len(dictionary.keys())
print "dictionary size: ", dictionary_size
dictionary[0], dictionary[14]
print "converting to token ids..."
word_id_train, word_id_len = [], []
for index,doc in enumerate(processed_docs_train):
word_ids = [dictionary.token2id[word] for word in doc]
word_id_train.append(word_ids)
word_id_len.append(len(word_ids))
if index == 0:
print (doc)
print (word_ids)
print (word_id_train)
print (word_id_len)
word_id_test, word_ids = [], []
for doc in processed_docs_test:
word_ids = [dictionary.token2id[word] for word in doc]
word_id_test.append(word_ids)
word_id_len.append(len(word_ids))
seq_len = np.round((np.mean(word_id_len) + 2*np.std(word_id_len))).astype(int)
print (np.mean(word_id_len))
print (np.std(word_id_len))
print (seq_len)
Explanation: Preprocessing Data
End of explanation
#pad sequences
word_id_train = sequence.pad_sequences(np.array(word_id_train), maxlen=seq_len)
word_id_test = sequence.pad_sequences(np.array(word_id_test), maxlen=seq_len)
y_train_enc = np_utils.to_categorical(sentiment_train, num_labels)
print (word_id_train)
print (y_train_enc)
Explanation: Padding Sequences
End of explanation
#LSTM
print "fitting LSTM ..."
model = Sequential()
model.add(Embedding(dictionary_size, 128))
model.add(Dropout(0.2))
model.add(LSTM(128))
model.add(Dropout(0.2))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(word_id_train, y_train_enc, epochs=3, batch_size=256, verbose=1)
Explanation: Training LSTM RNN Model
Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture that remembers values over arbitrary intervals. Stored values are not modified as learning proceeds. RNNs allow forward and backward connections between neurons.
An LSTM network contains LSTM units instead of, or in addition to, other network units. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components. Thus, the stored value is not iteratively modified and the gradient does not tend to vanish when trained with backpropagation through time.
keras.layers.recurrent.LSTM(units, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0)
units: Positive integer, dimensionality of the output space.
dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
Source: https://keras.io/layers/recurrent/#lstm
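For example, the LSTM layers built later in this notebook only need a handful of these arguments; in isolation such a layer could be created as follows (a minimal sketch):
from keras.layers import LSTM
lstm_layer = LSTM(128, dropout=0.2, recurrent_dropout=0.2)  # 128 units, 20% input and recurrent dropout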
Embedding Layer
Embedding Layer is used to:
One-hot encoded vectors are high-dimensional and sparse. Let's assume that we are doing Natural Language Processing (NLP) and have a dictionary of 2000 words. This means that, when using one-hot encoding, each word will be represented by a vector containing 2000 integers. And 1999 of these integers are zeros. In a big dataset this approach is not computationally efficient.
The vectors of each embedding get updated while training the neural network. This allows us to visualize relationships between words, but also between everything that can be turned into a vector through an embedding layer.
keras.layers.embeddings.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None)
Turns positive integers (indexes) into dense vectors of fixed size. eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
This layer can only be used as the first layer in a model.
Source: https://keras.io/layers/embeddings/
Example:
model.add(Embedding(1000, 64, input_length=10))
In the above example code, the model will take as input an integer matrix of size (batch, input_length). The largest integer (i.e. word index) in the input should be no larger than 999 (vocabulary size).
Now model.output_shape == (None, 10, 64), where None is the batch dimension.
End of explanation
#LSTM
print "fitting LSTM ..."
model = Sequential()
model.add(Embedding(dictionary_size, 128))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
# sigmoid activation for binary classification
# softmax activation for multi-class classification
model.add(Dense(num_labels, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(word_id_train, y_train_enc, epochs=3, batch_size=256, verbose=1)
Explanation: Using Convolutional Neural Network (CNN) + LSTM
We add a one-dimensional CNN Conv1D() and a max pooling layer MaxPooling1D() after the Embedding layer which then feed the features to the LSTM. We use a set of 32 features with filter length of 3. The pooling layer has the standard length of 2 to halve the feature map size.
End of explanation
test_pred = model.predict_classes(word_id_test)
test_pred
#make a submission
test['Sentiment'] = test_pred.reshape(-1,1)
header = ['PhraseId', 'Sentiment']
test.to_csv('./submission_lstm_cnn.csv', columns=header, index=False, header=True)
Explanation: Creating Submission
End of explanation |
14,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Uses Of “%%script” Magic
The %%script cell magic allows the invocation of any number of external languages without the need for installing any custom Jupyter kernels. The drawback is that there is no context maintained between one cell invocation and the next. Another drawback is that the notebook interface insists on syntax-colouring the cell contents as though it were Python code.
The syntax is
%%script commandline
Simple example
Step1: Use with SBCL
Step2: You probably want to turn off the startup banner message, and the “*” characters from the default prompting for input by the REPL. The --script option does this, but it also turns off automatic output of expression values
Step3: This may be OK for running functional scripts, but it is not so convenient for simple experimentation.
Alternatively, the --noinform option turns off the startup banner. You can then change the prompt to the empty string by setting the SBCL-specific sb-aclrepl
Step4: Just to make clear the fact that function names and variable names exist in separate name spaces
Step5: Note that access to command-line arguments is not a standard feature of Common Lisp. SBCL provides an extension to do this
Step6: My example of computing the first n terms of a Fibonacci series
Step7: Jake’s example of lexical binding in action
Step8: A more elaborate %%script example | Python Code:
%%script bash
echo 'hi there!'
Explanation: Example Uses Of “%%script” Magic
The %%script cell magic allows the invocation of any number of external languages without the need for installing any custom Jupyter kernels. The drawback is that there is no context maintained between one cell invocation and the next. Another drawback is that the notebook interface insists on syntax-colouring the cell contents as though it were Python code.
The syntax is
%%script commandline
Simple example: bash
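Any interpreter available on the PATH can be driven the same way; for example, assuming a python3 executable is installed, the following cell would run its body in a separate Python process:
%%script python3
print("hello from a python3 subprocess")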
End of explanation
%%script sbcl
(+ 2 2)
Explanation: Use with SBCL:
End of explanation
%%script sbcl --script
(+ 2 2) ; produces no output
(princ (+ 2 2)) ; need to do explicit output
Explanation: You probably want to turn off the startup banner message, and the “*” characters from the default prompting for input by the REPL. The --script option does this, but it also turns off automatic output of expression values:
End of explanation
%%script sbcl --noinform --eval "(require 'sb-aclrepl)" --eval "(setq sb-aclrepl:*prompt* \"\")"
(+ 2 2)
Explanation: This may be OK for running functional scripts, but it is not so convenient for simple experimentation.
Alternatively, the --noinform option turns off the startup banner. You can then change the prompt to the empty string by setting the SBCL-specific sb-aclrepl:*prompt* variable. To access this, you need to require the sb-aclrepl package. To avoid spurious output, this can all be done with --eval options on the command line:
End of explanation
%%script sbcl --script
(setf + 2)
(princ (+ + +))
Explanation: Just to make clear the fact that function names and variable names exist in separate name spaces: in the following example, “+” is used as the name of a variable and to refer to the built-in addition function.
End of explanation
%%script sbcl --script /dev/stdin the quick brown fox
(print sb-ext:*posix-argv*)
Explanation: Note that access to command-line arguments is not a standard feature of Common Lisp. SBCL provides an extension to do this:
End of explanation
%%script sbcl --script
(defvar n 15) ; change as required
(do
(
(a 1 (+ a b))
(b 1 a)
(i 0 (+ i 1))
)
((>= i n))
(format *standard-output* "~A~%" a)
) ; do
Explanation: My example of computing the first n terms of a Fibonacci series:
End of explanation
%%script sbcl --noinform --eval "(require 'sb-aclrepl)" --eval "(setq sb-aclrepl:*prompt* \"\")"
(setf (symbol-function 'counter)
(let ((count 0))
(lambda ()
(if (> count 5)
(setf count 0)
(incf count)
) ; if
) ; lambda
) ; let
)
(counter)
(counter)
(counter)
Explanation: Jake’s example of lexical binding in action:
End of explanation
%%script bash
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
cat >test.c <<EOD
#include <stdio.h>
int main(int argc, char **argv)
{
fputs("hello, world!\n", stdout);
} /*main*/
EOD
gcc -o test test.c
./test
cd
rm -rfv "$WORKDIR"
Explanation: A more elaborate %%script example: compiling and running an entire C program in a cell!
End of explanation |
14,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One way of running programs in python is by executing a script, with run <script.py> in python or python <script.py> in terminal.
What if you realize that something in the script is wrong after you have executed the file, or for whatever reason you want to interrupt the program?
You can use ctrl+c to abort the program which, essentially, is throwing an "exception" -- KeyboardInterrupt exception. We will briefly talk about Exception later in this notebook.
If you are writing some new code (in a python script) and you are unsure whether or not it will work, instead of doing run <script.py> and then manually interrupting your code with ctrl+c, there are other more elegant ways. In this notebook, we will go over some ways of debugging in python.
1) basic level
Step1: As we know, 5/10 - 10 = -9.5 and not -10, so something must be wrong inside the function. In this simple example, it may be super obvious that we are dividing an integer with an integer, and will get back an integer. (Division between integers is defined as returning the integer part of the result, throwing away the remainder. The same division operator does real division when operating with floats, very confusing, right?).
But this is good enough to show why simple print statements will suffice in some cases.
Ok, assuming that we didn't know what the problem was, we will use print to find out what went wrong.
Step2: From this, it's clear that a/b is the problem, since a/b returns 0 instead of 0.5. And you can quickly go and fix that step, for example by using float(b), or by multiplying by 1. (the dot makes it a float).
But using print statements may be inconvenient if the code takes a long time to run, and also you may want to check the values of other variables to diagnose the problem. If you use ctrl+c at this point, you will lose the values stored inside all variables. Perhaps you would want to go back and put another print statement inside the code and run it again to check another variable. But this goes on... and you may have to go back many times!
Alternatively, you can use the pdb module, which is an interactive source debugger. The variables are preserved at the breakpoint, and you can interactively step through each line of your code.
(see more at https
Step3: After you've enabled pdb, type help to show available commands. Some commands are e.g. step, quit, restart.
If you have set pdb on before an exception is triggered, (I)python can call the interactive pdb debugger after the traceback printout.
If you want to activate the debugger AFTER an exception is caught/fired, without having to rerun your code, you can use the %debug magic (or debug in ipython).
If you are running some python scripts, where instead of running code line by line you want to run a large chunk of code before checking the variables or stepping through the code line-by-line, it's useful to use import pdb; pdb.set_trace().
Go to ipython or terminal and execute pdb1.py to see how it is used in practice inside python scripts.
If you know where you want to exit the code a priori, you can use sys.exit().
Step4: Catching Errors
Some common types of errors
Step5: But sometimes you may want to use if... else instead of try...except.
If the program knows how to fall back to a default, that's not an unexpected event
Exceptions should only be used to handle exceptional cases
e.g. something requiring users' attention
Conditions
Booleans are equivalent to 0 (False) and 1 (True) inside python | Python Code:
# Let's first define a broken function
def blah(a, b):
c = 10
return a/b - c
# call the function
# define some variables to pass to the function
aa = 5
bb = 10
print blah(aa, bb) # call the function
Explanation: One way of running programs in python is by executing a script, with run <script.py> in python or python <script.py> in terminal.
What if you realize that something in the script is wrong after you have executed the file, or for whatever reason you want to interrupt the program?
You can use ctrl+c to abort the program which, essentially, is throwing an "exception" -- KeyboardInterrupt exception. We will briefly talk about Exception later in this notebook.
If you are writing some new code (in a python script) and you are unsure whether or not it will work, instead of doing run <script.py> and then manually interrupting your code with ctrl+c, there are other more elegant ways. In this notebook, we will go over some ways of debugging in python.
1) basic level: using print statements
End of explanation
def blah(a, b):
c = 10
print "a: ", a
print "b: ", b
print "c: ", c
print "a/b = %d/%d = %f" %(a,b,a/b)
print "output:", a/b - c
return a/b - c
blah(aa, bb)
Explanation: As we know, 5/10 - 10 = -9.5 and not -10, so something must be wrong inside the function. In this simple example, it may be super obvious that we are dividing an integer with an integer, and will get back an integer. (Division between integers is defined as returning the integer part of the result, throwing away the remainder. The same division operator does real division when operating with floats, very confusing, right?).
But this is good enough to show why simple print statements will suffice in some cases.
Ok, assuming that we didn't know what the problem was, we will use print to find out what went wrong.
End of explanation
%pdb 0
% pdb off
% pdb on
%pdb
%pdb
Explanation: From this, it's clear that a/b is the problem, since a/b returns 0 instead of 0.5. And you can quickly go and fix that step, for example by using float(b), or by multiplying by 1. (the dot makes it a float).
But using print statements may be inconvenient if the code takes a long time to run, and also you may want to check the values of other variables to diagnose the problem. If you use ctrl+c at this point, you will lose the values stored inside all variables. Perhaps you would want to go back and put another print statement inside the code and run it again to check another variable. But this goes on... and you may have to go back many times!
Alternatively, you can use the pdb module, which is an interactive source debugger. The variables are preserved at the breakpoint, and you can interactively step through each line of your code.
(see more at https://pymotw.com/2/pdb/)
To use, you can enable pdb either before an Exception is caught, or after. In Jupyter notebook, pdb can be enabled with the magic command %pdb, %pdb on, %pdb 1, disabled with %pdb, %pdb off or %pdb 0.
End of explanation
import sys
a = [1,2,3]
print a
sys.exit()
b = 'hahaha'
print b
Explanation: After you've enabled pdb, type help to show available commands. Some commands are e.g. step, quit, restart.
If you have set pdb on before an exception is triggered, (I)python can call the interactive pdb debugger after the traceback printout.
If you want to activate the debugger AFTER an exception is caught/fired, without having to rerun your code, you can use the %debug magic (or debug in ipython).
If you are running some python scripts, where instead of running code line by line you want to run a large chunk of code before checking the variables or stepping through the code line-by-line, it's useful to use import pdb; pdb.set_trace().
Go to ipython or terminal and execute pdb1.py to see how it is used in practice inside python scripts.
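pdb1.py itself is not reproduced here, but a minimal script following the same pattern might look like the sketch below (the function mirrors blah() from above, and the breakpoint placement is only an illustration):
import pdb
def blah(a, b):
    c = 10
    pdb.set_trace()  # execution pauses here; inspect a, b and c, then type 'c' to continue
    return a/b - c
print(blah(5, 10))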
If you know where you want to exit the code a priori, you can use sys.exit().
End of explanation
# Way to handle errors inside scripts
try:
# what we want the code to do
except: # when the above lines generate errors, will immediately jump to exception handler here, not finishing all the lines in try
# Do something else
# Some Example usage of try…except:
# use default behavior if encounter IOError
try:
import astropy
except ImportError:
print("Astropy not installed...")
# Slightly more complex:
# Try, raise, except, else, finally
try:
print ('blah')
raise ValueError() # throws an error
except ValueError, Err: # only catches ValueError exceptions
print ("We caught an error! ")
else:
print ("here, if it didn't go through except...no errors are caught")
finally:
print ("literally, finally... Useful for cleaning files, or closing files.")
# If we didn't have an error...
#
try:
print ('blah')
# raise ValueError() # throws an error
except ValueError, Err: # only catches ValueError exceptions
print ("We caught an error! ")
else:
print ("here, if it didn't go through except... no errors are caught")
finally:
print ("literally, finally... Useful for cleaning files, or closing files.")
Explanation: Catching Errors
Some common types of errors:
NameError:
undefined variables
Logic error:
harder to debug
usually associated with the equation missing something
IOError
TypeError
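As a quick illustration with the first error type from this list, a NameError raised by an undefined variable can be caught like any other exception (a minimal sketch):
try:
    print(undefined_variable)
except NameError as err:
    print("We caught a NameError: %s" % err)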
End of explanation
import numpy as np
mask = [True, True, False]
print np.sum(mask) # same as counting number where mask == True
debug = False
if debug:
print "..."
debug = True
if debug:
print "..."
# define a number
x = 33
# print it if it is greater than 30 but smaller than 50
if x > 30 and x < 50:
print x
# print if number not np.nan
if not np.isnan(x):
print x
# Introducing numpy.where()
import numpy as np
np.where?
# Example 1
a = [0.1, 1, 3, 10, 100]
a = np.array(a) # so we can use np.where
# one way..
conditionIdx = ((a<=10) & (a>=1))
print conditionIdx # boolean
new = a[conditionIdx]
# or directly
new_a = a[((a <= 10) & (a>=1))]
# you can also use np.where
new_a = a[np.where((a <= 10) & (a>=1))]
# Example 2 -- replacement using np.where
beam_ga = np.where(tz > 0, img[tx, ty, tz], 0)
# np.where(if condition is TRUE, then TRUE operation, else)
# Here, to mask out beam value for z<0
Explanation: But sometimes you may want to use if... else instead of try...except.
If the program knows how to fall back to a default, that's not an unexpected event
Exceptions should only be used to handle exceptional cases
e.g. something requiring users' attention
Conditions
Booleans are equivalent to 0 (False) and 1 (True) inside python
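A quick illustration of that equivalence (plain Python, nothing NumPy-specific):
print(True + True)              # 2, because True behaves as the integer 1
print(int(False))               # 0
print(sum([True, True, False])) # 2, the same idea as np.sum(mask) above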
End of explanation |
14,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning
LA Team Submission ##
Lukas Mosser, Alfredo De la Fuente
In this python notebook we explore many different machine learning algorithms to outperform the prediction model proposed in the prediction of facies from well logs challenge. Particularly, this is a classification problem that belongs to the area of supervised learning. Our approach involves a series of well-known machine learning and statistical algorithms to minimize the defined prediction error functions.
We will organize our present work in the following areas of analysis
Step1: We load the training data to start the exploration stage.
Step2: We declare the fields "Formation" and "Well Name" as categorical variables and then map them to integer values.
Step3: Observation
We could remove the NaN values from the PE variable for further analysis, going from 4149 total rows to just 3232 rows with valid PE values. However, because we will be using the XGBoost algorithm, which handles missing data, this won't be necessary.
Step4: We must realize the classification problem is highly imbalanced, therefore making it more challenging to approach.
Step5: We produce correlation plot to observe relationship between variables. The target variable 'Facies' is highly correlated to 'NM_M', 'PE' and 'ILD_log10'.
Step6: Data Analysis
XGboost will be our weapon of choice. XGBoost (or Extreme Gradient Boosting) is a sophisticated algorithm that corresponds to an advanced implementation of the gradient boosting algorithm. Despite its complexity, it is fairly easy to use and very powerful in dealing with many different types of irregularities in data. It has been no surprise, then, that it has been extensively applied in many machine learning competitions with very promising results.
For the above reasons, to implement the XGBoost algorithm for our classification problem, we will use the above preprocessed data without modifications.
Step7: In order to evaluate our classification model accuracy we will use the following defined metrics, based on the confusion matrix once the classification is performed. The first metric only considers misclassification error and the second one takes into account the fact that facies could be misclassified if they belong to a same group with similar geological characteristics.
Step8: Although XGBoost is a very straightforward algorithm to implement, its difficulty arises in dealing with a large number of hyperparameters. For that reason, we need to develop routines to tune these parameters to optimize the algorithm's performance on the data prediction.
We will use a Cross-Validation approach to improve our model by tuning parameters at each step. There are three types of parameters to consider
Step9: By performing cross-validation routines we can use the produced results to measure how well the model generalizes and at the same time tune the hyperparameters. In this case, we will explore how the results look if we vary the learning rate.
Step10: Results
We will see the performance of the model by taking the average accuracy and adjacency accuracy in Leaving Out One Well Cross Validation routine.
Step11: We obtain an Average F1 Score of 0.528226 from fitting the model while leaving out one well
We import the testing data and apply our trained model to make a prediction. | Python Code:
import xgboost as xgb
print xgb.__version__
%matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
from pandas import set_option
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
from sklearn import preprocessing
from mpl_toolkits.axes_grid1 import make_axes_locatable
from sklearn.cross_validation import KFold, train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.learning_curve import learning_curve
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.ensemble import RandomForestClassifier
from classification_utilities import display_cm, display_adj_cm
Explanation: Facies classification using Machine Learning
LA Team Submission ##
Lukas Mosser, Alfredo De la Fuente
In this python notebook we explore many different machine learning algorithms to outperform the prediction model proposed in the prediction of facies from well logs challenge. Particularly, this is a classification problem that belongs to the area of supervised learning. Our approach involves a series of well-known machine learning and statistical algorithms to minimize the defined prediction error functions.
We will organize our present work in the following areas of analysis:
- Problem Modeling
- Data Preprocessing
- Data Analysis
- Results
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Data Preprocessing
Let's import all the libraries that will be particularly needed for the analysis.
IMPORTANT: We need to install the xgboost package to run this notebook ( %sh pip install xgboost )
End of explanation
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data
Explanation: We load the training data to start the exploration stage.
End of explanation
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
Explanation: We declare the fields "Formation" and "Well Name" as categorical variables and then map them to integer values.
End of explanation
#PE_mask = training_data['PE'].notnull().values
#training_data = training_data[PE_mask]
Explanation: Observation
We could remove the NaN values from the PE variable for further analysis, going from 4149 total rows to just 3232 rows with valid PE values. However, because we will be using the XGBoost algorithm, which handles missing data, this won't be necessary.
End of explanation
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
Explanation: We must realize the classification problem is highly imbalanced, therefore making it more challenging to approach.
End of explanation
plt.figure(figsize=(10, 10))
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
training_data.describe()
Explanation: We produce correlation plot to observe relationship between variables. The target variable 'Facies' is highly correlated to 'NM_M', 'PE' and 'ILD_log10'.
End of explanation
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
type(Y_train)
Explanation: Data Analysis
XGboost will be our weapon of choice. XGBoost (or Extreme Gradient Boosting) is a sophisticated algorithm that corresponds to an advanced implementation of the gradient boosting algorithm. Despite its complexity, it is fairly easy to use and very powerful in dealing with many different types of irregularities in data. It has been no surprise, then, that it has been extensively applied in many machine learning competitions with very promising results.
For the above reasons, to implement the XGBoost algorithm for our classification problem, we will use the above preprocessed data without modifications.
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
Explanation: In order to evaluate our classification model accuracy we will use the following defined metrics, based on the confusion matrix once the classification is performed. The first metric only considers misclassification error and the second one takes into account the fact that facies could be misclassified if they belong to a same group with similar geological characteristics.
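As a quick sanity check of the first metric on a tiny illustrative 2x2 confusion matrix (not facies data):
toy_conf = np.array([[8, 2],
                     [1, 9]])
print(accuracy(toy_conf))  # (8 + 9) / 20 = 0.85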
End of explanation
# Cross Validation parameters
cv_folds = 10
rounds = 100
# Proposed Initial Model
xgb1 = xgb.XGBClassifier( learning_rate =0.01, n_estimators=500, max_depth=6,
min_child_weight=1, gamma=0, subsample=0.8,
colsample_bytree=0.8, objective='multi:softmax',
nthread=4, scale_pos_weight=1, seed=27)
xgb_param_1 = xgb1.get_xgb_params()
xgb_param_1['num_class'] = 9
# Perform cross-validation
cvresult = xgb.cv(xgb_param_1, dtrain, num_boost_round=xgb_param_1['n_estimators'],
stratified = True, nfold=cv_folds, metrics='merror', early_stopping_rounds=rounds)
print "\nCross Validation Training Report Summary"
print cvresult.tail()
xgb1.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
xgb1.fit(X_train, Y_train,eval_metric='merror')
#Predict training set:
predictions = xgb1.predict(X_train)
#Print model report
# Confusion Matrix
conf = confusion_matrix(Y_train, predictions )
# Print Results
print "\nModel Report"
print "-Accuracy: %.6f" % ( accuracy(conf) )
print "-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) )
print "\nConfusion Matrix"
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
# Print Feature Importance
feat_imp = pd.Series(xgb1.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
Explanation: Although XGBoost is a very straightforward algorithm to implement, its difficulty arises in dealing with a large number of hyperparameters. For that reason, we need to develop routines to tune these parameters to optimize the algorithm's performance on the data prediction.
We will use a Cross-Validation approach to improve our model by tuning parameters at each step. There are three types of parameters to consider:
General Parameters: Guide the overall functioning
Booster Parameters: Guide the individual booster (tree/regression) at each step
Learning Task Parameters: Guide the optimization performed
We must realize that our training data is quite reduced, therefore to evaluate our model generalization
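For instance, a few of the parameters already used in this notebook can be grouped along those three lines (purely illustrative, not an exhaustive list):
general_params = {'nthread': 4}                        # general: overall functioning
booster_params = {'max_depth': 6, 'subsample': 0.8}    # booster: each individual tree
learning_task_params = {'objective': 'multi:softmax'}  # learning task: optimization target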
End of explanation
print("Parameter optimization")
grid_search1 = GridSearchCV(xgb1,{'learning_rate':[0.001,0.01,0.05] , 'n_estimators':[100,200,500]},
scoring='accuracy' , n_jobs = 4)
grid_search1.fit(X_train,Y_train)
print("Best Set of Parameters")
grid_search1.grid_scores_, grid_search1.best_params_, grid_search1.best_score_
# Cross Validation parameters
cv_folds = 10
rounds = 100
# Proposed Initial Model
xgb2 = xgb.XGBClassifier( learning_rate =0.001, n_estimators=500, max_depth=6,
min_child_weight=1, gamma=0, subsample=0.8,
colsample_bytree=0.8, objective='multi:softmax',
nthread=4, scale_pos_weight=1, seed=27)
xgb_param_2 = xgb2.get_xgb_params()
xgb_param_2['num_class'] = 9
# Perform cross-validation
cvresult = xgb.cv(xgb_param_2, dtrain, num_boost_round=xgb_param_2['n_estimators'],
nfold=cv_folds, metrics='merror', early_stopping_rounds=rounds)
print "\nCross Validation Training Report Summary"
print cvresult.tail()
xgb2.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
xgb2.fit(X_train, Y_train,eval_metric='merror')
#Predict training set:
predictions = xgb2.predict(X_train)
#Print model report
# Confusion Matrix
conf = confusion_matrix(Y_train, predictions )
# Print Results
print "\nModel Report"
print "-Accuracy: %.6f" % ( accuracy(conf) )
print "-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) )
# Confusion Matrix
print "\nConfusion Matrix"
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
# Print Feature Importance
feat_imp = pd.Series(xgb2.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
print("Parameter optimization")
grid_search1 = GridSearchCV(xgb2,{'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100] },
scoring='accuracy' , n_jobs = 4)
grid_search1.fit(X_train,Y_train)
print("Best Set of Parameters")
grid_search1.grid_scores_, grid_search1.best_params_, grid_search1.best_score_
#Final Model
cv_folds = 10
rounds = 100
xgb_final = xgb.XGBClassifier( learning_rate =0.001, n_estimators=500, max_depth=6,
min_child_weight=1, gamma=0, subsample=0.8, reg_alpha = 1,
colsample_bytree=0.8, objective='multi:softmax',
nthread=4, scale_pos_weight=1, seed=27)
xgb_param_final = xgb_final.get_xgb_params()
xgb_param_final['num_class'] = 9
# Perform cross-validation
cvresult = xgb.cv(xgb_param_final, dtrain, num_boost_round=xgb_param_final['n_estimators'],
nfold=cv_folds, metrics='merror', early_stopping_rounds=rounds)
print "\nCross Validation Training Report Summary"
print cvresult.tail()
xgb_final.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
xgb_final.fit(X_train, Y_train,eval_metric='merror')
#Predict training set:
predictions = xgb_final.predict(X_train)
# Recompute the confusion matrix for the final model (otherwise the previous model's conf would be reported below)
conf = confusion_matrix(Y_train, predictions )
#Print model report
# Confusion Matrix
print "\nConfusion Matrix"
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
# Print Results
print "\nModel Report"
print "-Accuracy: %.6f" % ( accuracy(conf) )
print "-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) )
Explanation: By performing cross-validation routines we can use the produced results to measure how well the model generalizes and at the same time tune the hyperparameters. In this case, we will explore how the results look if we vary the learning rate.
End of explanation
# Import data
filename = '../facies_vectors.csv'
data = pd.read_csv(filename)
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Leave out one well for prediction
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
# Model
model_final = xgb.XGBClassifier( learning_rate =0.001, n_estimators=500, max_depth=6,
min_child_weight=1, gamma=0, subsample=0.8, reg_alpha = 10,
colsample_bytree=0.8, objective='multi:softmax',
nthread=4, scale_pos_weight=1, seed=27)
#Fit the algorithm on the data
model_final.fit( train_X , train_Y , eval_metric = 'merror' )
# model_final = RandomForestClassifier(n_estimators=1000) # RANDOM FORREST
#model_final.fit(train_X, train_Y)
#Predict training set:
predictions = model_final.predict(test_X)
#Print model report
print "\n------------------------------------------------------"
print "Leaving out well " + well_names[i]
# Confusion Matrix
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
# Print Results
print "\nModel Report"
print "-Accuracy: %.6f" % ( accuracy(conf) )
print "-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) )
print "-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) )
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print "\nConfusion Matrix Results"
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print "\n------------------------------------------------------"
print "Final Results"
print "-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1)))
Explanation: Results
We will see the performance of the model by taking the average accuracy and adjacency accuracy in Leaving Out One Well Cross Validation routine.
End of explanation
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = xgb_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction.csv')
Explanation: We obtain an Average F1 Score of 0.528226 from fitting the model while leaving out one well
We import the testing data and apply our trained model to make a prediction.
End of explanation |
14,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convert numpy arrays into tensors
Step1: Create tensors directly
Step2: Basic Operations
To see all the operations --> https
Step3: Start an interactive session
Starting a session is computationally expensive, which is why it is preferable to do it only once. When experimenting with code it is common to start an interactive session a single time and call the eval() function, which reuses that same session without having to create new sessions every time you want to get results.
Step4: Using a variable
An over-simplified example in which the boolean spike is set when the new data point exceeds the previous one by 5 units. The state of the boolean is printed at each iteration to check whether there was a spike or not.
Step5: Saving variables
Step6: Loading a saved variable
Step7: Visualizing operations
As an example we will use the running weighted average, which is computed as
Step8: NOTA | Python Code:
m1 = [[1.0, 2.0],
[3.0, 4.0]]
m2 = np.array([[1.0, 2.0],
[3.0, 4.0]],dtype=np.float32)
m3 = tf.constant([[1.0, 2.0],
[3.0, 4.0]])
print(type(m1))
print(type(m2))
print(type(m3))
t1 = tf.convert_to_tensor(m1, dtype=tf.float32)
t2 = tf.convert_to_tensor(m2, dtype=tf.float32)
t3 = tf.convert_to_tensor(m3, dtype=tf.float32)
print(type(t1))
print(type(t2))
print(type(t3))
Explanation: Convert numpy arrays into tensors
End of explanation
m1 = tf.constant([1.,2.])
m2 = tf.constant([[1],[2]])
m3 = tf.constant([
[[1,2],
[3,4],
[5,6]],
[[7,8],
[9,10],
[11,12]] ])
print(m1)
print(m2)
print(m3)
m_zeros = tf.zeros([1,2])
m_ones = tf.ones([3,3])
m_sevens = tf.ones([2,3,2])*7
print(m_zeros)
print(m_ones)
print(m_sevens)
Explanation: Create tensors directly
End of explanation
x = tf.constant([1.,2.])
y = tf.constant([5.,6.])
# Define the negation operation on the tensor x
neg_op = tf.negative(x)
print(neg_op)
#tf.add(x, y) # Add two tensors of the same type, x + y
#tf.subtract(x, y) # Subtract tensors of the same type, x - y
#tf.multiply(x, y) # Multiply two tensors element-wise
#tf.pow(x, y) # Take the element-wise power of x to y
#tf.exp(x) # Equivalent to pow(e, x), where e is Euler’s number (2.718...)
#tf.sqrt(x) # Equivalent to pow(x, 0.5)
#tf.div(x, y) # Take the element-wise division of x and y
#tf.truediv(x, y) # Same as tf.div, except casts the arguments as a float
#tf.floordiv(x, y) # Same as truediv, except rounds down the final answer into an integer
#tf.mod(x, y) # Takes the element-wise remainder from division
# Run the operations defined above and print the results
with tf.Session() as sess:
result = sess.run(neg_op)
# Print the computed result
print(result)
Explanation: Basic Operations
To see all the operations --> https://www.tensorflow.org/api_guides/python/math_ops
The code does not actually run until a session is executed. This is how TensorFlow decouples the ML code from the hardware it will run on. So what we do is define the operations we want to compute and then, in a session (fully configurable, but that is another topic), those lines of code are actually executed.
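For instance, the same define-then-run pattern works for any of the operations commented out above (a small sketch using the x and y tensors defined earlier):
with tf.Session() as sess:
    print(sess.run(tf.add(x, y)))  # [6. 8.]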
End of explanation
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.constant([[1.,2.]])
neg_x = tf.negative(x)
result = neg_x.eval()
print(result)
sess.close()
Explanation: Start an interactive session
Starting a session is computationally expensive, which is why it is preferable to do it only once. When experimenting with code it is common to start an interactive session a single time and call the eval() function, which reuses that same session without having to create new sessions every time you want to get results.
End of explanation
import tensorflow as tf
sess = tf.InteractiveSession()
raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
spike = tf.Variable(False)
spike.initializer.run()
for i in range(1, len(raw_data)):
if raw_data[i] - raw_data[i-1] > 5:
updater = tf.assign(spike, True)
updater.eval()
else:
tf.assign(spike, False).eval()
print("Spike", spike.eval())
sess.close()
Explanation: Using a variable
An over-simplified example in which the boolean spike is set when the new data point exceeds the previous one by 5 units. The state of the boolean is printed at each iteration to check whether there was a spike or not.
End of explanation
import tensorflow as tf
sess = tf.InteractiveSession()
raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
spikes = tf.Variable([False]*len(raw_data),name='spikes')
spikes.initializer.run()
saver = tf.train.Saver({"spikes": spikes})
for i in range(1, len(raw_data)):
if raw_data[i] - raw_data[i-1] > 5:
spikes_val = spikes.eval()
spikes_val[i] = True
updater = tf.assign(spikes,spikes_val)
updater.eval()
save_path = saver.save(sess, "./checkpoints/spikes.ckpt")
print("spikes data saved in file: %s" % save_path)
spikes_val = spikes.eval()
print("SPIKES: {}".format(spikes_val))
sess.close()
Explanation: Saving variables
End of explanation
import tensorflow as tf
sess = tf.InteractiveSession()
loaded_spikes = tf.Variable([False]*8, name='loaded_spikes')
saver = tf.train.Saver({"spikes": loaded_spikes})
saver.restore(sess, "./checkpoints/spikes.ckpt")
print(loaded_spikes.eval())
sess.close()
Explanation: Loading a saved variable
End of explanation
import tensorflow as tf
import numpy as np
raw_data = np.random.normal(10,1,100)
alpha = tf.constant(0.05)
curr_value = tf.placeholder(tf.float32)
prev_avg = tf.Variable(0.)
update_avg = alpha * curr_value + (1 - alpha) * prev_avg
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(len(raw_data)):
curr_avg = sess.run(update_avg, feed_dict={curr_value:raw_data[i]})
sess.run(tf.assign(prev_avg, curr_avg))
print(raw_data[i], curr_avg)
Explanation: Visualizing operations
As an example we will use the running weighted average, which is computed as:
$$ Avg_{t} = f(Avg_{t-1}, x_{t}) = (1 - a)Avg_{t-1} + ax_{t} $$
First the algorithm is defined and its output printed to the console... In the next block the same algorithm is repeated, but with the additions needed to visualize it in TensorBoard.
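A quick plain-Python sanity check of that update rule, outside TensorFlow (the three sample values are arbitrary):
a = 0.05
avg = 0.0
for x in [10.0, 12.0, 9.0]:
    avg = (1 - a) * avg + a * x
print(avg)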
End of explanation
import tensorflow as tf
import numpy as np
raw_data = np.random.normal(10,1,100)
alpha = tf.constant(0.05)
curr_value = tf.placeholder(tf.float32)
prev_avg = tf.Variable(0.)
update_avg = alpha * curr_value + (1 - alpha) * prev_avg
avg_hist = tf.summary.scalar("running_average", update_avg)
value_hist = tf.summary.scalar("incoming_values", curr_value)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(len(raw_data)):
summary_str, curr_avg = sess.run([merged,update_avg], feed_dict={curr_value:raw_data[i]})
sess.run(tf.assign(prev_avg, curr_avg))
print(raw_data[i], curr_avg)
writer.add_summary(summary_str, i)
Explanation: NOTE: TensorBoard will be used for the visualization. If you are using zsh it may not find the tensorboard command. This is normally fixed by installing tensorboard with pip and restarting the zsh terminal (when this is the problem, switching to an sh terminal shows that the command does exist!)
It is visualized by running in a terminal:
tensorboard --logdir=./logs
Below, the SummaryWriter is used so that what is happening with our algorithm can be visualized in TensorBoard.
End of explanation |
14,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
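# --- Illustrative example for 2.1 Prognostic (hypothetical, not NUIST's documented configuration) ---
# For a property with cardinality 1.N, one DOC.set_value() call per selected choice
# could be made, using strings from the valid choices listed in the cell above, e.g.
#     DOC.set_value("Sea ice concentration")
#     DOC.set_value("Sea ice thickness")
#     DOC.set_value("Sea ice u-velocity")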
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
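# --- Illustrative example for 3.1 Ocean Freezing Point (hypothetical value, not taken from NUIST documentation) ---
# A completed cell might read
#     DOC.set_value("TEOS-10")
# with the string drawn from the valid choices listed in the cell above.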
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
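# --- Illustrative example for 3.2 Ocean Freezing Point Value (placeholder number, not a documented NUIST value) ---
# Float properties are set without quotes, e.g. a constant freezing point near -1.8 deg C:
#     DOC.set_value(-1.8)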
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
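# --- Illustrative example for 4.3 Number Of Horizontal Gridpoints (placeholder number, not the actual NUIST grid size) ---
# Integer properties are also set without quotes, e.g. for a hypothetical 360 x 300 grid:
#     DOC.set_value(108000)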
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
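# --- Illustrative example for 6.2 Additional Parameters (hypothetical values) ---
# Additional parameters can be supplied as a comma separated string, e.g.
#     DOC.set_value("minimum open water fraction = 0.01, bare ice albedo = 0.6")
# (the parameter names follow the examples in the description; the numbers are placeholders).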
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
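# --- Illustrative example for 8.3 Budget, mirroring the format given in the description ---
#     DOC.set_value("Mass, variable1, variable2")
# where "variable1" and "variable2" stand in for the actual output variable names.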
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
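# --- Illustrative example for 8.4 Was Flux Correction Used (hypothetical answer) ---
# Boolean properties take the Python literals True or False without quotes, e.g.
#     DOC.set_value(False)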
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
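# --- Illustrative example for 11.3 Category Limits (placeholder numbers, not the limits actually used by NUIST) ---
# The limits can be listed as a string of thickness boundaries, e.g.
#     DOC.set_value("0.6 m, 1.4 m, 2.4 m, 3.6 m")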
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
14,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NANO106 - Symmetry Computations on $mmm (D_{2h})$ Point Group
by Shyue Ping Ong
This notebook demonstrates the computation of orbits in the mmm point group. It is part of course material for UCSD's NANO106 - Crystallography of Materials. Unlike the $m\overline{3}m (O_h)$ version, this duplicates relevant code from the symmetry package to explicitly demonstrate the principles of generating point group symmetry operations.
Preliminaries
Let's start by importing the numpy, sympy and other packages we need.
Step1: We will now define a useful function for checking existence of np.arrays in a list of arrays. It is not the most efficient implementation, but would suffice for our purposes.
Step2: Generating the Symmetry Operations
Next, we specify the generators for mmm point group, which are the three mirror planes. Note that the existence of the three two-fold rotation axes is implied by the existence of the three mirror planes.
Step3: We will now generate all the group symmetry operation matrices from the generators.
Step5: Computing Orbits
Using sympy, we specify the symbolic symbols x, y, z to represent position coordinates. We also define a function to generate the orbit given a set of symmetry operations and a point p.
Step6: Orbit for General Position
Step7: Orbit for Special Position on two-fold rotation axes
Step8: The orbits for the other two-fold axes, along the a and b axes, are similar.
Orbit for Special Position on mirror planes
Positions on the mirror on the a-b plane have coordinates (x, y, 0). | Python Code:
import numpy as np
import itertools
from sympy import symbols
Explanation: NANO106 - Symmetry Computations on $mmm (D_{2h})$ Point Group
by Shyue Ping Ong
This notebook demonstrates the computation of orbits in the mmm point group. It is part of course material for UCSD's NANO106 - Crystallography of Materials. Unlike the $m\overline{3}m (O_h)$ version, this duplicates relevant code from the symmetry package to explicitly demonstrate the principles of generating point group symmetry operations.
Preliminaries
Let's start by importing the numpy, sympy and other packages we need.
End of explanation
def in_array_list(array_list, a):
for i in array_list:
if np.all(np.equal(a, i)):
return True
return False
Explanation: We will now define a useful function for checking existence of np.arrays in a list of arrays. It is not the most efficient implementation, but would suffice for our purposes.
End of explanation
generators = []
for i in xrange(3):
g = np.eye(3).astype(np.int)
g[i, i] = -1
generators.append(g)
Explanation: Generating the Symmetry Operations
Next, we specify the generators for mmm point group, which are the three mirror planes. Note that the existence of the three two-fold rotation axes is implied by the existence of the three mirror planes.
End of explanation
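As an optional sanity check of this statement, multiplying two of the mirror generators reproduces one of the implied two-fold rotations:
# mirror perpendicular to a times mirror perpendicular to b = two-fold rotation about c
print np.dot(generators[0], generators[1])   # expected: diag(-1, -1, 1)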
symm_ops = []
symm_ops.extend(generators)
new_ops = generators
while len(new_ops) > 0:
gen_ops = []
for g1, g2 in itertools.product(new_ops, symm_ops):
#Note that we cast the op to int to improve presentation of the results.
#This is fine in crystallographic reference frame.
op = np.dot(g1, g2)
if not in_array_list(symm_ops, op):
gen_ops.append(op)
symm_ops.append(op)
op = np.dot(g2, g1)
if not in_array_list(symm_ops, op):
gen_ops.append(op)
symm_ops.append(op)
new_ops = gen_ops
print "The order of the group is %d. The group matrices are:" % len(symm_ops)
for op in symm_ops:
print op
Explanation: We will now generate all the group symmetry operation matrices from the generators.
End of explanation
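A small optional closure check confirms that the generated set is indeed a group of order 8, as expected for mmm:
closed = all(in_array_list(symm_ops, np.dot(g1, g2))
             for g1, g2 in itertools.product(symm_ops, symm_ops))
print len(symm_ops) == 8 and closed   # expected: True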
x, y, z = symbols("x y z")
def get_orbit(symm_ops, p):
    """Given a set of symmops and a point p, this function returns the orbit."""
orbit = []
for o in symm_ops:
pp = np.dot(o, p)
if not in_array_list(orbit, pp):
orbit.append(pp)
return orbit
Explanation: Computing Orbits
Using sympy, we specify the symbolic symbols x, y, z to represent position coordinates. We also define a function to generate the orbit given a set of symmetry operations and a point p.
End of explanation
p = np.array([x, y, z])
print "For the general position %s, the orbit is " % str(p)
for o in get_orbit(symm_ops, p):
print o,
Explanation: Orbit for General Position
End of explanation
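Since a general position has trivial site symmetry, the orbit-stabilizer theorem implies that its orbit size equals the group order; an optional one-line check is:
print len(get_orbit(symm_ops, p)) == len(symm_ops)   # expected: True (8 == 8)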
p = np.array([0, 0, z])
orb = get_orbit(symm_ops, p)
print "For the special position %s on the two-fold axis, the orbit comprise %d points:" % (str(p), len(orb))
for o in orb:
print o,
Explanation: Orbit for Special Position on two-fold rotation axes
End of explanation
p = np.array([x, y, 0])
orb = get_orbit(symm_ops, p)
print "For the special position %s on the two-fold axis, the orbit comprise %d points:" % (str(p), len(orb))
for o in orb:
print o,
Explanation: The orbits for the other two-fold axes, along the a and b axes, are similar.
Orbit for Special Position on mirror planes
Positions on the mirror on the a-b plane have coordinates (x, y, 0).
End of explanation |
14,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1  Keras Basics
1.1  Saving and loading the models
2  Reference
Step1: Keras Basics
Basic Keras API to build a simple multi-layer neural network.
Step2: Basics of training a model
Step3: Once our model looks good, we can configure its learning process with .compile(), where you need to specify which optimizer to use, the loss function (categorical_crossentropy is the typical one for multi-class classification) and the metrics to track.
Finally, .fit() the model by passing in the training and validation sets, the number of epochs and the batch size. For the batch size, we typically specify a power of 2 for computing efficiency.
Step4: Saving and loading the models
It is not recommended to use pickle or cPickle to save a Keras model. By saving it as an HDF5 file, we can preserve the configuration and weights of the model.
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from keras.datasets import mnist
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.models import Sequential, load_model
from keras.layers.core import Dense, Dropout, Activation
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,keras
Explanation: Table of Contents
1  Keras Basics
1.1  Saving and loading the models
2  Reference
End of explanation
n_classes = 10
n_features = 784 # mnist is a 28 * 28 image
# load the dataset and some preprocessing step that can be skipped
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, n_features)
X_test = X_test.reshape(10000, n_features)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# images take values between 0 - 255, we can normalize them
# by dividing every number by 255
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices (one-hot encoding)
# note: you HAVE to to this step
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test , n_classes)
Explanation: Keras Basics
Basic Keras API to build a simple multi-layer neural network.
End of explanation
# define the model
model = Sequential()
model.add(Dense(512, input_dim = n_features))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(n_classes))
model.add(Activation('softmax'))
# we can check the summary to check the number of parameters
model.summary()
Explanation: Basics of training a model:
The easiest way to build models in Keras is to use the Sequential model and the .add() method to stack layers together in sequence to build up our network.
We start with Dense (fully-connected) layers, where we specify how many nodes we wish to have for the layer. Since the first layer that we're going to add receives the input, we have to make sure that the input_dim parameter matches the number of features (columns) in the training set. After the first layer, we don't need to specify the size of the input anymore.
Then we specify the Activation function for that layer, and add a Dropout layer if we wish.
For the last Dense and Activation layer we need to specify the number of classes as the output and softmax to tell it to output each predicted class's probability.
End of explanation
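Note that, equivalently, the activation can be passed directly to each layer; a more compact way to define the same architecture (shown only as an aside, and not used in the rest of this notebook) would be:
# compact, equivalent definition of the same network
alt_model = Sequential()
alt_model.add(Dense(512, activation = 'relu', input_dim = n_features))
alt_model.add(Dropout(0.2))
alt_model.add(Dense(512, activation = 'relu'))
alt_model.add(Dropout(0.2))
alt_model.add(Dense(n_classes, activation = 'softmax'))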
model.compile(loss = 'categorical_crossentropy', optimizer = RMSprop(), metrics = ['accuracy'])
n_epochs = 10
batch_size = 128
history = model.fit(
X_train,
Y_train,
batch_size = batch_size,
epochs = n_epochs,
verbose = 1, # set it to 0 if we don't want to have progess bars
validation_data = (X_test, Y_test)
)
# history attribute stores the training and validation score and loss
history.history
# .evaluate gives the loss and metric evaluation score for the dataset,
# here the result matches the validation set's history above
print('metrics: ', model.metrics_names)
score = model.evaluate(X_test, Y_test, verbose = 0)
score
# stores the weight of the model,
# it's a list, note that the length is 6 because we have 3 dense layer
# and each one has it's associated bias term
weights = model.get_weights()
print(len(weights))
# W1 should have shape (784, 512): 784 for the number of
# feature columns and 512 for the number
# of dense nodes that we've specified
W1, b1, W2, b2, W3, b3 = weights
print(W1.shape)
print(b1.shape)
# predict the accuracy
y_pred = model.predict_classes(X_test, verbose = 0)
accuracy = np.sum(y_test == y_pred) / X_test.shape[0]
print('valid accuracy: %.2f' % (accuracy * 100))
Explanation: Once our model looks good, we can configure its learning process with .compile(), where you need to specify which optimizer to use, and the loss function ( categorical_crossentropy is the typical one for multi-class classification) and the metrics to track.
Finally, .fit() the model by passing in the training, validation set, the number of epochs and batch size. For the batch size, we typically specify this number to be power of 2 for computing efficiency.
End of explanation
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
# testing: predict the accuracy using the loaded model
y_pred = model.predict_classes(X_test, verbose = 0)
accuracy = np.sum(y_test == y_pred) / X_test.shape[0]
print('valid accuracy: %.2f' % (accuracy * 100))
Explanation: Saving and loading the models
It is not recommended to use pickle or cPickle to save a Keras model. By saving it as an HDF5 file, we can preserve the configuration and weights of the model.
End of explanation |
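Keras can also persist the architecture and the weights separately, which is handy when only one of the two needs to be shared (shown here as an optional aside; the file name is just an example):
# architecture as a JSON string, weights as an HDF5 file
json_string = model.to_json()
model.save_weights('my_model_weights.h5')
# ...and restore them later
from keras.models import model_from_json
restored = model_from_json(json_string)
restored.load_weights('my_model_weights.h5')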
14,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a fraud detection model on Vertex AI
Step1: <table align="left">
<td>
<a href="https
Step2: Install the latest version of the Vertex AI client library.
Run the following command in your notebook environment to install the Vertex SDK for Python
Step3: Run the following command in your notebook environment to install witwidget
Step4: Run the following command in your notebook environment to install joblib
Step5: Run the following command in your notebook environment to install scikit-learn
Step6: Run the following command in your notebook environment to install fsspec
Step7: Run the following command in your notebook environment to install gcsfs
Step8: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step9: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API. {TODO
Step10: Otherwise, set your project ID here.
Step11: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step12: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step13: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved.
In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step14: Only if your bucket doesn't already exist
Step15: Finally, validate access to your Cloud Storage bucket by examining its contents
Step16: Tutorial
Import required libraries
Step17: Analyze the dataset
<a name="section-5"></a>
Take a quick look at the dataset and the number of rows.
Step18: Check for null values.
Step19: Check the type of transactions involved.
Step20: Working with imbalanced data
Although the outcome variable "isFraud" seems to be very imbalanced in the current dataset, a base model can be trained on it to check the quality of fraudulent transactions in the data and, if needed, countermeasures like undersampling of the majority class or oversampling of the minority class can be considered.
Step21: Prepare data for modeling
To prepare the dataset for training, a few columns need to be dropped that contain either unique data ('nameOrig','nameDest') or redundant fields ('isFlaggedFraud'). The categorical field "type" which describes the type of transaction and is important for fraud detection needs to be one-hot encoded.
Step22: Remove the outcome variable from the training data.
Step23: Split the data and assign 70% for training and 30% for testing.
Step24: Fit a random forest model
<a name="section-6"></a>
Fit a simple random forest classifier on the preprocessed training dataset.
Step25: Analyzing Results
<a name="section-7"></a>
The model returns good scores and the confusion matrix confirms that this model can indeed work with imbalanced data.
Step26: Use RandomForestClassifier's feature_importances_ function to get a better understanding about which features were the most useful to the model.
Step27: Save the model to a Cloud Storage path
<a name="section-8"></a>
Step28: Create a model in Vertex AI
<a name="section-9"></a>
Step29: Create an Endpoint
<a name="section-10"></a>
Step30: Deploy the model to the created Endpoint
Configure the deployment name, machine type, and other parameters for the deployment.
Step31: What-If Tool
<a name="section-11"></a>
The What-If Tool can be used to analyze the model predictions on a test data. See a brief introduction to the What-If Tool. In this tutorial, the What-If Tool will be configured and run on the model trained locally, and on the model deployed on Vertex AI Endpoint in the previous steps.
WitConfigBuilder provides the set_ai_platform_model() method to configure the What-If Tool with a model deployed as a version on AI Platform models. This feature currently supports AI Platform only, not Vertex AI models. Fortunately, there is also an option to pass a custom function for generating predictions through the set_custom_predict_fn() method, where either the locally trained model or a function that returns predictions from a Vertex AI model can be passed.
Prepare test samples
Save some samples from the test data for both the available classes (Fraud/not-Fraud) to analyze the model using the What-If Tool.
Step32: Running the What-If Tool on the local model
Step33: Running the What-If Tool on the deployed Vertex AI model
Step34: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.
Step35: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Build a fraud detection model on Vertex AI
End of explanation
import os
import google.auth
USER_FLAG = ""
# Google Cloud Notebook requires dependencies to be installed with '--user'
if "default" in dir(google.auth):
USER_FLAG = "--user"
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/workbench/fraud_detection/fraud-detection-model.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Table of contents
Overview
Dataset
Objective
Costs
Analyze the dataset
Fit a random forest model
Analyzing results
Save the model to a Cloud Storage path
Create a model in Vertex AI
Create an Endpoint
What-If Tool
Clean up
Overview
<a name="section-1"></a>
This tutorial shows you how to build, deploy, and analyze predictions from a simple random forest model using tools like scikit-learn, Vertex AI, and the What-IF Tool (WIT) on a synthetic fraud transaction dataset to solve a financial fraud detection problem.
Dataset
<a name="section-2"></a>
The dataset used in this tutorial is publicly available at Kaggle. See Synthetic Financial Datasets For Fraud Detection.
Objective
<a name="section-3"></a>
This tutorial demonstrates data analysis and model-building using a synthetic financial dataset. The model is trained on identifying fraudulent cases among the transactions. Then, the trained model is deployed on a Vertex AI Endpoint and analyzed using the What-If Tool. The steps taken in this tutorial are as follows:
Installation of required libraries
Reading the dataset from a Cloud Storage bucket
Performing exploratory analysis on the dataset
Preprocessing the dataset
Training a random forest model using scikit-learn
Saving the model to a Cloud Storage bucket
Creating a Vertex AI model resource and deploying to an endpoint
Running the What-If Tool on test data
Un-deploying the model and cleaning up the model resources
Costs
<a name="section-4"></a>
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
End of explanation
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Install the latest version of the Vertex AI client library.
Run the following command in your notebook environment to install the Vertex SDK for Python:
End of explanation
! pip install {USER_FLAG} witwidget
Explanation: Run the following command in your notebook environment to install witwidget:
End of explanation
! pip install {USER_FLAG} joblib
Explanation: Run the following command in your notebook environment to install joblib:
End of explanation
! pip install {USER_FLAG} scikit-learn
Explanation: Run the following command in your notebook environment to install scikit-learn:
End of explanation
! pip install {USER_FLAG} fsspec
Explanation: Run the following command in your notebook environment to install fsspec:
End of explanation
! pip install {USER_FLAG} gcsfs
Explanation: Run the following command in your notebook environment to install gcsfs:
End of explanation
# Automatically restart kernel after installs
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "-vertex-ai-" + TIMESTAMP
BUCKET_URI = f"gs://{BUCKET_NAME}"
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved.
In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import warnings
import joblib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from google.cloud import aiplatform, storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (average_precision_score, classification_report,
confusion_matrix, f1_score)
from sklearn.model_selection import train_test_split
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
warnings.filterwarnings("ignore")
# Load dataset
df = pd.read_csv(
"gs://cloud-samples-data/vertex-ai/managed_notebooks/fraud_detection/fraud_detection_data.csv"
)
Explanation: Tutorial
Import required libraries
End of explanation
print("shape : ", df.shape)
df.head()
Explanation: Analyze the dataset
<a name="section-5"></a>
Take a quick look at the dataset and the number of rows.
End of explanation
df.isnull().sum()
Explanation: Check for null values.
End of explanation
print(df.type.value_counts())
var = df.groupby("type").amount.sum()
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
var.plot(kind="bar")
ax1.set_title("Total amount per transaction type")
ax1.set_xlabel("Type of Transaction")
ax1.set_ylabel("Amount")
Explanation: Check the type of transactions involved.
End of explanation
# Count number of fraudulent/non-fraudulent transactions
df.isFraud.value_counts()
piedata = df.groupby(["isFlaggedFraud"]).sum()
f, axes = plt.subplots(1, 1, figsize=(6, 6))
axes.set_title("% of fraud transaction detected")
piedata.plot(
kind="pie", y="isFraud", ax=axes, fontsize=14, shadow=False, autopct="%1.1f%%"
)
axes.set_ylabel("")
plt.legend(loc="upper left", labels=["Not Detected", "Detected"])
plt.show()
Explanation: Working with imbalanced data
Although the outcome variable "isFraud" seems to be very imbalanced in the current dataset, a base model can be trained on it to check how well the fraudulent transactions in the data can be detected and, if needed, countermeasures like undersampling of the majority class or oversampling of the minority class can be considered.
End of explanation
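# Optional, hedged sketch (not part of the original tutorial) of how the majority class
# could be randomly undersampled with pandas if the imbalance ever hurts model quality.
# It only builds a separate `df_balanced` frame and leaves `df` untouched.
fraud_rows = df[df["isFraud"] == 1]
n_keep = min(10 * len(fraud_rows), (df["isFraud"] == 0).sum())
non_fraud_rows = df[df["isFraud"] == 0].sample(n=n_keep, random_state=42)
df_balanced = pd.concat([fraud_rows, non_fraud_rows]).sample(frac=1.0, random_state=42)
print(df_balanced.isFraud.value_counts())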
df.drop(["nameOrig", "nameDest", "isFlaggedFraud"], axis=1, inplace=True)
X = pd.concat([df.drop("type", axis=1), pd.get_dummies(df["type"])], axis=1)
X.head()
Explanation: Prepare data for modeling
To prepare the dataset for training, a few columns need to be dropped that contain either unique data ('nameOrig','nameDest') or redundant fields ('isFlaggedFraud'). The categorical field "type" which describes the type of transaction and is important for fraud detection needs to be one-hot encoded.
End of explanation
y = X[["isFraud"]]
X = X.drop(["isFraud"], axis=1)
Explanation: Remove the outcome variable from the training data.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42, shuffle=False
)
print(X_train.shape, X_test.shape)
Explanation: Split the data and assign 70% for training and 30% for testing.
End of explanation
print("before initiating")
forest = RandomForestClassifier(verbose=1)
print("after initiating")
forest.fit(X_train, y_train)
print("after fitting")
Explanation: Fit a random forest model
<a name="section-6"></a>
Fit a simple random forest classifier on the preprocessed training dataset.
End of explanation
print("before predicting")
y_prob = forest.predict_proba(X_test)
print("after predicting y_prob")
y_pred = forest.predict(X_test)
print("AUPRC :", (average_precision_score(y_test, y_prob[:, 1])))
print("F1 - score :", (f1_score(y_test, y_pred)))
print("Confusion_matrix : ")
print(confusion_matrix(y_test, y_pred))
print("classification_report")
print(classification_report(y_test, y_pred))
print("after printing classification_report")
Explanation: Analyzing Results
<a name="section-7"></a>
The model returns good scores and the confusion matrix confirms that this model can indeed work with imbalanced data.
End of explanation
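# Optional, hedged addition (not in the original tutorial): a row-normalized confusion
# matrix, which is often easier to read than raw counts when the classes are this imbalanced.
cm = confusion_matrix(y_test, y_pred)
print(np.round(cm / cm.sum(axis=1, keepdims=True), 4))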
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
forest_importances = pd.Series(importances, index=list(X_train))
fig, ax = plt.subplots()
forest_importances.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature Importance for Fraud Transaction Detection Model")
ax.set_ylabel("Importance")
fig.tight_layout()
Explanation: Use RandomForestClassifier's feature_importances_ function to get a better understanding about which features were the most useful to the model.
End of explanation
# save the trained model to a local file "model.joblib"
FILE_NAME = "model.joblib"
joblib.dump(forest, FILE_NAME)
# Upload the saved model file to Cloud Storage
BLOB_PATH = "[your-blob-path]"
BLOB_NAME = os.path.join(BLOB_PATH, FILE_NAME)
bucket = storage.Client(PROJECT_ID).bucket(BUCKET_NAME)
blob = bucket.blob(BLOB_NAME)
blob.upload_from_filename(FILE_NAME)
Explanation: Save the model to a Cloud Storage path
<a name="section-8"></a>
End of explanation
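# Optional sanity check (hedged, not in the original tutorial): confirm the artifact
# actually landed in the bucket before registering the model with Vertex AI.
print("model.joblib uploaded:", blob.exists(), "->", "{}/{}".format(BUCKET_URI, BLOB_NAME))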
MODEL_DISPLAY_NAME = "[your-model-display-name]"
ARTIFACT_GCS_PATH = f"{BUCKET_URI}/{BLOB_PATH}"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
)
# Create a Vertex AI model resource
aiplatform.init(project=PROJECT_ID, location=REGION)
model = aiplatform.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=ARTIFACT_GCS_PATH,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Create a model in Vertex AI
<a name="section-9"></a>
End of explanation
ENDPOINT_DISPLAY_NAME = "[your-endpoint-display-name]"
endpoint = aiplatform.Endpoint.create(display_name=ENDPOINT_DISPLAY_NAME)
print(endpoint.display_name)
print(endpoint.resource_name)
Explanation: Create an Endpoint
<a name="section-10"></a>
End of explanation
DEPLOYED_MODEL_NAME = "[your-deployed-model-name]"
MACHINE_TYPE = "n1-standard-2"
# deploy the model to the endpoint
model.deploy(
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
machine_type=MACHINE_TYPE,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Deploy the model to the created Endpoint
Configure the deployment name, machine type, and other parameters for the deployment.
End of explanation
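# Optional quick check (hedged, not in the original tutorial) that the deployed endpoint
# answers online prediction requests; it sends a single row from the test split in the
# same column order that was used for training.
sample_instance = X_test.iloc[:1].values.tolist()
print(endpoint.predict(instances=sample_instance).predictions)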
# collect 50 samples for each class-label from the test data
pos_samples = y_test[y_test["isFraud"] == 1].sample(50).index
neg_samples = y_test[y_test["isFraud"] == 0].sample(50).index
test_samples_y = pd.concat([y_test.loc[pos_samples], y_test.loc[neg_samples]])
test_samples_X = X_test.loc[test_samples_y.index].copy()
Explanation: What-If Tool
<a name="section-11"></a>
The What-If Tool can be used to analyze the model predictions on a test data. See a brief introduction to the What-If Tool. In this tutorial, the What-If Tool will be configured and run on the model trained locally, and on the model deployed on Vertex AI Endpoint in the previous steps.
WitConfigBuilder provides the set_ai_platform_model() method to configure the What-If Tool with a model deployed as a version on AI Platform models. This feature currently supports AI Platform only but not Vertex AI models. Fortunately, there is also an option to pass a custom function for generating predictions through the set_custom_predict_fn() method where either the locally trained model or a function that returns predictions from a Vertex AI model can be passed.
Prepare test samples
Save some samples from the test data for both the available classes (Fraud/not-Fraud) to analyze the model using the What-If Tool.
End of explanation
# define target and labels
TARGET_FEATURE = "isFraud"
LABEL_VOCAB = ["not-fraud", "fraud"]
# define the function to adjust the predictions
def adjust_prediction(pred):
return [1 - pred, pred]
# Combine the features and labels into one array for the What-If Tool
test_examples = np.hstack(
(test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))
)
# Configure the WIT to run on the locally trained model
config_builder = (
WitConfigBuilder(
test_examples.tolist(), test_samples_X.columns.tolist() + ["isFraud"]
)
.set_custom_predict_fn(forest.predict_proba)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB)
)
# display the WIT widget
WitWidget(config_builder, height=600)
Explanation: Running the What-If Tool on the local model
End of explanation
# configure the target and class-labels
TARGET_FEATURE = "isFraud"
LABEL_VOCAB = ["not-fraud", "fraud"]
# function to return predictions from the deployed Model
def endpoint_predict_sample(instances: list):
prediction = endpoint.predict(instances=instances)
preds = [[1 - i, i] for i in prediction.predictions]
return preds
# Combine the features and labels into one array for the What-If Tool
test_examples = np.hstack(
(test_samples_X.to_numpy(), test_samples_y.to_numpy().reshape(-1, 1))
)
# Configure the WIT with the prediction function
config_builder = (
WitConfigBuilder(
test_examples.tolist(), test_samples_X.columns.tolist() + ["isFraud"]
)
.set_custom_predict_fn(endpoint_predict_sample)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB)
)
# run the WIT-widget
WitWidget(config_builder, height=400)
Explanation: Running the What-If Tool on the deployed Vertex AI model
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
# delete the endpoint
endpoint.delete()
# delete the model
model.delete()
delete_bucket = True
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
14,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Table of Contents
<p><div class="lev1 toc-item"><a href="#Exponential-growth" data-toc-modified-id="Exponential-growth-1"><span class="toc-item-num">1 </span>Exponential growth</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-11"><span class="toc-item-num">1.1 </span>ERY</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-12"><span class="toc-item-num">1.2 </span>PAR</a></div><div class="lev1 toc-item"><a href="#Two-epoch-model" data-toc-modified-id="Two-epoch-model-2"><span class="toc-item-num">2 </span>Two epoch model</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-21"><span class="toc-item-num">2.1 </span>ERY</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-22"><span class="toc-item-num">2.2 </span>PAR</a></div><div class="lev1 toc-item"><a href="#Bottlegrowth" data-toc-modified-id="Bottlegrowth-3"><span class="toc-item-num">3 </span>Bottlegrowth</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-31"><span class="toc-item-num">3.1 </span>ERY</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-311"><span class="toc-item-num">3.1.1 </span>Interpretation</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-32"><span class="toc-item-num">3.2 </span>PAR</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-321"><span class="toc-item-num">3.2.1 </span>Interpretation</a></div><div class="lev1 toc-item"><a href="#Three-Epoch" data-toc-modified-id="Three-Epoch-4"><span class="toc-item-num">4 </span>Three Epoch</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-41"><span class="toc-item-num">4.1 </span>ERY</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-411"><span class="toc-item-num">4.1.1 </span>Interpretation</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-42"><span class="toc-item-num">4.2 </span>PAR</a></div>
Step2: Exponential growth
Step3: ERY
Step4: The time parameter is hitting the upper boundary that I set. The exponential growth model cannot be fit to the ery spectrum with Ludovic's correction applied.
PAR
Step5: The time parameter is hitting the upper boundary that I set. The exponential growth model can therefore not be fit to the PAR spectrum with Ludovic's correction applied.
Two epoch model
Step6: ERY
Step7: Still hitting upper boundary on time. The two epoch model cannot be fit to the ERY spectrum with Ludovic's correction applied.
PAR
Step8: All hitting the upper boundary on the time parameter. The two epoch model cannot be fit to the PAR spectrum with Ludovic's correction applied.
Bottlegrowth
Step9: ERY
Step10: This looks like convergence.
Interpretation
Step11: I think such a high effective population size is not realistic.
PAR
Step12: This looks like convergence.
Interpretation
Step13: An effective population size of 36 million is obviously too high. I therefore cannot regard this model fitting as successful.
Three Epoch
Step14: ERY
Step15: Reasonable convergence. Divergent parameter value combinations have the same likelihood. The optimal parameter values from the optimisation runs 16 and 10 (adjacent in the table) show that quite different demographic scenarios can have almost identical likelihood. This is not unusual.
Interpretation
Step16: PAR
Step17: There is no convergence. The three epoch model could not be fit to the PAR spectrum with Ludivic's correction. | Python Code:
from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines a well as in the local IPython session
import numpy as np
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
%%px --local
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('ERY_modified.sfs')
# import 1D spectrum of par on all engines:
fs_par = dadi.Spectrum.from_file('PAR_modified.sfs')
%matplotlib inline
import pylab
pylab.rcParams['figure.figsize'] = [12, 10]
pylab.rcParams['font.size'] = 14
pylab.plot(fs_ery, 'ro--', label='ery', markersize=12)
pylab.plot(fs_par, 'g>--', label='par', markersize=12)
pylab.grid()
pylab.xlabel('minor allele count')
pylab.ylabel('')
pylab.legend()
pylab.title("1D spectra - Ludovic's correction applied")
def run_dadi(p_init): # for the function to be called with map, it needs to have one input variable
    """p_init: initial parameter values to run optimisation from"""
if perturb == True:
p_init = dadi.Misc.perturb_params(p_init, fold=fold,
upper_bound=upper_bound, lower_bound=lower_bound)
# note upper_bound and lower_bound variables are expected to be in the namespace of each engine
# run optimisation of paramters
popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=verbose, maxiter=maxiter, full_output=full_output, \
fixed_params=fixed_params)
# pickle to file
import dill
name = outname[:] # make copy of file name stub!
for p in p_init:
name += "_%.4f" % (p)
with open(name + ".dill", "w") as fh:
dill.dump((p_init, popt), fh)
return p_init, popt
from glob import glob
import dill
from utility_functions import *
import pandas as pd
lbview = cl.load_balanced_view()
from itertools import repeat
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Exponential-growth" data-toc-modified-id="Exponential-growth-1"><span class="toc-item-num">1 </span>Exponential growth</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-11"><span class="toc-item-num">1.1 </span>ERY</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-12"><span class="toc-item-num">1.2 </span>PAR</a></div><div class="lev1 toc-item"><a href="#Two-epoch-model" data-toc-modified-id="Two-epoch-model-2"><span class="toc-item-num">2 </span>Two epoch model</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-21"><span class="toc-item-num">2.1 </span>ERY</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-22"><span class="toc-item-num">2.2 </span>PAR</a></div><div class="lev1 toc-item"><a href="#Bottlegrowth" data-toc-modified-id="Bottlegrowth-3"><span class="toc-item-num">3 </span>Bottlegrowth</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-31"><span class="toc-item-num">3.1 </span>ERY</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-311"><span class="toc-item-num">3.1.1 </span>Interpretation</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-32"><span class="toc-item-num">3.2 </span>PAR</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-321"><span class="toc-item-num">3.2.1 </span>Interpretation</a></div><div class="lev1 toc-item"><a href="#Three-Epoch" data-toc-modified-id="Three-Epoch-4"><span class="toc-item-num">4 </span>Three Epoch</a></div><div class="lev2 toc-item"><a href="#ERY" data-toc-modified-id="ERY-41"><span class="toc-item-num">4.1 </span>ERY</a></div><div class="lev3 toc-item"><a href="#Interpretation" data-toc-modified-id="Interpretation-411"><span class="toc-item-num">4.1.1 </span>Interpretation</a></div><div class="lev2 toc-item"><a href="#PAR" data-toc-modified-id="PAR-42"><span class="toc-item-num">4.2 </span>PAR</a></div>
End of explanation
dadi.Demographics1D.growth?
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics1D.growth)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 4]
lower_bound = [1e-4, 0]
Explanation: Exponential growth
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_ery
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/expgrowth" # set file name stub for opt. result files
fixed_params = None
# set starting values for perturbation
p0 = [1, 1]
#ar_ery = lbview.map(run_dadi, repeat(p0, 10))
ar_ery.get()
# set starting values for perturbation
p0 = [0.1, 0.1]
#ar_ery = lbview.map(run_dadi, repeat(p0, 10))
# set starting values for perturbation
p0 = [10, 0.1]
#ar_ery1 = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/expgrowth*dill"):
ar_ery.append(dill.load(open(filename)))
get_flag_count(ar_ery, NM=True)
import pandas as pd
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 6] # increasing upper bound of time parameter
lower_bound = [1e-4, 0]
# set starting values for perturbation
p0 = [2, 1]
#ar_ery1 = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/expgrowth*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 8] # increasing upper bound of time parameter
lower_bound = [1e-4, 0]
# set starting values for perturbation
p0 = [2, 1]
ar_ery1 = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/expgrowth*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True).head(15)
Explanation: ERY
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/PAR_expgrowth" # set file name stub for opt. result files
fixed_params = None
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 4]
lower_bound = [1e-4, 0]
# set starting values for perturbation
p0 = [1, 1]
ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_expgrowth*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True).head(15)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 8]
lower_bound = [1e-4, 0]
# set starting values for perturbation
p0 = [1, 1]
ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_expgrowth*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True).head(15)
Explanation: The time parameter is hitting the upper boundary that I set. The exponential growth model cannot be fit to the ery spectrum with Ludovic's correction applied.
PAR
End of explanation
dadi.Demographics1D.two_epoch?
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics1D.two_epoch)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 4]
lower_bound = [1e-4, 0]
Explanation: The time parameter is hitting the upper boundary that I set. The exponential growth model can therefore not be fit to the PAR spectrum with Ludovic's correction applied.
Two epoch model
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_ery
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/ERY_twoEpoch" # set file name stub for opt. result files
fixed_params = None
# set starting values for perturbation
p0 = [1, 1]
ar_ery = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/ERY_twoEpoch*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 8]
lower_bound = [1e-4, 0]
# set starting values for perturbation
p0 = [1, 1]
ar_ery = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/ERY_twoEpoch*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: ERY
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/PAR_twoEpoch" # set file name stub for opt. result files
fixed_params = None
p0 = [1, 1]
ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_twoEpoch*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nu1_0', 'T_0', 'nu1_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: Still hitting upper boundary on time. The two epoch model cannot be fit to the ERY spectrum with Ludovic's correction applied.
PAR
End of explanation
dadi.Demographics1D.bottlegrowth?
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics1D.bottlegrowth)
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 1e4, 8]
lower_bound = [1e-4, 1e-4, 0]
Explanation: All hitting the upper boundary on the time parameter. The two epoch model cannot be fit to the PAR spectrum with Ludovic's correction applied.
Bottlegrowth
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_ery
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/ERY_bottlegrowth" # set file name stub for opt. result files
fixed_params = None
# set starting values for perturbation
p0 = [1, 1, 1]
#ar_ery = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/ERY_bottlegrowth*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'T_0', 'nuB_opt', 'nuF_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
# set starting values for perturbation
p0 = [55, 1.3, 1.5]
#ar_ery = lbview.map(run_dadi, repeat(p0, 10))
ar_ery = []
for filename in glob("OUT_1D_models/ERY_bottlegrowth*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'T_0', 'nuB_opt', 'nuF_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: ERY
End of explanation
popt = np.array( df.sort_values(by='-logL', ascending=True).iloc[0, 3:6] )
popt
# calculate best-fit model spectrum
model_spectrum = func_ex(popt, ns, pts_l)
theta = dadi.Inference.optimal_sfs_scaling(model_spectrum, fs_ery)
mu = 3e-9
L = fs_ery.data.sum()
print "The optimal value of theta per site for the ancestral population is {0:.4f}.".format(theta/L)
Nref = theta/L/mu/4
Nref
print "At time {0:,} generations ago, the ERY population size instantaneously increased by almost 55-fold (to {1:,}).".format(int(popt[2]*2*Nref), int(popt[0]*Nref))
Explanation: This looks like convergence.
Interpretation
End of explanation
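# Hedged recap (not in the original notebook) of the scaling behind the interpretation
# above: dadi reports theta = 4 * Nref * mu * L for the ancestral population, the nu
# parameters are sizes relative to Nref, and T parameters are in units of 2*Nref generations.
assert np.isclose(Nref, theta / (4 * mu * L))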
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/PAR_bottlegrowth" # set file name stub for opt. result files
fixed_params = None
%%px --local
# set lower and upper bounds to nu1 and T
upper_bound = [1e4, 1e4, 6]
lower_bound = [1e-4, 1e-4, 0]
# set starting values for perturbation
p0 = [1, 1, 1]
#ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_bottlegrowth*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'T_0', 'nuB_opt', 'nuF_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
cl[:]['maxiter'] = 100
# set starting values for perturbation
p0 = [100, 2, 1.2]
ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_bottlegrowth*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'T_0', 'nuB_opt', 'nuF_opt', 'T_opt', '-logL'])
df.sort_values(by='-logL', ascending=True).head(10)
Explanation: I think such a high effective population size is not realistic.
PAR
End of explanation
popt = np.array( df.sort_values(by='-logL', ascending=True).iloc[0, 3:6] )
popt
# calculate best-fit model spectrum
model_spectrum = func_ex(popt, ns, pts_l)
theta = dadi.Inference.optimal_sfs_scaling(model_spectrum, fs_par)
mu = 3e-9
L = fs_par.data.sum()
print "The optimal value of theta per site for the ancestral population is {0:.4f}.".format(theta/L)
Nref = theta/L/mu/4
Nref
print "At time {0:,} generations ago, the PAR population size instantaneously increased by almost 124-fold (to {1:,}).".format(int(popt[2]*2*Nref), int(popt[0]*Nref))
Explanation: This looks like convergence.
Interpretation
End of explanation
dadi.Demographics1D.three_epoch?
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics1D.three_epoch)
%%px --local
# set lower and upper bounds to nuB, nuF, TB and TF
upper_bound = [1e4, 1e4, 6, 6]
lower_bound = [1e-4, 1e-4, 0, 0]
Explanation: An effective population size of 36 million is obviously too high. I therefore cannot regard this model fitting as successful.
Three Epoch
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_ery
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/ERY_threeEpoch" # set file name stub for opt. result files
fixed_params = None
# set starting values for perturbation
p0 = [10, 1, 1, 1]
#ar_ery = lbview.map(run_dadi, repeat(p0, 20))
ar_ery = []
for filename in glob("OUT_1D_models/ERY_threeEpoch*dill"):
ar_ery.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_ery]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'TB_0', 'TF_0', 'nuB_opt', 'nuF_opt', 'TB_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: ERY
End of explanation
popt = np.array( df.sort_values(by='-logL', ascending=True).iloc[0, 4:8] )
popt
# calculate best-fit model spectrum
model_spectrum = func_ex(popt, ns, pts_l)
theta = dadi.Inference.optimal_sfs_scaling(model_spectrum, fs_ery)
mu = 3e-9
L = fs_ery.data.sum()
print "The optimal value of theta per site for the ancestral population is {0:.4f}.".format(theta/L)
Nref = theta/L/mu/4
Nref
print "At time {0:,} generations ago, the ERY population size instantaneously increased by almost 10-fold (to {1:,}).".format(int((popt[2]+popt[3])*2*Nref), int(popt[0]*Nref)),
print "It then kept this population constant for {0:,} generations.".format(int(popt[2]*2*Nref)),
print "At time {0:,} generations in the past, the ERY population then decreased to 1.3 fold of the ancient population size or {1:,}.".format(int(popt[3]*2*Nref), int(popt[1]*Nref))
Explanation: Reasonable convergence. Divergent parameter value combinations have the same likelihood. The optimal parameter values from the optimisation runs 16 and 10 (adjacent in the table) show that quite different demographic scenarios can have almost identical likelihood. This is not unusual.
Interpretation
End of explanation
%%px --local
# set up global variables on engines required for run_dadi function call
ns = fs_ery.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par
perturb = True
fold = 2 # perturb randomly up to `fold` times 2-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "MODIFIED_SPECTRA/OUT_1D_models/PAR_threeEpoch" # set file name stub for opt. result files
fixed_params = None
# set starting values for perturbation
p0 = [100, 2, 1, 1]
#ar_par = lbview.map(run_dadi, repeat(p0, 20))
ar_par = []
for filename in glob("OUT_1D_models/PAR_threeEpoch*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'TB_0', 'TF_0', 'nuB_opt', 'nuF_opt', 'TB_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: PAR
End of explanation
%%px --local
fold = 1
maxiter = 300
# set starting values for perturbation
p0 = [20, 1e-2, 0.8, 1e-3]
ar_par = lbview.map(run_dadi, repeat(p0, 10))
ar_par = []
for filename in glob("OUT_1D_models/PAR_threeEpoch*dill"):
ar_par.append(dill.load(open(filename)))
l = 2*len(p0)+1
# show all parameter combinations
returned = [flatten(out)[:l] for out in ar_par]
df = pd.DataFrame(data=returned, \
columns=['nuB_0', 'nuF_0', 'TB_0', 'TF_0', 'nuB_opt', 'nuF_opt', 'TB_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True).head(20)
Explanation: There is no convergence. The three epoch model could not be fit to the PAR spectrum with Ludovic's correction.
End of explanation |
14,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Feature Engineering!
In this course you'll learn about one of the most important steps on the way to building a great machine learning model
Step1: You can see here the various ingredients going into each variety of concrete. We'll see in a moment how adding some additional synthetic features derived from these can help a model to learn important relationships among them.
We'll first establish a baseline by training the model on the un-augmented dataset. This will help us determine whether our new features are actually useful.
Establishing baselines like this is good practice at the start of the feature engineering process. A baseline score can help you decide whether your new features are worth keeping, or whether you should discard them and possibly try something else.
Step2: If you ever cook at home, you might know that the ratio of ingredients in a recipe is usually a better predictor of how the recipe turns out than their absolute amounts. We might reason then that ratios of the features above would be a good predictor of CompressiveStrength.
The cell below adds three new ratio features to the dataset. | Python Code:
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
df = pd.read_csv("../input/fe-course-data/concrete.csv")
df.head()
Explanation: Welcome to Feature Engineering!
In this course you'll learn about one of the most important steps on the way to building a great machine learning model: feature engineering. You'll learn how to:
- determine which features are the most important with mutual information
- invent new features in several real-world problem domains
- encode high-cardinality categoricals with a target encoding
- create segmentation features with k-means clustering
- decompose a dataset's variation into features with principal component analysis
The hands-on exercises build up to a complete notebook that applies all of these techniques to make a submission to the House Prices Getting Started competition. After completing this course, you'll have several ideas that you can use to further improve your performance.
Are you ready? Let's go!
The Goal of Feature Engineering
The goal of feature engineering is simply to make your data better suited to the problem at hand.
Consider "apparent temperature" measures like the heat index and the wind chill. These quantities attempt to measure the perceived temperature to humans based on air temperature, humidity, and wind speed, things which we can measure directly. You could think of an apparent temperature as the result of a kind of feature engineering, an attempt to make the observed data more relevant to what we actually care about: how it actually feels outside!
You might perform feature engineering to:
- improve a model's predictive performance
- reduce computational or data needs
- improve interpretability of the results
A Guiding Principle of Feature Engineering
For a feature to be useful, it must have a relationship to the target that your model is able to learn. Linear models, for instance, are only able to learn linear relationships. So, when using a linear model, your goal is to transform the features to make their relationship to the target linear.
The key idea here is that a transformation you apply to a feature becomes in essence a part of the model itself. Say you were trying to predict the Price of square plots of land from the Length of one side. Fitting a linear model directly to Length gives poor results: the relationship is not linear.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/5D1z24N.png" width=300, alt="A scatterplot of Length along the x-axis and Price along the y-axis, the points increasing in a curve, with a poorly-fitting line superimposed.">
<figcaption style="textalign: center; font-style: italic"><center>A linear model fits poorly with only Length as feature.
</center></figcaption>
</figure>
If we square the Length feature to get 'Area', however, we create a linear relationship. Adding Area to the feature set means this linear model can now fit a parabola. Squaring a feature, in other words, gave the linear model the ability to fit squared features.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/BLRsYOK.png" width=600, alt="Left: Area now on the x-axis. The points increasing in a linear shape, with a well-fitting line superimposed. Right: Length on the x-axis now. The points increase in a curve as before, and a well-fitting curve is superimposed.">
<figcaption style="textalign: center; font-style: italic"><center><strong>Left:</strong> The fit to Area is much better. <strong>Right:</strong> Which makes the fit to Length better as well.
</center></figcaption>
</figure>
This should show you why there can be such a high return on time invested in feature engineering. Whatever relationships your model can't learn, you can provide yourself through transformations. As you develop your feature set, think about what information your model could use to achieve its best performance.
Example - Concrete Formulations
To illustrate these ideas we'll see how adding a few synthetic features to a dataset can improve the predictive performance of a random forest model.
The Concrete dataset contains a variety of concrete formulations and the resulting product's compressive strength, which is a measure of how much load that kind of concrete can bear. The task for this dataset is to predict a concrete's compressive strength given its formulation.
End of explanation
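# A hedged illustration (not part of the course notebook) of the Length -> Area idea from
# the text above, on synthetic square-plot data: a linear model fit on Length alone versus
# on the squared feature, which makes the relationship exactly linear here.
import numpy as np
from sklearn.linear_model import LinearRegression
length = np.linspace(1, 10, 50).reshape(-1, 1)
price = (length ** 2).ravel()  # true price of a square plot grows with Length squared
r2_length = LinearRegression().fit(length, price).score(length, price)
r2_area = LinearRegression().fit(length ** 2, price).score(length ** 2, price)
print(f"R^2 using Length: {r2_length:.3f} | using Area = Length^2: {r2_area:.3f}")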
X = df.copy()
y = X.pop("CompressiveStrength")
# Train and score baseline model
baseline = RandomForestRegressor(criterion="mae", random_state=0)
baseline_score = cross_val_score(
baseline, X, y, cv=5, scoring="neg_mean_absolute_error"
)
baseline_score = -1 * baseline_score.mean()
print(f"MAE Baseline Score: {baseline_score:.4}")
Explanation: You can see here the various ingredients going into each variety of concrete. We'll see in a moment how adding some additional synthetic features derived from these can help a model to learn important relationships among them.
We'll first establish a baseline by training the model on the un-augmented dataset. This will help us determine whether our new features are actually useful.
Establishing baselines like this is good practice at the start of the feature engineering process. A baseline score can help you decide whether your new features are worth keeping, or whether you should discard them and possibly try something else.
End of explanation
X = df.copy()
y = X.pop("CompressiveStrength")
# Create synthetic features
X["FCRatio"] = X["FineAggregate"] / X["CoarseAggregate"]
X["AggCmtRatio"] = (X["CoarseAggregate"] + X["FineAggregate"]) / X["Cement"]
X["WtrCmtRatio"] = X["Water"] / X["Cement"]
# Train and score model on dataset with additional ratio features
model = RandomForestRegressor(criterion="mae", random_state=0)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_absolute_error"
)
score = -1 * score.mean()
print(f"MAE Score with Ratio Features: {score:.4}")
Explanation: If you ever cook at home, you might know that the ratio of ingredients in a recipe is usually a better predictor of how the recipe turns out than their absolute amounts. We might reason then that ratios of the features above would be a good predictor of CompressiveStrength.
The cell below adds three new ratio features to the dataset.
End of explanation |
14,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 19
Monday, November 13th 2017
Joins with SQLite, pandas
Starting Up
You can connect to the saved database from last time if you want. Alternatively, for extra practice, you can just recreate it from the datasets provided in the .txt files. That's what I'll do.
Step1: Recap
Last time, you played with a bunch of SQLite commands to query and update the tables in the database.
One thing we didn't get to was how to query the contributors table based off of a query in the candidates table. For example, suppose you want to query which contributors donated to Obama. You could use a nested SELECT statement to accomplish that.
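A minimal sketch of such a nested SELECT (hedged — it assumes the cursor and the candidates/contributors tables created in the code below) could look like:
cursor.execute('''SELECT last_name, first_name, amount FROM contributors
                  WHERE candidate_id = (SELECT id FROM candidates WHERE last_name = 'Obama')''')
print(cursor.fetchall())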
Step2: Joins
The last example involved querying data from multiple tables.
In particular, we combined columns from the two related tables (related through the FOREIGN KEY).
This leads to the idea of joining multiple tables together. SQL has a set of commands to handle different types of joins. SQLite does not support the full suite of join commands offered by SQL but you should still be able to get the main ideas from the limited command set.
We'll begin with the INNER JOIN.
INNER JOIN
The idea here is that you will combine the tables if the values of certain columns are the same between the two tables. In our example, we will join the two tables based on the candidate id. The result of the INNER JOIN will be a new table consisting of the columns we requested and containing the common data. Since we are joining based off of the candidate id, we will not be excluding any rows.
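A hedged sketch of the corresponding SQLite syntax (again assuming the candidates and contributors tables created in the code below):
cursor.execute('''SELECT contributors.last_name, contributors.amount, candidates.last_name
                  FROM contributors INNER JOIN candidates
                  ON contributors.candidate_id = candidates.id''')
print(cursor.fetchall())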
Example
Here are two tables. Table A has the form
Step3: Reading things in is quite easy with pandas.
Notice that pandas populates empty fields with NaN values.
The id column in the contributors dataset is superfluous. Let's delete it.
Step4: Very nice! And we used the head method to print out the first five rows.
Creating a Table with pandas
We can use pandas to create tables in a database.
First, let's create a new database since we've already done a lot on our test database.
Step5: Last time, we opened the data files with Python and then manually used SQLite commands to populate the individual tables. We can use pandas instead like so.
Step6: How big is our table?
Step7: We can visualize the data in our pandas-populated table. No surprises here except that pandas did everything for us.
Step8: Querying a table with pandas
One Way
Step9: Another Way
Step10: More Queries
Step11: Exercises
Use pandas to populate the contributors table.
Query the contributors tables with the following
Step12: Selecting Columns
Step13: Exercises
Sort the contributors table by amount and order in descending order.
Select the first_name and amount columns.
Select the last_name and first_name columns and drop duplicates.
Count how many there are after the duplicates have been dropped.
Altering Tables
Creating a new column is quite easy with pandas.
Step14: We can change an existing field as well.
Step15: You may recall that SQLite doesn't have the functionality to drop a column. It's a one-liner with pandas.
Step16: Exercises
Create a name column for the contributors table with field entries of the form "last name, first name"
For contributors from the state of "PA", change the name to "X".
Delete the newly created name column.
Aggregation
We'd like to get information about the tables such as the maximum amount contributed to the candidates. Here are a bunch of ways to describe the tables.
Step17: It's not very interesting with the candidates table because the candidates table only has one numeric column.
Exercise
Use the describe() method on the contributors table.
I'll use the contributors table to do some demos now.
Step18: There is also a version of the LIMIT clause. It's very intuitive with pandas.
Step19: The usual Python slicing works just fine!
Joins with pandas
pandas has some documentation on joins
Step20: Somewhat organized example
Step21: Other Joins with pandas
We didn't cover all possible joins because SQLite can only handle the few that we did discuss. As mentioned, there are workarounds for some things in SQLite, but not everything. Fortunately, pandas can handle pretty much everything. Here are a few joins that pandas can handle
Step22: Right Outer Join with pandas
Step23: Full Outer Join with pandas | Python Code:
import sqlite3
import numpy as np
import pandas as pd
import time
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
db = sqlite3.connect('L19DB_demo.sqlite')
cursor = db.cursor()
cursor.execute("DROP TABLE IF EXISTS candidates")
cursor.execute("DROP TABLE IF EXISTS contributors")
cursor.execute("PRAGMA foreign_keys=1")
cursor.execute('''CREATE TABLE candidates (
id INTEGER PRIMARY KEY NOT NULL,
first_name TEXT,
last_name TEXT,
middle_init TEXT,
party TEXT NOT NULL)''')
db.commit() # Commit changes to the database
cursor.execute('''CREATE TABLE contributors (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
last_name TEXT,
first_name TEXT,
middle_name TEXT,
street_1 TEXT,
street_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
amount REAL,
date DATETIME,
candidate_id INTEGER NOT NULL,
FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')
db.commit()
with open ("candidates.txt") as candidates:
next(candidates) # jump over the header
for line in candidates.readlines():
cid, first_name, last_name, middle_name, party = line.strip().split('|')
vals_to_insert = (int(cid), first_name, last_name, middle_name, party)
cursor.execute('''INSERT INTO candidates
(id, first_name, last_name, middle_init, party)
VALUES (?, ?, ?, ?, ?)''', vals_to_insert)
with open ("contributors.txt") as contributors:
next(contributors)
for line in contributors.readlines():
cid, last_name, first_name, middle_name, street_1, street_2, \
city, state, zip_code, amount, date, candidate_id = line.strip().split('|')
vals_to_insert = (last_name, first_name, middle_name, street_1, street_2,
city, state, int(zip_code), amount, date, candidate_id)
cursor.execute('''INSERT INTO contributors (last_name, first_name, middle_name,
street_1, street_2, city, state, zip, amount, date, candidate_id)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
candidate_cols = [col[1] for col in cursor.execute("PRAGMA table_info(candidates)")]
contributor_cols = [col[1] for col in cursor.execute("PRAGMA table_info(contributors)")]
def viz_tables(cols, query):
q = cursor.execute(query).fetchall()
framelist = []
for i, col_name in enumerate(cols):
framelist.append((col_name, [col[i] for col in q]))
return pd.DataFrame.from_items(framelist)
Explanation: Lecture 19
Monday, November 13th 2017
Joins with SQLite, pandas
Starting Up
You can connect to the saved database from last time if you want. Alternatively, for extra practice, you can just recreate it from the datasets provided in the .txt files. That's what I'll do.
End of explanation
query = '''SELECT * FROM contributors WHERE candidate_id = (SELECT id from candidates WHERE last_name = "Obama")'''
viz_tables(contributor_cols, query)
Explanation: Recap
Last time, you played with a bunch of SQLite commands to query and update the tables in the database.
One thing we didn't get to was how to query the contributors table based off of a query in the candidates table. For example, suppose you want to query which contributors donated to Obama. You could use a nested SELECT statement to accomplish that.
End of explanation
# Using pandas naming convention
dfcand = pd.read_csv("candidates.txt", sep="|")
dfcand
dfcontr = pd.read_csv("contributors.txt", sep="|")
dfcontr
Explanation: Joins
The last example involved querying data from multiple tables.
In particular, we combined columns from the two related tables (related through the FOREIGN KEY).
This leads to the idea of joining multiple tables together. SQL has a set of commands to handle different types of joins. SQLite does not support the full suite of join commands offered by SQL but you should still be able to get the main ideas from the limited command set.
We'll begin with the INNER JOIN.
INNER JOIN
The idea here is that you will combine the tables if the values of certain columns are the same between the two tables. In our example, we will join the two tables based on the candidate id. The result of the INNER JOIN will be a new table consisting of the columns we requested and containing the common data. Since we are joining based off of the candidate id, we will not be excluding any rows.
Example
Here are two tables. Table A has the form:
| nA | attr | idA |
| :--- | :--- | :--- |
| s1 | 23 | 0 |
| s2 | 7 | 2 |
and table B has the form:
| nB | attr | idB |
| :--- | :--- | :--- |
| t1 | 60 | 0 |
| t2 | 14 | 7 |
| t3 | 22 | 2 |
Table A is associated with Table B through a foreign key on the id column.
If we join the two tables by comparing the id columns and selecting the nA, nB, and attr columns then we'll get
| nA | A.attr | nB | B.attr |
| :--- | :--- | :--- | :--- |
| s1 | 23 | t1 | 60 |
| s2 | 7 | t3 | 22 |
The SQLite code to do this join would be
sql
SELECT nA, A.attr, nB, B.attr FROM A INNER JOIN B ON B.idB = A.idA
Notice that the second row in table B is gone because the id values are not the same.
Thoughts
What is SQL doing with this operation? It may help to visualize this with a Venn diagram. Table A has rows with values corresponding to the idA attribute. Table B has rows with values corresponding to the idB attribute. The INNER JOIN will combine the two tables such that rows with common entries in the id attributes are included. We essentially have the following Venn diagram.
Exercises
Using an INNER JOIN, join the candidates and contributors tables by comparing the contributors.candidate_id and candidates.id columns. Display your joined table with the columns contributors.last_name, contributors.first_name, and candidates.last_name.
Do the same inner join as in the last part, but this time append a WHERE clause to select a specific candidate's last name.
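A possible sketch for the first exercise, reusing the viz_tables helper defined above (one valid formulation among several):
python
query = '''SELECT contributors.last_name, contributors.first_name, candidates.last_name
           FROM contributors INNER JOIN candidates ON candidates.id = contributors.candidate_id'''
viz_tables(['contributors.last_name', 'contributors.first_name', 'candidates.last_name'], query)
For the second exercise, the same query can be extended with a clause such as WHERE candidates.last_name = "Obama" before displaying the result.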
LEFT JOIN or LEFT OUTER JOIN
There are many ways to combine two tables. We just explored one possibility in which we combined the tables based upon the intersection of the two tables (the INNER JOIN).
Now we'll talk about the LEFT JOIN or LEFT OUTER JOIN.
In words, the LEFT JOIN is combining the tables based upon what is in the intersection of the two tables and what is in the "reference" table.
We can consider our toy example in two guises:
Example A
Let's do a LEFT JOIN of table B from table A. That is, we'd like to make a new table by putting table B into table A. In this case, we'll consider table A our "reference" table. We're comparing by the id column again. We know that these two tables share ids 0 and 2 and table A doesn't have anything else in it. The resulting table is:
| nA | A.attr | nB | B.attr |
| :--- | :--- | :--- | :--- |
| s1 | 23 | t1 | 60 |
| s2 | 7 | t3 | 22 |
That's not very exciting. It's the same result as from the INNER JOIN. We can do another example that may be more enlightening.
Example B
Let's do a LEFT JOIN of table A from table B. That is, we'd like to make a new table by putting table A into table B. In this case, we'll consider table B our "reference" table. Again, we use the id column for comparison. We know that these two tables share ids 0 and 2. This time, table B also contains the id 7, which is not shared by table A. The resulting table is:
| nA | A.attr | nB | B.attr |
| :--- | :--- | :--- | :--- |
| s1 | 23 | t1 | 60 |
| None | NaN | t2 | 14 |
| s2 | 7 | t3 | 22 |
Notice that SQLite filled in the missing entries for us. This is necessary for completion of the requested join.
The SQLite commands to accomplish all of this are:
sql
SELECT nA, A.attr, nB, B.attr FROM A LEFT JOIN B ON B.idB = A.idA
and
sql
SELECT nA, A.attr, nB, B.attr FROM B LEFT JOIN A ON A.idA = B.idB
Here is a visualization using Venn diagrams of the LEFT JOIN.
Exercises
Use the following two tables to do the first two exercises in this section. Table A has the form:
| nA | attr | idA |
| :--- | :--- | :--- |
| s1 | 23 | 0 |
| s2 | 7 | 2 |
| s3 | 15 | 2 |
| s4 | 31 | 0 |
and table B has the form:
| nB | attr | idB |
| :--- | :--- | :--- |
| t1 | 60 | 0 |
| t2 | 14 | 7 |
| t3 | 22 | 2 |
Draw the table that would result from a LEFT JOIN using table A as the reference and the id columns for comparison.
Draw the table that would result from a LEFT JOIN using table B as the reference and the id columns for comparison.
Create a new table with the following form:
| average contribution | number of contributors | candidate last name |
| :--- | :--- | :--- |
| ... | ... | ... |
The table should be created using the LEFT JOIN clause on the contributors table by joining the candidates table by the id column. The average contribution column and number of contributors column should be obtained using the AVG and COUNT SQL functions. Finally, you should use the GROUP BY clause on the candidate's last name.
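One possible query for the last exercise, sketched here for reference (aliases and column ordering are up to you):
sql
SELECT AVG(contributors.amount), COUNT(contributors.id), candidates.last_name
FROM contributors LEFT JOIN candidates ON candidates.id = contributors.candidate_id
GROUP BY candidates.last_name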
pandas
We've been working with databases for the last few lectures and learning SQLite commands to work with and manipulate the databases. There is a Python package called pandas that provides broad support for data structures. It can be used to interact with relational databases through its own methods and even through SQL commands.
In the last part of this lecture, you will get to redo a bunch of the database exercises using pandas.
We won't be able to cover pandas from the ground up, but it's a well-documented library and is fairly easy to get up and running. Here's the website: pandas.
Reading a datafile into pandas
End of explanation
del dfcontr['id']
dfcontr.head()
Explanation: Reading things in is quite easy with pandas.
Notice that pandas populates empty fields with NaN values.
The id column in the contributors dataset is superfluous. Let's delete it.
End of explanation
dbp = sqlite3.connect('L19_pandas_DB.sqlite')
csr = dbp.cursor()
csr.execute("DROP TABLE IF EXISTS candidates")
csr.execute("DROP TABLE IF EXISTS contributors")
csr.execute("PRAGMA foreign_keys=1")
csr.execute('''CREATE TABLE candidates (
id INTEGER PRIMARY KEY NOT NULL,
first_name TEXT,
last_name TEXT,
middle_name TEXT,
party TEXT NOT NULL)''')
dbp.commit() # Commit changes to the database
csr.execute('''CREATE TABLE contributors (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
last_name TEXT,
first_name TEXT,
middle_name TEXT,
street_1 TEXT,
street_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
amount REAL,
date DATETIME,
candidate_id INTEGER NOT NULL,
FOREIGN KEY(candidate_id) REFERENCES candidates(id))''')
dbp.commit()
Explanation: Very nice! And we used the head method to print out the first five rows.
Creating a Table with pandas
We can use pandas to create tables in a database.
First, let's create a new database since we've already done a lot on our test database.
End of explanation
dfcand.to_sql("candidates", dbp, if_exists="append", index=False)
Explanation: Last time, we opened the data files with Python and then manually used SQLite commands to populate the individual tables. We can use pandas instead like so.
End of explanation
dfcand.shape
Explanation: How big is our table?
End of explanation
query = '''SELECT * FROM candidates'''
csr.execute(query).fetchall()
Explanation: We can visualize the data in our pandas-populated table. No surprises here except that pandas did everything for us.
End of explanation
dfcand.query("first_name=='Mike' & party=='D'")
Explanation: Querying a table with pandas
One Way
End of explanation
dfcand[(dfcand.first_name=="Mike") & (dfcand.party=="D")]
Explanation: Another Way
End of explanation
dfcand[dfcand.middle_name.notnull()]
dfcand[dfcand.first_name.isin(['Mike', 'Hillary'])]
Explanation: More Queries
End of explanation
dfcand.sort_values(by='party')
dfcand.sort_values(by='party', ascending=False)
Explanation: Exercises
Use pandas to populate the contributors table.
Query the contributors table with the following (a possible sketch is given after the list):
List entries where the state is "VA" and the amount is less than $\$400.00$.
List entries where the state is "NULL".
List entries for the states of Texas and Pennsylvania.
List entries where the amount contributed is between $\$10.00$ and $\$50.00$.
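A possible sketch for these steps, assuming the contributors data is still in dfcontr as loaded above (other formulations are equally valid):
python
dfcontr.to_sql("contributors", dbp, if_exists="append", index=False)  # populate the table
dfcontr[(dfcontr.state == "VA") & (dfcontr.amount < 400)]
dfcontr[dfcontr.state.isnull()]
dfcontr[dfcontr.state.isin(["TX", "PA"])]
dfcontr[(dfcontr.amount >= 10) & (dfcontr.amount <= 50)]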
Sorting
End of explanation
dfcand[['last_name', 'party']]
dfcand[['last_name', 'party']].count()
dfcand[['first_name']].drop_duplicates()
dfcand[['first_name']].drop_duplicates().count()
Explanation: Selecting Columns
End of explanation
dfcand['name'] = dfcand['last_name'] + ", " + dfcand['first_name']
dfcand
Explanation: Exercises
Sort the contributors table by amount and order in descending order.
Select the first_name and amount columns.
Select the last_name and first_name columns and drop duplicates.
Count how many there are after the duplicates have been dropped.
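A possible sketch for the exercises just listed (not the only approach):
python
dfcontr.sort_values(by="amount", ascending=False)
dfcontr[["first_name", "amount"]]
dfcontr[["last_name", "first_name"]].drop_duplicates()
dfcontr[["last_name", "first_name"]].drop_duplicates().count()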
Altering Tables
Creating a new column is quite easy with pandas.
End of explanation
dfcand.loc[dfcand.first_name == "Mike", "name"]
dfcand.loc[dfcand.first_name == "Mike", "name"] = "Mikey"
dfcand.query("first_name == 'Mike'")
Explanation: We can change an existing field as well.
End of explanation
del dfcand['name']
dfcand
Explanation: You may recall that SQLite doesn't have the functionality to drop a column. It's a one-liner with pandas.
End of explanation
dfcand.describe()
Explanation: Exercises
Create a name column for the contributors table with field entries of the form "last name, first name"
For contributors from the state of "PA", change the name to "X".
Delete the newly created name column.
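A possible sketch for these three steps, following the same pattern used for dfcand:
python
dfcontr["name"] = dfcontr["last_name"] + ", " + dfcontr["first_name"]
dfcontr.loc[dfcontr.state == "PA", "name"] = "X"
del dfcontr["name"]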
Aggregation
We'd like to get information about the tables such as the maximum amount contributed to the candidates. Here are a bunch of ways to describe the tables.
End of explanation
dfcontr.amount.max()
dfcontr[dfcontr.amount==dfcontr.amount.max()]
dfcontr.groupby("state").sum()
dfcontr.groupby("state")["amount"].sum()
dfcontr.state.unique()
Explanation: It's not very interesting with the candidates table because the candidates table only has one numeric column.
Exercise
Use the describe() method on the contributors table.
I'll use the contributors table to do some demos now.
End of explanation
dfcand[0:3]
Explanation: There is also a version of the LIMIT clause. It's very intuitive with pandas.
End of explanation
cols_wanted = ['last_name_x', 'first_name_x', 'candidate_id', 'id', 'last_name_y']
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id")[cols_wanted]
Explanation: The usual Python slicing works just fine!
Joins with pandas
pandas has some documentation on joins: Merge, join, and concatenate. If you want some more reinforcement on the concepts from earlier regarding JOIN, then the pandas documentation may be a good place to get it.
You may also be interested in a comparison with SQL.
To do joins with pandas, we use the merge command.
Here's an example of an explicit inner join:
End of explanation
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id")[cols_wanted].groupby('last_name_y').describe()
Explanation: Somewhat organized example
End of explanation
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="left")[cols_wanted]
Explanation: Other Joins with pandas
We didn't cover all possible joins because SQLite can only handle the few that we did discuss. As mentioned, there are workarounds for some things in SQLite, but not everything. Fortunately, pandas can handle pretty much everything. Here are a few joins that pandas can handle:
* LEFT OUTER (already discussed)
* RIGHT OUTER - Think of the "opposite" of a LEFT OUTER join (shade the intersection and right set in the Venn diagram).
* FULL OUTER - Combine everything from both tables (shade the entire Venn diagram)
Left Outer Join with pandas
End of explanation
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="right")[cols_wanted]
Explanation: Right Outer Join with pandas
End of explanation
dfcontr.merge(dfcand, left_on="candidate_id", right_on="id", how="outer")[cols_wanted]
Explanation: Full Outer Join with pandas
End of explanation |
14,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in line.split()] for line in target_text.split('\n')]
for i in range(len(target_id_text)):
target_id_text[i].append(target_vocab_to_int['<EOS>'])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
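As a quick illustration, here is a call with tiny made-up vocabularies (not the project data; the real function is checked by the unit test above):
python
toy_src_vocab = {'hello': 12, 'world': 37, '<UNK>': 0}
toy_tgt_vocab = {'bonjour': 8, 'le': 4, 'monde': 19, '<EOS>': 1, '<UNK>': 0}
print(text_to_ids('hello world', 'bonjour le monde', toy_src_vocab, toy_tgt_vocab))
# expected output: ([[12, 37]], [[8, 4, 19, 1]])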
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
# TODO - Should this be string or int32 ?
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
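To see what this does to a batch, here is an equivalent NumPy illustration on made-up word ids (assuming <GO> maps to 1); the TensorFlow version above performs the same slice-and-prepend on tensors:
python
import numpy as np
toy_targets = np.array([[4, 5, 6, 3],
                        [7, 8, 9, 3]])   # two target sequences, each ending in <EOS>
go_id = 1                                # hypothetical id for '<GO>'
ending = toy_targets[:, :-1]             # drop the last word id from each row
dec_input = np.concatenate([np.full((toy_targets.shape[0], 1), go_id), ending], axis=1)
print(dec_input)                         # [[1 4 5 6] [1 7 8 9]]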
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
rnn_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, rnn_state = tf.nn.dynamic_rnn(rnn_cell, rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
drop = tf.nn.dropout(train_pred, keep_prob)
train_logits = output_fn(drop)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
# Ignoring keep_prob as we would not want to use dropout during inference
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
with tf.variable_scope("decoding") as decoding_scope:
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
weights = tf.truncated_normal_initializer(stddev=0.1)
biases = tf.zeros_initializer()
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None,
scope=decoding_scope,
weights_initializer=weights,
biases_initializer=biases)
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
decoding_scope.reuse_variables()
infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size,
decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size,
initializer = tf.random_uniform_initializer(-1,1))
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], -1, 1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs - test up to 30
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.75
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
word_ids = []
for word in sentence.lower().split():
word_ids.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
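A quick illustrative check (the word 'zyxwvu' is deliberately made up, so it should map to the <UNK> id):
python
print(sentence_to_seq('He saw a zyxwvu truck', source_vocab_to_int))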
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
14,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
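For example, a hypothetical placeholder entry would look like the following; replace the name and email with the real document authors:
python
DOC.set_author("Jane Doe", "jane.doe@example.org")  # placeholder values, not a real author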
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
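As a purely hypothetical illustration of the call pattern (not the documented answer for this model), an ENUM property is recorded by passing one of the listed valid choices verbatim:
# Hypothetical example of the pattern only -- pick the choice that actually applies.
# DOC.set_value("OGCM")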
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
14,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aprendizaje no supervisado parte 1 - transformación
Muchas formas de aprendizaje no supervisado, como reducción de dimensionalidad, aprendizaje de variedades y extracción de características, encuentran una nueva representación de los datos de entrada sin ninguna variable adicional (al contrario que en aprendizaje supervisado, los algoritmos nos supervisados no requieren o consideran variables objetivo como en los casos anteriores de clasificación y regresión).
<img src="figures/unsupervised_workflow.svg" width="100%">
Un ejemplo muy básico es el rescalado de los datos, que es un requisito para muchos algoritmos de aprendizaje automático ya que no son invariantes a escala (aunque el reescalado de los datos es más bien un método de preprocesamiento ya que no hay mucho aprendizaje). Existen muchas técnicas de reescalado y, en el siguiente ejemplo, veremos un método particular que se denomina "estandarización". Con este método, reescalaremos los datos para que cada característica esté centrada en el cero (media=0) con varianza unitaria (desviación típica = 1).
Por ejemplo, si tenemos un dataset de una dimensión con los datos $[1, 2, 3, 4, 5]$, los valores estandarizados serían
Step1: Aunque la estandarización es un método muy básico (y su código es simple, como acabamos de ver) scikit-learn implemente una clase StandardScaler para realizar los cálculos. En secciones posteriores veremos porqué es mejor usar la interfaz de scikit-learn que el código anterior.
Aplicar un algoritmo de preprocesamiento tiene una interfaz muy similar a la que se usa para los algoritmos supervisados que hemos visto hasta el momento. Para coger más práctica con la interfaz Transformer de scikit-learn, vamos a empezar cargando el dataset iris y reescalándolo
Step2: El dataset iris no está "centrado", es decir, tiene media distinta de cero y desviación típica distinta para cada componente
Step3: Para usar un método de preprocesamiento, primero importamos el estimador, en este caso, StandardScaler, y luego lo instanciamos
Step4: Como con los algoritmos de regresión y clasificación, llamamos a fit para aprender el modelo de los datos. Como es un modelo no supervisado, solo le pasamos X, no y. Esto simplemente calcula la media y la desviación típica.
Step5: Ahora podemos reescalar los datos aplicando el método transform (no predict)
Step6: X_train_scaled tiene el mismo número de ejemplos y características, pero la media ha sido restada y todos las variables tienen desviación típica unitaria
Step7: Resumiendo, el método fit ajusta el estimador a los datos que le proporcionamos. En este paso, el estimador estima los parámetros de los datos (p.ej. media y desviación típica). Después, si aplicamos transform, estos parámetros se utilizan para transformar un dataset (el método transform no modifica los parámetros).
Es importante indicar que la misma transformación se utiliza para los datos de entrenamiento y de test. Como consecuencia, la media y desviación típica en test no tienen porque ser 0 y 1
Step8: La transformación en entrenamiento y test debe ser siempre la misma, para que tenga sentido lo que estamos haciendo. Por ejemplo
Step9: Hay muchas formas de escalar los datos. La más común es el StandardScaler que hemos mencionada, pero hay otras clases útiles como
Step10: Análisis de componentes principales
Una transformación no supervisada algo más interesante es el Análisis de Componentes Principales (Principal Component Analysis, PCA). Es una técnica para reducir la dimensionalidad de los datos, creando una proyección lineal. Es decir, encontramos características nuevas para representar los datos que son una combinación lineal de los datos originales (lo cual es equivalente a rotar los datos). De esta forma, podemos pensar en el PCA como una proyección de nuestros datos en un nuevo espacio de características.
La forma en que el PCA encuentra estas nuevas direcciones es buscando direcciones de máxima varianza. Normalmente, solo unas pocas componentes principales son capaces explicar la mayor parte de la varianza y el resto se pueden obviar. La premisa es reducir el tamaño (dimensionalidad) del dataset, al mismo tiempo que se captura la mayor parte de información. Hay muchas razones por las que es bueno reducir la dimensionalidad de un dataset
Step11: Veamos ahora todos los pasos con más detalle.
Creamos una nube Gaussiana de puntos, que es rotada
Step12: Como siempre, instanciamos nuestro modelo PCA. Por defecto, todas las componentes se mantienen
Step13: Después, ajustamos el PCA a los datos. Como PCA es un algoritmo no supervisado, no hay que suministrar ninguna y.
Step14: Después podemos transformar los datos, proyectando en las componentes principales
Step15: Ahora vamos a usar una sola componente principal
Step16: El PCA encuentra sitúa la primera componente en la diagonal de los datos (máxima variabilidad) y la segunda perpendicular a la primera. Las componentes siempre son ortogonales entre si.
Reducción de la dimensionalidad para visualización con PCA
Considera el dataset de dígitos. No puede ser visualizado en un único gráfico 2D, porque tiene 64 características. Vamos a extraer 2 dimensiones para visualizarlo, utilizando este ejemplo de scikit learn. | Python Code:
ary = np.array([1, 2, 3, 4, 5])
ary_standardized = (ary - ary.mean()) / ary.std()
ary_standardized
Explanation: Unsupervised learning part 1 - transformation
Many forms of unsupervised learning, such as dimensionality reduction, manifold learning and feature extraction, find a new representation of the input data without any additional variable (unlike supervised learning, unsupervised algorithms do not require or use a target variable as in the earlier classification and regression cases).
<img src="figures/unsupervised_workflow.svg" width="100%">
A very basic example is rescaling the data, which is a requirement for many machine learning algorithms because they are not scale-invariant (although rescaling is really a preprocessing step, since not much learning is involved). There are many rescaling techniques and, in the following example, we will look at a particular method called "standardization". With this method, we rescale the data so that each feature is centred at zero (mean = 0) with unit variance (standard deviation = 1).
For example, if we have a one-dimensional dataset with the data $[1, 2, 3, 4, 5]$, the standardized values would be:
1 -> -1.41
2 -> -0.71
3 -> 0.0
4 -> 0.71
5 -> 1.41
which can be obtained with the equation $x_{standardized} = \frac{x - \mu_x}{\sigma_x}$, where $\mu$ is the sample mean and $\sigma$ the standard deviation.
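As a quick check, scikit-learn reproduces the same numbers; a minimal sketch (the reshape is needed because scikit-learn expects a 2-D array of shape (n_samples, n_features)):
import numpy as np
from sklearn.preprocessing import StandardScaler
ary_2d = np.array([1, 2, 3, 4, 5], dtype=float).reshape(-1, 1)   # one feature, five samples
print(StandardScaler().fit_transform(ary_2d).ravel())            # approx. [-1.41 -0.71  0.    0.71  1.41]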
End of explanation
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
print(X_train.shape)
Explanation: Although standardization is a very basic method (and its code is simple, as we have just seen), scikit-learn implements a StandardScaler class to perform these computations. In later sections we will see why it is better to use the scikit-learn interface than the code above.
Applying a preprocessing algorithm has an interface very similar to the one used for the supervised algorithms we have seen so far. To get more practice with scikit-learn's Transformer interface, let us start by loading the iris dataset and rescaling it:
End of explanation
print("media : %s " % X_train.mean(axis=0))
print("desviacion típica : %s " % X_train.std(axis=0))
Explanation: El dataset iris no está "centrado", es decir, tiene media distinta de cero y desviación típica distinta para cada componente:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
Explanation: To use a preprocessing method, we first import the estimator, in this case StandardScaler, and then instantiate it:
End of explanation
scaler.fit(X_train)
print(scaler.mean_)
print(scaler.scale_)
Explanation: As with the regression and classification algorithms, we call fit to learn the model from the data. Since this is an unsupervised model, we only pass X, not y. This simply computes the mean and the standard deviation.
End of explanation
X_train_scaled = scaler.transform(X_train)
Explanation: Now we can rescale the data by applying the transform method (not predict):
End of explanation
print(X_train_scaled.shape)
print("media : %s " % X_train_scaled.mean(axis=0))
print("desviación típica : %s " % X_train_scaled.std(axis=0))
Explanation: X_train_scaled tiene el mismo número de ejemplos y características, pero la media ha sido restada y todos las variables tienen desviación típica unitaria:
End of explanation
X_test_scaled = scaler.transform(X_test)
print("medias de los datos de test: %s" % X_test_scaled.mean(axis=0))
Explanation: Resumiendo, el método fit ajusta el estimador a los datos que le proporcionamos. En este paso, el estimador estima los parámetros de los datos (p.ej. media y desviación típica). Después, si aplicamos transform, estos parámetros se utilizan para transformar un dataset (el método transform no modifica los parámetros).
Es importante indicar que la misma transformación se utiliza para los datos de entrenamiento y de test. Como consecuencia, la media y desviación típica en test no tienen porque ser 0 y 1:
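A minimal sketch of this point, reusing the scaler fitted on the training data above: the test set must be transformed with the training parameters, not with a scaler refitted on the test set.
X_test_scaled = scaler.transform(X_test)                  # correct: reuse the parameters learned on X_train
# X_test_scaled = StandardScaler().fit_transform(X_test)  # incorrect: learns different parameters, breaking comparability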
End of explanation
from figures import plot_relative_scaling
plot_relative_scaling()
Explanation: The transformation applied to training and test data must always be the same for what we are doing to make sense. For example:
End of explanation
from figures import plot_scaling
plot_scaling()
Explanation: There are many ways to scale the data. The most common is the StandardScaler mentioned above, but there are other useful classes such as:
- MinMaxScaler: rescales the data to fit within a given minimum and maximum (usually between 0 and 1)
- RobustScaler: uses more robust statistics, such as the median and the quartiles, instead of the mean and the standard deviation.
- Normalizer: normalizes each sample individually so that it has unit norm (L1 or L2). By default, L2 is used.
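A minimal sketch of these alternatives applied to the same training data (assuming X_train from above):
from sklearn.preprocessing import MinMaxScaler, RobustScaler, Normalizer
print(MinMaxScaler().fit_transform(X_train).min(axis=0))     # every feature now starts at 0 and ends at 1
print(RobustScaler().fit_transform(X_train)[:2])             # centred on the median, scaled by the interquartile range
print(Normalizer(norm='l2').fit_transform(X_train)[:2])      # each row (sample) rescaled to unit L2 norm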
End of explanation
from figures import plot_pca_illustration
plot_pca_illustration()
Explanation: Principal component analysis
A somewhat more interesting unsupervised transformation is Principal Component Analysis (PCA). It is a technique for reducing the dimensionality of the data by creating a linear projection. That is, we find new features to represent the data that are linear combinations of the original ones (which is equivalent to rotating the data). In this way, we can think of PCA as a projection of our data onto a new feature space.
The way PCA finds these new directions is by looking for the directions of maximum variance. Usually only a few principal components explain most of the variance and the rest can be discarded. The premise is to reduce the size (dimensionality) of the dataset while capturing most of the information. There are many reasons why it is good to reduce the dimensionality of a dataset: it lowers the computational cost of the learning algorithms, it saves disk space, and it helps fight the so-called curse of dimensionality, which we will discuss in more depth later.
To illustrate how a rotation can work, we will first show it on two-dimensional data and keep both principal components:
End of explanation
rnd = np.random.RandomState(5)
X_ = rnd.normal(size=(300, 2))
X_blob = np.dot(X_, rnd.normal(size=(2, 2)))+rnd.normal(size=2)
y = X_[:, 0] > 0
plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("característica 1")
plt.ylabel("característica 2");
Explanation: Veamos ahora todos los pasos con más detalle.
Creamos una nube Gaussiana de puntos, que es rotada:
End of explanation
from sklearn.decomposition import PCA
pca = PCA()
Explanation: As always, we instantiate our PCA model. By default, all components are kept:
End of explanation
pca.fit(X_blob)
Explanation: Next, we fit the PCA to the data. Since PCA is an unsupervised algorithm, there is no y to supply.
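Once fitted, the estimated directions and the share of variance each one captures can be inspected; a small optional sketch using two attributes of the fitted pca object:
print(pca.components_)                  # one row per principal component (the direction vectors)
print(pca.explained_variance_ratio_)    # fraction of the total variance captured by each component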
End of explanation
X_pca = pca.transform(X_blob)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("primera componente principal")
plt.ylabel("segunda componente principal");
Explanation: Después podemos transformar los datos, proyectando en las componentes principales:
End of explanation
pca = PCA(n_components=1).fit(X_blob)
X_blob.shape
X_pca = pca.transform(X_blob)
print(X_pca.shape)
plt.scatter(X_pca[:, 0], np.zeros(X_pca.shape[0]), c=y, linewidths=0, s=30)
plt.xlabel("primera componente principal");
Explanation: Ahora vamos a usar una sola componente principal
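As a small optional sketch (assuming the one-component pca fitted above): inverse_transform maps the compressed data back to the original space, showing exactly what information the single component keeps.
X_reconstructed = pca.inverse_transform(X_pca)    # back to 2-D, but flattened onto one direction
plt.scatter(X_reconstructed[:, 0], X_reconstructed[:, 1], c=y, linewidths=0, s=30);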
End of explanation
from figures import digits_plot
digits_plot()
Explanation: PCA places the first component along the diagonal of the data (the direction of maximum variability) and the second perpendicular to the first. The components are always orthogonal to each other.
Dimensionality reduction for visualization with PCA
Consider the digits dataset. It cannot be visualized in a single 2D plot, because it has 64 features. We are going to extract 2 dimensions to visualize it, following this scikit-learn example.
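The digits_plot() helper wraps this up; a minimal standalone sketch of the same idea might look like this (colouring each projected digit by its label):
from sklearn.datasets import load_digits
digits = load_digits()
digits_pca = PCA(n_components=2).fit_transform(digits.data)   # 64 features -> 2 components
plt.scatter(digits_pca[:, 0], digits_pca[:, 1], c=digits.target, s=10, cmap='tab10')
plt.xlabel("first principal component")
plt.ylabel("second principal component");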
End of explanation |
14,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Can define what spread beyond which you assume player has a 0 or 100% chance of winning - using 300 as first guess.
Also, spreads now range only from 0 to positive numbers, because trailing by 50 and winning is the same outcome and leading by 50 and losing (just swapping the players' perspectives)
Step1: The 50% win line is likely a little bit above 0 spread, because when you end a turn with 0 spread, your opponent on average gets an extra half-turn more than you for the rest of the game. Let's fine that line.
Step2: Opening turn scores
Step3: Apply smoothing
We want the win percentage to increase monotonically with spread, even though we have a limited sample size and this may not always be true. Therefore, we want to be able to average win percentages over neighboring scenarios (similar spread difference and similar # of tiles remaining). | Python Code:
max_spread = 300
counter_dict_by_spread_and_tiles_remaining = {x:{
spread:0 for spread in range(max_spread,-max_spread-1,-1)} for x in range(0,94)}
win_counter_dict_by_spread_and_tiles_remaining = deepcopy(counter_dict_by_spread_and_tiles_remaining)
t0=time.time()
print('There are {} games'.format(len(win_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
# truncate spread to the range -max_spread to max_spread
end_of_turn_tiles_left = int(row[10])-int(row[7])
end_of_turn_spread = min(max(int(row[6])-int(row[11]),-max_spread),max_spread)
if end_of_turn_tiles_left > 0:
counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread] += 1
if row[0]=='p1':
win_counter_dict_by_spread_and_tiles_remaining[
end_of_turn_tiles_left][end_of_turn_spread] += win_dict[row[1]]
else:
win_counter_dict_by_spread_and_tiles_remaining[
end_of_turn_tiles_left][end_of_turn_spread] += (1-win_dict[row[1]])
# debug rows
# if i<10:
# print(row)
# print(end_of_turn_spread)
# print(end_of_turn_tiles_left)
# print(counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread])
# print(win_counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread])
count_df = pd.DataFrame(counter_dict_by_spread_and_tiles_remaining)
win_df = pd.DataFrame(win_counter_dict_by_spread_and_tiles_remaining)
win_pct_df = win_df/count_df
fig,ax = plt.subplots(figsize=(12,8))
sns.heatmap(win_pct_df, ax=ax)
ax.set_xlabel('Tiles remaining')
ax.set_ylabel('Game spread')
ax.set_title('Win % by tiles remaining and spread')
plt.savefig('win_pct.jpg')
count_df.iloc[300:350,79:]
Explanation: Can define what spread beyond which you assume player has a 0 or 100% chance of winning - using 300 as first guess.
Also, spreads now range only from 0 to positive numbers, because trailing by 50 and winning is the same outcome and leading by 50 and losing (just swapping the players' perspectives)
End of explanation
win_pct_df.iloc[250:350,79:]
Explanation: The 50% win line is likely a little bit above 0 spread, because when you end a turn with 0 spread, your opponent on average gets an extra half-turn more than you for the rest of the game. Let's find that line.
End of explanation
pd.options.display.max_rows = 999
Explanation: Opening turn scores
End of explanation
counter_dict_by_opening_turn_score = {x:0 for x in range(0,131)}
win_counter_dict_by_opening_turn_score = {x:0 for x in range(0,131)}
rows = []
t0=time.time()
print('There are {} games'.format(len(win_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
if row[2]=='1':
counter_dict_by_opening_turn_score[int(row[5])] += 1
# check which player went first
if row[0]=='p1':
win_counter_dict_by_opening_turn_score[int(row[5])] += win_dict[row[1]]
rows.append([int(row[5]), win_dict[row[1]]])
else:
win_counter_dict_by_opening_turn_score[int(row[5])] += 1-win_dict[row[1]]
rows.append([int(row[5]), 1-win_dict[row[1]]])
# # debug rows
# if i<10:
# print(row)
tst_df=pd.DataFrame(rows).rename(columns={0:'opening turn score',1:'win'})
opening_turn_count = pd.Series(counter_dict_by_opening_turn_score)
opening_turn_win_count = pd.Series(win_counter_dict_by_opening_turn_score)
opening_turn_win_pct = opening_turn_win_count/opening_turn_count
tst = opening_turn_win_pct.dropna()
opening_turn_win_pct
fig,ax=plt.subplots()
plt.plot(tst)
plt.savefig('plot1.png')
fig,ax=plt.subplots()
sns.regplot(x='opening turn score',y='win',data=tst_df,x_estimator=np.mean,ax=ax)
plt.savefig('regression_plot.png')
fig,ax=plt.subplots()
sns.regplot(x='opening turn score',y='win',data=tst_df,x_estimator=np.mean,ax=ax,fit_reg=False)
plt.savefig('regression_plot_no_fitline.png')
Explanation: Apply smoothing
We want the win percentage to increase monotonically with spread, even though we have a limited sample size and this may not always be true. Therefore, we want to be able to average win percentages over neighboring scenarios (similar spread difference and similar # of tiles remaining).
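A minimal sketch of one such smoothing, assuming the win_pct_df built above (the window sizes are illustrative): take a centred rolling mean first down the spread axis, then across the tiles-remaining axis.
smoothed = win_pct_df.rolling(window=5, center=True, min_periods=1).mean()    # average neighbouring spreads
smoothed = smoothed.T.rolling(window=3, center=True, min_periods=1).mean().T  # average neighbouring tiles-remaining columns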
End of explanation |
14,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kampff lab - Ultra dense survey
Here a description of the dataset
Step1: create a DataIO (and remove if already exists)
Step2: CatalogueConstructor
Run all chain in one shot.
Step3: Noise measurement
Step4: Inspect waveform quality at catalogue level
Step5: construct catalogue
Step6: apply peeler
This is the real spike sorting
Step7: final inspection of cells | Python Code:
# suposing the datset is downloaded here
workdir = '/media/samuel/dataspikesorting/DataSpikeSortingHD2/kampff/ultra dense/'
filename = workdir + 'T2/amplifier2017-02-08T21_38_55.bin'
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
import os, shutil
Explanation: Kampff lab - Ultra dense survey
Here a description of the dataset:
http://www.kampff-lab.org/ultra-dense-survey/
Here the official publication of this open dataset:
https://crcns.org/data-sets/methods/hdr-1/about-hdr-1
And a paper is being preparing here:
https://doi.org/10.1101/275818
Introduction
This dataset explore optimal size and density of electrodes.
Here 255 extracellular electrodes (5 x 5 μm and spacing of 1 μm)
Download
Dataset must downloaded locally and manually from crcns or from the google drive in "workdir" path.
The PRB file
tridesclous need a PRB file that describe the geometry of probe.
Create it by copy/paste or download it via github.
End of explanation
dirname = workdir + 'tdc_amplifier2017-02-02T17_18_46'
if os.path.exists(dirname):
#remove is already exists
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO with one file
dataio.set_data_source(type='RawData', filenames=[filename],
sample_rate=20000., dtype='int16', total_channel=256,
bit_to_microVolt=0.195)
print(dataio)
# set the probe file
dataio.set_probe_file('kampff_ultra_dense_256.prb')
Explanation: create a DataIO (and remove if already exists)
End of explanation
cc = CatalogueConstructor(dataio=dataio, chan_grp=0)
fullchain_kargs = {
'duration' : 300.,
'preprocessor' : {
'highpass_freq' : 400.,
'lowpass_freq' : 5000.,
'smooth_size' : 0,
'chunksize' : 1024,
'lostfront_chunksize' : 128,
'signalpreprocessor_engine' : 'numpy',
},
'peak_detector' : {
'peakdetector_engine' : 'numpy',
'peak_sign' : '-',
'relative_threshold' : 5.,
'peak_span' : 0.0002,
},
'noise_snippet' : {
'nb_snippet' : 300,
},
'extract_waveforms' : {
'n_left' : -20,
'n_right' : 30,
'mode' : 'rand',
'nb_max' : 20000,
'align_waveform' : False,
},
'clean_waveforms' : {
'alien_value_threshold' : 100.,
},
}
feat_method = 'peak_max'
feat_kargs = {}
clust_method = 'sawchaincut'
clust_kargs = {}
tdc.apply_all_catalogue_steps(cc, fullchain_kargs,
feat_method, feat_kargs,clust_method, clust_kargs)
print(cc)
Explanation: CatalogueConstructor
Run all chain in one shot.
End of explanation
dataio = DataIO(dirname=dirname)
tdc.summary_noise(dataio=dataio, chan_grp=0)
Explanation: Noise measurement
End of explanation
tdc.summary_catalogue_clusters(dataio=dataio, chan_grp=0, label=0)
Explanation: Inspect waveform quality at catalogue level
End of explanation
cc.make_catalogue_for_peeler()
Explanation: construct catalogue
End of explanation
initial_catalogue = dataio.load_catalogue(chan_grp=0)
peeler = Peeler(dataio)
peeler.change_params(catalogue=initial_catalogue,
use_sparse_template=True,
sparse_threshold_mad=1.5,
use_opencl_with_sparse=True,
cl_platform_index=1,
cl_device_index=0)
peeler.run(duration=300.,
progressbar=True)
Explanation: apply peeler
This is the real spike sorting: find spikes that correspond to catalogue templates.
End of explanation
tdc.summary_after_peeler_clusters(dataio, chan_grp=0, label=0, neighborhood_radius=None, show_channels=False)
tdc.summary_after_peeler_clusters(dataio, chan_grp=0, label=1, neighborhood_radius=None, show_channels=False)
Explanation: final inspection of cells
End of explanation |
14,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification - Decision Tree Primer
Classify Iris (flowers) by their sepal/petal width/length to their species
Step1: Task
Step2: Wait, how do I know that the Decision Tree works???
A
Step3: Feature importance
TODO | Python Code:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from plotting_utilities import plot_decision_tree, plot_feature_importances
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
iris = load_iris()
iris.DESCR.split('\n')
# IN: Features aka Predictors
print(iris.data.dtype)
print(iris.data.shape)
print(iris.feature_names)
iris.data[:5,:]
# OUT: Target, here: species
print(iris.target.dtype)
print(iris.target.shape)
print(iris.target_names)
iris.target[:5]
Explanation: Classification - Decision Tree Primer
Classify Iris (flowers) by their sepal/petal width/length to their species: 'setosa' 'versicolor' 'virginica'
Original Image
End of explanation
X = iris.data
y = iris.target
# TODO: Try with and without max_depth (setting also avoids overfitting)
# clf = DecisionTreeClassifier().fit(X, y)
clf = DecisionTreeClassifier(max_depth = 3).fit(X, y)
plot_decision_tree(clf, iris.feature_names, iris.target_names)
Explanation: Task: Create a Decision Tree
to be able to classify an unseen Iris by sepal/petal with into its species: 'setosa' 'versicolor' 'virginica'
End of explanation
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 3)
# Train the classifier only with the trainings data
clf = DecisionTreeClassifier().fit(X_train, y_train)
# predict for the test data and compare with the actual outcome
y_pred = clf.predict(X_test)
from sklearn.metrics import confusion_matrix
print(" ------ Predicted ")
print(" Actual ")
confusion_matrix(y_test, y_pred)
print('Accuracy of Decision Tree classifier on test set == sum(TP)/sum(): {}'.format((15+11+11)/(15+11+11+1)))
print('Accuracy of Decision Tree classifier on test set with "score"-function: {:.2f}'
.format(clf.score(X_test, y_test)))
Explanation: Wait, how do I know that the Decision Tree works???
A: Split your data into test and train and evaluate with the test data.
End of explanation
plt.figure(figsize=(10,4), dpi=80)
plot_feature_importances(clf, np.array(iris.feature_names))
plt.show()
print('Feature names : {}'.format(iris.feature_names))
print('Feature importances: {}'.format(clf.feature_importances_))
Explanation: Feature importance
TODO: Compare with level in Tree
End of explanation |
14,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Task #2
Step2: Task #3
Step3: Task #4
Step4: Task #5
Step5: Task #6
Step6: Task #7 | Python Code:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/moviereviews2.tsv', sep='\t')
df.head()
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Text Classification Assessment - Solution
This assessment is very much like the Text Classification Project we just completed, and the dataset is very similar.
The moviereviews2.tsv dataset contains the text of 6000 movie reviews. 3000 are positive, 3000 are negative, and the text has been preprocessed as a tab-delimited file. As before, labels are given as pos and neg.
We've included 20 reviews that contain either NaN data, or have strings made up of whitespace.
For more information on this dataset visit http://ai.stanford.edu/~amaas/data/sentiment/
Task #1: Perform imports and load the dataset into a pandas DataFrame
For this exercise you can load the dataset from '../TextFiles/moviereviews2.tsv'.
End of explanation
# Check for NaN values:
df.isnull().sum()
# Check for whitespace strings (it's OK if there aren't any!):
blanks = [] # start with an empty list
for i,lb,rv in df.itertuples(): # iterate over the DataFrame
if type(rv)==str: # avoid NaN values
if rv.isspace(): # test 'review' for whitespace
blanks.append(i) # add matching index numbers to the list
len(blanks)
Explanation: Task #2: Check for missing values:
End of explanation
df.dropna(inplace=True)
Explanation: Task #3: Remove NaN values:
End of explanation
df['label'].value_counts()
Explanation: Task #4: Take a quick look at the label column:
End of explanation
from sklearn.model_selection import train_test_split
X = df['review']
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Explanation: Task #5: Split the data into train & test sets:
You may use whatever settings you like. To compare your results to the solution notebook, use test_size=0.33, random_state=42
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
# Feed the training data through the pipeline
text_clf.fit(X_train, y_train)
Explanation: Task #6: Build a pipeline to vectorize the date, then train and fit a model
You may use whatever model you like. To compare your results to the solution notebook, use LinearSVC.
End of explanation
# Form a prediction set
predictions = text_clf.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
Explanation: Task #7: Run predictions and analyze the results
End of explanation |
14,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
<div style="width
Step1: In this case, we'll just stick with the standard meteorological data. The "realtime" data from NDBC contains approximately 45 days of data from each buoy. We'll retreive that record for buoy 51002 and then do some cleaning of the data.
Step2: Let's get rid of the columns with all missing data. We could use the drop method and manually name all of the columns, but that would require us to know which are all NaN and that sounds like manual labor - something that programmers hate. Pandas has the dropna method that allows us to drop rows or columns where any or all values are NaN. In this case, let's drop all columns with all NaN values.
Step3: <div class="alert alert-success">
<b>EXERCISE</b>
Step4: Solution
Step5: Finally, we need to trim down the data. The file contains 45 days worth of observations. Let's look at the last week's worth of data.
Step6: We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with.
Step7: <a href="#top">Top</a>
<hr style="height
Step8: We'll start by plotting the windspeed observations from the buoy.
Step9: Our x axis labels look a little crowded - let's try only labeling each day in our time series.
Step10: Now we can add wind gust speeds to the same plot as a dashed yellow line.
Step11: <div class="alert alert-success">
<b>EXERCISE</b>
Step12: Solution
<div class="alert alert-info">
<b>Tip</b>
Step13: <a href="#top">Top</a>
<hr style="height
Step14: That is less than ideal. We can't see detail in the data profiles! We can create a twin of the x-axis and have a secondary y-axis on the right side of the plot. We'll create a totally new figure here.
Step15: We're closer, but the data are plotting over the legend and not included in the legend. That's because the legend is associated with our primary y-axis. We need to append that data from the second y-axis.
Step16: <div class="alert alert-success">
<b>EXERCISE</b>
Step17: Solution | Python Code:
from siphon.simplewebservice.ndbc import NDBC
data_types = NDBC.buoy_data_types('46042')
print(data_types)
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Basic Time Series Plotting</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://matplotlib.org/_images/date_demo.png" alt="METAR" style="height: 300px;"></div>
Overview:
Teaching: 45 minutes
Exercises: 30 minutes
Questions
How can we obtain buoy data from the NDBC?
How are plots created in Python?
What features does Matplotlib have for improving our time series plots?
How can multiple y-axes be used in a single plot?
Objectives
<a href="#loaddata">Obtaining data</a>
<a href="#basictimeseries">Basic timeseries plotting</a>
<a href="#multiy">Multiple y-axes</a>
<a name="loaddata"></a>
Obtaining Data
To learn about time series analysis, we first need to find some data and get it into Python. In this case we're going to use data from the National Data Buoy Center. We'll use the pandas library for our data subset and manipulation operations after obtaining the data with siphon.
Each buoy has many types of data availabe, you can read all about it in the NDBC Web Data Guide. There is a mechanism in siphon to see which data types are available for a given buoy.
End of explanation
df = NDBC.realtime_observations('46042')
df.tail()
Explanation: In this case, we'll just stick with the standard meteorological data. The "realtime" data from NDBC contains approximately 45 days of data from each buoy. We'll retrieve that record for buoy 51002 and then do some cleaning of the data.
End of explanation
df = df.dropna(axis='columns', how='all')
df.head()
Explanation: Let's get rid of the columns with all missing data. We could use the drop method and manually name all of the columns, but that would require us to know which are all NaN and that sounds like manual labor - something that programmers hate. Pandas has the dropna method that allows us to drop rows or columns where any or all values are NaN. In this case, let's drop all columns with all NaN values.
End of explanation
# Your code goes here
# supl_obs =
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Use the realtime_observations method to retrieve supplemental data for buoy 41002. **Note** assign the data to something other than df or you'll have to rerun the data download cell above. We suggest using the name supl_obs.</li>
</ul>
</div>
End of explanation
# %load solutions/get_obs.py
Explanation: Solution
End of explanation
import pandas as pd
idx = df.time >= (pd.Timestamp.utcnow() - pd.Timedelta(days=7))
df = df[idx]
df.head()
Explanation: Finally, we need to trim down the data. The file contains 45 days worth of observations. Let's look at the last week's worth of data.
End of explanation
df.reset_index(drop=True, inplace=True)
df.head()
Explanation: We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with.
End of explanation
# Convention for import of the pyplot interface
import matplotlib.pyplot as plt
# Set-up to have matplotlib use its support for notebook inline plots
%matplotlib inline
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="basictimeseries"></a>
Basic Timeseries Plotting
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We're going to learn the basics of creating timeseries plots with matplotlib by plotting buoy wind, gust, temperature, and pressure data.
End of explanation
plt.rc('font', size=12)
fig, ax = plt.subplots(figsize=(10, 6))
# Specify how our lines should look
ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Speed (m/s)')
ax.set_title('Buoy Wind Data')
ax.grid(True)
ax.legend(loc='upper left');
Explanation: We'll start by plotting the windspeed observations from the buoy.
End of explanation
# Helpers to format and locate ticks for dates
from matplotlib.dates import DateFormatter, DayLocator
# Set the x-axis to do major ticks on the days and label them like '07/20'
ax.xaxis.set_major_locator(DayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%m/%d'))
fig
Explanation: Our x axis labels look a little crowded - let's try only labeling each day in our time series.
End of explanation
# Use linestyle keyword to style our plot
ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--',
label='Wind Gust')
# Redisplay the legend to show our new wind gust line
ax.legend(loc='upper left')
fig
Explanation: Now we can add wind gust speeds to the same plot as a dashed yellow line.
End of explanation
# Your code goes here
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Create your own figure and axes (<code>myfig, myax = plt.subplots(figsize=(10, 6))</code>) which plots temperature.</li>
<li>Change the x-axis major tick labels to display the shortened month and date (i.e. 'Sep DD' where DD is the day number). Look at the
<a href="https://docs.python.org/3.6/library/datetime.html#strftime-and-strptime-behavior">
table of formatters</a> for help.
<li>Make sure you include a legend and labels!</li>
<li><b>BONUS:</b> try changing the <code>linestyle</code>, e.g., a blue dashed line.</li>
</ul>
</div>
End of explanation
# %load solutions/basic_plot.py
Explanation: Solution
<div class="alert alert-info">
<b>Tip</b>:
If your figure goes sideways as you try multiple things, try running the notebook up to this point again
by using the Cell -> Run All Above option in the menu bar.
</div>
End of explanation
# plot pressure data on same figure
ax.plot(df.time, df.pressure, color='black', label='Pressure')
ax.set_ylabel('Pressure')
ax.legend(loc='upper left')
fig
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="multiy"></a>
Multiple y-axes
What if we wanted to plot another variable in vastly different units on our plot? <br/>
Let's return to our wind data plot and add pressure.
End of explanation
fig, ax = plt.subplots(figsize=(10, 6))
axb = ax.twinx()
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Speed (m/s)')
ax.set_title('Buoy Data')
ax.grid(True)
# Plotting on the first y-axis
ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')
ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')
ax.legend(loc='upper left');
# Plotting on the second y-axis
axb.set_ylabel('Pressure (hPa)')
axb.plot(df.time, df.pressure, color='black', label='pressure')
ax.xaxis.set_major_locator(DayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%b %d'))
Explanation: That is less than ideal. We can't see detail in the data profiles! We can create a twin of the x-axis and have a secondary y-axis on the right side of the plot. We'll create a totally new figure here.
End of explanation
fig, ax = plt.subplots(figsize=(10, 6))
axb = ax.twinx()
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Speed (m/s)')
ax.set_title('Buoy 41056 Wind Data')
ax.grid(True)
# Plotting on the first y-axis
ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')
ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')
# Plotting on the second y-axis
axb.set_ylabel('Pressure (hPa)')
axb.plot(df.time, df.pressure, color='black', label='pressure')
ax.xaxis.set_major_locator(DayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%b %d'))
# Handling of getting lines and labels from all axes for a single legend
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = axb.get_legend_handles_labels()
axb.legend(lines + lines2, labels + labels2, loc='upper left');
Explanation: We're closer, but the data are plotting over the legend and not included in the legend. That's because the legend is associated with our primary y-axis. We need to append that data from the second y-axis.
End of explanation
# Your code goes here
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
Create your own plot that has the following elements:
<ul>
<li>A blue line representing the wave height measurements.</li>
<li>A green line representing wind speed on a secondary y-axis</li>
<li>Proper labels/title.</li>
<li>**Bonus**: Make the wave height data plot as points only with no line. Look at the documentation for the linestyle and marker arguments.</li>
</ul>
</div>
End of explanation
# %load solutions/adv_plot.py
Explanation: Solution
End of explanation |
14,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. Getting stuff done with Python
In this unit, we are going to learn a new data structure with richer features compared to lists. The problem with lists, while flexible enough to store different data types in one variable, is that it does not provide us a way to "label" that information with meta data. For example in a list ['Andy', 28, 980.15], we would like to be able to associate with the value Andy a label or key 'name' so that we don't have to keep on recalling that a staff's name is located in the first position of the list.
Secondly, we will cover the for and if else construct.
Technically speaking, armed with the basic tools that we have learnt from the previous unit, we could go about writing code and basic scripts. But with these keywords, what can achieved is rather limited. Worse, the process of writing code will be tedious and unenlightening. From a scientific and analytic point of view, writing code is not only an ends to obtaining the results we want, but a way for us to structure our thinking and come to an understanding of the problem.
5.1 Learning objectives for this unit
To use the dict data structure and related methods.
To apply boolen conditionals to control if, if else and if elif statements.
To use the for loop to iterate through repetitive calculations.
6. The dictionary
Lists - while easy to create- have the weakness that one cannot easily retrieve data that has been already stored in it. Since the primary means of retrieving information in a list is via indexing and slicing, you need to know the exact integer positions of each data stored in the list. This can (and will often) lead to human errors in programming. Furthermore, an integer based recall is unenlightening. Other people who reads your code will find it difficult to understand what is being written.
To remedy this, Python has built into its base package a data structure called dictionaries. A dictionary is simply a key-value pairing like so $$(key_1, value_1), (key_2, value_2)\ldots, (key_n, value_n)$$ where $key_i$ are usually (but not always) strings and $value_i$ any Python object (int, str, list and even other dict!)
If my_dictionary is a dictionary. Then calling my_dictionary[key_1] will return you the value associated with key_1 in my_dictionary, say value_1.
6.1 Creating dictionaries
Dictionaries are created using curly braces { }. Inside the curly braces, we simply list down all the key-value pairs with a colon
Step1: 6.2 Updating dictionaries
Very often, we need to change dictionary values and/or add more entries to our dictionary.
Step2: An implication of this is that we can start with an empty dictionary and add keys as we go along.
And empty dictionary is created by assigning a variable to an instance of class dict by calling dict().
Step3: 6.2.1 Dictionary exercise
Here's what I want you to do. Go around the room and find out the favourite food of three colleagues here. Then update this empty dictionary with 3 key-value pairings where keys are your colleague's name and value his/her favourite food. Have fun!
Step4: 6.2.2 Using the .update method
To combine two dictionaries, we use the .update method.
Step5: 6.2.3 Creating dictionaries using kwargs
Dictionaries can also be created by calling dict() with keyword arguments (or kwargs). When we create dictionaries like this, the key value pairs are written as
$$ key_1 = value_1, key_2 = value_2, \ldots $$ again each seperated by a comma.
Step6: Note from the example above that when creating dictionaries, DO NOT enclose the keys within quotation marks. However, when accessing the value of a dictionary by its key, you MUST use quotation marks.
Step7: 7. If and boolen conditionals
Any algorithm worth its salt will have conditionals in it. Very rarely does an algorithm just proceed in a linear fashion, one step following another step. More often than not, one step will follow from another if some condition is satisfied.
This situation can be programmed in Python by using the if keyword. The general construct of an if statement looks like this
Step8: Here is our first example using the if keyword.
Step9: The variable x has been assigned the value 300. When Python sees the statement if x == 300
Step10: The conditional checks and see whether the remainder of 13 when divided by 2 is 0. (Recall that's what % does.) Since 13 returns 1 remainder when divided by 2, the condition is false. Hence the string 'This is an even number' is not printed.
7.2 if else statements
If you look at the previous example, one can see an obvious problem with this block of code. What if y=14? Then the output would consist of two print statements 'This is an even number' and followed by "I guess it's odd then". This is not desireable. In fact we want to print 'This is an even number' only when y is even and "I guess it's odd then" only when y is odd. To code this into Python, we use the else keyword.
Code blocks under the else keyword are executed when the conditionals evaluate to False.
The format of of if else statements are as below
Step11: Try re-executing the cell above with various values of y and see the effect on the output.
Nested if else statements make for horrible coding. It makes code hard to read and understand. Surely there must be a better way!
7.3 elif to rescue
We use elif when we need to test conditionals in a sequential manner. A sequence of conditionals is tested, one after another until the first True condition is encountered. Then, the code block corresponding to that conditional is executed and program continues on without checking the other conditionals.
The format of elif statements looks like this
Step12: See how much more elegant this is instead of nested if else statements.
Now let's try another example. This time we use elif to program Python to assign letter grades to student marks on an an exam.
Here are the letter grade and their assigned intervals.
| Grades | Interval |
|
Step13: As before, play around with the various values of marks to make sure that elif structure is working as intended. Notice that I phrased the conditional only to check agains a lower bound. This is because Python will only execute the code block corresponding to first True conditional in the elif sequence. Even if subsequent conditionals evaluate to true, their code is not run.
8. Performing repetitive tasks. Using the for loop
Most algorithms consist of repeating a certain calculations or computer tasks in an almost similiar manner. To give a scenario, I could instruct Python to print the names of three staff members to the output. I could go about fulfilling this task by writing the following code
print("Staff member Lisa")
print("Staff member Mark")
print("Staff member Andy")
Clearly this is repetitive and tedious. The for keyword allows us to simplify code by writing a single intruction - here print and looping this over a list consisting of staff names [Lisa, Mark, Andy].
The general format of a for loop is given by the following
Step14: But Python allows us to write a more readable form of the for loop. So the following is equivalent to the above and is preferred.
Step15: Note that the "counter variable" staff_name is actually a variable containing the current item in the list as the loop progresses. I could have used any name I wanted for the variable - I could use i to represent the staff's name. But I chose staff_name for readability purposes. As the variable staff_name runs through each item of staff, the code block print is executed with the current value of staff_name. Once that is done, the variable is updated with the next item in the list and the block is executed once more. This proceeds until the end of the list is reached and the for loop terminates.
8.2 Combining for and if statements
To further control what each iteration of the for loop does, we can combine if statements within for statements to trigger certain instructions when a particular iteration fulfils certain conditions.
If we need to exit a for loop prematurely, we can use the break keyword. This is usually used in conjuction with if. When conditions for break is fulfilled, the for loop terminates immediately. Python will not evaluate for statements for remaining items in the iterating list.
Step16: Here's a more mathematical usage of the for statement. Suppose we want to compute the decimal expansion of $\sqrt{2}$ accurate to 3 decimal places.After searching Wikipedia, I came up with this recursive formula $$\begin{align}a_0 &=1 \ a_{n+1} &= \frac{a_n}{2}+\frac{1}{a_n}\end{align}$$ Here's how we could implement this.
Step17: 8.2.1 Your mission, should you choose to accept it
...is to list down all prime numbers less than 100.
A prime number $p$ is a number which is divisible only by 1 and itself. | Python Code:
# creating a dictionary and assigning it to a variable
staff = {'name': 'Andy', 'age': 28, 'email': '[email protected]' }
staff['name']
staff['age']
print(staff['email'])
# A dictionary is of class dict
print(type(staff))
# list of all keys, note the brackets at the end.
# .keys is a method associated to dictionaries
staff.keys()
# list of all values, in no particular order
staff.values()
# list all key-value pairings using .items
staff.items()
Explanation: 5. Getting stuff done with Python
In this unit, we are going to learn a new data structure with richer features compared to lists. The problem with lists, while flexible enough to store different data types in one variable, is that it does not provide us a way to "label" that information with meta data. For example in a list ['Andy', 28, 980.15], we would like to be able to associate with the value Andy a label or key 'name' so that we don't have to keep on recalling that a staff's name is located in the first position of the list.
Secondly, we will cover the for and if else construct.
Technically speaking, armed with the basic tools that we have learnt from the previous unit, we could go about writing code and basic scripts. But with these keywords, what can achieved is rather limited. Worse, the process of writing code will be tedious and unenlightening. From a scientific and analytic point of view, writing code is not only an ends to obtaining the results we want, but a way for us to structure our thinking and come to an understanding of the problem.
5.1 Learning objectives for this unit
To use the dict data structure and related methods.
To apply boolen conditionals to control if, if else and if elif statements.
To use the for loop to iterate through repetitive calculations.
6. The dictionary
Lists - while easy to create- have the weakness that one cannot easily retrieve data that has been already stored in it. Since the primary means of retrieving information in a list is via indexing and slicing, you need to know the exact integer positions of each data stored in the list. This can (and will often) lead to human errors in programming. Furthermore, an integer based recall is unenlightening. Other people who reads your code will find it difficult to understand what is being written.
To remedy this, Python has built into its base package a data structure called dictionaries. A dictionary is simply a key-value pairing like so $$(key_1, value_1), (key_2, value_2)\ldots, (key_n, value_n)$$ where $key_i$ are usually (but not always) strings and $value_i$ any Python object (int, str, list and even other dict!)
If my_dictionary is a dictionary. Then calling my_dictionary[key_1] will return you the value associated with key_1 in my_dictionary, say value_1.
6.1 Creating dictionaries
Dictionaries are created using curly braces { }. Inside the curly braces, we simply list down all the key-value pairs with a colon :. Different pairs are seperated by a ,.
End of explanation
# Hey, Andy mistakenly keyed in his age. He is actually 29 years old!
staff['age'] = 29
print(staff)
# HR wants us to record down his staff ID.
staff['id'] = 12345
print(staff)
# Let's check the list of keys
staff.keys()
Explanation: 6.2 Updating dictionaries
Very often, we need to change dictionary values and/or add more entries to our dictionary.
End of explanation
favourite_food = dict() # You could also type favourite_food = {}
print(favourite_food)
Explanation: An implication of this is that we can start with an empty dictionary and add keys as we go along.
And empty dictionary is created by assigning a variable to an instance of class dict by calling dict().
End of explanation
# update your dictionary here
# and print the dictionary
print(favourite_food)
Explanation: 6.2.1 Dictionary exercise
Here's what I want you to do. Go around the room and find out the favourite food of three colleagues here. Then update this empty dictionary with 3 key-value pairings where keys are your colleague's name and value his/her favourite food. Have fun!
End of explanation
staff.update({'salary': 980.15, 'department':'finance', 'colleagues': ['George', 'Liz']})
# Who are Andy's colleagues? Enter answer below
# Which department does he work in? Enter answer below
Explanation: 6.2.2 Using the .update method
To combine two dictionaries, we use the .update method.
End of explanation
my_favourite_things = dict(food="Assam Laksa", music="classical", number = 2)
# my favourite number
my_favourite_things['number']
Explanation: 6.2.3 Creating dictionaries using kwargs
Dictionaries can also be created by calling dict() with keyword arguments (or kwargs). When we create dictionaries like this, the key value pairs are written as
$$ key_1 = value_1, key_2 = value_2, \ldots $$ again each seperated by a comma.
End of explanation
# An error
my_favourite_things[food]
# ...but this is correct
food = 'food'
my_favourite_things[food]
Explanation: Note from the example above that when creating dictionaries, DO NOT enclose the keys within quotation marks. However, when accessing the value of a dictionary by its key, you MUST use quotation marks.
End of explanation
# examples of binary comparison
1 < 2
# compound statements
1 < 2 or 1 == 2
# using bitwise operators
1<2 & 1==2
Explanation: 7. If and boolen conditionals
Any algorithm worth its salt will have conditionals in it. Very rarely does an algorithm just proceed in a linear fashion, one step following another step. More often than not, one step will follow from another if some condition is satisfied.
This situation can be programmed in Python by using the if keyword. The general construct of an if statement looks like this:
if condition :
do something
Python will only execute the code block do something only if condition evaluate to True. Otherwise it ignores it. Notice the lack of { } surrounding the code block to be executed. The Python interpreter relies on indentation and newline to "know" which code block is conditioned on.
This enforces good coding style and makes it so nice to program with Python. No more worrying about unmatched braces!
7.1 Conditionals
Since conditionals are logical statements, we need some binary comparison operators
== logical equals to
> strictly greater than
< strictly less than
!= logical not equals to
We can create compound statements by using the keywords or and and. Their bitwise versions are | and &.
End of explanation
x = 300
if x == 300:
print('This is Sparta!')
Explanation: Here is our first example using the if keyword.
End of explanation
y = 13
print(y)
if y % 2 == 0:
print('This is an even number')
print("I guess it's odd then")
Explanation: The variable x has been assigned the value 300. When Python sees the statement if x == 300: Python checks the conditional x==300 and evaluates it. Since x was assigned the value 300, the conditional is indeed True. When this happens, it executes the indented code block below the conditional, in this case print the string 'This is Sparta!'
End of explanation
y = 22
if y%2 ==0:
print("{} is an even number".format(y))
else:
print("{} is an odd number".format(y))
y = 13
if y%2 ==0:
print("{} is an even number".format(y))
else:
print("{} is an odd number".format(y))
# Nested if else statements
y = 25
remainder = y%3
if remainder == 0:
print("{} is divisible by 3".format(y))
else:
print("{} is not divisible by 3".format(y))
if remainder ==1:
print("But has remainder {}".format(remainder))
else:
print("But has remainder {}".format(remainder))
Explanation: The conditional checks and see whether the remainder of 13 when divided by 2 is 0. (Recall that's what % does.) Since 13 returns 1 remainder when divided by 2, the condition is false. Hence the string 'This is an even number' is not printed.
7.2 if else statements
If you look at the previous example, one can see an obvious problem with this block of code. What if y=14? Then the output would consist of two print statements 'This is an even number' and followed by "I guess it's odd then". This is not desireable. In fact we want to print 'This is an even number' only when y is even and "I guess it's odd then" only when y is odd. To code this into Python, we use the else keyword.
Code blocks under the else keyword are executed when the conditionals evaluate to False.
The format of of if else statements are as below:
if condition:
do something
else:
do something else
End of explanation
y=25
remainder = y%3
if remainder == 0:
div = 'is'
s = 'Hence'
elif remainder == 1:
div = 'is not'
s = 'But'
elif remainder == 2:
div = 'is not'
s = 'But'
print('{} {} divisible by 3\n{} has remainder {}'.format(y, div, s, remainder))
Explanation: Try re-executing the cell above with various values of y and see the effect on the output.
Nested if else statements make for horrible coding. It makes code hard to read and understand. Surely there must be a better way!
7.3 elif to rescue
We use elif when we need to test conditionals in a sequential manner. A sequence of conditionals is tested, one after another until the first True condition is encountered. Then, the code block corresponding to that conditional is executed and program continues on without checking the other conditionals.
The format of elif statements looks like this:
if $condition_1$:
do code 1
elif $condition_2$:
do code 2
$\vdots$
elif $condition_n$:
do code n
An elif chain can be "terminated" either by a final elif statement or by an else when there is no further conditional to be evaluated.
Let's try to refactor (recode) the nested if else statements above using the elif keyword.
End of explanation
marks = 78.35
if marks >= 80:
grade = 'A'
elif marks >= 70:
grade = 'B'
elif marks >= 60:
grade = 'C'
elif marks >= 50:
grade = 'D'
elif marks >= 45:
grade = 'E'
else:
grade = 'F'
print('Student obtained %.1f marks in the test and hence is awarded %s for this module' % (marks, grade))
Explanation: See how much more elegant this is than nested if else statements.
Now let's try another example. This time we use elif to program Python to assign letter grades to student marks on an exam.
Here are the letter grades and their assigned intervals.
| Grades | Interval |
|:--------:|:----------:|
|A |[80, 100] |
|B |[70, 80) |
|C |[60, 70) |
|D | [50, 60) |
|E | [45, 50) |
|F | [0, 45)|
End of explanation
staff = ['Lisa', 'Mark', 'Andy']
for i in range(0,3): # range(0,3) is a function that produces a sequence of numbers in the form of a list: [0,1,2]
print("Staff member "+staff[i])
Explanation: As before, play around with various values of marks to make sure that the elif structure is working as intended. Notice that I phrased each conditional to check only against a lower bound. This is because Python will only execute the code block corresponding to the first True conditional in the elif sequence. Even if subsequent conditionals evaluate to True, their code is not run.
8. Performing repetitive tasks. Using the for loop
Most algorithms consist of repeating certain calculations or computer tasks in an almost identical manner. To give a scenario, I could instruct Python to print the names of three staff members to the output. I could go about fulfilling this task by writing the following code
print("Staff member Lisa")
print("Staff member Mark")
print("Staff member Andy")
Clearly this is repetitive and tedious. The for keyword allows us to simplify the code by writing a single instruction - here print - and looping it over a list consisting of the staff names [Lisa, Mark, Andy].
The general format of a for loop is given by the following:
for item_n in list:
do code substituted with item_n
8.1 How a for loop works
Let's start by doing the above by looping over the index of the list. The way it is usually done in Javascript or C is to declare a "counter variable" i and increment it by one to perform the next loop of the algorithm. We could do something like this in Python.
End of explanation
for staff_name in staff:
print("Staff member "+ staff_name)
Explanation: But Python allows us to write a more readable form of the for loop. So the following is equivalent to the above and is preferred.
End of explanation
# A common programming interview task. Print 'foo' if x is divisible by 3 and 'bar' if it is divisible by 5 and 'baz'
# if x is divisible by both 3 and 5. Do this for numbers 1 to 15.
for num in range(1,16): # range(1,16) produces a list of numbers started from 1 and ending at 15.
if num % 3 == 0 and num % 5 !=0:
print('%d foo' % (num))
elif num % 5 == 0 and num % 3 != 0:
print('%d bar' % (num))
elif num % 5 == 0 and num % 3 == 0:
print('%d baz' % (num))
else:
print('%d' % (num))
Explanation: Note that the "counter variable" staff_name is actually a variable containing the current item in the list as the loop progresses. I could have used any name I wanted for the variable - I could use i to represent the staff's name. But I chose staff_name for readability purposes. As the variable staff_name runs through each item of staff, the code block print is executed with the current value of staff_name. Once that is done, the variable is updated with the next item in the list and the block is executed once more. This proceeds until the end of the list is reached and the for loop terminates.
8.2 Combining for and if statements
To further control what each iteration of the for loop does, we can combine if statements within for statements to trigger certain instructions when a particular iteration fulfils certain conditions.
If we need to exit a for loop prematurely, we can use the break keyword. This is usually used in conjunction with if. When the condition for break is fulfilled, the for loop terminates immediately. Python will not execute the loop body for the remaining items in the list it is iterating over.
End of explanation
max_iter = 10
a = 1
# Since _ is considered a valid variable name, we can use this to
# "suppress" counting indices.
for _ in range(0, max_iter):
a_next = a/2.0 + 1/a
if abs(a_next-a) < 1e-4: # You can use scientific notation for numbers in Python
print("Required accuracy found! Breaking out of the loop.")
break
a = a_next
print("Approximation of sqrt(2) is: %.3f" % (a))
Explanation: Here's a more mathematical usage of the for statement. Suppose we want to compute the decimal expansion of $\sqrt{2}$ accurate to 3 decimal places. After searching Wikipedia, I came up with this recursive formula $$\begin{align}a_0 &= 1 \\ a_{n+1} &= \frac{a_n}{2}+\frac{1}{a_n}\end{align}$$ Here's how we could implement this.
End of explanation
# Answer
Explanation: 8.2.1 Your mission, should you choose to accept it
...is to list down all prime numbers less than 100.
A prime number $p$ is a number which is divisible only by 1 and itself.
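Below is one possible sketch of an answer (certainly not the only way), using only the for, if and break constructs introduced above:
for n in range(2, 100):
    is_prime = True
    for divisor in range(2, n):
        if n % divisor == 0:
            is_prime = False
            break
    if is_prime:
        print(n)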
End of explanation |
14,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Step1: The least common multiple of the numbers 1 to 10 is 2520. We are asked to find that of the numbers 1 to 20.
Version 1 - Integer Factorization
We already implemented lcm in problem 3, and the easiest way would be for us to simply use that. However, as we mentioned, it is not very efficient. Let's instead look at other ways of implementing it.
Version 2 - Simple algorithm
Given a list of $n$ integers $X = (x_1, x_2, \dotsc, x_n)$, we can find the least common multiple of the integers in $X$ by the following
Step2: This is way too slow! Let's try something else!
Version 3 - Division by Primes
Step3: MUCH better.
Version 4 - GCD and the Euclidean algorithm
$$\mathrm{lcd}(a, b) = \frac{a \cdot b}{\mathrm{gcd}(a, b)}$$ | Python Code:
from six.moves import range
all_divides = lambda m, *numbers: all(m % n == 0 for n in numbers)
all_divides(2520, *range(1, 10))
Explanation: 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
End of explanation
# First we need a predicate to test
# if all elements of a list are equal
# There are a number of ways to do this
pairs = lambda lst: zip(lst[1:], lst[:-1])
all_equals = lambda lst: all(x == y for x, y in pairs(lst))
all_equals = lambda lst: lst[1:] == lst[:-1]
all_equals = lambda lst: len(set(lst)) < 2
# We'll also need argmin. Note that NumPy
# comes bundled with all of these, but
# they're trivial, why not implement them ourselves!
argmin = lambda lst: lst.index(min(lst))
def _lcm_recursive(nums, nums_new):
if all_equals(nums_new):
# return any element
# why not the first one
return nums_new[0]
k = argmin(nums_new)
nums_new[k] += nums[k]
return _lcm_recursive(nums, nums_new)
def _lcm_iterative(nums):
nums_new = list(nums) # remember to use list for deep copy
while not all_equals(nums_new):
k = argmin(nums_new)
nums_new[k] += nums[k]
return nums_new[0]
# comment one out
lcm = lambda *nums: _lcm_recursive(list(nums), list(nums))
lcm = lambda *nums: _lcm_iterative(nums)
lcm(4, 7, 12, 21, 42)
lcm(*range(1, 10+1))
lcm(*range(1, 20))
Explanation: The least common multiple of the numbers 1 to 10 is 2520. We are asked to find that of the numbers 1 to 20.
Version 1 - Integer Factorization
We already implemented lcm in problem 3, and the easiest way would be for us to simply use that. However, as we mentioned, it is not very efficient. Let's instead look at other ways of implementing it.
Version 2 - Simple algorithm
Given a list of $n$ integers $X = (x_1, x_2, \dotsc, x_n)$, we can find the least common multiple of the integers in $X$ by the following:
Let $X^{(0)} = (x_1^{(0)}, x_2^{(0)}, \dotsc, x_n^{(0)}) = (x_1, x_2, \dotsc, x_n) = X$ and $X^{(m+1)} = (x_1^{(m+1)}, x_2^{(m+1)}, \dotsc, x_n^{(m+1)})$ where
$$
x_k^{(m+1)} = x_k^{(m)} + \begin{cases}
x_k^{(0)} & \text{if } \min(X^{(m)}) = x_k^{(m)} \\
0 & \text{otherwise}
\end{cases}
$$
Once all entries of $X^{(m)}$ are equal, that common value is the LCM.
<!-- TEASER_END -->
End of explanation
%load_ext autoreload
%autoreload 2
from common.utils import prime_range, reconstruct
from collections import defaultdict, Counter
def _lcm_prime_divisors(nums):
divides_count = Counter()
for p in prime_range(max(nums)+1):
for n in nums:
tmp = 0
while n % p == 0:
tmp += 1
n //= p  # integer division keeps n an int
if tmp > divides_count[p]:
divides_count[p] = tmp
return reconstruct(divides_count)
lcm = lambda *nums: _lcm_prime_divisors(nums)
lcm(4, 7, 12, 21, 42)
lcm(*range(1, 11))
lcm(*range(1, 21))
Explanation: This is way too slow! Let's try something else!
Version 3 - Division by Primes
End of explanation
# TODO
Explanation: MUCH better.
Version 4 - GCD and the Euclidean algorithm
$$\mathrm{lcm}(a, b) = \frac{a \cdot b}{\mathrm{gcd}(a, b)}$$
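A possible sketch for this version (filling in the TODO above; gcd here is the standard Euclidean algorithm, not one of the notebook's helpers):
def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

def lcm_pair(a, b):
    return a * b // gcd(a, b)

def lcm(*nums):
    result = 1
    for n in nums:
        result = lcm_pair(result, n)
    return result

lcm(*range(1, 21))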
End of explanation |
14,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Data Analysis, 3rd ed
Chapter 2, demo 4
Authors
Step1: Calculate results
Step2: Plot results | Python Code:
# Import necessary packages
import numpy as np
from scipy.stats import beta
%matplotlib inline
import matplotlib.pyplot as plt
import arviz as az
# add utilities directory to path
import os, sys
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
az.style.use("arviz-grayscale")
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 2, demo 4
Authors:
- Aki Vehtari aki.vehtari@aalto.fi
- Tuomas Sivula tuomas.sivula@aalto.fi
Probability of a girl birth given placenta previa (BDA3 p. 37).
Calculate the posterior distribution on a discrete grid of points by multiplying the likelihood and a non-conjugate prior at each point, and normalizing over the points. Simulate samples from the resulting non-standard posterior distribution using inverse-cdf sampling on the discrete grid.
End of explanation
# data (437,543)
a = 437
b = 543
# grid of nx points
nx = 1000
x = np.linspace(0, 1, nx)
# compute density of non-conjugate prior in grid
# this non-conjugate prior is same as in Figure 2.4 in the book
pp = np.ones(nx)
ascent = (0.385 <= x) & (x <= 0.485)
descent = (0.485 <= x) & (x <= 0.585)
pm = 11
pp[ascent] = np.linspace(1, pm, np.count_nonzero(ascent))
pp[descent] = np.linspace(pm, 1, np.count_nonzero(descent))
# normalize the prior
pp /= np.sum(pp)
# unnormalised non-conjugate posterior in grid
po = beta.pdf(x, a, b)*pp
po /= np.sum(po)
# cumulative
pc = np.cumsum(po)
# inverse-cdf sampling
# get n uniform random numbers from [0,1]
n = 10000
r = np.random.rand(n)
# map each r into corresponding grid point x:
# [0, pc[0]) map into x[0] and [pc[i-1], pc[i]), i>0, map into x[i]
rr = x[np.sum(pc[:,np.newaxis] < r, axis=0)]
Explanation: Calculate results
End of explanation
# plot 3 subplots
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8), constrained_layout=False)
# show only x-axis
plot_tools.modify_axes.only_x(axes)
# manually adjust spacing
fig.subplots_adjust(hspace=0.5)
# posterior with uniform prior Beta(1,1)
axes[0].plot(x, beta.pdf(x, a+1, b+1))
axes[0].set_title('Posterior with uniform prior')
# non-conjugate prior
axes[1].plot(x, pp)
axes[1].set_title('Non-conjugate prior')
# posterior with non-conjugate prior
axes[2].plot(x, po)
axes[2].set_title('Posterior with non-conjugate prior')
# cosmetics
#for ax in axes:
# ax.set_ylim((0, ax.get_ylim()[1]))
# set custom x-limits
axes[0].set_xlim((0.35, 0.65));
plt.figure(figsize=(8, 6))
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8))
plot_tools.modify_axes.only_x(axes)
axes[0].plot(x, po)
axes[0].set_xlim((0.38, 0.52))
axes[0].set_title("Non-conjugate posterior")
axes[1].plot(x, pc)
axes[1].set_title("Posterior-cdf")
az.plot_posterior(rr, kind="hist", point_estimate=None, hdi_prob="hide", ax=axes[2], bins=30)
axes[2].set_title("Histogram of posterior samples")
Explanation: Plot results
End of explanation |
14,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1 define exp. growth, then bottleneck model function
1.1 parallelus
Step1: I am going to use unfolded spectra from ANGSD that are folded in dadi.
Step3: define exp. growth, then bottleneck model function
Step7: This model specifies exponential growth/decline toward $\nu_B$ for some time $TB$, after which the population size undergoes an instantaneous size change to the contemporary size (ratio with $N_{ref}$).
Step8: The optimised parameter values do not deviate very much from the initial parameter values.
Step9: In all successful optimisations above, $\nu_F$, the ratio of contemporary population size to ancient population size, converges to a value below 1/3.
Step10: parallelus
Step11: Again, note that the inferred optimal parameter values only deviate slightly from the initial values and that all optimal parameter combinations have the same likelihood.
Step12: The initial parameter values specified above (and then perturbed) are too far away from an optimum. | Python Code:
from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines a well as in the local IPython session
import numpy # dadi calls numpy (not np)
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
Explanation: Table of Contents
1 define exp. growth, then bottleneck model function
1.1 parallelus
End of explanation
%%px --local
# import 1D spectrum of ery on all engines:
fs_ery = dadi.Spectrum.from_file('ERY.unfolded.sfs').fold()
# import 1D spectrum of ery on all engines:
fs_par = dadi.Spectrum.from_file('PAR.unfolded.sfs').fold()
Explanation: I am going to use unfolded spectra from ANGSD that are folded in dadi.
End of explanation
%psource dadi.Demographics1D.growth
%%px --local
def expGrowth_bottleneck(params, ns, pts):
"""
exponential growth followed by instantaneous size change.
params = (nuB,TB,nuF,TF)
ns = (n1,)
nuB: Ratio of population size after exponential growth to ancient
population size
nuF: Ratio of contemporary to ancient population size
TB: Time during which exponential growth happened
(in units of 2*Na generations)
TF: Time in the past at which instantaneous size change happened (after exp. growth)
n1: Number of samples in resulting Spectrum
pts: Number of grid points to use in integration.
"""
nuB,TB,nuF,TF = params
xx = dadi.Numerics.default_grid(pts)
phi = dadi.PhiManip.phi_1D(xx)
nu_func = lambda t: numpy.exp(numpy.log(nuB) * t/TB)
phi = dadi.Integration.one_pop(phi, xx, TB, nu_func)
phi = dadi.Integration.one_pop(phi, xx, TF, nuF)
fs = dadi.Spectrum.from_phi(phi, ns, (xx,))
return fs
Explanation: define exp. growth, then bottleneck model function
End of explanation
%%px --local
# create link to function that specifies the model
func = expGrowth_bottleneck
# create extrapolating version of the model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_ery # use ERY spectrum
perturb = True
fold = 2 # perturb randomly up to 6-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_expGrowth_bottleneck/ERY_perturb" # set file name stub for opt. result files
# set lower and upper bounds to nu and T
upper_bound = [1e4, 5, 1e4, 5]
lower_bound = [1e-4, 0, 1e-4, 0]
ns = fs_ery.sample_sizes # both populations have the same sample size
fs_ery.pop_ids = ['ery']
fs_par.pop_ids = ['par']
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
lbview = cl.load_balanced_view()
def run_dadi(p_init): # for the function to be called with map, it needs to have one input variable
"""p_init: initial parameter values to run optimisation from"""
if perturb == True:
p_init = dadi.Misc.perturb_params(p_init, fold=fold,
upper_bound=upper_bound, lower_bound=lower_bound)
# note upper_bound and lower_bound variables are expected to be in the namespace of each engine
# run optimisation of paramters
popt = dadi_opt_func(p0=p_init, data=sfs, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=verbose, maxiter=maxiter, full_output=full_output)
# pickle to file
import dill
name = outname[:] # make copy of file name stub!
for p in p_init:
name += "_%.4f" % (p)
with open(name + ".dill", "w") as fh:
dill.dump((p_init, popt), fh)
return p_init, popt
from itertools import repeat
# perturb parameter neutral values
p0 = [1, 1, 1, 1]
ar_ery_1 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=True)
def get_flag_count(out, NM=True):
"""
out: list of tuples, each containing p_init and popt + additional info, including warnflags
as produced by run_dadi.py
"""
from collections import defaultdict
if NM: # if ar from Nelder-Mead
i = 4 # the warnflag is reported at index position 4 in the output array
else: # ar from BFGS optimisation
i = 6
warnflag = defaultdict(int)
for res in out:
if res[1][i] == 1: # notice the change in indexing
warnflag[1] +=1
elif res[1][i] == 2:
warnflag[2] += 1
elif res[1][i] == 0:
warnflag[0] += 1
else:
warnflag[999] +=1
if NM:
print "success", warnflag[0]
print "Maximum number of function evaluations made.", warnflag[1]
print "Maximum number of iterations reached.", warnflag[2]
print "unknown flag", warnflag[999]
else:
print "success", warnflag[0]
print "Maximum number of iterations exceeded.", warnflag[1]
print "Gradient and/or function calls not changing.", warnflag[2]
print "unknown flag", warnflag[999]
get_flag_count(ar_ery_1, NM=True)
def flatten(array):
"""
Returns a flattened list of the elements of all inner lists (or tuples)
****RECURSIVE****
"""
import numpy
res = []
for el in array:
if isinstance(el, (list, tuple, numpy.ndarray)):
res.extend(flatten(el))
continue
res.append(el)
return list( res )
success = [flatten(out)[:9] for out in ar_ery_1 if out[1][4] == 0]
import pandas as pd
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
print success[0][4:8]
# perturb previous optimal parameter combination
p0 = success[0][4:8]
ar_ery_1 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=True)
get_flag_count(ar_ery_1, NM=True)
success = [flatten(out)[:9] for out in ar_ery_1 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: This model specifies exponential growth/decline toward $\nu_B$ for some time $TB$, after which the population size undergoes an instantaneous size change to the contemporary size (ratio with $N_{ref}$).
End of explanation
# specify the initial parameter values, they will be randomly perturbed by up to a factor of 4
p0 = [100, 0.01, 0.1, 1] # quick exp. growth, then long period of small size
ar_ery_2 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=True)
success = [flatten(out)[:9] for out in ar_ery_2 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
# specify the initial parameter values, they will be randomly perturbed by up to a factor of 4
p0 = [0.01, 0.01, 10, 1] # quick exp. decline, then long period of increased size
ar_ery_3 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=True)
success = [flatten(out)[:9] for out in ar_ery_3 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: The optimised parameter values do not deviate very much from the initial parameter values.
End of explanation
# perturb previous optimal parameter combination
p0 = success[0][4:8]
ar_ery_4 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=True)
get_flag_count(ar_ery_4, NM=True)
success = [flatten(out)[:9] for out in ar_ery_4 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
# save optimisation results to file
import dill
optout = []
for ar_ery in (ar_ery_1, ar_ery_2, ar_ery_3, ar_ery_4):
optout.extend(list(ar_ery.get()))
dill.dump(optout, open("OUT_expGrowth_bottleneck/ERY_perturb_ar_ery.dill", "w"))
Explanation: In all successful optimisations above, $\nu_F$, the ratio of contemporary population size to ancient population size, converges to a value below 1/3.
End of explanation
%%px
# set up global variables on engines required for run_dadi function call
dadi_opt_func = dadi.Inference.optimize_log_fmin # uses Nelder-Mead algorithm
sfs = fs_par # use PAR spectrum
perturb = True
fold = 2 # perturb randomly up to 4-fold
maxiter = 100 # run a maximum of 100 iterations
verbose = 0
full_output = True # need to have full output to get the warnflags (see below)
outname = "OUT_expGrowth_bottleneck/PAR_perturb" # set file name stub for opt. result files
p0 = [1, 1, 1, 1]
ar_par_1 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=False)
%ll OUT_expGrowth_bottleneck/PAR*
ar_par_1 = []
import glob
for filename in glob.glob("OUT_expGrowth_bottleneck/PAR*"):
ar_par_1.append(dill.load(open(filename)))
get_flag_count(ar_par_1, NM=True)
success = [flatten(out)[:9] for out in ar_par_1 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
success[1][4:8]
p0 = success[1][4:8]
ar_par_2 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=False)
get_flag_count(ar_par_2, NM=True)
success = [flatten(out)[:9] for out in ar_par_2 if out[1][4] == 0]
df = pd.DataFrame(data=success, \
columns=['nuB_0','TB_0', 'nuF_0', 'TF_0', 'nuB_opt', 'TB_opt', 'nuF_opt', 'TF_opt', '-logL'])
df.sort_values(by='-logL', ascending=True)
Explanation: parallelus
End of explanation
p0 = [0.01, 0.01, 10, 1] # quick exp. decline, then long period of increased size
ar_par_3 = lbview.map(run_dadi, repeat(p0, 10), block=False, order=False)
get_flag_count(ar_par_3, NM=True)
Explanation: Again, note that the inferred optimal parameter values only deviate slightly from the initial values and that all optimal parameter combinations have the same likelihood.
End of explanation
optout = []
for ar_par in (ar_par_1, list(ar_par_2.get())):
optout.extend(ar_par)
dill.dump(optout, open("OUT_expGrowth_bottleneck/PAR_perturb_ar_ery.dill", "w"))
Explanation: The initial parameter values specified above (and then perturbed) are too far away from an optimum.
End of explanation |
14,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial for recording a guitar string stroke and detecting its pitch
I use the python library called sounddevice which allows to easily record audio and represent the result as a numpy array.
We will use two different methods for detecting the pitch and compare their results.
For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar
Step1: First of all, check the list of available audio devices on the system
I use an external USB sound card called Sound Blaster E1
Step2: We define the length we want to record in seconds and the sampling rate to 44100 Hz
Step3: We can now record 2 seconds worth of audio
For this tutorial, I have played the D string of my guitar.
The result is a numpy array we store in the myrecording variable
Step4: Let's plot a section of this array to look at it first
We notice a pretty periodic signal with a clear fundamental frequency
Step5: Pitch detection using Fast Fourier Transform
We use numpy to compute the discrete Fourier transform of the signal
Step6: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency
Step7: We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
Step8: This method has detected that my guitar string stroke has a fundamental frequency of 149.94 Hz, which is indeed very close to the expected frequency of the D string of a well-tuned guitar (target is 146.83 Hz)
My guitar was not very well tuned | Python Code:
import sounddevice as sd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Tutorial for recording a guitar string stroke and detecting its pitch
I use the Python library called sounddevice, which makes it easy to record audio and represent the result as a numpy array.
We will use two different methods for detecting the pitch and compare their results.
For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar:
String | Frequency | Scientific pitch notation
--- | --- | ---
1 (E) | 329.63 Hz | E4
2 (B) | 246.94 Hz | B3
3 (G) | 196.00 Hz | G3
4 (D) | 146.83 Hz | D3
5 (A) | 110.00 Hz | A2
6 (E) | 82.41 Hz | E2
End of explanation
sd.query_devices()
Explanation: First of all, check the list of available audio devices on the system
I use an external USB sound card called Sound Blaster E1: this is the one we will use here
End of explanation
device = 0 # we use my USB sound card device
duration = 2 # seconds
fs = 44100 # samples by second
Explanation: We define the length we want to record in seconds and the sampling rate to 44100 Hz
End of explanation
myrecording = sd.rec(int(duration * fs), samplerate=fs, channels=1, device=device)
sd.wait()  # block until the recording is finished
Explanation: We can now record 2 seconds worth of audio
For this tutorial, I have played the D string of my guitar.
The result is a numpy array we store in the myrecording variable
End of explanation
df = pd.DataFrame(myrecording)
df.loc[25000:30000].plot()
Explanation: Let's plot a section of this array to look at it first
We notice a pretty periodic signal with a clear fundamental frequency, which makes sense since a vibrating guitar string produces an almost purely sinusoidal wave
End of explanation
rec = myrecording.ravel()  # flatten the (N, 1) recording into a 1-D signal
fourier = np.fft.fft(rec)
Explanation: Pitch detection using Fast Fourier Transform
We use numpy to compute the discrete Fourier transform of the signal:
End of explanation
plt.plot(abs(fourier[:len(fourier)//10]))
Explanation: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency:
End of explanation
f_max_index = np.argmax(abs(fourier[:fourier.size//2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
Explanation: We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
End of explanation
rec = myrecording.ravel()
rec = rec[25000:30000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
Explanation: This method has detected that my guitar string stroke has a fundamental frequency of 149.94 Hz, which is indeed very close to the expected frequency of the D string of a well-tuned guitar (target is 146.83 Hz)
My guitar was not very well tuned: this indicates I should slightly tune down my 4th string
Using Autocorrelation method for pitch detection
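A possible way to finish the job (a sketch, not part of the original notebook) is to locate the first strong peak to the right of the zero-lag maximum; its lag is the period in samples. This assumes the autocorr and fs variables from the cells above:
centre = len(autocorr) // 2                 # zero-lag index for mode='same'
right = autocorr[centre + 1:]
# skip the main lobe (until the autocorrelation first dips below zero),
# then the largest remaining value marks the first periodic peak
start = int(np.argmax(right < 0)) if np.any(right < 0) else 1
peak_lag = start + int(np.argmax(right[start:])) + 1
print("Estimated fundamental frequency: %.2f Hz" % (fs / peak_lag))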
End of explanation |
14,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sérialisation avec protobuf
protobuf optimise la sérialisation de deux façons. Elle accélère l'écriture et la lecture des données et permet aussi un accès rapide à une information précise dans désérialiser les autres. Elle réalise cela en imposant un schéma strict de données.
Step2: Schéma
On récupère l'exemple du tutorial.
Step3: Compilation
Il faut d'abord récupérer le compilateur. Cela peut se faire depuis le site de protobuf ou sur Linux (Ubuntu/Debian) apt-get install protobuf-compiler pour obtenir le programme protoc.
Step4: On écrit le format sur disque.
Step5: Et on peut compiler.
Step6: Un fichier a été généré.
Step7: Import du module créé
Pour utliser protobuf, il faut importer le module créé.
Step8: On créé un enregistrement.
Step9: Sérialisation en chaîne de caractères
Step10: Plusieurs chaînes de caractères
Step11: Sérialisation JSON | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Serialization with protobuf
protobuf optimizes serialization in two ways. It speeds up writing and reading the data, and it also gives fast access to a specific piece of information without deserializing the others. It achieves this by imposing a strict data schema.
End of explanation
schema =
syntax = "proto2";
package tutorial;
message Person {
required string name = 1;
required int32 id = 2;
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
message PhoneNumber {
required string number = 1;
optional PhoneType type = 2 [default = HOME];
}
repeated PhoneNumber phones = 4;
}
message AddressBook {
repeated Person people = 1;
}
Explanation: Schema
We reuse the example from the tutorial.
End of explanation
import google.protobuf as gp
version = gp.__version__
if version == "3.5.2.post1":
version = "3.5.1"
version
import sys, os
if sys.platform.startswith("win"):
url = "https://github.com/google/protobuf/releases/download/v{0}/protoc-{0}-win32.zip".format(version)
name = "protoc-{0}-win32.zip".format(version)
exe = 'protoc.exe'
else:
url = "https://github.com/google/protobuf/releases/download/v{0}/protoc-{0}-linux-x86_64.zip".format(version)
exe = 'protoc'
name = "protoc-{0}-linux-x86_64.zip".format(version)
protoc = os.path.join("bin", exe)
if not os.path.exists(name):
from pyquickhelper.filehelper import download
try:
download(url)
except Exception as e:
raise Exception("Unable to download '{0}'\nERROR\n{1}".format(url, e))
else:
print(name)
if not os.path.exists(protoc):
from pyquickhelper.filehelper import unzip_files
unzip_files(name,where_to='.')
if not os.path.exists(protoc):
raise FileNotFoundError(protoc)
Explanation: Compilation
First we need to get the compiler. It can be obtained from the protobuf website or, on Linux (Ubuntu/Debian), with apt-get install protobuf-compiler, which provides the protoc program.
End of explanation
with open('schema.proto', 'w') as f:
f.write(schema)
Explanation: We write the schema to disk.
End of explanation
from pyquickhelper.loghelper import run_cmd
cmd = '{0} --python_out=. schema.proto'.format(protoc)
try:
out, err = run_cmd(cmd=cmd, wait=True)
except PermissionError as e:
# Sous Linux si ne marche pas avec bin/protoc, on utilise
# protoc directement à supposer que le package
# protobuf-compiler a été installé.
if not sys.platform.startswith("win"):
protoc = "protoc"
cmd = '{0} --python_out=. schema.proto'.format(protoc)
try:
out, err = run_cmd(cmd=cmd, wait=True)
except Exception as e:
mes = "CMD: {0}".format(cmd)
raise Exception("Unable to use {0}\n{1}".format(protoc, mes)) from e
else:
mes = "CMD: {0}".format(cmd)
raise Exception("Unable to use {0}\n{1}".format(protoc, mes)) from e
print("\n----\n".join([out, err]))
Explanation: And we can compile it.
End of explanation
[_ for _ in os.listdir(".") if '.py' in _]
with open('schema_pb2.py', 'r') as f:
content = f.read()
print(content[:1000])
Explanation: A file has been generated.
End of explanation
import schema_pb2
Explanation: Importing the generated module
To use protobuf, we need to import the generated module.
End of explanation
person = schema_pb2.Person()
person.id = 1234
person.name = "John Doe"
person.email = "[email protected]"
phone = person.phones.add()
phone.number = "555-4321"
phone.type = schema_pb2.Person.HOME
person
Explanation: We create a record.
End of explanation
res = person.SerializeToString()
type(res), res
%timeit person.SerializeToString()
pers = schema_pb2.Person.FromString(res)
pers
pers = schema_pb2.Person()
pers.ParseFromString(res)
pers
%timeit schema_pb2.Person.FromString(res)
%timeit pers.ParseFromString(res)
Explanation: Serialization to a string
End of explanation
db = []
person = schema_pb2.Person()
person.id = 1234
person.name = "John Doe"
person.email = "[email protected]"
phone = person.phones.add()
phone.number = "555-4321"
phone.type = schema_pb2.Person.HOME
db.append(person)
person = schema_pb2.Person()
person.id = 5678
person.name = "Johnette Doette"
person.email = "[email protected]"
phone = person.phones.add()
phone.number = "777-1234"
phone.type = schema_pb2.Person.MOBILE
db.append(person)
import struct
from io import BytesIO
buffer = BytesIO()
for p in db:
size = p.ByteSize()
buffer.write(struct.pack('i', size))
buffer.write(p.SerializeToString())
res = buffer.getvalue()
res
from google.protobuf.internal.decoder import _DecodeVarint32
db2 = []
buffer = BytesIO(res)
n = 0
while True:
bsize = buffer.read(4)
if len(bsize) == 0:
# C'est fini.
break
size = struct.unpack('i', bsize)[0]
data = buffer.read(size)
p = schema_pb2.Person.FromString(data)
db2.append(p)
db2[0], db2[1]
Explanation: Multiple serialized strings
End of explanation
from google.protobuf.json_format import MessageToJson
print(MessageToJson(pers))
%timeit MessageToJson(pers)
from google.protobuf.json_format import Parse as ParseJson
js = MessageToJson(pers)
res = ParseJson(js, message=schema_pb2.Person())
res
%timeit ParseJson(js, message=schema_pb2.Person())
Explanation: JSON serialization
End of explanation |
14,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arithmetic Operations
Import the LArray library
Step1: Load the population array from the demography_eurostat dataset
Step2: Basics
One can do all usual arithmetic operations on an array, it will apply the operation to all elements individually
Step3: <div class="alert alert-warning">
**Warning
Step4: More interestingly, binary operators as above also works between two arrays.
Let us imagine a rate of population growth which is constant over time but different by gender and country
Step5: <div class="alert alert-info">
**Note
Step6: Axis order does not matter much (except for output)
You can do operations between arrays having different axes order.
The axis order of the result is the same as the left array
Step7: Axes must be compatible
Arithmetic operations between two arrays only works when they have compatible axes (i.e. same list of labels in the same order).
Step8: Order of labels matters
Step9: No extra or missing labels are permitted
Step10: Ignoring labels (risky)
<div class="alert alert-warning">
**Warning
Step11: Extra Or Missing Axes (Broadcasting)
The condition that axes must be compatible only applies on common axes.
Making arithmetic operations between two arrays having the same axes is intuitive.
However, arithmetic operations between two arrays can be performed even if the second array has extra and/or missing axes compared to the first one. Such mechanism is called broadcasting. It allows to make a lot of arithmetic operations without using any loop. This is a great advantage since using loops in Python can be highly time consuming (especially nested loops) and should be avoided as much as possible.
To understand how broadcasting works, let us start with a simple example.
We assume we have the population of both men and women cumulated for each country
Step12: We also assume we have the proportion of each gender in the population and that proportion is supposed to be the same for all countries
Step13: Using the two 1D arrays above, we can naively compute the population by country and gender as follow
Step14: Relying on the broadcasting mechanism, the calculation above becomes
Step15: In the calculation above, LArray automatically creates a resulting array with axes given by the union of the axes of the two arrays involved in the arithmetic operation.
Let us do the same calculation but we add a common time axis
Step16: Without the broadcasting mechanism, the computation of the population by country, gender and year would have been
Step17: Once again, the above calculation can be simplified as
Step18: <div class="alert alert-warning">
**Warning
Step19: Boolean Operations
Python comparison operators are
Step20: Comparison operations can be combined using Python bitwise operators
Step21: The returned boolean array can then be used in selections and assignments
Step22: Boolean operations can be made between arrays
Step23: To test if all values between are equals, use the equals method | Python Code:
from larray import *
Explanation: Arithmetic Operations
Import the LArray library:
End of explanation
# load the 'demography_eurostat' dataset
demography_eurostat = load_example_data('demography_eurostat')
# extract the 'country', 'gender' and 'time' axes
country = demography_eurostat.country
gender = demography_eurostat.gender
time = demography_eurostat.time
# extract the 'population' array
population = demography_eurostat.population
# show the 'population' array
population
Explanation: Load the population array from the demography_eurostat dataset:
End of explanation
# 'true' division
population_in_millions = population / 1_000_000
population_in_millions
# 'floor' division
population_in_millions = population // 1_000_000
population_in_millions
Explanation: Basics
One can do all usual arithmetic operations on an array, it will apply the operation to all elements individually
End of explanation
# % means modulo (aka remainder of division)
population % 1_000_000
# ** means raising to the power
print(ndtest(4))
ndtest(4) ** 3
Explanation: <div class="alert alert-warning">
**Warning:** Python has two different division operators:
- the 'true' division (/) always returns a float.
- the 'floor' division (//) returns an integer result (discarding any fractional result).
</div>
End of explanation
growth_rate = Array(data=[[1.011, 1.010], [1.013, 1.011], [1.010, 1.009]], axes=[country, gender])
growth_rate
# we store the population of the year 2017 in a new variable
population_2017 = population[2017]
population_2017
# perform an arithmetic operation between two arrays
population_2018 = population_2017 * growth_rate
population_2018
Explanation: More interestingly, binary operators as above also works between two arrays.
Let us imagine a rate of population growth which is constant over time but different by gender and country:
End of explanation
# force the resulting matrix to be an integer matrix
population_2018 = (population_2017 * growth_rate).astype(int)
population_2018
Explanation: <div class="alert alert-info">
**Note:** Be careful when mixing different data types.
You can use the method [astype](../_generated/larray.Array.astype.rst#larray.Array.astype) to change the data type of an array.
</div>
End of explanation
# let's change the order of axes of the 'constant_growth_rate' array
transposed_growth_rate = growth_rate.transpose()
# look at the order of the new 'transposed_growth_rate' array:
# 'gender' is the first axis while 'country' is the second
transposed_growth_rate
# look at the order of the 'population_2017' array:
# 'country' is the first axis while 'gender' is the second
population_2017
# LArray doesn't care of axes order when performing
# arithmetic operations between arrays
population_2018 = population_2017 * transposed_growth_rate
population_2018
Explanation: Axis order does not matter much (except for output)
You can do operations between arrays having different axes order.
The axis order of the result is the same as the left array
End of explanation
# show 'population_2017'
population_2017
Explanation: Axes must be compatible
Arithmetic operations between two arrays only works when they have compatible axes (i.e. same list of labels in the same order).
End of explanation
# let us imagine that the labels of the 'country' axis
# of the 'constant_growth_rate' array are in a different order
# than in the 'population_2017' array
reordered_growth_rate = growth_rate.reindex('country', ['Germany', 'Belgium', 'France'])
reordered_growth_rate
# when doing arithmetic operations,
# the order of labels counts
try:
population_2018 = population_2017 * reordered_growth_rate
except Exception as e:
print(type(e).__name__, e)
Explanation: Order of labels matters
End of explanation
# let us imagine that the 'country' axis of
# the 'constant_growth_rate' array has an extra
# label 'Netherlands' compared to the same axis of
# the 'population_2017' array
growth_rate_netherlands = Array([1.012, 1.], population.gender)
growth_rate_extra_country = growth_rate.append('country', growth_rate_netherlands, label='Netherlands')
growth_rate_extra_country
# when doing arithmetic operations,
# no extra or missing labels are permitted
try:
population_2018 = population_2017 * growth_rate_extra_country
except Exception as e:
print(type(e).__name__, e)
Explanation: No extra or missing labels are permitted
End of explanation
# let us imagine that the labels of the 'country' axis
# of the 'constant_growth_rate' array are the
# country codes instead of the country full names
growth_rate_country_codes = growth_rate.set_labels('country', ['BE', 'FR', 'DE'])
growth_rate_country_codes
# use the .ignore_labels() method on axis 'country'
# to avoid the incompatible axes error (risky)
population_2018 = population_2017 * growth_rate_country_codes.ignore_labels('country')
population_2018
Explanation: Ignoring labels (risky)
<div class="alert alert-warning">
**Warning:** Operations between two arrays only works when they have compatible axes (i.e. same labels) but this behavior can be override via the [ignore_labels](../_generated/larray.Array.ignore_labels.rst#larray.Array.ignore_labels) method.
In that case only the position on the axis is used and not the labels.
Using this method is done at your own risk and SHOULD NEVER BEEN USED IN A MODEL.
Use this method only for quick tests or rapid data exploration.
</div>
End of explanation
population_by_country = population_2017['Male'] + population_2017['Female']
population_by_country
Explanation: Extra Or Missing Axes (Broadcasting)
The condition that axes must be compatible only applies on common axes.
Making arithmetic operations between two arrays having the same axes is intuitive.
However, arithmetic operations between two arrays can be performed even if the second array has extra and/or missing axes compared to the first one. Such a mechanism is called broadcasting. It allows many arithmetic operations to be written without any loop. This is a great advantage since using loops in Python can be highly time consuming (especially nested loops) and should be avoided as much as possible.
To understand how broadcasting works, let us start with a simple example.
We assume we have the population of both men and women cumulated for each country:
End of explanation
gender_proportion = Array([0.49, 0.51], gender)
gender_proportion
Explanation: We also assume we have the proportion of each gender in the population and that proportion is supposed to be the same for all countries:
End of explanation
# define a new variable with both 'country' and 'gender' axes to store the result
population_by_country_and_gender = zeros([country, gender], dtype=int)
# loop over the 'country' and 'gender' axes
for c in country:
for g in gender:
population_by_country_and_gender[c, g] = population_by_country[c] * gender_proportion[g]
# display the result
population_by_country_and_gender
Explanation: Using the two 1D arrays above, we can naively compute the population by country and gender as follow:
End of explanation
# the outer product is done automatically.
# No need to use any loop -> saves a lot of computation time
population_by_country_and_gender = population_by_country * gender_proportion
# display the result
population_by_country_and_gender.astype(int)
Explanation: Relying on the broadcasting mechanism, the calculation above becomes:
End of explanation
population_by_country_and_year = population['Male'] + population['Female']
population_by_country_and_year
gender_proportion_by_year = Array([[0.49, 0.485, 0.495, 0.492, 0.498],
[0.51, 0.515, 0.505, 0.508, 0.502]], [gender, time])
gender_proportion_by_year
Explanation: In the calculation above, LArray automatically creates a resulting array with axes given by the union of the axes of the two arrays involved in the arithmetic operation.
Let us do the same calculation but we add a common time axis:
End of explanation
# define a new variable to store the result.
# Its axes is the union of the axes of the two arrays
# involved in the arithmetic operation
population_by_country_gender_year = zeros([country, gender, time], dtype=int)
# loop over axes which are not present in both arrays
# involved in the arithmetic operation
for c in country:
for g in gender:
# all subsets below have the same 'time' axis
population_by_country_gender_year[c, g] = population_by_country_and_year[c] * gender_proportion_by_year[g]
population_by_country_gender_year
Explanation: Without the broadcasting mechanism, the computation of the population by country, gender and year would have been:
End of explanation
# No need to use any loop -> saves a lot of computation time
population_by_country_gender_year = population_by_country_and_year * gender_proportion_by_year
# display the result
population_by_country_gender_year.astype(int)
Explanation: Once again, the above calculation can be simplified as:
End of explanation
gender_proportion_by_year = gender_proportion_by_year.rename('time', 'period')
gender_proportion_by_year
population_by_country_and_year
# the two arrays below have a "time" axis with two different names: 'time' and 'period'.
# LArray will treat the "time" axis of the two arrays as two different "time" axes
population_by_country_gender_year = population_by_country_and_year * gender_proportion_by_year
# as a consequence, the result of the multiplication of the two arrays is not what we expected
population_by_country_gender_year.astype(int)
Explanation: <div class="alert alert-warning">
**Warning:** Broadcasting is a powerful mechanism but can be confusing at first. It can lead to unexpected results.
In particular, if axes which are supposed to be common are not, you will get a resulting array with extra axes you didn't want.
</div>
For example, imagine that the name of the time axis is time for the first array but period for the second:
End of explanation
# test which values are greater than 10 millions
population > 10e6
Explanation: Boolean Operations
Python comparison operators are:
| Operator | Meaning |
|-----------|-------------------------|
|== | equal |
|!= | not equal |
|> | greater than |
|>= | greater than or equal |
|< | less than |
|<= | less than or equal |
Applying a comparison operator on an array returns a boolean array:
End of explanation
# test which values are greater than 10 millions and less than 40 millions
(population > 10e6) & (population < 40e6)
# test which values are less than 10 millions or greater than 40 millions
(population < 10e6) | (population > 40e6)
# test which values are not less than 10 millions
~(population < 10e6)
Explanation: Comparison operations can be combined using Python bitwise operators:
| Operator | Meaning |
|----------|------------------------------------- |
| & | and |
| \| | or |
| ~ | not |
End of explanation
population_copy = population.copy()
# set all values greater than 40 millions to 40 millions
population_copy[population_copy > 40e6] = 40e6
population_copy
Explanation: The returned boolean array can then be used in selections and assignments:
End of explanation
# test where the two arrays have the same values
population == population_copy
Explanation: Boolean operations can be made between arrays:
End of explanation
population.equals(population_copy)
Explanation: To test if all values between are equals, use the equals method:
End of explanation |
14,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2 - Probability
This chapter introduces probability theory (and the differences between frequentists and baysians), some common statistics and examples of discrete and continous distributions. It also presents transformation of variables, monte carlo methods and information theory.
Rules of probability
Sum rule
The probability of the conjunction of two events (or assertions) is given by
Step1: Laplace
$$ P(x~|~\mu, b) = \frac{1}{2b}~\exp\left(-\frac{|x-\mu|}{b}\right)$$
Step2: Gamma
$$ P(T~|~\text{shape}=a,\text{rate}=b) = \frac{b^a}{\Gamma(a)}~T^{a-1}e^{-Tb}$$
Where
$$\Gamma(x) = \int_{0}^{\infty}u^{x-1}e^{-u}~\mathrm{d}u$$
Step3: Exponential
$$P(x~|~\lambda) = \lambda e^{-\lambda x}$$
Chi-squared
$$P(x~|~\nu) = Gamma\left(x~\Bigg|~\frac{\nu}{2}, \frac{1}{2}\right)$$
Beta
$$P(x~|~a,b) = \frac{1}{B(a,b)}x^{a-1}(1-x)^{b-1}$$
Where
$$B(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$
Pareto
$$P(x~|~k,m) = km^kx^{-(k+1)}\mathbb{I}(x\geq m)$$
Student t
$$P(x~|~\nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}}\left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu + 1}{2}}$$
where $\nu$ is the number of degrees of freedom
Joint probability distributions
Multivariate Normal
$$P(\mathbf{x}~|~\mathbf{\mu}, \mathbf{\Sigma}) = \frac{1}{(2\pi)^{D/2} \left|\mathbf{\Sigma}\right|^{1/2}} \exp\left[-\frac{1}{2}(\mathbf{x} - \mathbf{\mu})^\intercal \Sigma^{-1}(\mathbf{x} - \mathbf{\mu})\right]$$
where $\mathbf{\mu} = \mathbb{E}[\mathbf{x}]$ is the mean and $\mathbf{\Sigma} = cov[\mathbf{x}]$ is the covariance matrix.
Step4: Multivariate Student t
$$P(\mathbf{x}~|~\mathbf{\mu}, \mathbf{\Sigma}, \nu) = \frac{\Gamma(\nu/2 + D/2)}{\Gamma(\nu/2)}~\big|\pi \mathbf{V}\big|^{-1/2} \times \left[1 + (\mathbf{\mathbf{x}} - \mathbf{\mu})^\intercal \mathbf{V}^{-1}(\mathbf{\mathbf{x}} - \mathbf{\mu})\right]^{-\frac{\nu+D}{2}}$$
Where $\Sigma$ is called \emph{scale matrix}, $\nu$ is a scalar that controls how fat the tails of the distribution are (the bigger it is, the slimmer are the tails, and the distribution tends to the multivariate normal) and $V = \nu\Sigma$. this distributions has the following properties | Python Code:
ax = plt.subplot(111)
plot_dist(stats.norm, -4, 4, ax)
Explanation: Chapter 2 - Probability
This chapter introduces probability theory (and the differences between frequentists and Bayesians), some common statistics and examples of discrete and continuous distributions. It also presents transformation of variables, Monte Carlo methods and information theory.
Rules of probability
Sum rule
The probability of the disjunction of two events (or assertions), i.e. of $A$ or $B$, is given by:
$$p(A\ or\ B) = p(A) + p(B) - p(A\ and\ B)$$
Product rule
The probability of the event $A$ and $B$ is given by:
$$p(A\ and\ B) = p(A|B)p(B)$$
Conditional
The probability of the event $A$ given that the event $B$ is true is given by:
$$p(A|B) = \frac{p(A\ and\ B)}{p(B)}$$
Bayes rule
The Bayes rule is:
$$p(X|Y) = \frac{p(X)p(Y|X)}{p(Y)}$$
This can be derived from the sum and the product rule
Independence
The events $A$ and $B$ are independent if:
$$p(A\ and\ B) = p(A)p(B)$$
Note that this is basically the product rule, with the condition that $p(A|B) = p(A)$.
Continuous random variables
To deal with continuous random variables we define $F(x) = p(X\leq x)$. This means that:
$$p(a < X \leq b) = F(b) - F(a)$$
Common Statistics
Quantiles
The $\alpha$ quantile of a cdf $F$, denoted $F^{-1}(\alpha)$, is the value $x_\alpha$ such that
$$F(x_\alpha) = P(X \leq x_\alpha) = \alpha$$
The value $F^{-1}(0.5)$ is the median of the distribution.
Mean
The mean or expected value of a discrete distribution, commonly denoted by $\mu$, is defined as
$$\mathbb{E}[X] = \sum_{x\in\mathcal{X}}x~p(x)$$
Whereas for a continuous distribution, the mean is defined as
$$\mathbb{E}[X] = \int_{\mathcal{X}}x~p(x)~\mathrm{d}x$$
Variance
The variance, denoted by $\sigma^2$, is a measure of the "spread" of a distribution, defined as
$$\sigma^2 = \mathrm{var}[X] = \mathbb{E}\left[(X-\mu)^2\right]$$
where $\mu = \mathbb{E}[X]$. A useful result is
$$\mathbb{E}[X^2] = \sigma^2 + \mu^2$$
The standard deviation is defined as $\mathrm{std}[X] = \sqrt{\sigma^2} = \sigma$
Covariance and correlation
The covariance between two random variables measures the degree to which they are linearly related
$$\mathrm{cov}[X, Y] = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])] = \mathbb{E}[XY] - \mathbb{E}[X]\mathbb{E}[Y]$$
If $\mathbf{x}$ is a $d$-dimensional vector, its covariance matrix is given by:
$$\mathrm{cov}[\mathbf{x}] = \mathbb{E}\left[(\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{x} - \mathbb{E}[\mathbf{x}])^\intercal\right] = $$
$$= \begin{bmatrix}\mathrm{var}[X_1] & \mathrm{cov}[X_1, X_2] & \dots & \mathrm{cov}[X_1, X_d] \\
\mathrm{cov}[X_2, X_1] & \mathrm{var}[X_2] & \dots & \mathrm{cov}[X_2, X_d] \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{cov}[X_d, X_1] & \mathrm{cov}[X_d, X_2] & \dots & \mathrm{var}[X_d] \\
\end{bmatrix}$$
The Pearson correlation coefficient between two random variables $X$ and $Y$ is given by:
$$\mathrm{corr}\left[X, Y\right] = \frac{\mathrm{cov}\left[X, Y\right]}{\sqrt{\mathrm{var}[X]\mathrm{var}[Y]}}$$
Common discrete distributions
Bernoulli:
$$\mathrm{Ber}(x~|~\theta) =
\left\{
\begin{array}{ll}
\theta & \mbox{if } x = 1 \\
1-\theta & \mbox{if } x = 0
\end{array}
\right.
$$
Binomial:
$$\mathrm{Bin}(k~|~n,\theta) = \binom{n}{k}\theta^k(1-\theta)^{n-k}$$
Multinomial:
$$\mathrm{Mu}(x~|~n,\theta) = \binom{n}{x_1,...,x_K}\prod_{j=1}^{K}\theta_{j}^{x_j}$$
The multinomial coefficient is defined as
$$\binom{n}{x_1,...,x_K} = \frac{n!}{x_1! \dots x_K!}$$
Poisson
$$\mathrm{Poi}(x~|~\lambda) = e^{-\lambda}\frac{\lambda^x}{x!}$$
Empirical distribution
Given a dataset $\mathcal{D} = {x_1, \dots, x_N}$, the empirical distribution is defined as
$$p_{\mathrm{emp}}(A) = \frac{1}{N}\sum_{i=1}^{N}w_i \delta_{x_i}(A)$$
where $0\leq w_i \leq 1$ and $\sum w_i = 1$
Common continuous distributions
Normal
$$ P(x~|~\mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}~e^{-\frac{1}{2\sigma^2}(x-\mu)^2} $$
End of explanation
ax = plt.subplot(111)
plot_dist(stats.laplace, -6, 6, ax)
Explanation: Laplace
$$ P(x~|~\mu, b) = \frac{1}{2b}~\exp\left(-\frac{|x-\mu|}{b}\right)$$
End of explanation
ax = plt.subplot(111)
plot_dist(stats.gamma(1,2), 0, 6, ax)
Explanation: Gamma
$$ P(T~|~\text{shape}=a,\text{rate}=b) = \frac{b^a}{\Gamma(a)}~T^{a-1}e^{-Tb}$$
Where
$$\Gamma(x) = \int_{0}^{\infty}u^{x-1}e^{-u}~\mathrm{d}u$$
End of explanation
x = np.arange(-4.0, 4.0, 0.1)
y = np.arange(-3.0, 3.0, 0.1)
X, Y = np.meshgrid(x, y)
Z = mlab.bivariate_normal(X, Y, sigmaxy=0.7)
plt.contour(X,Y,Z);
Explanation: Exponential
$$P(x~|~\lambda) = \lambda e^{-\lambda x}$$
Chi-squared
$$P(x~|~\nu) = Gamma\left(x~\Bigg|~\frac{\nu}{2}, \frac{1}{2}\right)$$
Beta
$$P(x~|~a,b) = \frac{1}{B(a,b)}x^{a-1}(1-x)^{b-1}$$
Where
$$B(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$
Pareto
$$P(x~|~k,m) = km^kx^{-(k+1)}\mathbb{I}(x\geq m)$$
Student t
$$P(x~|~\nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}}\left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu + 1}{2}}$$
where $\nu$ is the number of degrees of freedom
Joint probability distributions
Multivariate Normal
$$P(\mathbf{x}~|~\mathbf{\mu}, \mathbf{\Sigma}) = \frac{1}{(2\pi)^{D/2} \left|\mathbf{\Sigma}\right|^{1/2}} \exp\left[-\frac{1}{2}(\mathbf{x} - \mathbf{\mu})^\intercal \Sigma^{-1}(\mathbf{x} - \mathbf{\mu})\right]$$
where $\mathbf{\mu} = \mathbb{E}[\mathbf{x}]$ is the mean and $\mathbf{\Sigma} = cov[\mathbf{x}]$ is the covariance matrix.
End of explanation
def plot_circle(radius):
fig, ax = plt.subplots(figsize=(5,5))
ax.set_xlim(-radius, radius)
ax.set_ylim(-radius, radius)
circle = plt.Circle((0,0), radius, color='k', fill=False)
ax.add_artist(circle)
return ax
def in_circle(point, radius):
x, y = point
return x**2 + y**2 <= radius**2
def sample_points(a, b, n):
for i in range(n):
x = np.random.uniform(a, b)
y = np.random.uniform(a, b)
yield x, y
def compute_pi(n_points, radius=1):
ax = plot_circle(radius)
count = 0
for point in sample_points(-radius, radius, n_points):
color = 'b'
if in_circle(point, radius):
count += 1
color = 'r'
ax.scatter(point[0], point[1], color=color)
return 4 * count/n_points
pi = compute_pi(10000)
print("The value of pi is:", pi)
Explanation: Multivariate Student t
$$P(\mathbf{x}~|~\mathbf{\mu}, \mathbf{\Sigma}, \nu) = \frac{\Gamma(\nu/2 + D/2)}{\Gamma(\nu/2)}~\big|\pi \mathbf{V}\big|^{-1/2} \times \left[1 + (\mathbf{\mathbf{x}} - \mathbf{\mu})^\intercal \mathbf{V}^{-1}(\mathbf{\mathbf{x}} - \mathbf{\mu})\right]^{-\frac{\nu+D}{2}}$$
Where $\Sigma$ is called the *scale matrix*, $\nu$ is a scalar that controls how fat the tails of the distribution are (the larger it is, the thinner the tails, and the distribution tends to the multivariate normal) and $V = \nu\Sigma$. This distribution has the following properties: $\mu$ is the mean and also the median, and the covariance matrix is $\frac{\nu}{\nu - 2}\Sigma$ (for $\nu > 2$).
Dirichlet
$$P(x~|~\alpha) = \frac{1}{B(\alpha)} \prod_{k=1}^{K} x_k^{\alpha_k - 1} \mathbb{I}(x \in S_k)$$
where $S_k = \left\{x: 0 \leq x_k \leq 1, \sum_{k=1}^{K} x_k = 1\right\}$ is the support of the distribution and $B(\alpha) = \frac{\prod_{k=1}^{K} \Gamma(\alpha_k)}{\Gamma\left(\sum_{k=1}^{K} \alpha_k\right)}$ is the beta function for $K$ variables.
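Samples from a Dirichlet always lie on this simplex; a one-line check with NumPy (hypothetical parameters):
theta = np.random.dirichlet([2.0, 3.0, 5.0], size=5)   # 5 draws over K=3 categories
print(theta, theta.sum(axis=1))                        # each row sums to 1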
Central limit theorem
Consider $N$ independent and identically distributed random variables with pdf $p(x_i)$ (not necessarily Gaussian) with mean $\mu$ and variance $\sigma^2$. Let $S_N = \sum_{i=1}^{N} X_i$ be the sum of these random variables. The central limit theorem states that:
$$P(S_N = s) = \frac{1}{\sqrt{2\pi N\sigma^2}}\exp\left(-\frac{(s - N\mu)^2}{2N\sigma^2}\right)$$
That is, the distribution of
$$Z_N = \frac{S_N - N\mu}{\sigma\sqrt{N}} = \frac{\bar{X} - \mu}{\sigma/\sqrt{N}}$$
where $\bar{X}$ is the empirical mean, converges to the standard normal.
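A quick simulation makes this concrete (a sketch, assuming numpy and matplotlib are already imported): standardized sums of uniform random variables already look Gaussian for moderate $N$.
N, reps = 30, 10000
X = np.random.uniform(0, 1, size=(reps, N))           # decidedly non-Gaussian draws
mu, sigma = 0.5, np.sqrt(1.0 / 12)                    # mean and std of U(0,1)
Z = (X.sum(axis=1) - N * mu) / (sigma * np.sqrt(N))   # standardized sums
plt.hist(Z, bins=50)                                  # approximately standard normal in shape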
Transformation of random variables
Linear transformations
If $\mathbf{y} = f(\mathbf{x}) = \mathbf{A}\mathbf{x} + \mathbf{b}$ then:
$$\mathbb{E}[\mathbf{y}] = \mathbf{A}\mathbf{\mu} + \mathbf{b}$$
$$\mathrm{cov}[y] = \mathbf{A} \mathbf{\Sigma} \mathbf{A}^\intercal$$
General transformations
Discrete random variable
If $X$ is a discrete random variable, we can derive the pdf of $y = f(x)$ by summing up the probability mass for all $x$ such that $f(x) = y$:
$$P(y) = \sum_{x:~f(x)=y} P(x)$$
Continuous random variable
If $X$ is continuous we work with the CDF instead:
$$P_y(y) = P(f(X) \leq y) = P(X \leq f^{-1}(y)) = P_x(f^{-1}(y))$$
Taking the derivatives we get:
$$p_y(y) = p_x(x) \left|\frac{dx}{dy}\right|$$
Multivariate transformation
If $X$ is a multivariate continuous random variable we get:
$$p_y(y) = p_x(x) \left|\mathrm{det} ~ \mathbf{J}_{\mathbf{y} \rightarrow \mathbf{x}}\right|$$
Where $\mathbf{J}$ is the jacobian matrix.
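As a small sanity check of the univariate formula above, take $Y = \exp(X)$ with $X \sim \mathcal{N}(0,1)$; then $p_y(y) = p_x(\log y)/y$, which can be compared against a histogram of transformed samples (a sketch, assuming numpy, scipy.stats and matplotlib are imported as in the surrounding cells):
ys = np.exp(np.random.randn(100000))            # samples of Y = exp(X)
grid = np.linspace(0.05, 5, 200)
analytic = stats.norm.pdf(np.log(grid)) / grid  # p_y(y) = p_x(log y) * |dx/dy|
plt.hist(ys, bins=200, range=(0.05, 5))
plt.plot(grid, analytic * len(ys) * (5 - 0.05) / 200)   # scale the pdf to histogram counts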
Monte Carlo Methods
Using Monte Carlo to compute distribution of a transformed variable
First generate $S$ samples from the distribution, call them $x_1, \dots, x_S$. We approximate $f(X)$ by using the empirical distribution of $\{f(x_s)\}_{s=1}^{S}$. To compute the expected value of any function of a random variable we do:
$$\mathbb{E}[f(X)] = \int f(x)p(x)dx \approx \frac{1}{S}\sum_{s=1}^{S} f(x_s)$$
By varying the function $f$ we can approximate quantities such as the mean, variance, CDF and median of a variable.
Using Monte Carlo to estimate $\pi$
We know that the area of a circle is given by $\pi r^2$ but it is also given by:
$$A = \int_{-r}^r \int_{-r}^r \mathbb{I}(x^2 + y^2 \leq r^2) dx dy$$
therefore $\pi = \frac{A}{r^2}$. To use Monte Carlo to estimate $A$, we simply take $p(x)$ and $p(y)$ as uniform distributions on $[-r,r]$ so that $p(x) = p(y) = 1/2r$ and $f(x,y)$ to be $\mathbb{I}(x^2 + y^2 \leq r^2)$. So, by using the Monte Carlo approximation:
$$A = 4r^2 \int_{-r}^r \int_{-r}^r \mathbb{I}(x^2 + y^2 \leq r^2) p(x) p(y)dx dy = 4r^2 \frac{1}{S} \sum_{s=1}^{S} f(x_s, y_s)$$
End of explanation |
14,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lin and Miranda (2008)
This method, described in Lin and Miranda (2008), estimates the maximum inelastic displacement of an existing structure based on the maximum elastic displacement response of its equivalent linear system without the need of iterations, based on the strength ratio. The equivalent linear system has a longer period of vibration and a higher viscous damping than the original system. The estimation of these parameters is based on the strength ratio $R$.
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
from rmtk.vulnerability.derivation_fragility.equivalent_linearization.lin_miranda_2008 import lin_miranda_2008
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Lin and Miranda (2008)
This method, described in Lin and Miranda (2008), estimates the maximum inelastic displacement of an existing structure based on the maximum elastic displacement response of its equivalent linear system without the need of iterations, based on the strength ratio. The equivalent linear system has a longer period of vibration and a higher viscous damping than the original system. The estimation of these parameters is based on the strength ratio $R$.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
PDM, Sds = lin_miranda_2008.calculate_fragility(capacity_curves, gmrs, damage_model)
Explanation: Obtain the damage probability matrix
End of explanation
IMT = "Sd"
period = 2.0
damping_ratio = 0.05
regression_method = "max likelihood"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. damping_ratio: This parameter defines the damping ratio for the structure.
4. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
14,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Characterization of an ophthalmic lens using the Newton's rings technique
Consult the user manual for the interactive notebooks, which is available on the Campus Virtual.
Working group
In this cell, the members of the group
Step1: <div class='alert'> Task 2. Determine the radius of the dark rings. </div>
Using the previous image, we will measure the diameter of the dark rings. They can be measured directly on the computer screen or by printing the figure on a sheet of paper. Write the diameter of the dark rings, in the indicated units, in the following table (add as many rows as needed while keeping the format) and explain how they were measured (computer screen, paper).
Ring number | Ring diameter (mm)
-------| ------
1 | 37
2 | 51
3 | 63
Explanation of the measurement process
(Write the explanation of the process here)
These diameters include the magnification $\beta$ of the measuring optical system and of the size of the figure on the computer screen or on the sheet of paper. Taking into account the reference scale that appears in the figure, calculate the magnification $\beta$ of the measured diameters (write down the value). This measurement must be made under the same conditions as those of the diameters.
$\beta$ =
Using this magnification we can obtain the real radius of the dark rings.
Radius = (Diameter / 2) / $\beta$
Write another table with the final values of the radii of the dark rings in the indicated units.
Number | Real radius (mm)
---| ---
1 | 5.1389
2 | 7.0833
3 | 8.75
<div class='alert'> Task 3. Data analysis. Linear fit of the radii of the dark rings </div>
Using the radii of the dark rings obtained in Task 2, we will plot the squared radius as a function of the dark-ring number. This plot should show a linear dependence whose slope is the radius of curvature multiplied by the wavelength of the light used in the experiment, that is,
slope = $\lambda$ R
Como hemos empleando luz blanca para realizar el experimento vamos a emplear una longitud de onda central del visible | Python Code:
# EDIT THE IMAGE FILE NAME, THEN RUN THE CELL
########################################################
nombre_fichero_imagen="IMG_20141121_122547.jpg" # Incluir el nombre completo con extensión del fichero imagen
# DO NOT MODIFY ANYTHING BELOW THIS LINE
##############################################################################################################################
%pylab inline
from IPython.core.display import Image,display
Image(filename=nombre_fichero_imagen)
Explanation: Characterization of an ophthalmic lens using the Newton's rings technique
Consult the user manual for the interactive notebooks, which is available on the Campus Virtual.
Working group
In this cell, the members of the group: edit the text
Juan Antonio Fernández
Alberto Pérez
Juan
Include the e-mail address of the person responsible for the group
<div class='alert'> Task 1. Figure with the interference pattern (Newton's rings). </div>
We will study how the radius of curvature of a plano-convex lens is determined using the interferometric technique known as Newton's rings. To do this, we will simulate our plano-convex lens using the two-surface system provided by the instructor.
We will display the image with the measured interference pattern. To do so, you need to upload the picture you have taken to your SageMath account. If you click on "New" (in the upper-left part of the Notebook), you can drag your image directly onto "Drop Files".
In the following code cell, write the name of the image so that it can be displayed. The text that appears after the # symbol is a comment.
End of explanation
# EDIT THE VALUES OF THE RING RADII, THEN RUN THE CELL
###########################################################################
%pylab inline
radio= array([ 5.1389, 7.0833, 8.75 ]) # Incluir los radios de los anillos oscuros en mm y separados por comas
# DO NOT MODIFY ANYTHING BELOW THIS LINE
##############################################################################################################################
m=linspace(1,size(radio),size(radio)) # Vector with the ring numbers
radio2=radio*radio # Vector with the squared ring radii
a,b = polyfit(m,radio2,1) # Linear fit of the data, where a is the slope and b the intercept
plot(m,radio2,'o',m,a*m+b,'-')
xlabel('Numero del anillo');ylabel('Radio$^2$ (mm$^2$)') # Escribimos los nombres de los ejes
te = "pendiente = %f mm$^2$" % a;title(te); # Se muestra el valor de la pendiente;
Explanation: <div class='alert'> Task 2. Determine the radius of the dark rings. </div>
Using the previous image, we will measure the diameter of the dark rings. They can be measured directly on the computer screen or by printing the figure on a sheet of paper. Write the diameter of the dark rings, in the indicated units, in the following table (add as many rows as needed while keeping the format) and explain how they were measured (computer screen, paper).
Ring number | Ring diameter (mm)
-------| ------
1 | 37
2 | 51
3 | 63
Explanation of the measurement process
(Write the explanation of the process here)
These diameters include the magnification $\beta$ of the measuring optical system and of the size of the figure on the computer screen or on the sheet of paper. Taking into account the reference scale that appears in the figure, calculate the magnification $\beta$ of the measured diameters (write down the value). This measurement must be made under the same conditions as those of the diameters.
$\beta$ =
Using this magnification we can obtain the real radius of the dark rings.
Radius = (Diameter / 2) / $\beta$
Write another table with the final values of the radii of the dark rings in the indicated units.
Number | Real radius (mm)
---| ---
1 | 5.1389
2 | 7.0833
3 | 8.75
<div class='alert'> Task 3. Data analysis. Linear fit of the radii of the dark rings </div>
Using the radii of the dark rings obtained in Task 2, we will plot the squared radius as a function of the dark-ring number. This plot should show a linear dependence whose slope is the radius of curvature multiplied by the wavelength of the light used in the experiment, that is,
slope = $\lambda$ R
Since we used white light to perform the experiment, we will take a central wavelength of the visible spectrum: $\lambda$ = 550 nm.
In the following code cell the data are plotted and the linear fit is performed to obtain the value of the slope (it is shown in the figure title). This value has the units of the squared radii, i.e. if the ring radii are given in mm, the slope has units of mm$^2$.
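Once the slope a is known, the radius of curvature follows directly from slope = $\lambda$ R; a minimal sketch (an addition, not part of the original script, reusing the slope a from the fitting cell above):
lambda_mm = 550e-6              # central visible wavelength, 550 nm expressed in mm
R = a / lambda_mm               # radius of curvature in mm, since the slope is in mm^2
print("Radius of curvature R = %.0f mm" % R)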
End of explanation |
14,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Edit this next cell to choose a different country / year report
Step1: These next few conversions don't really work. The PPP data field seems wrong.
Step2: Gini can be calculated directly from $L(p)$, although the reported Gini is modelled.
Step3: General Quadratic
The general quadratic Lorenz curve is estimated as (Villasenor & Arnold, 1989)
$$
L(1-L) = a(p^2 - L) + bL(p-1) + c(p-L)
$$
First we examine the basic regression diagnostics
Step4: And the estimated coefficients
Step5: Finally we can visualise what the distribution implied actually looks like
Step6: For comparison, here we also fit the spline model.
Step7: Comparing the two
Step8: Model summary stats - TODO
Step9: Distribution statistics
Step10: Beta Lorenz
We find the estimating equation here p. 29
Step11: Now we can visualise the distribution. We calculate things numerically since the references are unclear on a closed-form expression.
Step12: Now we can plot all three distributions on the same axes. | Python Code:
# CHN_1_2013.json
# BGD_3_1988.5.json
# IND_1_1987.5.json
# ARG_2_1987.json
# EST_3_1998.json
# Minimum (spline vs GQ) computed = 19.856 given = 75.812 difference = 73.809%
# Maximum (spline vs GQ) computed = 4974.0 given = 11400.0 difference = 56.363%
with open("../jsoncache/EST_3_1998.json","r") as f:
d = json.loads(f.read())
for k in d['dataset']:
print(k.ljust(20),d['dataset'][k])
Explanation: Edit this next cell to choose a different country / year report:
End of explanation
# Check poverty line conversion
DAYS_PER_MONTH = 30.4167
line_month_ppp_given = d['inputs']['line_month_ppp']
print("Poverty line (PPP):", line_month_ppp_given)
# Check data mean
sample_mean_ppp_given = d['inputs']['mean_month_ppp']
print("Data mean (PPP):", sample_mean_ppp_given)
#implied_ppp = d['sample']['mean_month_lcu'] / d['sample']['mean_month_ppp']
#print("Implied PPP:", implied_ppp, "cf.", ppp)
Explanation: These next few conversions don't really work. The PPP data field seems wrong.
End of explanation
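Several cells below rely on small helper utilities (myassert, integral, derivative and inverse) whose definitions are not shown in this extract; a minimal sketch of what they are assumed to do, using SciPy for the numerical parts:
import numpy as np
import scipy.integrate
import scipy.optimize

def myassert(label, computed, given):
    # Print a computed value next to the value reported in the JSON file
    diff = 100 * abs(computed - given) / abs(given)
    print(label, "computed =", round(computed, 4), "given =", round(given, 4),
          "difference = " + str(round(diff, 3)) + "%")

def integral(f, lower):
    # Running integral of f starting at `lower`, returned as a callable
    return np.vectorize(lambda x: scipy.integrate.quad(f, lower, x)[0])

def derivative(f, dx=1e-6):
    # Central-difference numerical derivative, returned as a callable
    return np.vectorize(lambda x: (f(x + dx) - f(x - dx)) / (2 * dx))

def inverse(f, domain=(0.0, 1.0)):
    # Numerically invert a monotonic function on the given domain
    return np.vectorize(lambda y: scipy.optimize.brentq(lambda x: f(x) - y, domain[0], domain[1]))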
# Load the Lorenz curve
lorenz = pd.DataFrame(d['lorenz'])
lorenz = lorenz.drop("index",1)
lorenz = lorenz.append(pd.DataFrame({"L": 0, "p": 0}, index = [-1]))
lorenz = lorenz.sort_values("p")
lorenz['dp'] = lorenz.p.shift(-1)[:-1] - lorenz.p[:-1]
lorenz['dL'] = lorenz.L.shift(-1)[:-1] - lorenz.L[:-1]
lorenz['dLdp'] = lorenz.dL / lorenz.dp
# Now, F(y) = inverse of Q(p)
lorenz['y'] = lorenz.dLdp * sample_mean_ppp_given
# Calc and compare Ginis
G_calc = 1 - sum(0.5 * lorenz.dp[:-1] * (lorenz.L.shift(-1)[:-1] + lorenz.L[:-1])) / 0.5
G_given = d['dist']['Gini'] / 100.0
myassert("Empirical Gini:",G_calc, G_given)
Explanation: Gini can be calculated directly from $L(p)$, although the reported Gini is modelled.
End of explanation
lorenz['GQ_lhs'] = lorenz.L * (1 - lorenz.L)
lorenz['GQ_A'] = lorenz.p*lorenz.p - lorenz.L
lorenz['GQ_B'] = lorenz.L * (lorenz.p - 1)
lorenz['GQ_C'] = lorenz.p - lorenz.L
# Note: we exclude the endpoints of the Lorenz curve from estimation hence 1:-1
result = sm.OLS(lorenz.GQ_lhs[1:-1], lorenz.iloc[1:-1][['GQ_A','GQ_B','GQ_C']]).fit()
myassert("Ymean:", np.mean(lorenz[1:-1].GQ_lhs), d['quadratic']['reg']['ymean'])
myassert("SST:", result.centered_tss, d['quadratic']['reg']['SST'])
myassert("SSE:", result.ssr, d['quadratic']['reg']['SSE'])
myassert("MSE:", result.mse_resid, d['quadratic']['reg']['MSE'])
myassert("RMSE:", math.sqrt(result.mse_resid), d['quadratic']['reg']['RMSE'])
myassert("R^2:", result.rsquared, d['quadratic']['reg']['R2'])
Explanation: General Quadratic
The general quadratic Lorenz curve is estimated as (Villasenor & Arnold, 1989)
$$
L(1-L) = a(p^2 - L) + bL(p-1) + c(p-L)
$$
First we examine the basic regression diagnostics
End of explanation
for param in ('A','B','C'):
myassert(param+".coef:", result.params['GQ_'+param], d['quadratic']['reg']['params'][param]['coef'])
myassert(param+".stderr:", result.bse['GQ_'+param], d['quadratic']['reg']['params'][param]['se'])
myassert(param+".tval:", result.tvalues['GQ_'+param], d['quadratic']['reg']['params'][param]['t'])
print()
Explanation: And the estimated coefficients
End of explanation
##########################################
plt.rcParams["figure.figsize"] = (12,2.5)
fig, ax = plt.subplots(1, 4)
##########################################
import scipy.integrate
a = d['quadratic']['reg']['params']['A']['coef']
b = d['quadratic']['reg']['params']['B']['coef']
c = d['quadratic']['reg']['params']['C']['coef']
mu = sample_mean_ppp_given
nu = -b * mu / 2
tau = mu * (4 * a - b**2) ** (1/2) / 2
eta1 = 2 * (c / (a + b + c + 1) + b/2) * (4 *a - b**2)**(-1/2)
eta2 = 2 * ((2*a + b + c)/(a + c - 1) + b / a)*(4*a - b**2)**(-1/2)
lower = tau*eta1+nu
upper = tau*eta2+nu
# Hacky way to normalise
gq_pdf_integral = 1
gq_pdf = lambda y: (1 + ((y - nu)/tau)**2)**(-3/2) / gq_pdf_integral * (y >= lower) * (y <= upper)
gq_cdf = integral(gq_pdf, lower=lower)
gq_pdf_integral = gq_cdf(upper)
gq_quantile = inverse(gq_cdf, domain=(lower,upper))
ygrid = np.linspace(0, gq_quantile(0.95), 1000)
pgrid = np.linspace(0, 1, 1000)
themax = np.nanmax(gq_pdf(ygrid))
ax[1].plot(pgrid, gq_quantile(pgrid))
ax[2].plot(ygrid, gq_cdf(ygrid))
ax[3].plot(ygrid, gq_pdf(ygrid))
ax[3].vlines(x=d['inputs']['line_month_ppp'],ymin=0,ymax=themax,linestyle="dashed");
Explanation: Finally we can visualise what the distribution implied actually looks like
End of explanation
##########################################
plt.rcParams["figure.figsize"] = (12,2.5)
fig, ax = plt.subplots(1, 4)
##########################################
thehead = int(len(lorenz)*0.1)
themiddle = len(lorenz) - thehead - 2 - 2
lorenz.w = ([100, 100] + [10] * thehead) + ([1] * themiddle) + [1, 1]
#lorenz.w = [10]*thehead + [1]*(len(lorenz)-thehead)
lorenz_interp = scipy.interpolate.UnivariateSpline(lorenz.p,lorenz.L,w=lorenz.w,k=5,s=1e-6)
#lorenz_interp = scipy.interpolate.CubicSpline(lorenz.p, lorenz.L,bc_type=bc_natural)
quantile = lambda p: sample_mean_ppp_given * lorenz_interp.derivative()(p)
cdf = inverse(quantile)
pdf = derivative(cdf)
pgrid = np.linspace(0, 1, 1000)
ax[0].plot(pgrid, lorenz_interp(pgrid))
ax[1].plot(pgrid, quantile(pgrid))
ygrid = np.linspace(0, quantile(0.95), 1000)
ax[2].plot(ygrid, cdf(ygrid))
ax[3].plot(ygrid, pdf(ygrid));
Explanation: For comparison, here we also fit the spline model.
End of explanation
min_ceiling = (lorenz.L[0]-lorenz.L[-1])/(lorenz.p[0]-lorenz.p[-1])*sample_mean_ppp_given
max_floor = (lorenz.L[len(lorenz)-2]-lorenz.L[len(lorenz)-3])/(lorenz.p[len(lorenz)-2]-lorenz.p[len(lorenz)-3])*sample_mean_ppp_given
myassert("Minimum (GQ vs pts ceil)",lower,min_ceiling)
myassert("Minimum (spline vs pts ceil)",quantile(0),min_ceiling)
myassert("Maximum (GQ vs pts floor)",upper,max_floor)
myassert("Maximum (spline vs pts floor)",quantile(1),max_floor)
Explanation: Comparing the two
End of explanation
myassert("SSE Lorenz:", result.ssr, d['quadratic']['summary']['sse_fitted']) #WRONG
# sse_up_to_hcindex
Explanation: Model summary stats - TODO
End of explanation
HC_calc = float(gq_cdf(line_month_ppp_given))
HC_given = d['quadratic']['dist']['HC'] / 100.0
myassert("HC",HC_calc,HC_given)
median_calc = float(gq_quantile(0.5))
median_given = d['quadratic']['dist']['median_ppp']
myassert("Median",median_calc,median_given)
Explanation: Distribution statistics
End of explanation
# Generates warnings as endpoints shouldn't really be included in estimation
lorenz['beta_lhs'] = np.log(lorenz.p - lorenz.L)
lorenz['beta_A'] = 1
lorenz['beta_B'] = np.log(lorenz.p)
lorenz['beta_C'] = np.log(1-lorenz.p)
# Note: we exclude the endpoints of the Lorenz curve from estimation hence 1:-1
result = sm.OLS(lorenz.beta_lhs[1:-1], lorenz.iloc[1:-1][['beta_A','beta_B','beta_C']]).fit()
myassert("Ymean:", np.mean(lorenz[1:-1].beta_lhs), d['beta']['reg']['ymean'])
myassert("SST:", result.centered_tss, d['beta']['reg']['SST'])
myassert("SSE:", result.ssr, d['beta']['reg']['SSE'])
myassert("MSE:", result.mse_resid, d['beta']['reg']['MSE'])
myassert("RMSE:", math.sqrt(result.mse_resid), d['beta']['reg']['RMSE'])
myassert("R^2:", result.rsquared, d['beta']['reg']['R2'])
for param in ('A','B','C'):
myassert(param+".coef:", result.params['beta_'+param], d['beta']['reg']['params'][param]['coef'])
myassert(param+".stderr:", result.bse['beta_'+param], d['beta']['reg']['params'][param]['se'])
myassert(param+".tval:", result.tvalues['beta_'+param], d['beta']['reg']['params'][param]['t'])
print()
theta = np.exp(result.params['beta_A'])
gamma = result.params['beta_B']
delta = result.params['beta_C']
myassert("Implied theta",theta,d['beta']['implied']['theta'])
myassert("Implied gamma",gamma,d['beta']['implied']['gamma'])
myassert("Implied delta",delta,d['beta']['implied']['delta'])
Explanation: Beta Lorenz
We find the estimating equation here p. 29:
$$
\log(p - L) = \log(\theta) + \gamma \log(p) + \delta \log(1-p)
$$
The book by Kakwani (1980) is also cited, but the above equation is not obvious within it. Many papers cite Kakwani (1980)'s Econometrica paper, but that is clearly an incorrect citation as it is on a different topic.
End of explanation
##########################################
plt.rcParams["figure.figsize"] = (12,2.5)
fig, ax = plt.subplots(1, 4)
##########################################
beta_lorenz = lambda p: p - theta * p ** gamma * (1 - p) ** delta
beta_quantile = lambda p: derivative(beta_lorenz)(p) * sample_mean_ppp_given
beta_cdf = inverse(beta_quantile, domain=(1e-6,1-1e-6))
beta_pdf = derivative(beta_cdf)
ax[0].plot(pgrid, beta_lorenz(pgrid))
ax[1].plot(pgrid, beta_quantile(pgrid))
ax[2].plot(ygrid, beta_cdf(ygrid))
ax[3].plot(ygrid, beta_pdf(ygrid))
ax[3].vlines(x=d['inputs']['line_month_ppp'],ymin=0,ymax=themax,linestyle="dashed");
Explanation: Now we can visualise the distribution. We calculate things numerically since the references are unclear on a closed-form expression.
End of explanation
plt.plot(ygrid, pdf(ygrid), color="r")
plt.plot(ygrid, gq_pdf(ygrid), color="b")
plt.plot(ygrid, beta_pdf(ygrid), color="g")
plt.vlines(x=d['inputs']['line_month_ppp'],ymin=0,ymax=themax,linestyle="dashed");
Explanation: Now we can plot all three distributions on the same axes.
End of explanation |
14,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithm
1. Detailed Pseudocode
Input Space
Step1: 2. Actual Algorithm Code
Step2: 3. Algorithm Space
Success Space
Practically, this algorithm will succeed when there is one or a few very similar assignments that minimize the cost function.
Failure Space
Practically, this algorithm will fail when there are multiple differing assignments that minimize the cost function
4. Functionality Data Sets
The following are two data sets with solutions that can be run to ensure the success of the Hungarian algorithm in the simple case.
Step3: 5. Validation Data Set Properties
Validation Data 1
This data exists in 2 dimensions with 2 features and should converge after the initialization step. It has exactly 1 optimum pairing
Validation Data 2
This data exists in 2 dimensions with 3 features. It should converge after at least 1 iterative loop, and has exactly 1 optimum pairing
6. Data Visualization Code
Step4: Simulation
1. Functionality Testing
Step5: Functionality testing had perfect results. This means that the algorithm is ready to move on to the validation testing phase
2. Validation Testing
1. Get requisite Data
Step6: 2. Toy Simulation
In this simulation, I will apply a known, small, rigid body transformation to the pipeline output, and then cluster both the original volume and the transformed volume. The hungarian algorithm will then be used to register these volumes. | Python Code:
######################################
###THIS IS PSEUDOCODE, WILL NOT RUN###
######################################
def hungarian(costMatrix):
p1CostMatrix = costMatrix.copy()
p2CostMatrix = costMatrix.copy().T
#for every point in the first set
for all p1 in p1CostMatrix:
#find its minimum weighted edge
minVal = min(costMatrix[p1])
#subtract minimum weight from all edges
p1CostMatrix[p1] = p1CostMatrix[p1] - minVal
#for every point in the second set
for all p2 in p2CostMatrix:
#find the minimum weighted edge
minVal = min(p2CostMatrix[p2])
#subtract the minimum weight from all edges
        p2CostMatrix[p2] = p2CostMatrix[p2] - minVal
#generate adjacency matrix of only the 0 weight
#after the initial 2 steps
initialMatrix = zeros_like(costMatrix)
for y, x in p1CostMatrix.zero():
initialMatrix[y][x] = 1
for y, x, in p2CostMatrix.zero():
initialMatrix[y][x] = 1
#get the maximal matching after the initial step
matching = minimalMatching(initialMatrix)
#if the initialization solves the problem, we are done
if matching.fullRank():
return matching
#if not, run iterative step until convergence
while not matching.fullRank():
#get the minimum edge that is not yet paired
minRemainingWeight = min(matching.rowsWithoutPivot())
minRemainingP1 = argmin(matching.rowsWithoutPivot())
#subtract that weight from the remaining graph at that edge
initialMatrix[minRemainingP1] -= minRemainingWeight
matching = minimalMatching(initialMatrix)
return matching
Explanation: Algorithm
1. Detailed Pseudocode
Input Space: A cost matrix for two sets of points $P_1$ and $P_2$, where both point sets have identical length $l$
Output Space: An optimal pairing of all $p_1 \in P_1$ with exactly one partner $p_2 \in P_2$ such that $\Sigma \text{Cost}(p_1, p_2)$ is minimized
Algorithm:
End of explanation
#The algorithm can be called with the following
def hungarian(costMat):
#write the matlab argument to disk so the native process can access it
sio.savemat('matlabArg.mat', mdict={'matlabArg':costMat})
#run the matlab process
os.system('matlab -nodisplay -r \"load(\'/home/bstadt/Desktop/lab/background/matlabArg.mat\'); munkres(matlabArg); exit()\"')
#load the results
matlabOut = sio.loadmat('assignment.mat')['assignment']
os.system('rm assignment.mat matlabArg.mat')
#return matlab output as numpy array
return np.array(matlabOut)-1
def loss(cluster1, cluster2):
c1Centroid = cluster1.centroid
c2Centroid = cluster2.centroid
error = math.sqrt(
(c1Centroid[0] - c2Centroid[0])**2 +
(c1Centroid[1] - c2Centroid[1])**2 +
(c1Centroid[2] - c2Centroid[2])**2 +
(cluster1.compactness - cluster2.compactness)**2 +
(cluster1.volume - cluster2.volume)**2
)
return error
#The function for creating the cost matrix
def genCostMatrix(clusterList1, clusterList2):
#check for bipartality
'''
if not len(clusterList1) == len(clusterList2):
print 'Cluster lists must have the same size'
return
'''
costMatrix = np.zeros((len(clusterList1), len(clusterList2)))
for cIdx1 in range(len(clusterList1)):
for cIdx2 in range(len(clusterList2)):
costMatrix[cIdx1][cIdx2] = loss(clusterList1[cIdx1], clusterList2[cIdx2])
return costMatrix
#the function used to force a bipartite graph
def forceBipartite(clusterList1, clusterList2):
l1 = len(clusterList1)
l2 = len(clusterList2)
#if already compliant
if l1 == l2:
return clusterList1, clusterList2
#if there are too many points in l1
elif l1 > l2:
diff = l1 - l2
for i in range(diff):
delIdx = randrange(0, len(clusterList1))
del clusterList1[delIdx]
else:
diff = l2 - l1
for i in range(diff):
delIdx = randrange(0, len(clusterList2))
del clusterList2[delIdx]
return clusterList1, clusterList2
Explanation: 2. Actual Algorithm Code
End of explanation
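As an aside, if MATLAB/Octave is not available, the same assignment problem can be solved entirely in Python with SciPy's Hungarian-algorithm implementation; a minimal sketch (the reshape only mirrors the column-vector output assumed above, it is not part of the original code):
from scipy.optimize import linear_sum_assignment

def hungarian_scipy(costMat):
    #solve the linear assignment problem directly in Python
    row_ind, col_ind = linear_sum_assignment(np.asarray(costMat))
    #return the optimal column index for each row, as a column vector
    return col_ind.reshape(-1, 1)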
funcData1 = np.identity(2)
funcData2 = np.array([[4., 1., 3.], [2., 0., 5.], [3., 2., 2.]])
print 'Data:'
print funcData1
print 'Optimal Pairing:'
print '[[0, 1], [1, 0]]'
print '\nData:'
print funcData2
print 'Optimal Pairing:'
print '[[0, 1], [1, 0], [2, 2]]'
Explanation: 3. Algorithm Space
Success Space
Practically, this algorithm will succeed when there is one or a few very similar assignments that minimize the cost function.
Failure Space
Practically, this algorithm will fail when there are multiple differing assignments that minimize the cost function
4. Functionality Data Sets
The following are two data sets with solutions that can be run to ensure the success of the Hungarian algorithm in the simple case.
End of explanation
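To make the failure case described above concrete, a cost matrix in which every pairing has the same cost admits many equally optimal assignments, so the returned pairing is essentially arbitrary; a small illustration (hypothetical data):
degenerateCost = np.ones((3, 3))   #every pairing has identical cost
print(hungarian(degenerateCost))   #one of several equally optimal assignments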
def toDiff(imgA, imgB):
ret = np.empty((imgA.shape[0], imgA.shape[1], 3), dtype=np.uint8)
for y in range(imgA.shape[0]):
for x in range(imgA.shape[1]):
if imgA[y][x] and not imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 0
ret[y][x][2] = 0
elif not imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 0
ret[y][x][1] = 255
ret[y][x][2] = 0
elif imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 255
ret[y][x][2] = 0
else:
ret[y][x][0] = 255
ret[y][x][1] = 255
ret[y][x][2] = 255
return ret
def visDiff(sliceA, sliceB):
disp = toDiff(sliceA, sliceB)
return disp
Explanation: 5. Validation Data Set Properties
Validation Data 1
This data exists in 2 dimensions with 2 features and should converge after the initialization step. It has exactly 1 optimum pairing
Validation Data 2
This data exists in 2 dimensions with 3 features. It should converge after at least 1 iterative loop, and has exactly 1 optimum pairing
6. Data Visualization Code
End of explanation
funcTest1 = zip(*hungarian(funcData1))
funcTest2 = zip(*hungarian(funcData2))
print 'Test1: ', funcTest1 == [(1.,), (0.,)]
print '\n\tExpected: ',[(1.,), (0.,)]
print '\n\tActual: ', funcTest1
print '\n'
print 'Test2: ', funcTest2 == [(1.,), (0.,), (2.,)]
print '\n\tExpected: ',[(1.,), (0.,), (2.,)]
print '\n\tActual: ', funcTest2
Explanation: Simulation
1. Functionality Testing
End of explanation
#import the pickled versions of the real data
tp1 = pickle.load(open('../code/tests/synthDat/realDataRaw_t0.synth', 'r'))
tp2 = pickle.load(open('../code/tests/synthDat/realDataRaw_t1.synth', 'r'))
#cut the data to a reasonable size for testing
tp1TestData = tp1[:7]
tp2TestData = tp2[:7]
#run the data through the pipeline
tp1PostPipe = cLib.otsuVox(pLib.pipeline(tp1TestData))
tp2PostPipe = cLib.otsuVox(pLib.pipeline(tp2TestData))
#cut out the ill defined sections
tp1PostPipe = tp1PostPipe[1:6]
tp2PostPipe = tp2PostPipe[1:6]
#Display the data to be used for testing
for i in range(tp1PostPipe.shape[0]):
fig = plt.figure()
plt.title('Time Point 1 Pipe Output at z='+str(i))
plt.imshow(tp1PostPipe[i], cmap='gray')
plt.show()
#Display the data to be used for testing
for i in range(tp2PostPipe.shape[0]):
fig = plt.figure()
plt.title('Time Point 2 Pipe Output at z='+str(i))
plt.imshow(tp2PostPipe[i], cmap='gray')
plt.show()
Explanation: Functionality testing had perfect results. This means that the algorithm is ready to move on to the validation testing phase
2. Validation Testing
1. Get requisite Data
End of explanation
transform = hype.get3DRigid(pitch=0., yaw=.15, roll=0., xT=0., yT=0., zT=0.)
transformVolume = hype.apply3DRigid(tp1PostPipe, transform, True)
for i in range(tp1PostPipe.shape[0]):
plt.figure()
plt.title('Initial Disperity at z='+str(i))
disp = visDiff(tp1PostPipe[i], transformVolume[i])
plt.imshow(disp)
plt.show()
#get the cluster lists from both the base and the transformation
tp1BaseClusters = cLib.connectedComponents(tp1PostPipe)
tp1TransClusters = cLib.connectedComponents(transformVolume)
#generate cost matrix
costMat = genCostMatrix(tp1TransClusters, tp1BaseClusters)
assignments = hungarian(costMat)
Explanation: 2. Toy Simulation
In this simulation, I will apply a known, small, rigid body transformation to the pipeline output, and then cluster both the original volume and the transformed volume. The hungarian algorithm will then be used to register these volumes.
End of explanation |
14,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with TensorFlow
First things first, let's get started with TensorFlow!
Step1: Hello TensorFlow
Let's do some addition using Python (we are not being facetious — this is completely serious)!
Step2: Naturally, the answer 3.0 is displayed.
Now let's try the same kind of addition with TensorFlow.
Step3: An object called a Tensor is displayed.
In fact, at this point only the dataflow graph has been built; the addition has not been performed yet.
To actually execute the addition, it has to be run through a session.
Step4: Matrix operations
Let's compare how matrix and vector computations are written in NumPy and TensorFlow.
NumPy
Step5: TensorFlow
Step6: TensorFlow + placeholder
If a new dataflow graph were built for every computation, the graph would keep growing larger and larger.
For this reason, TensorFlow provides a mechanism called a placeholder for reusing the common part of a dataflow graph while swapping in some of the values.
Step7: A typical usage pattern is to define the data used during training as placeholders.
Feeding the training data into the placeholders little by little is the standard way to implement machine learning algorithms in TensorFlow.
tf.Variable
In TensorFlow, values used in a computation are basically discarded.
The result of tf.add is recomputed every time it is run through a session, and the value of a tf.constant is not kept in memory but is regenerated as needed.
You also cannot overwrite the value of a tf.constant afterwards.
tf.Variable is the one exception: it keeps its value in memory and the value can be rewritten later.
It is intended to be used as the weights in neural networks and many other machine learning methods.
Step8: tf.Variable needs to be initialized first, so we run init_op.
その後 v の値を表示すると assign_op を実行する前後で値が書き換えられていることが確認できます。 | Python Code:
import tensorflow as tf
import numpy as np
print(tf.__version__)
Explanation: Getting Started with TensorFlow
First things first, let's get started with TensorFlow!
End of explanation
a = 1.
b = 2.
c = a + b
print(c)
Explanation: Hello TensorFlow
Let's do some addition using Python (we are not being facetious — this is completely serious)!
End of explanation
a = tf.constant(1.)
b = tf.constant(2.)
c = tf.add(a, b)
print(c)
Explanation: Naturally, the answer 3.0 is displayed.
Now let's try the same kind of addition with TensorFlow.
End of explanation
with tf.Session() as sess:
result = sess.run(c)
print(result)
Explanation: An object called a Tensor is displayed.
In fact, at this point only the dataflow graph has been built; the addition has not been performed yet.
To actually execute the addition, it has to be run through a session.
End of explanation
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
Explanation: Matrix operations
Let's compare how matrix and vector computations are written in NumPy and TensorFlow.
NumPy
End of explanation
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
with tf.Session() as sess:
result = sess.run(c)
print(result)
Explanation: TensorFlow
End of explanation
a = tf.placeholder(dtype=tf.int32, shape=(None,))
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result1 = sess.run(c, feed_dict={a: [3, 4, 5], b: [-1, 2, 3]})
result2 = sess.run(c, feed_dict={a: [1, 2, 3], b: [3, 2, 1]})
print(result1)
print(result2)
Explanation: TensorFlow + placeholder
If a new dataflow graph were built for every computation, the graph would keep growing larger and larger.
For this reason, TensorFlow provides a mechanism called a placeholder for reusing the common part of a dataflow graph while swapping in some of the values.
End of explanation
v = tf.Variable([1, 2])
assign_op = tf.assign(v, [2, 3])
init_op = tf.global_variables_initializer()
Explanation: A typical usage pattern is to define the data used during training as placeholders.
Feeding the training data into the placeholders little by little is the standard way to implement machine learning algorithms in TensorFlow.
tf.Variable
In TensorFlow, values used in a computation are basically discarded.
The result of tf.add is recomputed every time it is run through a session, and the value of a tf.constant is not kept in memory but is regenerated as needed.
You also cannot overwrite the value of a tf.constant afterwards.
tf.Variable is the one exception: it keeps its value in memory and the value can be rewritten later.
It is intended to be used as the weights in neural networks and many other machine learning methods.
End of explanation
with tf.Session() as sess:
sess.run(init_op)
print(sess.run(v))
    sess.run(assign_op)
print(sess.run(v))
Explanation: tf.Variable needs to be initialized first, so we run init_op.
If you then print the value of v, you can confirm that its value has changed after assign_op is executed.
End of explanation |
14,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial with the Diva synthesizer controlled with Dynamical Movement Primitives
This tutorial shows how to run an agent learning to produce sound trajectories with a simulated vocal tract, through autonomous exploration and imitation.
Requirements
Step1: The diva configuration (diva_cfg) can be modified. For instance, here only 7 of the 10 articulatory parameters are used (the 7 most important) and the 3 others are set to 0, but you can use 10 instead of 7 parameters. Also, DIVA outputs F0 (pitch), F1, F2 and F3 but here the environment will output only F1 and F2 because we set s_used = [1, 2]. We also defined the motor bounds [-1, 1] and the sensory bounds of F1 and F2 in octaves (or log2(Hz))
Step2: The DIVAEnvironment outputs the frequency of the first and second formants (F1 and F2) in octaves - or log2(Hz) - but here we plot their frequencies in Hz, with the axes reversed and F2 on the x axis as in Praat conventions. Each dot is one sound produced with a random articulatory position (and you should have heard them if you enabled audio and turned sound on.
Trajectory of sounds
The Diva Environment can also take a trajectory of sounds as input and will then output the trajectory of the corresponding formants. Here we define an articulatory trajectory where only the first articulator (the jaw) moves, from 1 to -1. We also plot the trajectory in formant space.
Step3: Let's listen to the sound trajectory
Step4: II. Generating sound trajectories with Dynamical Movement Primitives
In order to define an arbitrary trajectory of the 7 articulators for n time steps, we would need 7 x n values. However, if we consider that a realistic vocal tract cannot produce a trajectory with quick discontinuities, so we could use a representation of trajectories with less degrees of freedom. Also, a compact representation of sound trajectories will make learning by an agent easier. The Dynamical Movement Primitives (DMPs) framework allows to represent smooth trajectories with few parameters. See here for a nice tutorial on DMPs. In this framework, each of the 7 articulators position will be parameterized by the starting and ending point of the trajectory plus one parameter on each basis function to modify the trajectory from the starting point to the end.
Step5: Here we defined a Diva Environment with articulators trajectories generated through DMPs. We designed DMPs with 2 basis functions so that each articulator has 4 parameters
Step6: 's' contains the 10-steps trajectory of the two formants. We plot this trajectory along with some vowels
Step7: To have a closer look at DMPs, we can plot the shape of the two basis functions that are used to generate a trajectory of an articulator given one weight on each basis function plus the start and end position of the articulator.
Step8: Here is the sound trajectory generated with the random set of parameters
Step9: And now we plot 20 trajectories generated with random DMP parameters.
Step10: III. Goal Babbling
In this section we run an experiment where an agent explores with the random goal babbling (GB) strategy.
See this tutorial for a comparison of different exploration strategies in a setup with a simulated robotic arm grasping a ball.
An agent with a goal babbling exploration strategy chooses a new goal at each exploration iteration, the goal being a target in its observation space. Here, the agent controls DMP parameters and observes formant trajectories. It will thus generates goal formant trajectories, and try to reach those trajectories (produce the sound trajectories) given its current sensorimotor model.
Step11: We define the sensorimotor with the Nearest Neighbor algorithm (NN). Given a target s_goal, the model infers a motor command (DMP parameters) m that could help to reach s_goal. To do so, it looks at the previous observed formant trajectory s_NN that is the closest to s_goal, and outputs the motor command m that was used to reach s_NN plus some exploration noise to explore new motor commands (Gaussian of standard deviation sigma_explo_ratio). See here for a tutorial on other available sensorimotor models.
We perform here 2000 iterations, with first 100 iterations of motor babbling (random motor commands), and then 20% of motor babbling and 80% of random goal babbling.
Step12: We can now look at the sounds that the agent managed to produce. Let's imagine that we want the agent to produce the word /uye/, a sequence of three vowels, /u/, /y/ and /e/. In the following, we define the goal formant trajectory, and ask the agent the best motor parameters it knows to reach this formant trajectory (without adding exploration noise).
Step13: The target sound trajectory is plotted in blue, and the best trajectory reached by the agent is in red. Note that the agent did not know that it would be tested on that trajectory.
Step14: IV. Imitation
In this section, we give a target sound trajectory to the agent so that it tries to imitate it.
We also run 2000 iterations, where the agent generates random goals for 1000 iteration and then tries to imitate the target sound trajectory /uye/.
Step15: The distance to the target word /uye/ is lower, which means that the agent learned to reproduce the sound trajectory more acurately by focusing on imitating it. However, with goal babbling, the agent also learned reasonable sounds for other words. | Python Code:
from __future__ import print_function
import os
import numpy as np
from explauto.environment.diva import DivaEnvironment
diva_cfg = dict(diva_path=os.path.join(os.getenv("HOME"), 'software/DIVAsimulink/'),
synth="octave",
m_mins = np.array([-1]*7), # motor bounds
m_maxs = np.array([1]*7),
s_mins = np.array([ 7.5, 9.25]), # sensory bounds
s_maxs = np.array([ 9.5 , 11.25]),
m_used = range(7), # articulatory parameters used from 0 to 9
s_used = range(1, 3), # formants output from F0 to F3
audio = True) # if sound is played
environment = DivaEnvironment(**diva_cfg)
Explanation: Tutorial with the Diva synthesizer controlled with Dynamical Movement Primitives
This tutorial shows how to run an agent learning to produce sound trajectories with a simulated vocal tract, through autonomous exploration and imitation.
Requirements:
- Explauto
- DIVA
- Matlab and pymatlab, or Octave and oct2py
- pyaudio
DIVA is a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic,
and neuroimaging data concerning the control of speech movements, as described on this page. The code of the model is open-source and is available here. You will have to download and unzip it to run this tutorial.
The DIVA model uses an articulatory synthesizer, i.e. a computer simulation of the human vocal tract allowing to generate the sound wave resulting from articulator movements involving the jaw, the tongue, the lips ... This is this articulatory synthesizer that we will use, independently of the neural model. For more information please refer to the documentation in the pdf provided in the DIVA zip archive.
Also, if using Matlab, it needs to be aware of your DIVA installation path (i.e. the path of the unzipped DIVA directory). You will have to add it by editing your search path permanently.
Content:
- I. Setting up a Diva Environment
- II. Generating sound trajectories with Dynamical Movement Primitives
- III. Goal Babbling
- IV. Imitation
The four parts are code-independent in the sense that you can restart your kernel and run code from any part, you don't need to rerun from the beginning.
I. Setting up a Diva Environment
The DIVA synthesizer takes 13 articulatory positions and returns the formants of the corresponding sound (F0 to F4). For example, the first articulator globally corresponds to an open/close dimension mainly involving the jaw, as shown in the figure below extracted from the DIVA documentation (the pdf in the zip archive). It illustrates the movements induced by the 10 first articulators (left to right, top to bottom), the 3 last ones controlling the pitch, the pressure and the voicing (see the DIVA documentation for more details). All articulatory positions should be in the range $[-1, 1]$.
Let's define a first Explauto environment that contains the DIVA synthesizer. If you use Matlab, don't forget to tell it where your DIVA installation is, and then specify synth="matlab" in the following. If using Octave, specify the diva_path and synth="octave".
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
for m in environment.random_motors(100):
#print m
formants = environment.update(m)
#print formants
plt.loglog(2.**formants[1], 2.**formants[0], 'o')
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: The diva configuration (diva_cfg) can be modified. For instance, here only 7 of the 10 articulatory parameters are used (the 7 most important) and the 3 others are set to 0, but you can use 10 instead of 7 parameters. Also, DIVA outputs F0 (pitch), F1, F2 and F3 but here the environment will output only F1 and F2 because we set s_used = [1, 2]. We also defined the motor bounds [-1, 1] and the sensory bounds of F1 and F2 in octaves (or log2(Hz)): e.g. the environment will always output F1 between 7.5 and 9.5.
Random sounds
If your installation of DIVA and explauto are working well, you should now be able to produce some sounds with random articulatory positions:
End of explanation
m_traj = np.zeros((1000, 7))
m_traj[:, 0] = np.linspace(1, -1, 1000) # The jaw moves linearly for 1 (completely close) to -1 (completely open)
s = environment.update(m_traj)
plt.loglog(2.**s[0][1], 2.**s[0][0], "ro") # Start: red dot
plt.loglog([2.**formants[1] for formants in s], [2.**formants[0] for formants in s], 'b-')
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: The DIVAEnvironment outputs the frequency of the first and second formants (F1 and F2) in octaves - or log2(Hz) - but here we plot their frequencies in Hz, with the axes reversed and F2 on the x axis as in Praat conventions. Each dot is one sound produced with a random articulatory position (and you should have heard them if you enabled audio and turned sound on.
Trajectory of sounds
The Diva Environment can also take a trajectory of sounds as input and will then output the trajectory of the corresponding formants. Here we define an articulatory trajectory where only the first articulator (the jaw) moves, from 1 to -1. We also plot the trajectory in formant space.
End of explanation
import IPython
IPython.display.Audio(environment.sound_wave(environment.art_traj).flatten(), rate=11025)
Explanation: Let's listen to the sound trajectory:
End of explanation
import os
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from explauto.environment.diva import DivaDMPEnvironment
diva_cfg = dict(diva_path=os.path.join(os.getenv("HOME"), 'software/DIVAsimulink/'),
synth="octave",
m_mins = np.array([-1] * 28),
m_maxs = np.array([1] * 28),
s_mins = np.array([7.5]*10 + [9.25]*10),
s_maxs = np.array([9.5]*10 + [11.25]*10),
m_used = range(7),
s_used = range(1, 3),
n_dmps = 7, # parameters controlled by DMPs
n_bfs = 2, # basis functions
dmp_move_steps = 50, # trajectory time steps
dmp_max_param = 300., # max value of the weights on basis functions
sensory_traj_samples = 10, # samples of the formant trajectory to output
audio = True)
environment = DivaDMPEnvironment(**diva_cfg)
Explanation: II. Generating sound trajectories with Dynamical Movement Primitives
In order to define an arbitrary trajectory of the 7 articulators for n time steps, we would need 7 x n values. However, a realistic vocal tract cannot produce trajectories with quick discontinuities, so we can use a representation of trajectories with fewer degrees of freedom. Also, a compact representation of sound trajectories will make learning by an agent easier. The Dynamical Movement Primitives (DMPs) framework allows smooth trajectories to be represented with few parameters. See here for a nice tutorial on DMPs. In this framework, each of the 7 articulator positions will be parameterized by the starting and ending point of the trajectory plus one parameter on each basis function to modify the trajectory from the starting point to the end.
End of explanation
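To build intuition for how so few parameters can shape a whole movement, here is a small illustrative sketch (a toy, not the DMP implementation used by Explauto): a 1-D trajectory defined only by its start point, its end point and one weight per Gaussian basis function.
```python
import numpy as np
import matplotlib.pyplot as plt

def toy_trajectory(start, end, weights, steps=50):
    t = np.linspace(0., 1., steps)                 # phase variable over the movement
    centers = np.linspace(0.2, 0.8, len(weights))  # one Gaussian basis per weight
    basis = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.02)
    # linear ramp from start to end, plus a weighted "bump" that vanishes at both ends
    return start + (end - start) * t + basis.dot(np.array(weights)) * t * (1. - t)

# 4 parameters per articulator: start, end and 2 basis-function weights
plt.plot(toy_trajectory(-0.5, 0.8, [2., -1.]))
plt.xlabel("Time steps")
plt.ylabel("Articulator position")
```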
m = environment.random_motors()[0]
s = environment.update(m)
traj = environment.trajectory(m)
plt.plot(traj)
plt.ylim([-1., 1.])
plt.xlabel("Time (DMP steps)", fontsize=16)
plt.ylabel("Articulatory parameters", fontsize=16)
Explanation: Here we defined a Diva Environment with articulators trajectories generated through DMPs. We designed DMPs with 2 basis functions so that each articulator has 4 parameters: the starting and end position plus 2 weights for the basis functions. The total number of parameters is thus 7 x 4 = 28.
Given a set of 28 motor parameters, the environment outputs a formant trajectory (with 'sensory_traj_samples' time steps).
In the following we generate 28 random parameters that we feed to the environment, which computes a trajectory of the 7 articulators (through DMPs), and then the corresponding formant trajectory (through DIVA). We first plot the articulator trajectories generated through DMPs:
End of explanation
print("s", s)
plt.loglog([2.**f for f in s[len(s)//2:]], [2.**f for f in s[:len(s)//2]], color="r")
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: 's' contains the 10-step trajectory of the two formants. We plot this trajectory along with some vowels:
End of explanation
print("centers:", environment.dmp.dmp.c)
print("variances:", environment.dmp.dmp.h)
bfs_shapes = environment.dmp.dmp.gen_psi(environment.dmp.dmp.cs.rollout())
plt.plot(bfs_shapes)
Explanation: To have a closer look at DMPs, we can plot the shape of the two basis functions that are used to generate a trajectory of an articulator given one weight on each basis function plus the start and end position of the articulator.
End of explanation
import IPython
IPython.display.Audio(environment.sound_wave(environment.art_traj).flatten(), rate=11025)
Explanation: Here is the sound trajectory generated with the random set of parameters:
End of explanation
for m in environment.random_motors(20):
s = environment.update(m)
plt.loglog([2.**f[1] for f in environment.formants_traj], [2.**f[0] for f in environment.formants_traj], color="r", alpha=0.5)
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: And now we plot 20 trajectories generated with random DMP parameters.
End of explanation
import os
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from explauto.environment.diva import DivaDMPEnvironment
from explauto.sensorimotor_model.non_parametric import NonParametric
from explauto.utils import rand_bounds
diva_cfg = dict(diva_path=os.path.join(os.getenv("HOME"), 'software/DIVAsimulink/'),
synth="octave",
m_mins = np.array([-1] * 28),
m_maxs = np.array([1] * 28),
s_mins = np.array([7.5]*10 + [9.25]*10),
s_maxs = np.array([9.5]*10 + [11.25]*10),
m_used = range(7),
s_used = range(1, 3),
n_dmps = 7, # parameters controlled by DMPs
n_bfs = 2, # basis functions
dmp_move_steps = 50, # trajectory time steps
dmp_max_param = 300., # max value of the weights on basis functions
sensory_traj_samples = 10, # samples of the formant trajectory to output
audio = True)
environment = DivaDMPEnvironment(**diva_cfg)
Explanation: III. Goal Babbling
In this section we run an experiment where an agent explores with the random goal babbling (GB) strategy.
See this tutorial for a comparison of different exploration strategies in a setup with a simulated robotic arm grasping a ball.
An agent with a goal babbling exploration strategy chooses a new goal at each exploration iteration, the goal being a target in its observation space. Here, the agent controls DMP parameters and observes formant trajectories. It will thus generate goal formant trajectories, and try to reach those trajectories (produce the sound trajectories) given its current sensorimotor model.
End of explanation
# Parameters to change:
audio = False # If sound is played
iterations = 2000 # Number of iterations
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
# Initialization of the sensorimotor model
sm_model = NonParametric(environment.conf, sigma_explo_ratio=sigma_explo_ratio, fwd='NN', inv='NN')
for i in range(iterations):
if i < 100 or np.random.random() < 0.2:
# Do random motor babbling in first 100 iterations and then in 20% of the iterations
m = environment.random_motors()[0]
else:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment.conf.s_bounds)[0]
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm (plus exploration noise):
m = sm_model.inverse_prediction(s_goal)
    s = environment.update(m, audio=audio) # observe the sensory effect s: the produced formant trajectory
sm_model.update(m, s) # update sensorimotor model
plt.loglog([2.**f[1] for f in environment.formants_traj], [2.**f[0] for f in environment.formants_traj], color="r", alpha=0.2)
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: We define the sensorimotor model with the Nearest Neighbor algorithm (NN). Given a target s_goal, the model infers a motor command (DMP parameters) m that could help to reach s_goal. To do so, it looks at the previously observed formant trajectory s_NN that is the closest to s_goal, and outputs the motor command m that was used to reach s_NN plus some exploration noise to explore new motor commands (Gaussian of standard deviation sigma_explo_ratio). See here for a tutorial on other available sensorimotor models.
We perform 2000 iterations here, with the first 100 iterations of motor babbling (random motor commands), and then 20% of motor babbling and 80% of random goal babbling.
End of explanation
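A rough sketch of the idea behind this Nearest Neighbor inverse model (the actual NonParametric class in Explauto is more involved; the class and method names below are illustrative only):
```python
import numpy as np

class ToyNNInverseModel(object):
    def __init__(self, sigma_explo_ratio=0.05):
        self.database = []               # (m, s) pairs observed so far
        self.sigma = sigma_explo_ratio
        self.mode = "explore"

    def update(self, m, s):
        self.database.append((np.array(m), np.array(s)))

    def inverse_prediction(self, s_goal):
        # find the observation whose sensory effect is closest to the goal
        dists = [np.linalg.norm(s - np.array(s_goal)) for _, s in self.database]
        m_nn = self.database[int(np.argmin(dists))][0]
        if self.mode == "explore":
            # add Gaussian exploration noise to try new motor commands
            return m_nn + np.random.normal(0., self.sigma, m_nn.shape)
        return m_nn                      # "exploit" mode: no noise
```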
# goal trajectory = /uye/
uye = list(np.linspace(v_u[0], v_y[0], 5)) + list(np.linspace(v_y[0], v_e[0], 5)) + list(np.linspace(v_u[1], v_y[1], 5)) + list(np.linspace(v_y[1], v_e[1], 5))
plt.loglog([2.**f for f in uye[10:]], [2.**f for f in uye[:10]], "b")
# best trajectory for uye
sm_model.mode = "exploit"
m = sm_model.inverse_prediction(uye)
s = environment.update(m)
plt.loglog([2.**f[1] for f in environment.formants_traj], [2.**f[0] for f in environment.formants_traj], color="r", alpha=0.2)
error = np.linalg.norm(np.array(s) - np.array(uye))
print("Distance to goal /uye/:", error)
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: We can now look at the sounds that the agent managed to produce. Let's imagine that we want the agent to produce the word /uye/, a sequence of three vowels, /u/, /y/ and /e/. In the following, we define the goal formant trajectory, and ask the agent for the best motor parameters it knows to reach this formant trajectory (without adding exploration noise).
End of explanation
import IPython
IPython.display.Audio(environment.sound_wave(environment.art_traj).flatten(), rate=11025)
Explanation: The target sound trajectory is plotted in blue, and the best trajectory reached by the agent is in red. Note that the agent did not know that it would be tested on that trajectory.
End of explanation
import os
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from explauto.environment.diva import DivaDMPEnvironment
from explauto.sensorimotor_model.non_parametric import NonParametric
from explauto.utils import rand_bounds
diva_cfg = dict(diva_path=os.path.join(os.getenv("HOME"), 'software/DIVAsimulink/'),
synth="octave",
m_mins = np.array([-1] * 28),
m_maxs = np.array([1] * 28),
s_mins = np.array([7.5]*10 + [9.25]*10),
s_maxs = np.array([9.5]*10 + [11.25]*10),
m_used = range(7),
s_used = range(1, 3),
n_dmps = 7, # parameters controlled by DMPs
n_bfs = 2, # basis functions
dmp_move_steps = 50, # trajectory time steps
dmp_max_param = 300., # max value of the weights on basis functions
sensory_traj_samples = 10, # samples of the formant trajectory to output
audio = True)
environment = DivaDMPEnvironment(**diva_cfg)
# Parameters to change:
audio = False # If sound is played
iterations = 2000 # Number of iterations
sigma_explo_ratio = 0.05 # Exploration noise (standard deviation)
# Goal to imitate
uye = list(np.linspace(v_u[0], v_y[0], 5)) + list(np.linspace(v_y[0], v_e[0], 5)) + list(np.linspace(v_u[1], v_y[1], 5)) + list(np.linspace(v_y[1], v_e[1], 5))
# Initialization of the sensorimotor model
sm_model = NonParametric(environment.conf, sigma_explo_ratio=sigma_explo_ratio, fwd='NN', inv='NN')
for i in range(iterations):
if i < 100 or np.random.random() < 0.2:
        # Do random motor babbling in first 100 iterations and then in 20% of the iterations
m = environment.random_motors()[0]
else:
if i < 1000:
# Sample a random goal in the sensory space:
s_goal = rand_bounds(environment.conf.s_bounds)[0]
else:
s_goal = uye # Imitates the uye word
# Infer a motor command to reach that goal using the Nearest Neighbor algorithm (plus exploration noise):
m = sm_model.inverse_prediction(s_goal)
    s = environment.update(m, audio=audio) # observe the sensory effect s: the produced formant trajectory
sm_model.update(m, s) # update sensorimotor model
plt.loglog([2.**f[1] for f in environment.formants_traj], [2.**f[0] for f in environment.formants_traj], color="r", alpha=0.2)
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
# goal trajectory = /uye/
uye = list(np.linspace(v_u[0], v_y[0], 5)) + list(np.linspace(v_y[0], v_e[0], 5)) + list(np.linspace(v_u[1], v_y[1], 5)) + list(np.linspace(v_y[1], v_e[1], 5))
plt.loglog([2.**f for f in uye[10:]], [2.**f for f in uye[:10]], "b")
# best trajectory for uye
sm_model.mode = "exploit"
m = sm_model.inverse_prediction(uye)
s = environment.update(m)
plt.loglog([2.**f[1] for f in environment.formants_traj], [2.**f[0] for f in environment.formants_traj], color="r", alpha=0.2)
error = np.linalg.norm(np.array(s) - np.array(uye))
print("Distance to goal /uye/:", error)
# Plot some vowels
v_o = list(np.log2([500, 900]))
v_y = list(np.log2([300, 1700]))
v_u = list(np.log2([300, 800]))
v_e = list(np.log2([400, 2200]))
v_i = list(np.log2([300, 2300]))
v_a = list(np.log2([800, 1300]))
vowels = dict(o=v_o, y=v_y, u=v_u, e=v_e, i=v_i, a=v_a)
for v in vowels.keys():
p = plt.plot(2.**vowels[v][1], 2.**vowels[v][0], "o", label="/" + v + "/", markersize=12)
legend = plt.legend(frameon=True, fontsize=14, ncol=4, loc="lower center")
plt.xlabel("F2 (Hz)", fontsize=16)
plt.ylabel("F1 (Hz)", fontsize=16)
plt.xlim([3000., 500])
plt.ylim([1200., 200.])
Explanation: IV. Imitation
In this section, we give a target sound trajectory to the agent so that it tries to imitate it.
We also run 2000 iterations, where the agent generates random goals for 1000 iterations and then tries to imitate the target sound trajectory /uye/.
End of explanation
import IPython
def say(word):
v1 = vowels[word[0]]
v2 = vowels[word[1]]
v3 = vowels[word[2]]
traj = list(np.linspace(v1[0], v2[0], 5)) + list(np.linspace(v2[0], v3[0], 5)) + list(np.linspace(v1[1], v2[1], 5)) + list(np.linspace(v2[1], v3[1], 5))
# best trajectory for uye
sm_model.mode = "exploit"
m = sm_model.inverse_prediction(traj)
s = environment.update(m)
error = np.linalg.norm(np.array(s) - np.array(traj))
print("Distance to goal /", word, "/:", error)
return IPython.display.Audio(environment.sound_wave(environment.art_traj).flatten(), rate=11025)
say("uye")
say("ieo")
say("iee")
Explanation: The distance to the target word /uye/ is lower, which means that the agent learned to reproduce the sound trajectory more accurately by focusing on imitating it. However, with goal babbling, the agent also learned reasonable sounds for other words.
End of explanation |
14,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image models on MNIST using Estimator.
<hr/>
This <a href="mnist_models.ipynb">companion notebook</a> extends the basic harness of this notebook to a variety of models including DNN, CNN, dropout, pooling etc.
Step1: Exploring the data
Let's download MNIST data and examine the shape. We will need these numbers ...
Step2: Define the model.
Let's start with a very simple linear classifier. All our models will have this basic interface -- they will take an image and return probabilities.
Step3: Write Input Functions
As usual, we need to specify input functions for training, evaluation, and predicition.
Step4: Create train_and_evaluate function
tf.estimator.train_and_evaluate does distributed training.
Step5: This is the main() function | Python Code:
import numpy as np
import shutil
import os
import tensorflow as tf
print(tf.__version__)
Explanation: MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image models on MNIST using Estimator.
<hr/>
This <a href="mnist_models.ipynb">companion notebook</a> extends the basic harness of this notebook to a variety of models including DNN, CNN, dropout, pooling etc.
End of explanation
HEIGHT = 28
WIDTH = 28
NCLASSES = 10
# Get mnist data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale our features between 0 and 1
x_train, x_test = x_train / 255.0, x_test / 255.0
# Convert labels to categorical one-hot encoding
y_train = tf.keras.utils.to_categorical(y = y_train, num_classes = NCLASSES)
y_test = tf.keras.utils.to_categorical(y = y_test, num_classes = NCLASSES)
print("x_train.shape = {}".format(x_train.shape))
print("y_train.shape = {}".format(y_train.shape))
print("x_test.shape = {}".format(x_test.shape))
print("y_test.shape = {}".format(y_test.shape))
import matplotlib.pyplot as plt
IMGNO = 12
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
Explanation: Exploring the data
Let's download MNIST data and examine the shape. We will need these numbers ...
End of explanation
# Build Keras Model Using Keras Sequential API
def linear_model():
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape = [HEIGHT, WIDTH], name = "image"))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units = NCLASSES, activation = tf.nn.softmax, name = "probabilities"))
return model
Explanation: Define the model.
Let's start with a very simple linear classifier. All our models will have this basic interface -- they will take an image and return probabilities.
End of explanation
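Conceptually, this linear model just flattens the 28x28 image, applies a single affine transform, and squashes the result through a softmax to get class probabilities. A plain NumPy sketch of the same computation (the weights below are random placeholders, not trained values):
```python
import numpy as np

def linear_probabilities(image, W, b):
    # image: (28, 28), W: (784, 10), b: (10,)
    logits = image.reshape(-1).dot(W) + b
    exp = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return exp / exp.sum()                # softmax: probabilities that sum to 1

W = np.random.normal(scale=0.01, size=(28 * 28, 10))
b = np.zeros(10)
print(linear_probabilities(np.random.rand(28, 28), W, b))
```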
# Create training input function
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x = {"image": x_train},
y = y_train,
batch_size = 100,
num_epochs = None,
shuffle = True,
queue_capacity = 5000
)
# Create evaluation input function
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x = {"image": x_test},
y = y_test,
batch_size = 100,
num_epochs = 1,
shuffle = False,
queue_capacity = 5000
)
# Create serving input function for inference
def serving_input_fn():
placeholders = {"image": tf.placeholder(dtype = tf.float32, shape = [None, HEIGHT, WIDTH])}
features = placeholders # as-is
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = placeholders)
Explanation: Write Input Functions
As usual, we need to specify input functions for training, evaluation, and predicition.
End of explanation
def train_and_evaluate(output_dir, hparams):
# Build Keras model
model = linear_model()
# Compile Keras model with optimizer, loss function, and eval metrics
model.compile(
optimizer = "adam",
loss = "categorical_crossentropy",
metrics = ["accuracy"])
# Convert Keras model to an Estimator
estimator = tf.keras.estimator.model_to_estimator(
keras_model = model,
model_dir = output_dir)
# Set estimator's train_spec to use train_input_fn and train for so many steps
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = hparams["train_steps"])
# Create exporter that uses serving_input_fn to create saved_model for serving
exporter = tf.estimator.LatestExporter(
name = "exporter",
serving_input_receiver_fn = serving_input_fn)
# Set estimator's eval_spec to use eval_input_fn and export saved_model
eval_spec = tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
exporters = exporter)
# Run train_and_evaluate loop
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
Explanation: Create train_and_evaluate function
tf.estimator.train_and_evaluate does distributed training.
End of explanation
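For genuinely distributed runs, train_and_evaluate reads the cluster layout from the TF_CONFIG environment variable; a sketch of what one worker's configuration might look like (hostnames, ports and the cluster shape here are made up for illustration):
```python
import json
import os

# hypothetical cluster: one chief, one extra worker, one parameter server
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "chief": ["host0:2222"],
        "worker": ["host1:2222"],
        "ps": ["host2:2222"]
    },
    "task": {"type": "worker", "index": 0}   # this process's role in the cluster
})
# With TF_CONFIG set on every machine, the same train_and_evaluate call below
# coordinates distributed training instead of running purely locally.
```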
OUTDIR = "mnist/learned"
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
hparams = {"train_steps": 1000, "learning_rate": 0.01}
train_and_evaluate(OUTDIR, hparams)
Explanation: This is the main() function
End of explanation |
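After training, the exporter writes a SavedModel under OUTDIR/export/exporter/<timestamp>. One way to load it back for ad-hoc predictions in a TF 1.x runtime is sketched below (assuming tf.contrib.predictor is available; the export directory has to be looked up on disk):
```python
import os
import tensorflow as tf

export_base = os.path.join(OUTDIR, "export", "exporter")
latest = os.path.join(export_base, sorted(os.listdir(export_base))[-1])

predict_fn = tf.contrib.predictor.from_saved_model(latest)
predictions = predict_fn({"image": x_test[:5]})   # key matches the serving input name
print(predictions)
```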
14,044 | Given the following text description, write Python code to implement the functionality described.
Description:
Count Knights that can attack a given pawn in an N * N board
Function to count the knights that are attacking the pawn in an M * M board ; Stores count of knights that are attacking the pawn ; Traverse the knights array ; Stores absolute difference of X co - ordinate of i - th knight and pawn ; Stores absolute difference of Y co - ordinate of i - th knight and pawn ; If X is 1 and Y is 2 or X is 2 and Y is 1 ; Update cntKnights ; Driver code ; Stores total count of knights
| Python Code:
def cntKnightsAttackPawn(knights , pawn , M ) :
cntKnights = 0 ;
for i in range(M ) :
X = abs(knights[i ][0 ] - pawn[0 ] ) ;
Y = abs(knights[i ][1 ] - pawn[1 ] ) ;
if(( X == 1 and Y == 2 ) or(X == 2 and Y == 1 ) ) :
cntKnights += 1 ;
return cntKnights ;
if __name__ == '__main__' :
knights =[[ 0 , 4 ] ,[4 , 5 ] ,[1 , 4 ] ,[3 , 1 ] ] ;
pawn =[2 , 3 ] ;
M = len(knights ) ;
print(cntKnightsAttackPawn(knights , pawn , M ) ) ;
|
14,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here
Step1: Overview
Here are the steps you'll take to build the network
Step2: Load the Data
Start by importing the data from the pickle file.
Step3: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint
Step4: Normalize the features
Hint
Step5: One-Hot Encode the labels
Hint
Step6: Keras Sequential Model
```python
from keras.models import Sequential
# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this
Step7: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
# Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])

# Train the model
# History is a record of training loss and metrics
history = model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)

# Calculate test score
test_score = model.evaluate(X_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. Typically you won't have to change the verbose parameter, but in Jupyter notebooks the update animation can crash the notebook, so we set verbose=2, which limits the animation to only update after an epoch is complete. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
Step8: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1
Step9: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
Step10: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
Step11: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
Step12: Best Validation Accuracy | Python Code:
from urllib.request import urlretrieve
from os.path import isfile
from tqdm import tqdm
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('train.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Train Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/train.p',
'train.p',
pbar.hook)
if not isfile('test.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Test Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/test.p',
'test.p',
pbar.hook)
print('Training and Test data downloaded.')
Explanation: Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here:
End of explanation
import pickle
import numpy as np
import math
# Fix error with TF and Keras
import tensorflow as tf
print('Modules loaded.')
Explanation: Overview
Here are the steps you'll take to build the network:
Load the training data.
Preprocess the data.
Build a feedforward neural network to classify traffic signs.
Build a convolutional neural network to classify traffic signs.
Evaluate the final neural network on testing data.
Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.
End of explanation
with open('train.p', 'rb') as f:
data = pickle.load(f)
# TODO: Load the feature data to the variable X_train
X_train = data['features']
# TODO: Load the label data to the variable y_train
y_train = data['labels']
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert np.array_equal(X_train, data['features']), 'X_train not set to data[\'features\'].'
assert np.array_equal(y_train, data['labels']), 'y_train not set to data[\'labels\'].'
print('Tests passed.')
Explanation: Load the Data
Start by importing the data from the pickle file.
End of explanation
# TODO: Shuffle the data
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert X_train.shape == data['features'].shape, 'X_train has changed shape. The shape shouldn\'t change when shuffling.'
assert y_train.shape == data['labels'].shape, 'y_train has changed shape. The shape shouldn\'t change when shuffling.'
assert not np.array_equal(X_train, data['features']), 'X_train not shuffled.'
assert not np.array_equal(y_train, data['labels']), 'y_train not shuffled.'
print('Tests passed.')
Explanation: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint: You can use the scikit-learn shuffle function to shuffle the data.
End of explanation
# TODO: Normalize the data features to the variable X_normalized
def normalize_grayscale(image_data):
a = -0.5
b = 0.5
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
X_normalized = normalize_grayscale(X_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))
print('Tests passed.')
Explanation: Normalize the features
Hint: You solved this in TensorFlow lab Problem 1.
End of explanation
# TODO: One Hot encode the labels to the variable y_one_hot
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
import collections
assert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\'s {}, it should be (39209, 43)'.format(y_one_hot.shape)
assert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.'
print('Tests passed.')
Explanation: One-Hot Encode the labels
Hint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.
End of explanation
from keras.models import Sequential
model = Sequential()
# TODO: Build a Multi-layer feedforward neural network with Keras here.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
model.add(Flatten(input_shape=(32, 32, 3)))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.activations import relu, softmax
def check_layers(layers, true_layers):
assert len(true_layers) != 0, 'No layers found'
for layer_i in range(len(layers)):
assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__)
assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers))
check_layers([Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)'
assert model.layers[2].activation == relu, 'Third layer not a relu activation layer'
assert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)'
assert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer'
print('Tests passed.')
Explanation: Keras Sequential Model
```python
from keras.models import Sequential
# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
# Create the Sequential model
model = Sequential()

# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))

# 2nd Layer - Add a fully connected layer
model.add(Dense(100))

# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))

# 4th Layer - Add a fully connected layer
model.add(Dense(60))

# 5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
```
Keras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer.
The first layer from above, model.add(Flatten(input_shape=(32, 32, 3))), sets the input dimension to (32, 32, 3) and output dimension to (3072=32*32*3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.
Build a Multi-Layer Feedforward Network
Build a multi-layer feedforward neural network to classify the traffic sign images.
Set the first layer to a Flatten layer with the input_shape set to (32, 32, 3)
Set the second layer to Dense layer width to 128 output.
Use a ReLU activation function after the second layer.
Set the output layer width to 43, since there are 43 classes in the dataset.
Use a softmax activation function after the output layer.
To get started, review the Keras documentation about models and layers.
The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
End of explanation
# TODO: Compile and train the model here.
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=10, validation_split=0.2, verbose=2)
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.optimizers import Adam
assert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function'
assert isinstance(model.optimizer, Adam), 'Not using adam optimizer'
assert len(history.history['acc']) == 10, 'You\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc']))
assert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1]
assert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
# Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])

# Train the model
# History is a record of training loss and metrics
history = model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)

# Calculate test score
test_score = model.evaluate(X_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(X_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. Typically you won't have to change the verbose parameter, but in Jupyter notebooks the update animation can crash the notebook, so we set verbose=2, which limits the animation to only update after an epoch is complete. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
End of explanation
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
check_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'
assert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'
assert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
End of explanation
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[1].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
End of explanation
# TODO: Re-construct the network and add dropout after the pooling layer.
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[2].p == 0.5, 'Third layer should be a Dropout of 50%'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2, verbose=2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
End of explanation
# TODO: Build a model
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# There is no right or wrong answer. This is for you to explore model creation.
# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=10, validation_split=0.2, verbose=2)
Explanation: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
End of explanation
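One possible direction, sticking to the same Keras 1.x style API used above: stack a second convolution/pooling block, widen the fully connected layer, and train for more epochs. This is just one configuration to experiment with, not a recommended final answer:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(43, activation='softmax'))

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20,
                    validation_split=0.2, verbose=2)
```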
# TODO: Load test data
with open('test.p', 'rb') as f:
data_test = pickle.load(f)
X_test = data_test['features']
y_test = data_test['labels']
# TODO: Preprocess data & one-hot encode the labels
X_normalized_test = normalize_grayscale(X_test)
y_one_hot_test = label_binarizer.fit_transform(y_test)
# TODO: Evaluate model on test data
metrics = model.evaluate(X_normalized_test, y_one_hot_test)
for metric_i in range(len(model.metrics_names)):
metric_name = model.metrics_names[metric_i]
metric_value = metrics[metric_i]
print('{}: {}'.format(metric_name, metric_value))
Explanation: Best Validation Accuracy: (fill in here)
Testing
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.
End of explanation |
14,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The line below creates a list of three pairs, each pair containing two pandas.Series objects.
A Series is like a dictionary, only its items are ordered and its values must share a data type. The ordered keys of the Series form its index. It is easy to compose Series objects into a DataFrame.
Step1: This creates a DataFrame from each of the series.
The columns alternate between representing word rankings and representing word counts.
Step2: We should rename the columns to be more descriptive of the data.
Step3: Use the to_csv() function on the DataFrame object to export the data to CSV format, which you can open easily in Excel.
Step4: To filter the data by certain authors before computing the word rankings, provide a list of author names as an argument.
Only emails whose From header includes one of the author names within it will be included in the calculation.
Note that for detecting the author name, the program for now uses simple string inclusion. You may need to try multiple variations of the authors' names in order to catch all emails written by persons of interest. | Python Code:
series = [ordered_words(archive.data) for archive in archives]
Explanation: The line below creates a list of three pairs, each pair containing two pandas.Series objects.
A Series is like a dictionary, only its items are ordered and its values must share a data type. The ordered keys of the Series form its index. It is easy to compose Series objects into a DataFrame.
End of explanation
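A tiny self-contained illustration of the Series/DataFrame relationship described above (toy words and counts, unrelated to the mailing-list data):
```python
import pandas as pd

counts = pd.Series({"privacy": 42, "policy": 17, "gtld": 9})        # index = words
ranks = pd.Series(range(1, len(counts) + 1),
                  index=counts.sort_values(ascending=False).index)  # rank 1 = most frequent
print(pd.concat([ranks, counts], axis=1))   # aligns the two Series on their shared index
```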
rankings = pd.concat([series[0][0],
series[0][1],
series[1][0],
series[1][1],
series[2][0],
series[2][1]],axis=1)
# display the first 5 rows of the DataFrame
rankings[:5]
Explanation: This creates a DataFrame from each of the series.
The columns alternate between representing word rankings and representing word counts.
End of explanation
rankings.rename(columns={0: 'ipc-gnso rankings',
1: 'ipc-gnso counts',
2: 'wp4 rankings',
3: 'wp4 counts',
4: 'ncuc-discuss rankings',
5: 'ncuc-discuss counts'},inplace=True)
rankings[:5]
Explanation: We should rename the columns to be more descriptive of the data.
End of explanation
rankings.to_csv("rankings_all.csv",encoding="utf-8")
Explanation: Use the to_csv() function on the DataFrame object to export the data to CSV format, which you can open easily in Excel.
End of explanation
authors = ["Greg Shatan",
"Niels ten Oever"]
ordered_words(archives[0].data, authors=authors)
Explanation: To filter the data by certain authors before computing the word rankings, provide a list of author names as an argument.
Only emails whose From header includes one of the author names within it will be included in the calculation.
Note that for detecting the author name, the program for now uses simple string inclusion. You may need to try multiple variations of the authors' names in order to catch all emails written by persons of interest.
End of explanation |
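The filtering itself boils down to a substring check on each message's From header. A rough sketch of the idea (the 'From' column name is an assumption about the archive's DataFrame layout):
```python
def emails_by_authors(df, authors):
    # keep rows whose From header contains any of the given author names
    mask = df["From"].apply(lambda sender: any(name in str(sender) for name in authors))
    return df[mask]

# e.g. emails_by_authors(archives[0].data, ["Greg Shatan", "Niels ten Oever"])
```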
14,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers
Step4: Project 2
Step5: Project 3 | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text into Numbers
End of explanation
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
Explanation: Project 2: Creating the Input/Output Data
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs
)# signals into final output layer
final_outputs = self.activation_function(final_inputs)# signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# Output error
        output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1.0 - hidden_outputs) # hidden layer gradients
        # Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += np.dot(hidden_grad * hidden_errors, inputs.T) * self.lr# update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T
)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T
)# signals into final output layer
final_outputs = np.round(self.activation_function(final_inputs))# signals from final output layer
return final_outputs
network = NeuralNetwork(vocab_size, 25, 1, .01)
network.train(layer_0, get_target_for_label(labels[0]))
### Set the hyperparameters here ###
epochs = 100
learning_rate = 0.001
hidden_nodes = 24
output_nodes = 1
network = NeuralNetwork(vocab_size, hidden_nodes, output_nodes, learning_rate)
for e in range(epochs):
for review, label in zip(reviews, labels):
update_input_layer(review)
network.train(layer_0, get_target_for_label(label))
results = []
for i in range(len(reviews)):
update_input_layer(reviews[i])
results.append((network.run(layer_0), get_target_for_label(labels[i])))
results
np.mean([np.equal(x, y) for (x, y) in results])
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation |
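A sketch of the kind of pre_process_data helper the outline above asks for -- building the vocabulary and the word-to-index lookup from the reviews, mirroring what the cells above do by hand (function and variable names are just suggestions):
```python
def pre_process_data(reviews):
    # build the vocabulary from every word seen in the training reviews
    vocab = set()
    for review in reviews:
        for word in review.split(" "):
            vocab.add(word)
    # map each word to a fixed column of the input layer
    word2index = {word: i for i, word in enumerate(vocab)}
    return vocab, word2index

vocab, word2index = pre_process_data(reviews)
```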
14,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attribute data from a csv file and a W from a gal file
Step1: Attribute data from a csv file and an external W object
Step2: Shapefile and mapping results with PySAL Viz | Python Code:
mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
mexico.fieldNames
w = ps.open(ps.examples.get_path('mexico.gal')).read()
w.n
cp.addRook2Layer(ps.examples.get_path('mexico.gal'), mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.fieldNames
mexico.getVars('pcgdp1940')
# mexico example all together
csvfile = ps.examples.get_path('mexico.csv')
galfile = ps.examples.get_path('mexico.gal')
mexico = cp.importCsvData(csvfile)
cp.addRook2Layer(galfile, mexico)
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.region2areas.index(2)
mexico.Wrook[0]
mexico.getVars('State')
regions = np.array(mexico.region2areas)
regions
Counter(regions)
Explanation: Attribute data from a csv file and a W from a gal file
End of explanation
mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
w = ps.open(ps.examples.get_path('mexico.gal')).read()
cp.addW2Layer(w, mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
Explanation: Attribute data from a csv file and an external W object
End of explanation
usf = ps.examples.get_path('us48.shp')
us = cp.loadArcData(usf.split(".")[0])
us.Wqueen
us.fieldNames
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
pci
usy = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), usy)
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, usy, names)
names
usy.fieldNames
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
#mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
us = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), us)
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, us, names)
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
us_alpha = cp.importCsvData(ps.examples.get_path('usjoin.csv'))
alpha_fips = us_alpha.getVars('STATE_FIPS')
alpha_fips
dbf = ps.open(ps.examples.get_path('us48.dbf'))
dbf.header
state_fips = dbf.by_col('STATE_FIPS')
names = dbf.by_col('STATE_NAME')
names
state_fips = map(int, state_fips)
state_fips
# the csv file has the states ordered alphabetically, but this isn't the case for the order in the shapefile so we have to reorder before any choropleths are drawn
alpha_fips = [i[0] for i in alpha_fips.values()]
reorder = [ alpha_fips.index(s) for s in state_fips]
regions = usy.region2areas
regions
from pysal.contrib.viz import mapping as maps
shp = ps.examples.get_path('us48.shp')
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
names = ["Y_%d"%i for i in range(1929, 2010)]
#usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
usy.cluster('arisel', names, 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='All Years')
ps.version
usy.cluster('arisel', names[:40], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1929-68')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1969-2009')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
usy.dataOperation("CONSTANT = 1")
usy.Wrook = usy.Wqueen
usy.cluster('maxpTabu', ['Y_1929', 'Y_1929'], threshold=1000, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
Counter(regions)
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('maxpTabu', ['Y_1929', 'CONSTANT'], threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
regions
Counter(regions)
vars = names
vars.append('CONSTANT')
vars
usy.cluster('maxpTabu', vars, threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929-2009')
Counter(regions)
south = cp.loadArcData(ps.examples.get_path("south.shp"))
south.fieldNames
# uncomment if you have some time ;->
#south.cluster('arisel', ['HR70'], 20, wType='queen', inits=10, dissolve=0)
#regions = south.region2areas
shp = ps.examples.get_path('south.shp')
#maps.plot_choropleth(shp, np.array(regions), 'unique_values')
south.dataOperation("CONSTANT = 1")
south.cluster('maxpTabu', ['HR70', 'CONSTANT'], threshold=70, dissolve=0)
regions = south.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions, 'unique_values', title='maxp HR70 threshold=70')
Counter(regions)
Explanation: Shapefile and mapping results with PySAL Viz
End of explanation |
14,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions
1 Basics
The pipe symbol (|) matches any one of several regular expressions
Step1: 2.3 search
match only matches at the beginning of the string, but a pattern is far more likely to appear somewhere in the middle of the string than right at the start
Step2: The search function returns a match for the first occurrence of the pattern anywhere in the string
Step3: 2.4 Matching more than one string
Step4: 2.5 Matching any single character (.)
The dot cannot match a newline or an empty (zero-length) string
Step5: 2.6 Creating character classes ([ ])
Step6: 2.7 Grouping
2.7.1 Matching e-mail addresses
Step7: 2.7.2 Accessing groups
Step8: 2.8 Matching at the start of a string or at word boundaries
2.8.1 Start or end of a string
Step9: 2.8.2 Word boundaries
Step10: 2.9 The findall function
Step11: 2.10 The sub() and subn() functions
Step12: 2.11 Splitting with split
Step13: 3 Searching vs. matching, and "greedy" matching
Step14: Because the wildcard "." is greedy by default, '.+' matches as many characters as possible, so
Thu Feb 15 17 | Python Code:
import re
m = re.match('foo', 'foo')
if m is not None: m.group()
m
m = re.match('foo', 'bar')
if m is not None: m.group()
re.match('foo', 'foo on the table').group()
# raises AttributeError because there is no match
re.match('bar', 'foo on the table').group()
Explanation: Regular Expressions
1 Basics
The pipe symbol (|) matches any one of several regular expressions:
at | home matches at or home
Matching any single character (.):
t.o matches tao, tzo, and so on
Matching at the start or end of a string or word:
(^) matches the start of a string: ^From matches strings that begin with From
(\$) matches the end of a string: /bin/tsch\$ matches strings that end with /bin/tsch
(\b) matches a word boundary: \bthe matches words that begin with the
(\B) is the opposite of \b
([]) creates a character class: b[aeiu]t matches bat, bet, bit, but
(-) specifies a range: [a-z] matches the characters a through z
(^) inside a class negates it: [^aeiou] matches any non-vowel
*: zero or more occurrences; +: one or more occurrences; ?: zero or one occurrence
\d: matches a digit, \D: the opposite; \w matches any alphanumeric character, \W the opposite; \s matches whitespace, \S the opposite.
(()): performs group matching
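A few of these metacharacters in action (a small illustrative sketch; the patterns and strings are invented for this list, they do not come from the cells below):
```python
import re

re.match('b[aeiu]t', 'bit').group()             # character class -> 'bit'
re.search(r'\bthe\b', 'on the table').group()   # word boundary -> 'the'
re.match(r'\d?[a-z]+', '7abc').group()          # ? and a range -> '7abc'
```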
2 The re module
2.1 Common functions
compile(pattern, flags=0)
Compiles the regular expression and returns a regex object
match(pattern, string, flags=0)
Tries to match the pattern at the start of the string; returns a match object if successful
search(pattern, string, flags=0)
Searches the string for the first occurrence of pattern
findall(pattern, string[,flags]) and finditer(pattern, string[,flags])
Return all occurrences of the pattern in the string, as a list and as an iterator respectively
split(pattern, string, max=0)
Splits the string into a list according to the regular expression
sub(pattern, repl, string, max=0)
Replaces the parts of the string that match pattern with repl
group(num=0)
Returns the entire match (or the subgroup numbered num)
groups()
Returns a tuple containing all the matched subgroups
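compile() is listed above but never exercised in the cells that follow, so here is a minimal hedged sketch of how a pre-compiled pattern would be used (the pattern 'foo' is chosen only to mirror the examples below):
```python
import re

patt = re.compile('foo')        # compile once, reuse many times
patt.match('food').group()      # -> 'foo'
patt.search('seafood').group()  # -> 'foo'
```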
2.2 match
End of explanation
m = re.match('foo','seafood')
if m is not None: m.group()
Explanation: 2.3 search
match only matches from the beginning of the string, but a pattern is far more likely to show up somewhere in the middle of the string than right at the start
End of explanation
re.search('foo', 'seafood').group()
Explanation: The search function returns a match for the first occurrence of the pattern anywhere in the string
End of explanation
bt = 'bat|bet|bit'
re.match(bt,'bat').group()
re.match(bt, 'blt').group()
re.match(bt, 'He bit me!').group()
re.search(bt, 'He bit me!').group()
Explanation: 2.4 Matching more than one string
End of explanation
anyend='.end'
re.match(anyend, 'bend').group()
re.match(anyend, 'end').group()
re.search(anyend, '\nend').group()
Explanation: 2.5 Matching any single character (.)
The dot cannot match a newline or an empty (zero-length) string
End of explanation
pattern = '[cr][23][dp][o2]'
re.match(pattern, 'c3po').group()
re.match(pattern, 'c3do').group()
re.match('r2d2|c3po', 'c2do').group()
re.match('r2d2|c3po', 'r2d2').group()
Explanation: 2.6 Creating character classes ([ ])
End of explanation
patt = '\w+@(\w+\.)?\w+\.com'
re.match(patt, '[email protected]').group()
re.match(patt, '[email protected]').group()
# allow multiple subdomains
patt = '\w+@(\w+\.)*\w+\.com'
re.match(patt, '[email protected]').group()
Explanation: 2.7 Grouping
2.7.1 Matching e-mail addresses
End of explanation
patt = '(\w\w\w)-(\d\d\d)'
m = re.match(patt, 'abc-123')
m.group()
m.group(1)
m.group(2)
m.groups()
m = re.match('ab', 'ab')
m.group()
m.groups()
m = re.match('(ab)','ab')
m.groups()
m.group(1)
m = re.match('(a(b))', 'ab')
m.group()
m.group(1)
m.group(2)
m.groups()
Explanation: 2.7.2 Accessing groups
End of explanation
re.match('^The', 'The end.').group()
# raises AttributeError because there is no match
re.match('^The', 'end. The').group()
Explanation: 2.8 Matching at the start of a string or at word boundaries
2.8.1 Start or end of a string
End of explanation
re.search(r'\bthe', 'bite the dog').group()
re.search(r'\bthe', 'bitethe dog').group()
re.search(r'\Bthe', 'bitthe dog').group()
Explanation: 2.8.2 Word boundaries
End of explanation
re.findall('car', 'car')
re.findall('car', 'scary')
re.findall('car', 'carry, the barcardi to the car')
Explanation: 2.9 The findall function
End of explanation
(re.sub('X', 'Mr. Smith', 'attn: X\n\nDear X, \n'))
print(re.subn('X', 'Mr. Smith', 'attn: X\n\nDear X, \n'))
re.sub('[ae]', 'X', 'abcdedf')
Explanation: 2.10 The sub() and subn() functions
End of explanation
re.split(':','str1:str2:str3')
from os import popen
from re import split
f = popen('who', 'r')
for eachLine in f.readlines():
print(split('\s\s+|\t', eachLine.strip()))
f.close()
Explanation: 2.11 Splitting with split
End of explanation
string = 'Thu Feb 15 17:46:04 2007::[email protected]::1171590364-6-8'
patt = '.+\d+-\d+-\d+'
re.match(patt, string).group()
patt = '.+(\d+-\d+-\d+)'
re.match(patt, string).group(1)
Explanation: 3 Searching vs. matching, and "greedy" matching
End of explanation
patt = '.+?(\d+-\d+-\d+)'
re.match(patt, string).group(1)
Explanation: Because the wildcard '.' is greedy by default, '.+' matches as many characters as possible, so everything up through
Thu Feb 15 17:46:04 2007::[email protected]::117159036
is consumed by '.+' and the group only captures "4-6-8". The non-greedy fix is to add '?', as shown in the cell above.
End of explanation |
14,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Linear Regression
In this example, we illustrate how to solve a linear regression problem.
Suppose we have training data, can we fit a neural network on it? The trained neural network (an MLP) is then validated on the test split.
Let's load the python packages first.
Step1: Generate Artificial Data
Assuming we have 1D input and 1D output, let us generate the train data assuming the underlying function is a polynomial
Step2: Then, let us generate the test data and plot the train and test data together. Note that the test data is beyond the [-1,2] range of x_train. This will enable us to verify if our model can generalize (ie make prediction from never seen before data)
Step3: 3-layer MLP Model
Then, let us build a 3-layer 64-128-1 MLP.
Step4: Qualitative Evaluation
Let us plot the prediction (green dots) vs test dataset (red stars).
We can see that the prediction deviates significantly from the true values when the input is beyond the range of the train data.
Step5: Examine the Train History
The history variable stores information recorded during training, such as the value of the loss function. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# numpy package
import numpy as np
# for plotting
import matplotlib.pyplot as plt
# keras modules
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
Explanation: Simple Linear Regression
In this example, we illustrate how to solve a linear regression problem.
Suppose we have training data, can we fit a neural network on it? The trained neural network (an MLP) is then validated on the test split.
Let's load the python packages first.
End of explanation
# generate train x data
x_train = np.random.uniform(low=-1.0, high=2.0, size=[200,1])
# generate train y data
y_train = x_train*x_train - 2*x_train + 1
plt.figure()
plt.plot(x_train, y_train, 'bo')
Explanation: Generate Artificial Data
Assuming we have 1D input and 1D output, let us generate the train data assuming the underlying function is a polynomial:
\begin{align}
y= f(x) = x^2 - 2x + 1
\end{align}
End of explanation
# generate test x data
x_test = np.random.uniform(low=-2.0, high=3.0, size=[20,1])
# generate test y data
y_test = x_test*x_test - 2*x_test + 1
plt.plot(x_train, y_train, 'bo', x_test, y_test, 'r*')
plt.show()
Explanation: Then, let us generate the test data and plot the train and test data together. Note that the test data is beyond the [-1,2] range of x_train. This will enable us to verify if our model can generalize (ie make prediction from never seen before data)
End of explanation
# build 3-layer MLP network
model = Sequential(name='linear_regressor')
# 1st layer (input layer) has 64 units (perceptron), input is 1-dim
model.add(Dense(units=64, input_dim=1, activation='relu', name='input_layer'))
# 2nd layer (hidden layer) has 128 units, output is 128-dim
model.add(Dense(units=128, activation='relu', name='hidden_layer'))
# 3rd layer (output layer) has 1 unit, output is 1-dim
model.add(Dense(units=1, name='output_layer'))
# print summary to double check the network
model.summary()
# indicate the loss function and use stochastic gradient descent
# (sgd) as optimizer
model.compile(loss='mse', optimizer='sgd')
# feed the network with complete dataset (1 epoch) 100 times
# batch size of sgd is 4
history = model.fit(x_train, y_train, epochs=100, batch_size=4, validation_data=(x_test, y_test))
# simple validation by predicting the output based on x
y_pred = model.predict(x_test, verbose=0)
Explanation: 3-layer MLP Model
Then, let us build a 3-layer 64-128-1 MLP.
End of explanation
plt.plot(x_test, y_test, 'r*', x_test, y_pred, 'go', )
plt.show()
Explanation: Qualitative Evaluation
Let us plot the prediction (green dots) vs test dataset (red stars).
We can see that the prediction deviates significantly from the true values when the input is beyond the range of the train data.
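One optional way to put numbers on this observation (not part of the original notebook; it only reuses x_test, y_test and y_pred defined above) is to compare the mean squared error on test points inside the training range [-1, 2] with the error on points outside it:
```python
inside = (x_test[:, 0] >= -1.0) & (x_test[:, 0] <= 2.0)   # mask for points within the training range
mse_in = np.mean((y_test[inside] - y_pred[inside]) ** 2)
mse_out = np.mean((y_test[~inside] - y_pred[~inside]) ** 2)
print("MSE inside training range :", mse_in)
print("MSE outside training range:", mse_out)
```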
End of explanation
print(history.history.keys())
# Plot history: MSE
plt.plot(history.history['loss'], label='Train loss')
plt.plot(history.history['val_loss'], label='Test loss')
plt.title('MSE')
plt.ylabel('MSE value')
plt.xlabel('Epochs')
plt.legend(loc="upper right")
plt.show()
Explanation: Examine the Train History
The history variable stores information recorded during training, such as the value of the loss function.
End of explanation |
14,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read and clean data using Python and Pandas
Step1: Read the files and load the datasets
Pre-requisite
Step2: act table has 6 observations of 2 variables
Step3: features table has 561 observations of 2 variables
Step4: subject_test table contains 2947 observations of 1 variable
Step5: The file X_test requires to use as a separator a regular expression, because sometimes more blanks are used
Step6: The X test table has 2947 observations of 561 variables
The file y_test contains the outcome activity label for each observation
Step7: It's also possible to add a column name after creation
Step8: Now let's move to the train folder
Step9: As you see, the train set has 7352 observations, spread in 3 files
Merge the training and the test datasets
Step10: Now the allSub data frame contains 10299 = 2947+7352 rows.
Note that ignore_index=True is necessary to have an index starting from 0 and ending at 10298, without restarting
after the first 7352 observations.
You can see it using the tail() method
Step11: Now we do the same for the X and Y data sets
Step12: For the Y dataset I used the pandas method append() just to show an alternative merge possibility
Appropriately labels the data set with descriptive variable names.
Uses descriptive activity names to name the activities in the data set
Step13: Merge Subjects and X data frames by columns
Step14: Now the new data frame has 562 columns, and the last column is the Subject ID.
same for allY
Step15: Now add allY to the new all dataframe
Step16: Now all has 1 column more
Step17: Done, with the first dataframe. It can be written out to a file with to_csv
Step18: But instead, we create from the data set a second, independent tidy data set with the average of each variable for each activity and each subject.
Step19: tidy has 900 rows and 563 columns | Python Code:
import pandas as pd
Explanation: Read and clean data using Python and Pandas
End of explanation
cat UCI\ HAR\ Dataset/activity_labels.txt
act = pd.read_table('UCI HAR Dataset/activity_labels.txt', header=None, sep=' ', names=('ID','Activity'))
act
type(act)
act.columns
Explanation: Read the files and load the datasets
Pre-requisite: the dataset archive has been downloaded and un-compressed in the same directory
End of explanation
features = pd.read_table('UCI HAR Dataset/features.txt', sep=' ', header=None, names=('ID','Sensor'))
features.head()
features.info()
Explanation: act table has 6 observations of 2 variables
End of explanation
testSub = pd.read_table('UCI HAR Dataset/test/subject_test.txt', header=None, names=['SubjectID'])
testSub.shape
testSub.head()
Explanation: features table has 561 observations of 2 variables: ID and sensor's name
End of explanation
testX = pd.read_table('UCI HAR Dataset/test/X_test.txt', sep='\s+', header=None)
Explanation: subject_test table contains 2947 observations of 1 variable: the subject ID
End of explanation
testX.head()
testX.shape
Explanation: The file X_test requires to use as a separator a regular expression, because sometimes more blanks are used
End of explanation
testY = pd.read_table('UCI HAR Dataset/test/y_test.txt', sep=' ', header=None)
testY.shape
testY.head()
testY.tail()
Explanation: The X test table has 2947 observations of 561 variables
The file y_test contains the outcome activity label for each observation
End of explanation
testY.columns = ['ActivityID']
testY.head()
Explanation: It's also possible to add a column name after creation:
End of explanation
trainSub = pd.read_table('UCI HAR Dataset/train/subject_train.txt', header=None, names=['SubjectID'])
trainSub.shape
trainX = pd.read_table('UCI HAR Dataset/train/X_train.txt', sep='\s+', header=None)
trainX.shape
trainY = pd.read_table('UCI HAR Dataset/train/y_train.txt', sep=' ', header=None, names=['ActivityID'])
trainY.shape
Explanation: Now let's move to the train folder
End of explanation
allSub = pd.concat([trainSub, testSub], ignore_index=True)
allSub.shape
Explanation: As you see, the train set has 7352 observations, spread in 3 files
Merge the training and the test datasets
End of explanation
allSub.tail()
Explanation: Now the allSub data frame contains 10299 = 2947+7352 rows.
Note that ignore_index=True is necessary to have an index starting from 0 and ending at 10298, without restarting
after the first 7352 observations.
You can see it using the tail() method:
End of explanation
allX = pd.concat([trainX, testX], ignore_index = True)
allX.shape
allY = trainY.append(testY, ignore_index=True)
allY.shape
allY.head()
Explanation: Now we do the same for the X and Y data sets
End of explanation
allX.head()
sensorNames = features['Sensor']
allX.columns = sensorNames
allX.head()
allSub.head()
Explanation: For the Y dataset I used the pandas method append() just to show an alternative merge possibility
Appropriately labels the data set with descriptive variable names.
Uses descriptive activity names to name the activities in the data set
End of explanation
all = pd.concat([allX, allSub], axis=1)
all.shape
all.head()
Explanation: Merge Subjects and X data frames by columns
End of explanation
allY.head()
act
allY.tail()
for i in act['ID']:
activity = act[act['ID'] == i]['Activity'] # get activity cell given ID
allY = allY.replace({i: activity.iloc[0]}) # replace this ID with activity string
allY.columns = ['Activity']
allY.head()
allY.tail()
Explanation: Now the new data frame has 562 columns, and the last column is the Subject ID.
same for allY: add it to main data frame as extra column but first map activity label to activity code
Map activity label to code
End of explanation
allY.shape
all = pd.concat([all, allY], axis=1)
all.shape
Explanation: Now add allY to the new all dataframe
End of explanation
all.head()
Explanation: Now all has 1 column more
End of explanation
all.to_csv("tidyHARdata.csv")
Explanation: Done, with the first dataframe. It can be written out to a file with to_csv
End of explanation
grouped = all.groupby (['SubjectID', 'Activity'])
Explanation: But instead, we create from the data set a second, independent tidy data set with the average of each variable for each activity and each subject.
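A grouping that literally matches that description (the mean of every variable for each subject/activity pair) could be sketched as below; the frame and column names are the ones used above, while the output file name is just an assumption:
```python
tidy_means = all.groupby(['SubjectID', 'Activity']).mean()   # roughly 30 subjects x 6 activities rows
tidy_means.to_csv("tidyMeansHARdata.csv")
```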
End of explanation
import numpy as np
tidier = all.groupby (['Activity']).aggregate(np.mean)
tidier = tidier.drop('SubjectID', axis=1)
tidier.head()
tidier.to_csv("tidierHARdata.csv")
Explanation: tidy has 900 rows and 563 columns
End of explanation |
14,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part-of-Speech Tagging with NLTK
This notebook is a quick demonstration of verta's run.log_setup_script() feature.
We'll create a simple and lightweight text tokenizer and part-of-speech tagger using NLTK,
which will require not only installing the nltk package itself,
but also downloading pre-trained text processing models within Python code.
Prepare Verta
Step1: Prepare NLTK
This Notebook was tested with nltk v3.4.5, though many versions should work just fine.
Step2: NLTK requires the separate installation of a tokenizer and part-of-speech tagger before these functionalities can be used.
Step3: Log Model for Deployment
Create Model
Our model will be a thin wrapper around nltk,
returning the constituent tokens and their part-of-speech tags for each input sentence.
Step4: Create Deployment Artifacts
As always, we'll create a couple of descriptive artifacts to let the Verta platform know how to handle our model.
Step6: Create Setup Script
As we did in the beginning of this Notebook,
the deployment needs these NLTK resources downloaded and installed before it can run the model,
so we'll define a short setup script to send over and execute at the beginning of a model deployment.
Step7: Make Live Predictions
Now we can visit the Web App, deploy the model, and make successful predictions! | Python Code:
import six
from verta import Client
from verta.utils import ModelAPI
HOST = "app.verta.ai"
PROJECT_NAME = "Part-of-Speech Tagging"
EXPERIMENT_NAME = "NLTK"
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
Explanation: Part-of-Speech Tagging with NLTK
This notebook is a quick demonstration of verta's run.log_setup_script() feature.
We'll create a simple and lightweight text tokenizer and part-of-speech tagger using NLTK,
which will require not only installing the nltk package itself,
but also downloading pre-trained text processing models within Python code.
Prepare Verta
End of explanation
import nltk
nltk.__version__
Explanation: Prepare NLTK
This Notebook was tested with nltk v3.4.5, though many versions should work just fine.
End of explanation
# for tokenizing
nltk.download('punkt')
# for part-of-speech tagging
nltk.download('averaged_perceptron_tagger')
Explanation: NLTK requires the separate installation of a tokenizer and part-of-speech tagger before these functionalities can be used.
End of explanation
class TextClassifier:
def __init__(self, nltk):
self.nltk = nltk
def predict(self, data):
predictions = []
for text in data:
tokens = self.nltk.word_tokenize(text)
predictions.append({
'tokens': tokens,
'parts_of_speech': [list(pair) for pair in self.nltk.pos_tag(tokens)],
})
return predictions
model = TextClassifier(nltk)
data = [
"I am a teapot.",
"Just kidding I'm a bug?",
]
model.predict(data)
Explanation: Log Model for Deployment
Create Model
Our model will be a thin wrapper around nltk,
returning the constituent tokens and their part-of-speech tags for each input sentence.
End of explanation
model_api = ModelAPI(data, model.predict(data))
run.log_model(model, model_api=model_api)
run.log_requirements(["nltk"])
Explanation: Create Deployment Artifacts
As always, we'll create a couple of descriptive artifacts to let the Verta platform know how to handle our model.
End of explanation
setup =
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
run.log_setup_script(setup)
Explanation: Create Setup Script
As we did in the beginning of this Notebook,
the deployment needs these NLTK resources downloaded and installed before it can run the model,
so we'll define a short setup script to send over and execute at the beginning of a model deployment.
End of explanation
run
data = [
"Welcome to Verta!",
]
from verta.deployment import DeployedModel
DeployedModel(HOST, run.id).predict(data)
Explanation: Make Live Predictions
Now we can visit the Web App, deploy the model, and make successful predictions!
End of explanation |
14,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Neural Networks
The network we built in the previous part isn't so smart, it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a loss function (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called gradient descent. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through backpropagation which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
Note
Step1: In my experience it's more convenient to build the model with a log-softmax output using nn.LogSoftmax or F.log_softmax (documentation). Then you can get the actual probabilities by taking the exponential torch.exp(output). With a log-softmax output, you want to use the negative log likelihood loss, nn.NLLLoss (documentation).
Exercise
Step2: Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, autograd, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set requires_grad = True on a tensor. You can do this at creation with the requires_grad keyword, or at any time with x.requires_grad_(True).
You can turn off gradients for a block of code with the torch.no_grad() context
Step3: Below we can see the operation that created y, a power operation PowBackward0.
Step4: The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor y to a scalar value, the mean.
Step5: You can check the gradients for x and y but they are empty currently.
Step6: To calculate the gradients, you need to run the .backward method on a Variable, z for example. This will calculate the gradient for z with respect to x
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
Step7: These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.
Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with requires_grad = True. This means that when we calculate the loss and call loss.backward(), the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
Step8: Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's optim package. For example we can use stochastic gradient descent with optim.SGD. You can see how to define an optimizer below.
Step9: Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch
Step10: Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature, one pass through the entire dataset is called an epoch. So here we're going to loop through trainloader to get our training batches. For each batch, we'll be doing a training pass where we calculate the loss, do a backwards pass, and update the weights.
Exercise
Step11: With the network trained, we can check out its predictions. | Python Code:
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
Explanation: Training Neural Networks
The network we built in the previous part isn't so smart, it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a loss function (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
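As a tiny worked example (the numbers are invented purely for illustration), with $n = 2$, true labels $y = (1, 0)$ and predictions $\hat{y} = (0.6, 0.2)$:
$$
\ell = \frac{1}{2\cdot 2}\left[(1-0.6)^2 + (0-0.2)^2\right] = \frac{0.16 + 0.04}{4} = 0.05
$$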
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called gradient descent. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through backpropagation which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
Note: I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
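A minimal sketch of what one such update would look like by hand, assuming a backward pass has already populated the .grad attributes (PyTorch's optimizers, shown later, do this for us):
```python
learning_rate = 0.01
with torch.no_grad():                    # don't track the update itself
    for p in model.parameters():
        p -= learning_rate * p.grad      # W' = W - alpha * dL/dW
```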
Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the nn module, PyTorch provides losses such as the cross-entropy loss (nn.CrossEntropyLoss). You'll usually see the loss assigned to criterion. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at the documentation for nn.CrossEntropyLoss,
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the logits or scores. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one (read more here). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
End of explanation
## Solution
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
# Define the loss
criterion = nn.NLLLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our log-probabilities
logps = model(images)
# Calculate the loss with the logps and the labels
loss = criterion(logps, labels)
print(loss)
Explanation: In my experience it's more convenient to build the model with a log-softmax output using nn.LogSoftmax or F.log_softmax (documentation). Then you can get the actual probabilities by taking the exponential torch.exp(output). With a log-softmax output, you want to use the negative log likelihood loss, nn.NLLLoss (documentation).
Exercise: Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss.
End of explanation
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
Explanation: Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, autograd, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set requires_grad = True on a tensor. You can do this at creation with the requires_grad keyword, or at any time with x.requires_grad_(True).
You can turn off gradients for a block of code with the torch.no_grad() context:
```python
x = torch.zeros(1, requires_grad=True)
with torch.no_grad():
... y = x * 2
y.requires_grad
False
```
Also, you can turn on or off gradients altogether with torch.set_grad_enabled(True|False).
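For instance (a short sketch in the same spirit as the snippet above):
```python
torch.set_grad_enabled(False)   # gradients off globally
y = x * 2
y.requires_grad                 # False
torch.set_grad_enabled(True)    # back on
```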
The gradients are computed with respect to some variable z with z.backward(). This does a backward pass through the operations that created z.
End of explanation
## grad_fn shows the function that generated this variable
print(y.grad_fn)
Explanation: Below we can see the operation that created y, a power operation PowBackward0.
End of explanation
z = y.mean()
print(z)
Explanation: The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor y to a scalar value, the mean.
End of explanation
print(x.grad)
Explanation: You can check the gradients for x and y but they are empty currently.
End of explanation
z.backward()
print(x.grad)
print(x/2)
Explanation: To calculate the gradients, you need to run the .backward method on a Variable, z for example. This will calculate the gradient for z with respect to x
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
End of explanation
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logps = model(images)
loss = criterion(logps, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
Explanation: These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.
Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with requires_grad = True. This means that when we calculate the loss and call loss.backward(), the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
End of explanation
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
Explanation: Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's optim package. For example we can use stochastic gradient descent with optim.SGD. You can see how to define an optimizer below.
End of explanation
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
Explanation: Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
Make a forward pass through the network
Use the network output to calculate the loss
Perform a backward pass through the network with loss.backward() to calculate the gradients
Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code optimizer.zero_grad(). When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
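A quick way to see that accumulation, as a small sketch reusing the objects defined in the cell above:
```python
optimizer.zero_grad()                            # start from a clean slate
criterion(model(images), labels).backward()
first = model[0].weight.grad.clone()
criterion(model(images), labels).backward()      # second pass, no zero_grad() in between
print(torch.allclose(model[0].weight.grad, 2 * first))   # True: the two gradients were summed
```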
End of explanation
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# TODO: Training pass
optimizer.zero_grad()
output = model(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
Explanation: Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature, one pass through the entire dataset is called an epoch. So here we're going to loop through trainloader to get our training batches. For each batch, we'll be doing a training pass where we calculate the loss, do a backwards pass, and update the weights.
Exercise: Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
End of explanation
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
Explanation: With the network trained, we can check out its predictions.
End of explanation |
14,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe with missing values
Step2: Drop missing observations
Step3: Drop rows where all cells in that row are NA
Step4: Create a new column full of missing values
Step5: Drop columns if they only contain missing values
Step6: Drop rows that contain less than five observations
This is really mostly useful for time series
Step7: Fill in missing data with zeros
Step8: Fill in missing in preTestScore with the mean value of preTestScore
inplace=True means that the changes are saved to the df right away
Step9: Fill in missing in postTestScore with each sex's mean value of postTestScore
Step10: Select some rows but ignore the missing data points | Python Code:
import pandas as pd
import numpy as np
Explanation: Title: Missing Data In Pandas Dataframes
Slug: pandas_missing_data
Summary: Missing Data In Pandas Dataframes
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
import modules
End of explanation
raw_data = {'first_name': ['Jason', np.nan, 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', np.nan, 'Ali', 'Milner', 'Cooze'],
'age': [42, np.nan, 36, 24, 73],
'sex': ['m', np.nan, 'f', 'm', 'f'],
'preTestScore': [4, np.nan, np.nan, 2, 3],
'postTestScore': [25, np.nan, np.nan, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'sex', 'preTestScore', 'postTestScore'])
df
Explanation: Create dataframe with missing values
End of explanation
df_no_missing = df.dropna()
df_no_missing
Explanation: Drop missing observations
End of explanation
df_cleaned = df.dropna(how='all')
df_cleaned
Explanation: Drop rows where all cells in that row are NA
End of explanation
df['location'] = np.nan
df
Explanation: Create a new column full of missing values
End of explanation
df.dropna(axis=1, how='all')
Explanation: Drop columns if they only contain missing values
End of explanation
df.dropna(thresh=5)
Explanation: Drop rows that contain less than five observations
This is really mostly useful for time series
End of explanation
df.fillna(0)
Explanation: Fill in missing data with zeros
End of explanation
df["preTestScore"].fillna(df["preTestScore"].mean(), inplace=True)
df
Explanation: Fill in missing in preTestScore with the mean value of preTestScore
inplace=True means that the changes are saved to the df right away
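If you prefer to avoid inplace=True, the same result can be written as a plain assignment (shown only as an alternative, it is not used in this notebook):
```python
df["preTestScore"] = df["preTestScore"].fillna(df["preTestScore"].mean())
```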
End of explanation
df["postTestScore"].fillna(df.groupby("sex")["postTestScore"].transform("mean"), inplace=True)
df
Explanation: Fill in missing in postTestScore with each sex's mean value of postTestScore
End of explanation
# Select the rows of df where age is not NaN and sex is not NaN
df[df['age'].notnull() & df['sex'].notnull()]
Explanation: Select some rows but ignore the missing data points
End of explanation |
14,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6</div>
Testing The Abstract Base Class - Earlier Version of the Code
This notebook was created because it is easier to test out the core logic separately from the logic that makes web API calls and extracts data from Google Maps. The intent was to test and debug as much of the logic (and/or similar logic) as might be needed ahead of bringing it all together in the final subclass.
Note that code and test results may differ from the final implementation. Some issues with the code were corrected, enhanced, and improved during testing of the Google Maps interacting subclass (a different notebook).
Enrich or Change Larger Dataframe Section by Section
The purpose of the <font color=blue><b>DFBuilder</b></font> object is to allow scanning of a larger dataframe, a small number of rows at a time. It then allows code to be customized to make changes and build up a new dataframe from the results. The operation is in
a standard loop by design. The original use case was to add a field with data accessed from an API off the web, and time delays were necessary (as well as other logic) to prevent (or at least reduce the risk of) server timeouts during operation.
Scanning through the source a few lines at a time, performing the operation and adding back out to the target DF
creates a "caching effect" where data is saved along the way so in the event of a server time-out all is not lost. The resulting DF can then be saved out to a file, code modified, and a re-run of the code can pick up the process where it left off instead of having to start over again.
These tests use a subclass that will never be used in the real world and which does not communicate with the web. The goal of these tests was to shake out problems with the other logic ahead of testing with web interaction.
Step1: Libraries Needed
Import statements included in this notebook are for the main abstract object and a test object.
Step2: Test Data
Input Data Set up Here
Step3: Code Testing
The abstract class which follows is intended to be the "work horse" of this code. The intent is that it gets the developer to the point where all they need to think about is what their final subclass will do to enrich the data. The parent class sets up a loop that can extract from a larger input DF, a small number of rows to be operated on in a temp DF and then be added to an outputDF. In the event of something interrupting the process (a common event when dealing with web APIs), modified rows created before the incident are waiting in output DF and can be extracted. Then code can be restarted or continued to allow building up the rest of the Dataframe without losing previous work or having to go all the way back to the beginning.
This test notebook sets up a subclass that will never be used in the real world. There are more efficient ways to modify a DF with the example selected for this test. The test's intent is simply to show that most of the core logic works before we test a subclass that is slower and more involved because it actually makes calls to a web API.
Step4: Test Sub Class
Before creating, testing, and debugging a subclass that uses a Google maps API to enrich the data, it is desirable to start simpler. This test object makes use of all of the same logic except for the API calls out to the web to get new data. For this test, the "delay" will be set to 0 since it is not needed. This code just shows that we can loop through an original DF, copy rows 5 at a time to a temporary DF, add columns to them using logic that looks at existing rows, and output to our output DF.
Stopping the code in the middle may allow us to test what happens if the code halts, showing what is stored in the dataframe in the object when this happens.
Step5: Test Loop Math Issue
This section was created to debug an issue with the loop. Successful runs of these tests show this problem was corrected.
Step6: Simple Test Using Defaults
Step7: Add 5 Rows - Original Data Source
Using buildOutDF() to add to outDF inside the object. This test will repeat the first 5 rows on the end of the DF
Step8: Repeat Testing With Random Sample of Data
Test done again on a smaller random sample of the original data created as a deep copy. If the test is run on the whole DF, it will complete in 1/10th the time. Testing, however, revealed some odd quirks in just spot testing the coding features.
Step9: Simulated Interrupt
In the final object, it will be server timeouts from the web that may result in coding failing to complete. The closest we can come to simulating this without introducing web API content is to set a time delay and interrupt the code run in the middle (manually) from within Jupyter. That test as well as some other logic checks follows here.
Step10: Attempt To Replicate Earlier Problem
This problem was originally created in another Notebook without all the tests before it. Strangely, removing some of the tests that preceded the one that was expected to work caused it to fail in an initial run of this notebook. Code has since changed and these tests now show that the content works as expected (problem solved).
Step11: Documentation Tests | Python Code:
who
Explanation: <div align="right">Python 3.6</div>
Testing The Abstract Base Class - Earlier Version of the Code
This notebook was created because it is easier to test out the core logic separately from the logic that makes web API calls and extracts data from Google Maps. The intent was to test and debug as much of the logic (and/or similar logic) as might be needed ahead of bringing it all together in the final subclass.
Note that code and test results may differ from the final implementation. Some issues with the code were corrected, enhanced, and improved during testing of the Google Maps interacting subclass (a different notebook).
Enrich or Change Larger Dataframe Section by Section
The purpose of the <font color=blue><b>DFBuilder</b></font> object is to allow scanning of a larger dataframe, a small number of rows at a time. It then allows code to be customized to make changes and build up a new dataframe from the results. The operation is in
a standard loop by design. The original use case was to add a field with data accessed from an API off the web, and time delays were necessary (as well as other logic) to prevent (or at least reduce the risk of) server timeouts during operation.
Scanning through the source a few lines at a time, performing the operation and adding back out to the target DF
creates a "caching effect" where data is saved along the way so in the event of a server time-out all is not lost. The resulting DF can then be saved out to a file, code modified, and a re-run of the code can pick up the process where it left off instead of having to start over again.
These tests use a subclass that will never be used in the real world and which does not communicate with the web. The goal of these tests was to shake out problems with the other logic ahead of testing with web interaction.
End of explanation
import pandas as pd
import time
## this entire cell may not be needed for this test but will be needed for the next test notebook of final objects
import os
## for larger data and/or make many requests in one day - get Google API key and use these lines:
# os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_Key"
## for better security (PROD environments) - install key to server and use just this line to load it:
# os.environ.get('GOOGLE_API_KEY')
# set up geocode
from geopy.geocoders import Nominatim
geolocator = Nominatim()
from geopy.exc import GeocoderTimedOut
Explanation: Libraries Needed
Import statements included in this notebook are for the main abstract object and a test object.
End of explanation
## Test code on a reasonably small DF
tst_lat_lon_df = pd.read_csv("testset_unique_lat_and_lon_vals.csv", index_col=0)
tst_lat_lon_df.describe()
tst_lat_lon_df.tail()
## Create smaller random sample from above DF for further testing
tst_lat_lon_df_sample = tst_lat_lon_df.sample(frac=0.1).copy(deep=True)
# frac=0.1 for 10% or use n=100 to get 100 records
# this variant seemed to create trouble with indexing of the DF in buildOutDF():
# tst_lat_lon_df.copy(deep=True).sample(frac=0.1)
# also: options on reset_index given in next cell were needed as part of the fix
len(tst_lat_lon_df_sample)
tst_lat_lon_df_sample.reset_index(drop=True, inplace=True)
tst_lat_lon_df_sample.iloc[[24,25,67]] ## attempt to fix index and show 3 rows that will be manipulated for testing
# sample: sub_df.iloc[0]['A']
# creating some missing values for testing of error handling in the code
# note: tried tst_lat_lon_df_sample.iloc[67]['lat'] = "" but it seems the pandas dataframe "protects itself"
# the change failed to occur unless setting the numeric field to None
# see notes on roundValue() function later in this document for similar pandas related behaviors
tst_lat_lon_df_sample.iloc[67]['lat'] = None
tst_lat_lon_df_sample.iloc[67]['lon'] = None
tst_lat_lon_df_sample.iloc[24]['lat'] = None
tst_lat_lon_df_sample.iloc[25]['lon'] = None
tst_lat_lon_df_sample.iloc[[24,25,67]]
Explanation: Test Data
Input Data Set up Here
End of explanation
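If the chained iloc assignments above ever fail to stick, the .loc indexer on the reset integer index is the more dependable way to blank out values for this kind of test. A small sketch (commented out so the sample is not modified twice):
## Alternative sketch: .loc avoids the chained-indexing pitfall noted above
# import numpy as np
# tst_lat_lon_df_sample.loc[[24, 67], 'lat'] = np.nan
# tst_lat_lon_df_sample.loc[[25, 67], 'lon'] = np.nan
# tst_lat_lon_df_sample.loc[[24, 25, 67]]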
who
from abc import ABCMeta, abstractmethod
import pandas as pd
class DFBuilder(object, metaclass=ABCMeta): # sets up abstract class
def __init__(self,endRw,time_delay): # abstract classes can be subclassed
self.endRow=endRw # but cannot be instantiated
self.delay=time_delay
self.tmpDF=pd.DataFrame() # temp DF will be endRow rows in length
self.outDF=pd.DataFrame() # final DF build in sets of endRow rows so all is not lost in a failure
self.lastIndex = None
# self.start=0
def __str__(self):
return ("Global Settings for this object: \n" +
"endRow: " + str(self.endRow) + "\n" +
"delay: " + str(self.delay) + "\n" +
"Length of outDF: " + str(len(self.outDF)) + "\n" +
"nextIndex: " + str(self.lastIndex)) # if continuing with last added table - index of next rec.
    @abstractmethod                    # abstract method definition in Python
    def _modifyTempDF_(self): pass     # subclasses override this; it operates on tmpDF inside the loop
def buildOutDF(self, inputDF):
        '''Scans inputDF in chunks of self.endRow rows (default of 5 in the test subclass). For each chunk it calls
        _modifyTempDF_() to make changes to that subset of rows and appends the small tempDF onto outDF. When the
        subclass uses a web API, self.delay tells it how many seconds to sleep on each iteration of the loop. All
        parameters are set during initialization of the object. Should this function fail in the middle, outDF will
        hold all work completed up to the failure; it can be saved out to a DF or csv, and the function can then be
        run again on the remaining subset of the data (the records not yet processed before the failure).'''
lenDF = len(inputDF)
print("Processing inputDF of length: ", lenDF)
endIndx = 0
i = 0
while i < lenDF:
# print("i: ", i)
endIndx = i + self.endRow
if endIndx > lenDF:
endIndx = lenDF
# print("Range to use: ", i, ":", endIndx)
self.tmpDF = inputDF[i:endIndx].copy(deep=True)
self._modifyTempDF_()
time.sleep(self.delay)
            self.outDF = self.outDF.append(self.tmpDF)   # note: pandas 2.x removes DataFrame.append; pd.concat([self.outDF, self.tmpDF]) is the modern equivalent
self.lastIndex = endIndx
i = endIndx
# print("i at end of loop: ", i)
self.reindex_OutDF()
def reindex_OutDF(self):
self.outDF.reset_index(drop=True, inplace=True)
Explanation: Code Testing
The abstract class which follows is intended to be the "work horse" of this code. The intent is to get the developer to the point where all they need to think about is what their final subclass will do to enrich the data. The parent class sets up a loop that extracts a small number of rows from a larger input DF, operates on them in a temp DF, and then adds them to an output DF. In the event of something interrupting the process (a common event when dealing with web APIs), the modified rows created before the incident are waiting in the output DF and can be extracted. The code can then be restarted or continued to build up the rest of the dataframe without losing previous work or having to go all the way back to the beginning.
This test notebook sets up a subclass that will never be used in the real world. There are more efficient ways to modify a DF than the example selected for this test. The test's intent is simply to show that most of the core logic works before we test a subclass that is slower and more involved because it actually makes calls to a web API.
End of explanation
class TstModification_DFBuilder(DFBuilder):
'''Test of ability to scan a dataframe x rows at a time and add data columns to it.
There are more efficient ways to round cols in a DF; this object is a test of base logic from the abstract class
ahead of creating a more complex subclass that interacts with the web during the loop. It builds a copy of the
DF a small number of rows at a time and creates some new fields as it does so. Input DF must have "lat" and
"lon" cols. lat=Latitude / lon = Longitude. Defaults set delay to 0 seconds and rows processed at a time to
5 for this test.'''
def __init__(self, endRw=5,time_delay=0):
super().__init__(endRw,time_delay)
def roundValue(self, value, dec_places=4, rtn_null=False):
'''Takes arguments: value, dec_places. Rounds value to dec_places specified (if not specified, default=4.)
rtn_null defaults to False. If True, error handling should result in an empty string being returned.
If false __ErrType__ should be returned to help with debugging code and data by distinguishing why there is
no rounded answer returned. Testing shows that while using round() throws errors if input is not a number,
applying it to a dataframe does not. Try-except code left in for future research but does not seem to ever
get triggered as of this writing.'''
try:
rtnVal = round(value, dec_places)
except TypeError as terr:
print(type(terr))
print(terr)
rtnVal = "__NAN__"
except Exception as eerr:
print(type(eerr))
print(eerr)
rtnVal = "__ERR__"
finally:
if rtn_null==True:
if isinstance(rtnVal, str):
return ""
elif rtnVal is None:
return ""
else:
return rtnVal
else:
return rtnVal
def _modifyTempDF_(self, dec_places=4, rtn_null=False):
'''Create rounded lat and lon columns adding them to tempDF. Defaults round to 4 places and return error
strings in the column if the input value is not a number and cannot be rounded. Note: error handling for
roundValue() may never come into play due to interaction of apply/lambda/roundValue with DataFrames. Attempts
to test this on a dataframe resulted in a dataframe with NaNs in it instead of error text.'''
self.tmpDF["lat_rnd"] = self.tmpDF.apply(lambda x: self.roundValue(x.lat, dec_places, rtn_null), axis=1)
self.tmpDF["lon_rnd"] = self.tmpDF.apply(lambda x: self.roundValue(x.lon, dec_places, rtn_null), axis=1)
Explanation: Test Sub Class
Before creating, testing, and debugging a subclass that uses a Google maps API to enrich the data, it is desirable to start simpler. This test object makes use of all of the same logic except for the API calls out to the web to get new data. For this test, the "delay" will be set to 0 since it is not needed. This code just shows that we can loop through an original DF, copy rows 5 at a time to a temporary DF, add columns to them using logic that looks at existing rows, and output to our output DF.
Stopping the code in the middle may allow us to test what happens if the code halts, showing what is stored in the dataframe in the object when this happens.
End of explanation
# del loopDFTst
# del tstDFbldr
who
loopDFTst = TstModification_DFBuilder()
print(loopDFTst)
loopDFTst.buildOutDF(tst_lat_lon_df)
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
tst_lat_lon_df.iloc[[0,1,2,1157,1158,1159]] # looking at start and end of source data
loopDFTst.outDF.iloc[[0,1,2,1157,1158,1159]] # comparing to output rows
# reset the looptest object and check it again with two slices of original DF
del loopDFTst
loopDFTst = TstModification_DFBuilder()
print(loopDFTst)
loopDFTst.buildOutDF(tst_lat_lon_df[0:98]) # use number that does not divide evenly by 5 (endRow=5)
print(loopDFTst)
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
loopDFTst.buildOutDF(tst_lat_lon_df[98:100]) # test: missed few records scenario (in this case, adding just 2)
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
loopDFTst.buildOutDF(tst_lat_lon_df[100:105]) # test: add amount equal to internal self.endRow variable
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
loopDFTst.buildOutDF(tst_lat_lon_df[105:205]) # test: add 100 rows (which is divisble by 5)
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
loopDFTst.buildOutDF(tst_lat_lon_df[205:306]) # another add records test
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
loopDFTst.buildOutDF(tst_lat_lon_df[306:]) # get the rest added
print("Length of outDF: ", len(loopDFTst.outDF))
loopDFTst.outDF.tail()
## sanity checking:
loopDFTst.outDF.iloc[[0,1,2,1157,1158,1159]]
tst_lat_lon_df.iloc[[0,1,2,1157,1158,1159]]
Explanation: Test Loop Math Issue
This section was created to debug an issue with the loop boundary math. Successful runs of these tests show this problem was corrected.
End of explanation
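The spot checks above compare a few rows by eye. A programmatic check along these lines (a sketch, assuming the source columns are named lat and lon) would confirm the rebuilt frame matches the source end to end:
## Programmatic sanity check (sketch): the source columns should survive the rebuild unchanged
# pd.testing.assert_frame_equal(
#     loopDFTst.outDF[['lat', 'lon']].reset_index(drop=True),
#     tst_lat_lon_df[['lat', 'lon']].reset_index(drop=True))
# print("outDF matches the source on lat/lon")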
tstDFbldr = TstModification_DFBuilder()
print(tstDFbldr) ## show defaults set during object build
tst_lat_lon_df.tail()
tstDFbldr.buildOutDF(tst_lat_lon_df) ## executes in under 1 second on 1160 rows
tstDFbldr.outDF.describe() ## use .describe instead of .describe() to see source data ... rounding did work
## display adds zeros, but our rounded fields are rounded to 4 decimals
tstDFbldr.outDF.tail() ## tail of DF after first run of the function
Explanation: Simple Test Using Defaults
End of explanation
tstDFbldr.buildOutDF(tst_lat_lon_df[0:5]) # add copy of 5 rows to the end (like adding more data later)
tstDFbldr.outDF.tail(10) # function added these new rows and fixed the index
# this test is why the reset_index code was added
tstDFbldr.outDF.head(10) ## quick check of the head of the DF
# tstDFbldr.outDF ## uncomment to view whole DF
Explanation: Add 5 Rows - Original Data Source
Using buildOutDF() to add to outDF inside the object. This test will repeat the first 5 rows on the end of the DF
End of explanation
tstDFbldr2 = TstModification_DFBuilder()
tstDFbldr2.buildOutDF(tst_lat_lon_df_sample) ## create another object and run the function on it
tstDFbldr2.outDF.iloc[[24,25,67]] ## NaN produced from empty Lat/Lon values
# reset object by replacing with fresh blank one
tstDFbldr2 = TstModification_DFBuilder(time_delay=1) ## set delay to 1 second so we can interrupt during processing
Explanation: Repeat Testing With Random Sample of Data
Test done again on a smaller random sample of the original data, created as a deep copy. If the test is run on the whole DF, it completes in a tenth of the time. Testing on the sample, however, revealed some odd quirks even in simple spot tests of the code's features.
End of explanation
tstDFbldr2.buildOutDF(tst_lat_lon_df_sample) ## stop this test in middle for next set of tests
print(tstDFbldr2.delay ) ## show delay used: 1 second
tstDFbldr2.outDF.describe() ## describe resulting DF ... it has only a fraction of the expected rows
## because we stopped the code early during testing ...
tstDFbldr2.outDF.tail() ## index values shown here were cleaned up by changing creation of sample DF
## see comments in data preparation at start of NB
## work-around: this code could be used to reset index with same options as buildOutDF()
# tstDFbldr2.reindex_OutDF()
# tstDFbldr2.outDF.tail()
tstDFbldr2.buildOutDF(tst_lat_lon_df_sample[-5:]) ## first attempt to add last 5 rows again from sample data
tstDFbldr2.outDF.describe() # count was unchanged from previous in earlier iteration of the code
tstDFbldr2.outDF.tail(10) # now it seems to work right
tstDFbldr2.buildOutDF(tst_lat_lon_df_sample[-5:]) ## Try it a second time
tstDFbldr2.outDF.describe() ## note: count is 5 more than before
tstDFbldr2.outDF.tail(10) ## comparison of tail helps confirm new records were added
## but clean index reset occurs this time (as it is supposed to)
## Another investigation
'''Idea: If in doubt as to whether the dataframe being passed in for the second run is mutating correctly or not,
try making a deep copy in steps and resetting the index on the copy as shown here. '''
## problems this was investigating now appear to be fixed
tmpDF1 = tst_lat_lon_df_sample[-5:].copy(deep=True)
tmpDF1.reset_index(drop=True, inplace=True)
tmpDF1
tstDFbldr2.buildOutDF(tmpDF1) ## initial test seems promising
tstDFbldr2.outDF.tail(10) ## as shown here, multiple tests seem to add the new rows every time
tstDFbldr2.buildOutDF(tmpDF1)
tstDFbldr2.outDF.tail(10)
print(tstDFbldr2) ## note: we added tmpDF1 which ended on index 4. This is why "nextIndex" now reads 5
                         ## nextIndex represents the next index if we were to continue with the next record in the last
## table we added to outDF using the buildOutDF() function
Explanation: Simulated Interrupt
In the final object, it will be server timeouts from the web that may result in coding failing to complete. The closest we can come to simulating this without introducing web API content is to set a time delay and interrupt the code run in the middle (manually) from within Jupyter. That test as well as some other logic checks follows here.
End of explanation
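For the real web-backed subclass, the recovery path after an interrupted run would look roughly like this sketch (the file name is illustrative); lastIndex marks the next unprocessed row of the last table passed to buildOutDF():
## Recovery sketch after an interrupted run (illustrative only)
# tstDFbldr2.outDF.to_csv("partial_enriched.csv")              # keep whatever finished
# remaining = tst_lat_lon_df_sample[tstDFbldr2.lastIndex:]     # rows not processed yet
# tstDFbldr2.buildOutDF(remaining)                             # continue where it left off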
## to illustrate: we try creating a fresh object to see if we can show that problem in this NB
## build 3 here ...
tstDFbld3 = TstModification_DFBuilder(time_delay=1)
tstDFbld3.buildOutDF(tst_lat_lon_df_sample) ## stop this test in middle for next set of tests
print(tstDFbld3.delay) ## show delay used: 1 second
tstDFbld3.outDF.describe() ## describe resulting DF ... it has only a fraction of the expected rows
## because we stopped the code early during testing ...
tstDFbld3.outDF.tail()
tmpDF2 = tst_lat_lon_df_sample[-5:].copy(deep=True)
tmpDF2.reset_index(drop=True, inplace=True)
tmpDF2
tstDFbld3.buildOutDF(tmpDF2) # starting with fresh object and fresh deep copy of the sample
tstDFbld3.outDF.tail() # the problem recurs
# first attempt fails
tstDFbld3.buildOutDF(tmpDF2)
tstDFbld3.outDF.tail() # second and subsequent attempts succeed
tstDFbld3.buildOutDF(tmpDF2)
tstDFbld3.outDF.tail()
Explanation: Attempt To Replicate Earlier Problem
This problem originally appeared in another notebook that did not include all of the preceding tests. Strangely, in an initial run of this notebook, removing some of the tests that preceded the one expected to work caused it to fail. The code has since changed, and these tests now show that it works as expected (problem solved).
End of explanation
# create new object to test the docstrings
testObj1 = TstModification_DFBuilder()
help(testObj1)
print(testObj1.__doc__) # note: formatting is messed up if you do not use print() on the doc string
print(testObj1.buildOutDF.__doc__) # buildOutDF
Explanation: Documentation Tests
End of explanation |
14,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Layout Templates
As we showed in Layout and Styling of Jupyter widgets, multiple widgets can be arranged together using the flexible GridBox specification. However, using the specification requires some understanding of CSS properties and may impose a sharp learning curve. Here, we describe layout templates built on top of GridBox that simplify the creation of common widget layouts.
Step1: 2x2 Grid
You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the TwoByTwoLayout widget
Step2: If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
Step3: You can pass merge=False in the argument of the TwoByTwoLayout constructor if you don't want this behavior
Step4: You can add a missing widget even after the layout initialization
Step5: You can also use the linking feature of widgets to update some property of a widget based on another widget
Step6: You can easily create more complex layouts with custom widgets. For example, you can use bqplot Figure widget to add plots
Step7: AppLayout
AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane
Step8: However with the automatic merging feature, it's possible to achieve many other layouts
Step9: You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (same as integer) or '100px' (absolute size).
Step10: Grid layout
GridspecLayout is a N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec.
You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout
Step11: To make a widget span several columns and/or rows, you can use slice notation
Step12: You can still change properties of the widgets stored in the grid, using the same indexing notation.
Step13: Note
Step14: Note
Step15: Creating scatter plots using GridspecLayout
In these examples, we will demonstrate how to use GridspecLayout and bqplot widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package.
For example, you can use the following snippet to obtain a scatter plot across multiple dimensions
Step16: Style attributes
You can specify extra style properties to modify the layout. For example, you can change the size of the whole layout using the height and width arguments.
Step17: The gap between the panes can be increased or decreased with the grid_gap argument
Step18: Additionally, you can control the alignment of widgets within the layout using justify_content and align_items attributes
Step19: For other alignment options it's possible to use common names (top and bottom) or their CSS equivalents (flex-start and flex-end) | Python Code:
# Utils widgets
from ipywidgets import Button, Layout, jslink, IntText, IntSlider
def create_expanded_button(description, button_style):
return Button(description=description, button_style=button_style, layout=Layout(height='auto', width='auto'))
top_left_button = create_expanded_button("Top left", 'info')
top_right_button = create_expanded_button("Top right", 'success')
bottom_left_button = create_expanded_button("Bottom left", 'danger')
bottom_right_button = create_expanded_button("Bottom right", 'warning')
top_left_text = IntText(description='Top left', layout=Layout(width='auto', height='auto'))
top_right_text = IntText(description='Top right', layout=Layout(width='auto', height='auto'))
bottom_left_slider = IntSlider(description='Bottom left', layout=Layout(width='auto', height='auto'))
bottom_right_slider = IntSlider(description='Bottom right', layout=Layout(width='auto', height='auto'))
Explanation: Using Layout Templates
As we showed in Layout and Styling of Jupyter widgets, multiple widgets can be arranged together using the flexible GridBox specification. However, using the specification requires some understanding of CSS properties and may impose a sharp learning curve. Here, we describe layout templates built on top of GridBox that simplify the creation of common widget layouts.
End of explanation
from ipywidgets import TwoByTwoLayout
TwoByTwoLayout(top_left=top_left_button,
top_right=top_right_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button)
Explanation: 2x2 Grid
You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the TwoByTwoLayout widget:
End of explanation
TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button)
Explanation: If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
End of explanation
TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
merge=False)
Explanation: You can pass merge=False in the argument of the TwoByTwoLayout constructor if you don't want this behavior
End of explanation
layout_2x2 = TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button)
layout_2x2
layout_2x2.top_right = top_right_button
Explanation: You can add a missing widget even after the layout initialization:
End of explanation
app = TwoByTwoLayout(top_left=top_left_text, top_right=top_right_text,
bottom_left=bottom_left_slider, bottom_right=bottom_right_slider)
link_left = jslink((app.top_left, 'value'), (app.bottom_left, 'value'))
link_right = jslink((app.top_right, 'value'), (app.bottom_right, 'value'))
app.bottom_right.value = 30
app.top_left.value = 25
app
Explanation: You can also use the linking feature of widgets to update some property of a widget based on another widget:
End of explanation
import bqplot as bq
import numpy as np
size = 100
np.random.seed(0)
x_data = range(size)
y_data = np.random.randn(size)
y_data_2 = np.random.randn(size)
y_data_3 = np.cumsum(np.random.randn(size) * 100.)
x_ord = bq.OrdinalScale()
y_sc = bq.LinearScale()
bar = bq.Bars(x=np.arange(10), y=np.random.rand(10), scales={'x': x_ord, 'y': y_sc})
ax_x = bq.Axis(scale=x_ord)
ax_y = bq.Axis(scale=y_sc, tick_format='0.2f', orientation='vertical')
fig = bq.Figure(marks=[bar], axes=[ax_x, ax_y], padding_x=0.025, padding_y=0.025,
layout=Layout(width='auto', height='90%'))
from ipywidgets import FloatSlider
max_slider = FloatSlider(min=0, max=10, value=2, description="Max: ",  # 'value' (not 'default_value') sets the initial slider position
layout=Layout(width='auto', height='auto'))
min_slider = FloatSlider(min=-1, max=10, description="Min: ",
layout=Layout(width='auto', height='auto'))
app = TwoByTwoLayout(top_left=min_slider,
bottom_left=max_slider,
bottom_right=fig,
align_items="center",
height='700px')
jslink((y_sc, 'max'), (max_slider, 'value'))
jslink((y_sc, 'min'), (min_slider, 'value'))
jslink((min_slider, 'max'), (max_slider, 'value'))
jslink((max_slider, 'min'), (min_slider, 'value'))
max_slider.value = 1.5
app
Explanation: You can easily create more complex layouts with custom widgets. For example, you can use bqplot Figure widget to add plots:
End of explanation
from ipywidgets import AppLayout, Button, Layout
header_button = create_expanded_button('Header', 'success')
left_button = create_expanded_button('Left', 'info')
center_button = create_expanded_button('Center', 'warning')
right_button = create_expanded_button('Right', 'info')
footer_button = create_expanded_button('Footer', 'success')
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=footer_button)
Explanation: AppLayout
AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane:
End of explanation
AppLayout(header=None,
left_sidebar=None,
center=center_button,
right_sidebar=None,
footer=None)
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None)
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None)
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=None,
footer=footer_button)
AppLayout(header=header_button,
left_sidebar=None,
center=center_button,
right_sidebar=right_button,
footer=footer_button)
AppLayout(header=header_button,
left_sidebar=None,
center=center_button,
right_sidebar=None,
footer=footer_button)
AppLayout(header=header_button,
left_sidebar=left_button,
center=None,
right_sidebar=right_button,
footer=footer_button)
Explanation: However with the automatic merging feature, it's possible to achieve many other layouts:
End of explanation
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=footer_button,
pane_widths=[3, 3, 1],
pane_heights=[1, 5, '60px'])
Explanation: You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (same as integer) or '100px' (absolute size).
End of explanation
from ipywidgets import GridspecLayout
grid = GridspecLayout(4, 3)
for i in range(4):
for j in range(3):
grid[i, j] = create_expanded_button('Button {} - {}'.format(i, j), 'warning')
grid
Explanation: Grid layout
GridspecLayout is a N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec.
You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout:
End of explanation
grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid
Explanation: To make a widget span several columns and/or rows, you can use slice notation:
End of explanation
grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid
grid[0, 0].description = "I am the blue one"
Explanation: You can still change properties of the widgets stored in the grid, using the same indexing notation.
End of explanation
grid = GridspecLayout(4, 3, height='300px')
grid[:3, 1:] = create_expanded_button('One', 'info')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'info')
grid[3, 2] = create_expanded_button('Four', 'info')
grid
grid[3, 1] = create_expanded_button('New button!!', 'danger')
Explanation: Note: It's enough to pass an index of one of the grid cells occupied by the widget of interest. Slices are not supported in this context.
If there is already a widget that conflicts with the position of the widget being added, it will be removed from the grid:
End of explanation
grid[:3, 1:] = create_expanded_button('I am new too!!!!!', 'warning')
Explanation: Note: Slices are supported in this context.
End of explanation
import bqplot as bq
import numpy as np
from ipywidgets import GridspecLayout, Button, Layout
n_features = 5
data = np.random.randn(100, n_features)
data[:50, 2] += 4 * data[:50, 0] **2
data[50:, :] += 4
A = np.random.randn(n_features, n_features)/5
data = np.dot(data,A)
scales_x = [bq.LinearScale() for i in range(n_features)]
scales_y = [bq.LinearScale() for i in range(n_features)]
gs = GridspecLayout(n_features, n_features)
for i in range(n_features):
for j in range(n_features):
if i != j:
sc_x = scales_x[j]
sc_y = scales_y[i]
scatt = bq.Scatter(x=data[:, j], y=data[:, i], scales={'x': sc_x, 'y': sc_y}, default_size=1)
gs[i, j] = bq.Figure(marks=[scatt], layout=Layout(width='auto', height='auto'),
fig_margin=dict(top=0, bottom=0, left=0, right=0))
else:
sc_x = scales_x[j]
sc_y = bq.LinearScale()
hist = bq.Hist(sample=data[:,i], scales={'sample': sc_x, 'count': sc_y})
gs[i, j] = bq.Figure(marks=[hist], layout=Layout(width='auto', height='auto'),
fig_margin=dict(top=0, bottom=0, left=0, right=0))
gs
Explanation: Creating scatter plots using GridspecLayout
In these examples, we will demonstrate how to use GridspecLayout and bqplot widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package.
For example, you can use the following snippet to obtain a scatter plot across multiple dimensions:
End of explanation
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
height="200px", width="50%")
Explanation: Style attributes
You can specify extra style properties to modify the layout. For example, you can change the size of the whole layout using the height and width arguments.
End of explanation
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
height="200px", width="50%",
grid_gap="10px")
Explanation: The gap between the panes can be increased or decreased with the grid_gap argument:
End of explanation
from ipywidgets import Text, HTML
TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button,
bottom_right=bottom_right_button,
justify_items='center',
width="50%",
align_items='center')
Explanation: Additionally, you can control the alignment of widgets within the layout using justify_content and align_items attributes:
End of explanation
TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button,
bottom_right=bottom_right_button,
justify_items='center',
width="50%",
align_items='top')
Explanation: For other alignment options it's possible to use common names (top and bottom) or their CSS equivalents (flex-start and flex-end):
End of explanation |
14,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step1: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here
Step3: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
Picking only one attribute
Step4: Predicted 216 for benign but only 54 are true; predicted 50 but there are 107 cases, so this model doesn't work.
Picking all the attributes and testing the accuracy | Python Code:
from sklearn import datasets
import pandas as pd
%matplotlib inline
from sklearn import datasets
from pandas.plotting import scatter_matrix   # pandas.tools.plotting was removed in newer pandas releases
import matplotlib.pyplot as plt
from sklearn import tree
iris = datasets.load_iris()
iris
iris.keys()
iris['target']
iris['target_names']
iris['data']
iris['feature_names']
x = iris.data[:,2:] # the attributes # we are picking up only the info on petal length and width
y = iris.target # the target variable
# The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
dt = tree.DecisionTreeClassifier()
# .fit testing
dt = dt.fit(x,y)
from sklearn.model_selection import train_test_split   # sklearn.cross_validation was removed in newer scikit-learn releases
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)
dt = dt.fit(x_train,y_train)
from sklearn.model_selection import train_test_split   # sklearn.cross_validation was removed in newer scikit-learn releases
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test) #generate a prediction based on the model created to output a predicted y
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
End of explanation
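One caveat when judging whether these results are good: train_test_split reshuffles on every run, so the reported accuracy will move around. A hedged tweak (not part of the original exercise) pins the split so the 50-50 and 75-25 comparisons are repeatable:
# Optional sketch: fixed, stratified split for repeatable comparisons
# x_train, x_test, y_train, y_test = train_test_split(
#     x, y, test_size=0.50, train_size=0.50, random_state=42, stratify=y)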
from sklearn.model_selection import train_test_split   # sklearn.cross_validation was removed in newer scikit-learn releases
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, train_size=0.75)   # 75% train / 25% test, as the exercise specifies
dt = dt.fit(x_train,y_train)
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# 75-25 seems to be better at predicting with precision
Explanation: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
End of explanation
cancer = datasets.load_breast_cancer()
print(cancer)
cancer.keys()
#cancer['DESCR']
# we are trying to predict how malignant / benign a specific cancer 'feature' is
cancer['target_names']
cancer['data']
cancer['feature_names']
cancer['feature_names'][11]
cancer['target']
x = cancer.data    # keep all columns so the feature-index slices in the plots below line up with cancer['feature_names']
y = cancer.target  # 0 = malignant, 1 = benign; needed for the c=y colouring in the scatter plots
print(x.shape)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,10:11], x[:,13:14], c=y, cmap=plt.cm.CMRmap)
plt.xlabel('texture error')
plt.ylabel('smoothness error')
plt.axhline(y=56)
plt.axvline(x=0.5)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,1:2], x[:,3:4], c=y, cmap=plt.cm.CMRmap)
plt.xlabel('mean perimeter')
plt.ylabel('mean area')
plt.axhline(y=800)
plt.axvline(x=17)
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:,5:6], x[:,6:7], c=y, cmap=plt.cm.CMRmap)
plt.xlabel('Mean Concavity')
plt.ylabel('Mean Concave Point')
plt.axhline(y=0.06)
plt.axvline(x=0.25)
Explanation: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
End of explanation
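For the exploratory step, a compact way to see what the columns are and how the two classes balance out (a sketch using pandas, which is already imported above):
# Quick exploratory summary (sketch)
# cancer_df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
# print(cancer_df.shape)                           # 569 rows x 30 features
# print(cancer.target_names)                       # ['malignant' 'benign']
# print(pd.Series(cancer.target).value_counts())   # class balance (benign cases outnumber malignant)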
x = cancer.data[:,10:11] # a single attribute (column index 10)
y = cancer.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, cancer.target_names, rotation=45)
plt.yticks(tick_marks, cancer.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
Explanation: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
Picking only one attribute : Skin Color
End of explanation
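Before reading too much into this single-column model, it is worth checking which feature column 10 actually holds; the dataset has no skin colour attribute, so the label above is best read as a placeholder. A quick sketch:
# Which feature was actually used? (sketch)
# print(cancer.feature_names[10])   # the column selected by cancer.data[:, 10:11]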
x = cancer.data[:,:] # all of the attributes this time
y = cancer.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt) #measure on the test data (rather than train)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, cancer.target_names, rotation=45)
plt.yticks(tick_marks, cancer.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = dt.fit(x_train, y_train).predict(x_test)
cm = metrics.confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
Explanation: Predicted 216 for benign but only 54 are true; predicted 50 but there are 107 cases, so this model doesn't work.
Picking all the attributes and testing the accuracy
End of explanation |
14,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Reference
Step1: Python
Names
Assign values to names with the assignment operator =.
Values at the end of a cell are printed.
Step2: You can also call the print function
Step4: Functions
Step5: Help
One can access help and documentation info using Google-fu (i.e., using your favorite internet search engine) or built-in documentation.
The cell below shows how to access a quick documentation browser within a Jupyter notebook.
Sometimes, however, the documentation is terse, and you may be better off doing an internet search.
Step6: Strings
A snippet of text is represented by a string in Python.
You can create strings with single quotes (') and double quotes (") as delimiters.
|Name|Example|Purpose|
|-|-|-|
|""|s = "some text"|Create a string|
|''|s = 'some text'|Create a string|
|+|s1 = 'some'<br>s2 = 'text'<br>s = s1 + ' ' + s2|Concatenate strings|
|.replace|'Hello'.replace('o', 'a')|Replace all instances of a substring|
|.lower|'Hello'.lower()|Return a lowercased version of the string|
|.upper|'hello'.upper()|Return an uppercased version of the string|
|.capitalize|'unIvErSity of cOlOrAdo'.capitalize()|Return a version with the first letter capitalized|
|.title|'unIvErSity of cOlOrAdo'.title()|Return a version with the first letter of every word capitalized|
Tables and Arrays
Arrays
Step7: Index into the array with the .item method
Step8: Apply arithmetic functions to arrays (provided by NumPy)
Step9: Tables
|Name|Example|Purpose|
|-|-|-|
|Table|Table()|Create an empty table, usually to extend with data|
|Table.read_table|Table.read_table("my_data.csv")|Create a table from a data file|
|with_columns|tbl = Table().with_columns("N", np.arange(5), "2*N", np.arange(0, 10, 2))|Create a copy of a table with more columns|
|column|tbl.column("N")|Create an array containing the elements of a column|
|sort|tbl.sort("N")|Create a copy of a table sorted by the values in a column|
|where|tbl.where("N", are.above(2))|Create a copy of a table with only the rows that match some predicate|
|num_rows|tbl.num_rows|Compute the number of rows in a table|
|num_columns|tbl.num_columns|Compute the number of columns in a table|
|select|tbl.select("N")|Create a copy of a table with only some of the columns|
|drop|tbl.drop("2*N")|Create a copy of a table without some of the columns|
|take|tbl.take(np.arange(0, 6, 2))|Create a copy of the table with only the rows whose indices are in the given array| | Python Code:
import numpy as np
from datascience import *
from pprint import pprint
Explanation: Quick Reference
End of explanation
ten = 3 * 2 + 4
ten
Explanation: Python
Names
Assign values to names with the assignment operator =.
Values at the end of a cell are printed.
End of explanation
print(ten)
# You can also make compound expressions
height = 1.3
the_number_five = abs(-5)
absolute_height_difference = abs(height - 1.688)
Explanation: You can also call the print function:
End of explanation
def to_percentage(proportion):
Converts a proportion to a percentage.
factor = 100
return proportion * factor
to_percentage(0.19)
Explanation: Functions
End of explanation
to_percentage?
Explanation: Help
One can access help and documentation info using Google-fu (i.e., using your favorite internet search engine) or built-in documentation.
The cell below shows how to access a quick documentation browser within a Jupyter notebook.
Sometimes, however, the documentation is terse, and you may be better off doing an internet search.
End of explanation
arr = make_array(0.125, 4.75, -1.3)
arr
Explanation: Strings
A snippet of text is represented by a string in Python.
You can create strings with single quotes (') and double quotes (") as delimiters.
|Name|Example|Purpose|
|-|-|-|
|""|s = "some text"|Create a string|
|''|s = 'some text'|Create a string|
|+|s1 = 'some'<br>s2 = 'text'<br>s = s1 + ' ' + s2|Concatenate strings|
|.replace|'Hello'.replace('o', 'a')|Replace all instances of a substring|
|.lower|'Hello'.lower()|Return a lowercased version of the string|
|.upper|'hello'.upper()|Return an uppercased version of the string|
|.capitalize|'unIvErSity of cOlOrAdo'.capitalize()|Return a version with the first letter capitalized|
|.title|'unIvErSity of cOlOrAdo'.title()|Return a version with the first letter of every word capitalized|
Tables and Arrays
Arrays
End of explanation
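The string table above has no accompanying cell, so here is a short demonstration of a few of those methods (the values are arbitrary):
s1 = 'some'
s2 = 'text'
s = s1 + ' ' + s2                    # concatenation -> 'some text'
s.replace('o', 'a')                  # 'same text'
'unIvErSity of cOlOrAdo'.title()     # 'University Of Colorado'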
arr.item(1)
Explanation: Index into the array with the .item method:
End of explanation
2 * (arr + 1.5)
np.log10(make_array(1, 2, 10, 1000))
np.sum(np.log10(make_array(1, 2, 10, 1000)))
Explanation: Apply arithmetic functions to arrays (provided by NumPy):
End of explanation
are?
Explanation: Tables
|Name|Example|Purpose|
|-|-|-|
|Table|Table()|Create an empty table, usually to extend with data|
|Table.read_table|Table.read_table("my_data.csv")|Create a table from a data file|
|with_columns|tbl = Table().with_columns("N", np.arange(5), "2*N", np.arange(0, 10, 2))|Create a copy of a table with more columns|
|column|tbl.column("N")|Create an array containing the elements of a column|
|sort|tbl.sort("N")|Create a copy of a table sorted by the values in a column|
|where|tbl.where("N", are.above(2))|Create a copy of a table with only the rows that match some predicate|
|num_rows|tbl.num_rows|Compute the number of rows in a table|
|num_columns|tbl.num_columns|Compute the number of columns in a table|
|select|tbl.select("N")|Create a copy of a table with only some of the columns|
|drop|tbl.drop("2*N")|Create a copy of a table without some of the columns|
|take|tbl.take(np.arange(0, 6, 2))|Create a copy of the table with only the rows whose indices are in the given array|
End of explanation |
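A small end-to-end demonstration tying a few of the table operations above together (the values mirror the examples in the table):
tbl = Table().with_columns("N", np.arange(5), "2*N", np.arange(0, 10, 2))
tbl.where("N", are.above(2)).sort("N", descending=True)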
14,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
14,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Transfer Learning
This notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.
Learning Objectives
Know how to apply image augmentation
Know how to download and use a TensorFlow Hub module as a layer in Keras.
Step1: Exploring the data
As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories
Step2: We can use python's built in pathlib tool to get a sense of this unstructured data.
Step3: Let's display the images so we can see what our model will be trying to learn.
Step4: Building the dataset
Keras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.
We have already prepared these images to be stored on the cloud in gs
Step5: Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.
Thankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.
We'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.
Step6: Is it working? Let's see!
TODO 1.a
Step7: One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.
Step8: Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few
Step9: Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.
.cache is key here. It will store the dataset in memory
Step10: We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
Step11: TODO 1.c
Step12: Note
Step13: If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we need to make it?
Enter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.
Tensorflow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's compare how a model trained on ImageNet data compares to one built from scratch.
The tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand parameters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.
TODO 2.b
Step14: Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!
Moment of truth. Let's compile this new model and see how it compares to our MNIST architecture. | Python Code:
import os
import pathlib
import IPython.display as display
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
Conv2D,
Dense,
Dropout,
Flatten,
MaxPooling2D,
Softmax,
)
Explanation: TensorFlow Transfer Learning
This notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.
Learning Objectives
Know how to apply image augmentation
Know how to download and use a TensorFlow Hub module as a layer in Keras.
End of explanation
data_dir = tf.keras.utils.get_file(
"flower_photos",
"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
untar=True,
)
# Print data path
print("cd", data_dir)
Explanation: Exploring the data
As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.
The below tf.keras.utils.get_file command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.
End of explanation
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob("*/*.jpg")))
print("There are", image_count, "images.")
CLASS_NAMES = np.array(
[item.name for item in data_dir.glob("*") if item.name != "LICENSE.txt"]
)
print("These are the available classes:", CLASS_NAMES)
Explanation: We can use python's built in pathlib tool to get a sense of this unstructured data.
End of explanation
roses = list(data_dir.glob("roses/*"))
for image_path in roses[:3]:
display.display(Image.open(str(image_path)))
Explanation: Let's display the images so we can see what our model will be trying to learn.
End of explanation
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
!cat /tmp/input.csv
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
!cat /tmp/labels.txt
Explanation: Building the dataset
Keras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.
We have already prepared these images to be stored on the cloud in gs://cloud-ml-data/img/flower_photos/. The images are randomly split into a training set with 90% of the data and an evaluation set with the remaining 10%, listed in CSV files:
Training set: train_set.csv
Evaluation set: eval_set.csv
Explore the format and contents of train_set.csv by running:
End of explanation
IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
BATCH_SIZE = 32
# 10 is a magic number tuned for local training of this dataset.
SHUFFLE_BUFFER = 10 * BATCH_SIZE
AUTOTUNE = tf.data.experimental.AUTOTUNE
VALIDATION_IMAGES = 370
VALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE
def decode_img(img, reshape_dims):
# Convert the compressed string to a 3D uint8 tensor.
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# Resize the image to the desired size.
return tf.image.resize(img, reshape_dims)
Explanation: Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.
Thankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.
We'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.
End of explanation
img = tf.io.read_file(
"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg"
)
# Uncomment to see the image string.
# print(img)
# TODO: decode image and plot it
Explanation: Is it working? Let's see!
TODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.
End of explanation
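One possible way to complete the TODO in the cell above (an illustrative sketch, not necessarily the official lab solution) is to pass the raw JPEG bytes through the decode_img helper defined earlier and hand the result to matplotlib:
# Sketch for TODO 1.a: decode the JPEG bytes read above and display the flower
flower = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])
plt.imshow(flower.numpy())
plt.axis("off")
plt.show()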
def decode_csv(csv_row):
record_defaults = ["path", "flower"]
filename, label_string = tf.io.decode_csv(csv_row, record_defaults)
image_bytes = tf.io.read_file(filename=filename)
label = tf.math.equal(CLASS_NAMES, label_string)
return image_bytes, label
Explanation: One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.
End of explanation
MAX_DELTA = 63.0 / 255.0  # Change brightness by at most ~25% (63/255)
CONTRAST_LOWER = 0.2
CONTRAST_UPPER = 1.8
def read_and_preprocess(image_bytes, label, random_augment=False):
if random_augment:
img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])
# TODO: augment the image.
else:
img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])
return img, label
def read_and_preprocess_with_augment(image_bytes, label):
return read_and_preprocess(image_bytes, label, random_augment=True)
Explanation: Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few:
tf.image.random_crop - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.
tf.image.random_flip_left_right - Randomly flips the image horizontally
tf.image.random_brightness - Randomly adjusts how dark or light the image is.
tf.image.random_contrast - Randomly adjusts image contrast.
TODO 1.b: Augment the image using the random functions.
End of explanation
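A sketch of how the "# TODO: augment the image" branch inside read_and_preprocess could be filled in with the four functions listed above (illustrative only; it reuses the crop size and the MAX_DELTA / CONTRAST constants defined in that cell):
# Sketch for TODO 1.b: these lines would go inside the random_augment branch
img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
img = tf.image.random_flip_left_right(img)
img = tf.image.random_brightness(img, MAX_DELTA)
img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)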
def load_dataset(csv_of_filenames, batch_size, training=True):
dataset = (
tf.data.TextLineDataset(filenames=csv_of_filenames)
.map(decode_csv)
.cache()
)
if training:
dataset = (
dataset.map(read_and_preprocess_with_augment)
.shuffle(SHUFFLE_BUFFER)
.repeat(count=None)
) # Indefinitely.
else:
dataset = dataset.map(read_and_preprocess).repeat(
count=1
) # Each photo used once.
# Prefetch prepares the next set of batches while current batch is in use.
return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)
Explanation: Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.
.cache is key here. It will store the dataset in memory after the first pass, so later epochs do not have to re-read the files from Cloud Storage.
End of explanation
train_path = "gs://cloud-ml-data/img/flower_photos/train_set.csv"
train_data = load_dataset(train_path, 1)
itr = iter(train_data)
Explanation: We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
End of explanation
image_batch, label_batch = next(itr)
img = image_batch[0]
plt.imshow(img)
print(label_batch[0])
Explanation: TODO 1.c: Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?
End of explanation
eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv"
nclasses = len(CLASS_NAMES)
hidden_layer_1_neurons = 400
hidden_layer_2_neurons = 100
dropout_rate = 0.25
num_filters_1 = 64
kernel_size_1 = 3
pooling_size_1 = 2
num_filters_2 = 32
kernel_size_2 = 3
pooling_size_2 = 2
layers = [
# TODO: Add your image model.
]
old_model = Sequential(layers)
old_model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
old_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS,
)
Explanation: Note: It may take 4-5 minutes to see results for different batches.
MobileNetV2
These flower photos are much larger than the handwriting recognition images in MNIST. They are about 10 times as many pixels per axis and there are three color channels, making the information here over 200 times larger!
How do our current techniques stand up? Copy your best model architecture over from the <a href="2_mnist_models.ipynb">MNIST models lab</a> and see how well it does after training for 5 epochs of 50 steps.
TODO 2.a Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.
End of explanation
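The "# TODO: Add your image model." placeholder in the cell above could be filled in along these lines, a small CNN built from the hyperparameters already defined there (an illustrative sketch, not the only valid answer):
# Sketch for TODO 2.a: one possible small CNN using the layers imported earlier
layers = [
    Conv2D(num_filters_1, kernel_size_1, activation="relu",
           input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
    MaxPooling2D(pooling_size_1),
    Conv2D(num_filters_2, kernel_size_2, activation="relu"),
    MaxPooling2D(pooling_size_2),
    Flatten(),
    Dense(hidden_layer_1_neurons, activation="relu"),
    Dense(hidden_layer_2_neurons, activation="relu"),
    Dropout(dropout_rate),
    Dense(nclasses),
    Softmax(),
]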
module_selection = "mobilenet_v2_100_224"
module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(
module_selection
)
transfer_model = tf.keras.Sequential(
[
# TODO
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(
nclasses,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l2(0.0001),
),
]
)
transfer_model.build((None,) + (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
transfer_model.summary()
Explanation: If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we need to make it?
Enter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.
Tensorflow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's compare how a model trained on ImageNet data compares to one built from scratch.
The tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand parameters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.
TODO 2.b: Add a Hub Keras Layer at the top of the model using the handle provided.
End of explanation
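For the "# TODO" in the transfer_model cell above, the Hub module is added as the first element of the Sequential list; a sketch of the single line that would replace the placeholder (frozen weights, as described above):
# Sketch for TODO 2.b: MobileNet feature-vector module as the first, non-trainable layer
hub.KerasLayer(module_handle, trainable=False),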
transfer_model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
transfer_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS,
)
Explanation: Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!
Moment of truth. Let's compile this new model and see how it compares to our MNIST architecture.
End of explanation |
14,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electromagnetics
Step1: Your Default Parameters should be
Step2: Pipe Widget
In the following app, we consider a loop-loop system with a pipe target. Here, we simulate two surveys, one where the boom is oriented East-West (EW) and one where the boom is oriented North-South (NS).
<img src="https | Python Code:
%matplotlib inline
from geoscilabs.em.FDEM3loop import interactfem3loop
from geoscilabs.em.FDEMpipe import interact_femPipe
from matplotlib import rcParams
rcParams['font.size'] = 14
Explanation: Electromagnetics: 3-loop model
In the first part of this notebook, we consider a 3 loop system, consisting of a transmitter loop, receiver loop, and target loop.
<img src="https://github.com/geoscixyz/geosci-labs/blob/main/images/em/FEM3Loop/SurveyParams.png?raw=true" style="width: 60%; height: 60%"> </img>
Import Necessary Packages
End of explanation
fem3loop = interactfem3loop()
fem3loop
Explanation: Your Default Parameters should be:
<table>
<tr>
<th>Parameter </th>
<th>Default value</th>
</tr>
<tr>
<td>Inductance:</td>
<td>L = 0.1</td>
</tr>
<tr>
<td>Resistance:</td>
<td>R = 2000</td>
</tr>
<tr>
<td>X-center of target loop:</td>
<td>xc = 0</td>
</tr>
<tr>
<td>Y-center of target loop:</td>
<td>yc = 0</td>
</tr>
<tr>
<td>Z-center of target loop:</td>
<td>zc = 1</td>
</tr>
<tr>
<td>Inclination of target loop:</td>
<td>dincl = 0</td>
</tr>
<tr>
<td>Declination of target loop:</td>
<td>ddecl = 90</td>
</tr>
<tr>
<td>Frequency:</td>
<td>f = 10000 </td>
</tr>
<tr>
<td>Sample spacing:</td>
<td>dx = 0.25 </td>
</tr>
</table>
To use the default parameters below, either click the box for "default" or adjust the sliders for R, zc, and dx. When answering the lab questions, make sure all the sliders are where they should be!
Run FEM3loop Widget
End of explanation
pipe = interact_femPipe()
pipe
Explanation: Pipe Widget
In the following app, we consider a loop-loop system with a pipe target. Here, we simulate two surveys, one where the boom is oriented East-West (EW) and one where the boom is oriented North-South (NS).
<img src="https://github.com/geoscixyz/geosci-labs/blob/main/images/em/FEM3Loop/model.png?raw=true" style="width: 40%; height: 40%"> </img>
The variables are:
alpha:
$$\alpha = \frac{\omega L}{R} = \frac{2\pi f L}{R}$$
pipedepth: Depth of the pipe center
We plot the Hp/Hs ratio, in percent, in the widget.
End of explanation |
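As a quick worked example of the alpha formula above (an illustrative calculation, not part of the original lab), plugging in the default widget values L = 0.1, R = 2000 and f = 10000 gives alpha of roughly pi:
# Illustrative check of alpha = 2*pi*f*L/R with the default parameters
import numpy as np
L, R, f = 0.1, 2000.0, 1.0e4
print(2 * np.pi * f * L / R)  # ~3.14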
14,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wing example using guide curves
We still want to have more control over the shape of the wing example.
One option is to simply add more profiles and tweak their shape. Another, more elegant way is to additionally use some guide curves, e.g. for the leading and trailing edges.
In this example, we are tweaking the leading edge and using our curve network interpolation algorithm that is based on Gordon surfaces. Gordon surfaces are a generalization of Coons patches. Where Coons patches can only interpolate two profiles and two guides at a time, Gordon surfaces interpolate all curves at once, globally. The benefit of Gordon surfaces is better, smoother resulting surfaces.
If you want to read about them in more detail, have a look into "The NURBS Book, 2nd Edition", Chapter 10.5 (Interpolation of a Bidirectional Curve Network)
Step1: Create profile points
We again create the same profiles as in the Wing example. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
Step2: Create guide curve points
Now, let's define some points on the guide curves. Important
Step3: Build profiles curves
Now, let's build the profile curves using tigl3.curve_factories.interpolate_points as done in the Airfoil example.
Step4: Check
Step5: Result
Step6: Visualize the result
Now, let's draw our wing. How does it look? What can be improved?
Note | Python Code:
import tigl3.curve_factories
import tigl3.surface_factories
from OCC.gp import gp_Pnt
from OCC.Display.SimpleGui import init_display
import numpy as np
Explanation: Wing example using guide curves
We still want to have more control over the shape of the wing example.
One option is to simply add more profiles and tweak their shape. Another, more elegant way is to additionally use some guide curves, e.g. for the leading and trailing edges.
In this example, we are tweaking the leading edge and using our curve network interpolation algorithm that is based on Gordon surfaces. Gordon surfaces are a generalization of Coons patches. Where Coons patches can only interpolate two profiles and two guides at a time, Gordon surfaces interpolate all curves at once, globally. The benefit of Gordon surfaces is better, smoother resulting surfaces.
If you want to read about them in more detail, have a look into "The NURBS Book, 2nd Edition", Chapter 10.5 (Interpolation of a Bidirectional Curve Network): https://www.springer.com/de/book/9783642973857
Importing modules
Again, all low level geometry functions can be found in the tigl3.geometry module. The actual Gordon surface algorithm is the class CTiglInterpolateCurveNetwork from the tigl3.geometry module. For more convenient use,
we again use the module tigl3.surface_factories.
End of explanation
# list of points on NACA2412 profile
px = [1.000084, 0.975825, 0.905287, 0.795069, 0.655665, 0.500588, 0.34468, 0.203313, 0.091996, 0.022051, 0.0, 0.026892, 0.098987, 0.208902, 0.346303, 0.499412, 0.653352, 0.792716, 0.90373, 0.975232, 0.999916]
py = [0.001257, 0.006231, 0.019752, 0.03826, 0.057302, 0.072381, 0.079198, 0.072947, 0.054325, 0.028152, 0.0, -0.023408, -0.037507, -0.042346, -0.039941, -0.033493, -0.0245, -0.015499, -0.008033, -0.003035, -0.001257]
points_c1 = np.array([pnt for pnt in zip(px, [0.]*len(px), py)]) * 2.
points_c2 = np.array([pnt for pnt in zip(px, [0]*len(px), py)])
points_c3 = np.array([pnt for pnt in zip(px, py, [0.]*len(px))]) * 0.2
# shift sections to their correct position
# second curve at y = 7
points_c2 += np.array([1.0, 7, 0])
# third curve at y = 7.5
points_c3[:, 1] *= -1
points_c3 += np.array([1.7, 7.8, 1.0])
Explanation: Create profile points
We again create the same profiles as in the Wing example. The wing should have one curve at its root, one at its outer end and one at the tip of a winglet.
End of explanation
# upper trailing edge points
te_up_points = np.array([points_c1[0,:], points_c2[0,:], points_c3[0,:]])
# leading edge points.
le_points = np.array([
points_c1[10,:], # First profile LE
[0.35, 2., -0.1],# Additional point to control LE shape
[0.7, 5., -0.2], # Additional point to control LE shape
points_c2[10,:], # Second profile LE
points_c3[10,:], # Third profile LE
])
# lower trailing edge points
te_lo_points = np.array([points_c1[-1,:], points_c2[-1,:], points_c3[-1,:]])
Explanation: Create guide curve points
Now, let's define some points on the guide curves. Important: profiles and guides must intersect each other!!!
Therefore, we explicitly add points from the profiles to the guide curves.
End of explanation
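Because the intersection requirement above is easy to break when the points are edited, a small optional check (not part of the original example) can confirm that the guide end points really coincide with the corresponding profile points:
# Optional sanity check: guides share their end points with the profiles
assert np.allclose(le_points[0], points_c1[10]) and np.allclose(le_points[-1], points_c3[10])
assert np.allclose(te_up_points[0], points_c1[0]) and np.allclose(te_up_points[-1], points_c3[0])
assert np.allclose(te_lo_points[0], points_c1[-1]) and np.allclose(te_lo_points[-1], points_c3[-1])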
profile_1 = tigl3.curve_factories.interpolate_points(points_c1)
profile_2 = tigl3.curve_factories.interpolate_points(points_c2)
profile_3 = tigl3.curve_factories.interpolate_points(points_c3)
# Lets define also the parameters of the points to control the shape of the guide curve
# This is optional, but can improve the result
te_up = tigl3.curve_factories.interpolate_points(te_up_points, [0, 0.65, 1.])
le = tigl3.curve_factories.interpolate_points(le_points, [0., 0.25, 0.55, 0.8, 1.0])
te_lo = tigl3.curve_factories.interpolate_points(te_lo_points, [0, 0.65, 1.])
Explanation: Build profiles curves
Now, let's build the profile curves using tigl3.curve_factories.interpolate_points as done in the Airfoil example.
End of explanation
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tessellation more accurate
display.Context.SetDeviationCoefficient(0.0001)
# draw the curve
display.DisplayShape(profile_1)
display.DisplayShape(profile_2)
display.DisplayShape(profile_3)
display.DisplayShape(te_up)
display.DisplayShape(le)
display.DisplayShape(te_lo)
# also draw the guide curve points
for point in le_points:
display.DisplayShape(gp_Pnt(*point), update=False)
for point in te_up_points:
display.DisplayShape(gp_Pnt(*point), update=False)
for point in te_lo_points:
display.DisplayShape(gp_Pnt(*point), update=False)
# match content to screen and start the event loop
display.FitAll()
start_display()
Explanation: Check: Draw the curves
Now let's draw the curves. You can still tweak the leading edge, if you want.
End of explanation
surface = tigl3.surface_factories.interpolate_curve_network([profile_1, profile_2, profile_3],
[te_up, le, te_lo])
Explanation: Result:
Create the gordon surface
The final surface is created with the Gordon surface interpolation from the tigl3.surface_factories package.
Gordon surfaces are surfaces that pass through a network of curves. Since we just created this network with 3 profiles and 3 guides, we can pass those to the algorithm.
End of explanation
# start up the gui
display, start_display, add_menu, add_function_to_menu = init_display()
# make tessellation more accurate
display.Context.SetDeviationCoefficient(0.00007)
# draw the curve
display.DisplayShape(profile_1)
display.DisplayShape(profile_2)
display.DisplayShape(profile_3)
display.DisplayShape(te_up)
display.DisplayShape(le)
display.DisplayShape(te_lo)
display.DisplayShape(surface)
# match content to screen and start the event loop
display.FitAll()
start_display()
Explanation: Visualize the result
Now, let's draw our wing. How does it look? What can be improved?
Note: a separate window with the 3D Viewer is opening!
End of explanation |
14,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classify handwritten digits with Keras (MXNet backend)
Data from
Step1: <a id="01">1. Download the MNIST dataset from Internet </a>
I've made the dataset into a zipped tar file. You'll have to download it now.
Step2: 10 folders of images will be extracted from the downloaded tar file.
<a id="02">2. Preprocessing the dataset</a>
Step3: How many digit classes & how many figures belong to each of the classes?
Step4: Split the image paths into train($70\%$), val($15\%$), test($15\%$)
Step5: Load images into RAM
Step6: Remark
Step7: <a id="03">3. Softmax Regression</a>
Step8: Onehot-encoding the labels
Step9: Construct the model
Step10: More details about the constructed model
Step11: Train the model
Step12: See how the accuracy climbs during training
Step13: Now, you'll probably want to evaluate or save the trained model.
Step14: Save model architecture & weights
Step15: Load the saved model architecture & weights
Step16: Output the classification report (see if the trained model works well on the test data)
Step17: <a id="04">4. A small Convolutional Neural Network</a>
Reshape the tensors (this step is necessary, because the CNN model wants the input tensor to be 4D)
Step18: Create the model
Step19: Train the model
Step20: See how the accuracy climbs during training
Step21: Output the classification report (see if the trained model works well on the test data) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
import pandas as pd
import sklearn
import os
import requests
from tqdm._tqdm_notebook import tqdm_notebook
import tarfile
Explanation: Classify handwritten digits with Keras (MXNet backend)
Data from: the MNIST dataset
Download the MNIST dataset from Internet
Preprocessing the dataset
Softmax Regression
A small Convolutional Neural Network
End of explanation
def download_file(url,file):
# Streaming, so we can iterate over the response.
r = requests.get(url, stream=True)
# Total size in bytes.
total_size = int(r.headers.get('content-length', 0));
block_size = 1024
wrote = 0
with open(file, 'wb') as f:
for data in tqdm_notebook(r.iter_content(block_size), total=np.ceil(total_size//block_size) , unit='KB', unit_scale=True):
wrote = wrote + len(data)
f.write(data)
if total_size != 0 and wrote != total_size:
print("ERROR, something went wrong")
url = "https://github.com/chi-hung/PythonTutorial/raw/master/datasets/mnist.tar.gz"
file = "mnist.tar.gz"
print('Retrieving the MNIST dataset...')
download_file(url,file)
print('Extracting the MNIST dataset...')
tar = tarfile.open(file)
tar.extractall()
tar.close()
print('Completed fetching the MNIST dataset.')
Explanation: <a id="01">1. Download the MNIST dataset from Internet </a>
I've made the dataset into a zipped tar file. You'll have to download it now.
End of explanation
def filePathsGen(rootPath):
paths=[]
dirs=[]
for dirPath,dirNames,fileNames in os.walk(rootPath):
for fileName in fileNames:
fullPath=os.path.join(dirPath,fileName)
paths.append((int(dirPath[len(rootPath):]), fullPath))  # class label = name of the digit sub-folder
dirs.append(dirNames)
return dirs,paths
dirs,paths=filePathsGen('mnist/') # load the image paths
dfPath=pd.DataFrame(paths,columns=['class','path']) # save image paths as a Pandas DataFrame
dfPath.head(5) # see the first 5 paths of the DataFrame
Explanation: 10 folders of images will be extracted from the downloaded tar file.
<a id="02">2. Preprocessing the dataset</a>
End of explanation
dfCountPerClass=dfPath.groupby('class').count()
dfCountPerClass.rename(columns={'path':'amount of figures'},inplace=True)
dfCountPerClass.plot(kind='bar',rot=0)
Explanation: How many digit classes & how many figures belong to each of the classes?
End of explanation
train=dfPath.sample(frac=0.7) # sample 70% data to be the train dataset
test=dfPath.drop(train.index) # the rest 30% are now the test dataset
# take 50% of the test dataset as the validation dataset
val=test.sample(frac=1/2)
test=test.drop(val.index)
# let's check the length of the train, val and test dataset.
print('number of all figures = {:10}.'.format(len(dfPath)))
print('number of train figures= {:9}.'.format(len(train)))
print('number of val figures= {:10}.'.format(len(val)))
print('number of test figures= {:9}.'.format(len(test)))
# let's take a look: plotting 3 figures from the train dataset
for j in range(3):
img=plt.imread(train['path'].iloc[j])
plt.imshow(img,cmap="gray")
plt.axis("off")
plt.show()
Explanation: Split the image paths into train($70\%$), val($15\%$), test($15\%$)
End of explanation
def dataLoad(dfPath):
paths=dfPath['path'].values
x=np.zeros((len(paths),28,28),dtype=np.float32 )
for j in range(len(paths)):
x[j,:,:]=plt.imread(paths[j])/255
y=dfPath['class'].values
return x,y
train_x,train_y=dataLoad(train)
val_x,val_y=dataLoad(val)
test_x,test_y=dataLoad(test)
Explanation: Load images into RAM
End of explanation
print("tensor shapes:\n")
print('train:',train_x.shape,train_y.shape)
print('val :',val_x.shape,val_y.shape)
print('test :',test_x.shape,test_y.shape)
Explanation: Remark: loading all images to RAM might take a while.
End of explanation
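For a rough sense of the memory footprint (an illustrative back-of-the-envelope check, not part of the original notebook), each 28x28 float32 image takes 28*28*4 bytes:
# ~70k images * 28 * 28 * 4 bytes is roughly 220 MB, which fits comfortably in RAM
n_images = len(train_x) + len(val_x) + len(test_x)
print("approx. memory: {:.0f} MB".format(n_images * 28 * 28 * 4 / 1e6))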
from keras.models import Sequential
from keras.layers import Dense,Flatten
from keras.optimizers import SGD
Explanation: <a id="03">3. Softmax Regression</a>
End of explanation
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
train_y_onehot = np.float32( enc.fit_transform(train_y.reshape(-1,1)) \
.toarray() )
val_y_onehot = np.float32( enc.fit_transform(val_y.reshape(-1,1)) \
.toarray() )
test_y_onehot = np.float32( enc.fit_transform(test_y.reshape(-1,1)) \
.toarray() )
Explanation: Onehot-encoding the labels:
End of explanation
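To see what the encoder produced, it can help to print one label next to its one-hot vector (a small illustrative check):
# e.g. a label of 3 becomes a length-10 vector with a 1 at index 3
print(train_y[0], train_y_onehot[0])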
model = Sequential()
model.add(Flatten(input_shape=(28,28)))
model.add(Dense(10, activation='softmax') )
sgd=SGD(lr=0.2, momentum=0.0, decay=0.0)
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: Construct the model:
End of explanation
model.summary()
Explanation: More details about the constructed model:
End of explanation
hist=model.fit(train_x, train_y_onehot,
epochs=20, batch_size=128,
validation_data=(val_x,val_y_onehot))
Explanation: Train the model:
End of explanation
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
Explanation: See how the accuracy climbs during training:
End of explanation
# calculate loss & accuracy (evaluated on the test dataset)
score = model.evaluate(test_x, test_y_onehot, batch_size=128)
print("LOSS (evaluated on the test dataset)= {}".format(score[0]))
print("ACCURACY (evaluated on the test dataset)= {}".format(score[1]))
Explanation: Now, you'll probably want to evaluate or save the trained model.
End of explanation
import json
with open('first_try.json', 'w') as jsOut:
json.dump(model.to_json(), jsOut)
model.save_weights('first_try.h5')
Explanation: Save model architecture & weights:
End of explanation
from keras.models import model_from_json
with open('first_try.json', 'r') as jsIn:
model_architecture=json.load(jsIn)
model_new=model_from_json(model_architecture)
model_new.load_weights('first_try.h5')
model_new.summary()
Explanation: Load the saved model architecture & weights:
End of explanation
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation
train_x = np.expand_dims(train_x,axis=1)
val_x = np.expand_dims(val_x,axis=1)
test_x = np.expand_dims(test_x,axis=1)
Explanation: <a id="04">4. A small Convolutional Neural Network</a>
Reshape the tensors (this step is necessary, because the CNN model wants the input tensor to be 4D):
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten,Conv2D, MaxPooling2D
from keras.layers import Activation
from keras.optimizers import SGD
in_shape=(1,28,28)
# ========== BEGIN TO CREATE THE MODEL ==========
model = Sequential()
# feature extraction (2 conv layers)
model.add(Conv2D(32, (3,3),
                 activation='relu',
                 data_format='channels_first',  # input tensors are (channels, rows, cols) = (1, 28, 28)
                 input_shape=in_shape))
model.add(Conv2D(64, (3,3), activation='relu',
                 data_format='channels_first'))
model.add(MaxPooling2D(pool_size=(2, 2),
                       data_format='channels_first'))
model.add(Dropout(0.5))
model.add(Flatten())
# classification (2 dense layers)
model.add(Dense(128, activation='relu')
)
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# ========== COMPLETED THE MODEL CREATION========
# Compile the model before training.
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.1),
              metrics=['accuracy'])
# note: the original compile call also passed context=['gpu(0)'], which is specific to the
# keras-mxnet backend and is not accepted by standard Keras, so it is omitted here
Explanation: Create the model:
End of explanation
%%time
hist=model.fit(train_x, train_y_onehot,
epochs=20,
batch_size=32,
validation_data=(val_x,val_y_onehot),
)
Explanation: Train the model:
End of explanation
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
Explanation: See how the accuracy climbs during training:
End of explanation
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation |
14,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: ★ Fundamentals ★
0.1 Horner’s method or Nested multiplication
Step2: Example
Step3: Example
$P(x) = x^6 - 2x^5 + 3x^4 - 4x^3 + 5x^2 - 6x + 7$ at $x = 2$
Step4: 0.1 Computer Problems
Evaluate $P(x) = 1 + x + ... + x^{50}$ at x = 1.00001. <br/>
Find the error of the computation by comparing with the equivalent expression $Q(x) = (x^{51} - 1) / (x - 1)$.
Step5: Evaluate $P(x) = 1 - x + x^2 - x^3 + ... + x^{98} - x^{99}$ at x = 1.00001. <br/>
Find a simpler, equivalent expression, and use it to estimate the error of the nested multiplication.
Step6: 0.4 Loss of significance
0.4 Computer Problems
Calculate the expressions that follow in double precision arithmetic $x = 10^{-1},\cdots,10^{-14}$. Then, using an alternative form of the expression that doesn’t suffer from subtracting nearly equal numbers, repeat the calculation and make a table of results. Report the number of correct digits in the original expression for each $x$. <br/>
$(a)\frac{1 - secx}{tan^{2}x}$
$(b)\frac{1 - (1 - x)^{3}}{x}$
Step7: Alternative form
Step8: Alternative form
Step9: Find the smallest value of $p$ for which the expression calculated in double precision arithmetic at $x = 10^{-p}$ has no correct significant digits. <br/>
$(a)\frac{tanx - x}{x^3}$
$(b)\frac{e^{x} + cos{x} - sin{x} - 2}{x^3}$
Step10: Evaluate the quantity $ a + \sqrt{a^2 + b^2} $ to four correct significant digits, where $ a = -12345678987654321 $ and $ b = 123 $.
$ a + \sqrt{a^2 + b^2} \Rightarrow \frac{(a - \sqrt{a^2 + b^2})(a + \sqrt{a^2 + b^2})}{(a - \sqrt{a^2 + b^2})} \Rightarrow \frac{a^2 - (a^2 + b^2)}{(a - \sqrt{a^2 + b^2})} \Rightarrow \frac{-b^2}{(a - \sqrt{a^2 + b^2})}$
Step11: Evaluate the quantity $ \sqrt{c^2 + d} - c $ to four correct significant digits, where $c = 246886422468$ and $d = 13579$.
$ \sqrt{c^2 + d} - c \Rightarrow \frac{(\sqrt{c^2 + d} - c)(\sqrt{c^2 + d} + c)}{(\sqrt{c^2 + d} + c)} \Rightarrow \frac{(c^2 + d) - c^2}{\sqrt{c^2 + d} + c} \Rightarrow \frac{d}{\sqrt{c^2 + d} + c}$
Step12: Consider a right triangle whose legs are of length 3344556600 and 1.2222222. How much longer is the hypotenuse than the longer leg ? Give your answer with at least four correct digits.
$a = 3344556600$, $b = 1.2222222$, the hypotenuse $c$ is that $c^2 = a^2 + b^2$ <br/>
And $c - a \Rightarrow \sqrt{a^2 + b^2} - a \Rightarrow \frac{(\sqrt{a^2 + b^2} - a)(\sqrt{a^2 + b^2} + a)}{(\sqrt{a^2 + b^2} + a)} \Rightarrow \frac{b^2}{\sqrt{a^2 + b^2} + a}$ | Python Code:
# Import modules
import traceback
import math
import numpy as np
import unittest
def nest(degree, coefficients, x = 0, base_points = None) -> float:
    '''Evaluates polynomial from nested form using Horner's Method
Examples:
P(x) = 3 * x^2 + 5 * x − 1 and evaluate P(x = 1)
Use nest(2, [3, 5, -1], 1) to get the value of above polynomial
Arguments:
degree (int): degree of polynomial
coefficients (list): list of coefficients
x (float): x-coordinate x at which to evaluate (Default : 0)
base_points (list): list of base points (Default : None)
Return:
value y of polynomial at x
Raises:
ValueError:
coefficients is null
degree is negative
degree is not equal len(base_points)
            len(coefficients) is not equal to len(base_points) + 1
    '''
try:
if base_points is None :
base_points = np.zeros(degree).tolist()
if(degree < 0):
raise ValueError('degree is negative')
if(degree != len(base_points)):
raise ValueError('degree is not consistent with base points')
if coefficients is None :
raise ValueError('coefficients is null')
if(degree + 1 != len(coefficients)):
raise ValueError
if(len(coefficients) != len(base_points) + 1):
raise ValueError
# Check whether coefficients is type of ndarray
if(type(coefficients).__module__ == np.__name__):
coefficients = coefficients.tolist()
# Check whether base_points is type of ndarray
if(type(base_points).__module__ == np.__name__):
base_points = base_points.tolist()
y = coefficients.pop(0)
for i in range(degree):
y = y * (x - base_points[i]) + coefficients[i]
return y
except ValueError as e:
print('Exception : ValueError {0}'.format(str(e)))
        traceback.print_exc()  # print_exception() needs explicit arguments; print_exc() logs the current exception
# unittest nest(...)
class nest_unittest(unittest.TestCase):
# P(x = 0) = 3
def testcase1(self):
self.assertEqual(nest(0, [3], 0), 3)
# P(x = 2.1) = x
def testcase2(self):
self.assertEqual(nest(1, [1, 0], 2.1), 2.1)
# P(x = 4) = 3 * x^2 + 5 * x + 7
def testcase3(self):
self.assertEqual(nest(2, [3, 5, 7], 4), 75)
unittest.main(argv=['ignored', '--verbose'], exit=False);
Explanation: ★ Fundamentals ★
0.1 Horner’s method or Nested multiplication
End of explanation
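As a quick illustration of the idea (not part of the original notebook), Horner's method rewrites P(x) = 2x^4 + 3x^3 - 3x^2 + 5x - 1 as (((2x + 3)x - 3)x + 5)x - 1, so only one multiplication and one addition are needed per coefficient:
x = 0.5
y = 2
for c in [3, -3, 5, -1]:
    y = y * x + c   # one multiply and one add per step
print(y)   # 1.25, which matches nest(4, [2, 3, -3, 5, -1], 0.5) below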
nest(4, [2, 3, -3, 5, -1], 0.5)
Explanation: Example : $P(x) = 2x^{4} + 3x^{3} - 3x^{2} + 5x - 1$
Evaluate $P(0.5)$
End of explanation
nest(6, [1, -2, 3, -4, 5, -6, 7], 2)
Explanation: Example
$P(x) = x^6 - 2x^5 + 3x^4 - 4x^3 + 5x^2 - 6x + 7$ at $x = 2$
End of explanation
x = 1.00001
p = nest(50, np.ones(51, dtype=np.int), x)
q = (x ** 51 - 1) / (x - 1)
print('P(1.00001) = {0}'.format(p))
print('Q(1.00001) = {0}'.format(q))
print('error = {0:017.14f}'.format(np.abs(p - q)))
Explanation: 0.1 Computer Problems
Evaluate $P(x) = 1 + x + ... + x^{50}$ at x = 1.00001. <br/>
Find the error of the computation by comparing with the equivalent expression $Q(x) = (x^{51} - 1) / (x - 1)$.
End of explanation
x = 1.00001
p = nest(99, [-1 if i % 2 == 0 else 1 for i in range(100)], x)
q = (1 - x ** 100) / (1 + x) # q(x) = (1 - x^100) / (1 + x) = p(x)
print('P(1.00001) = {0}'.format(p))
print('Q(1.00001) = {0}'.format(q))
print('error = {0:22.19f}'.format(np.abs(p - q)))
Explanation: Evaluate $P(x) = 1 - x + x^2 - x^3 + ... + x^{98} - x^{99}$ at x = 1.00001. <br/>
Find a simpler, equivalent expression, and use it to estimate the error of the nested multiplication.
End of explanation
Px_a = lambda x : (1 - np.cos(x) ** -1) / np.tan(x) ** 2
for i in range(1, 15):
x = 10 ** (-i)
print( 'P(x = {0:>.14f}) = {1:>.14f}'.format(x, Px_a(x)) )
Explanation: 0.4 Loss of significance
0.4 Computer Problems
Calculate the expressions that follow in double precision arithmetic $x = 10^{-1},\cdots,10^{-14}$. Then, using an alternative form of the expression that doesn’t suffer from subtracting nearly equal numbers, repeat the calculation and make a table of results. Report the number of correct digits in the original expression for each $x$. <br/>
$(a)\frac{1 - secx}{tan^{2}x}$
$(b)\frac{1 - (1 - x)^{3}}{x}$
End of explanation
Px_a = lambda x : -1 / (1 + np.cos(x) ** -1)
for i in range(1, 15):
x = 10 ** (-i)
print( 'P(x = {0:>.14f}) = {1:>.14f}'.format(x, Px_a(x)) )
Px_b = lambda x : (1 - (1 - x) ** 3) / x
for i in range(1, 15):
x = 10 ** (-i)
print( 'P(x = {0:>.14f}) = {1:>.14f}'.format(x, Px_b(x)) )
Explanation: Alternative form :
$\frac{1 - secx}{tan^{2}x} \Rightarrow \frac{cosx \cdot (1 - \frac{1}{cosx})}{cosx \cdot (\frac{sinx}{cosx})^2} \Rightarrow \frac{cosx \cdot (1 - \frac{1}{cosx})}{\frac{sin^2{x}}{cosx}} \Rightarrow \frac{cos^{2}x \cdot (1 - \frac{1}{cosx})}{sin^2{x}} \Rightarrow \frac{-\cos{x} \cdot (1 - cosx)}{1 - cos^{2}x} \Rightarrow \frac{-cosx}{(1 + cosx)} \Rightarrow \frac{-1}{(1 + secx)}$
End of explanation
Px_b = lambda x : x ** 2 - 3 * x + 3
for i in range(1, 15):
x = 10 ** (-i)
print( 'P(x = {0:>.14f}) = {1:>.14f}'.format(x, Px_b(x)) )
Explanation: Alternative form :
$\frac{1 - (1 - x)^{3} }{x} \Rightarrow \frac{1 - (1 - 3x + 3x^2 - x^3)}{x} \Rightarrow x^2 - 3x + 3$
End of explanation
def Px2(x, Px):
while True:
yield x, Px(x)
x /= 10
Px2a_generator = Px2(0.1, lambda x : (np.tan(x) - x) / np.power(x, 3))
for i in range(10):
print( 'P(x = {0:>e}) = {1:>.14f}'.format(*next(Px2a_generator)))
print( 'The smallest value of p is 8' )
print()
Px2b_generator = Px2(0.1, lambda x : (np.exp(x) + np.cos(x) - np.sin(x) - 2) / np.power(x, 3))
for i in range(10):
print( 'P(x = {0:>e}) = {1:>.14f}'.format(*next(Px2b_generator)))
print( 'The smallest value of p is 6' )
Explanation: Find the smallest value of $p$ for which the expression calculated in double precision arithmetic at $x = 10^{-p}$ has no correct significant digits. <br/>
$(a)\frac{tanx - x}{x^3}$
$(b)\frac{e^{x} + cos{x} - sin{x} - 2}{x^3}$
End of explanation
Px3 = lambda a, b : - np.power(b, 2) / (a - np.sqrt(a * a + b * b))
print('{0:e}'.format(Px3(-12345678987654321.0, 123.0)))
Explanation: Evaluate the quantity $ a + \sqrt{a^2 + b^2} $ to four correct significant digits, where $ a = -12345678987654321 $ and $ b = 123 $.
$ a + \sqrt{a^2 + b^2} \Rightarrow \frac{(a - \sqrt{a^2 + b^2})(a + \sqrt{a^2 + b^2})}{(a - \sqrt{a^2 + b^2})} \Rightarrow \frac{a^2 - (a^2 + b^2)}{(a - \sqrt{a^2 + b^2})} \Rightarrow \frac{-b^2}{(a - \sqrt{a^2 + b^2})}$
End of explanation
Px4 = lambda c, d : d / (np.sqrt(c * c + d) + c)
print('{0:e}'.format(Px4(246886422468.0, 13579.0)))
Explanation: Evaluate the quantity $ \sqrt{c^2 + d} - c $ to four correct significant digits, where $c = 246886422468$ and $d = 13579$.
$ \sqrt{c^2 + d} - c \Rightarrow \frac{(\sqrt{c^2 + d} - c)(\sqrt{c^2 + d} + c)}{(\sqrt{c^2 + d} + c)} \Rightarrow \frac{(c^2 + d) - c^2}{\sqrt{c^2 + d} + c} \Rightarrow \frac{d}{\sqrt{c^2 + d} + c}$
End of explanation
diff = lambda a, b : np.power(b, 2) / (np.sqrt(a * a + b * b) + a)
print('{0:e}'.format(diff(3344556600.0, 1.2222222)))
Explanation: Consider a right triangle whose legs are of length 3344556600 and 1.2222222. How much longer is the hypotenuse than the longer leg ? Give your answer with at least four correct digits.
$a = 3344556600$, $b = 1.2222222$, the hypotenuse $c$ is that $c^2 = a^2 + b^2$ <br/>
And $c - a \Rightarrow \sqrt{a^2 + b^2} - a \Rightarrow \frac{(\sqrt{a^2 + b^2} - a)(\sqrt{a^2 + b^2} + a)}{(\sqrt{a^2 + b^2} + a)} \Rightarrow \frac{b^2}{\sqrt{a^2 + b^2} + a}$
End of explanation |
14,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GlobalAveragePooling3D
[pooling.GlobalAveragePooling3D.0] input 6x6x3x4, data_format='channels_last'
Step1: [pooling.GlobalAveragePooling3D.1] input 3x6x6x3, data_format='channels_first'
Step2: [pooling.GlobalAveragePooling3D.2] input 5x3x2x1, data_format='channels_last'
Step3: export for Keras.js tests | Python Code:
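These cells rely on a preamble that is not shown in this excerpt; a sketch of the assumed setup is given below (the exact format_decimal helper used by the Keras.js test suite may differ in its rounding details):
import json
import numpy as np
from keras.models import Model
from keras.layers import Input, GlobalAveragePooling3D

def format_decimal(values, places=6):
    # round exported numbers so the generated JSON stays compact
    return [round(float(v), places) for v in values]

DATA = {}  # collects the input/expected pairs exported at the end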
data_in_shape = (6, 6, 3, 4)
L = GlobalAveragePooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: GlobalAveragePooling3D
[pooling.GlobalAveragePooling3D.0] input 6x6x3x4, data_format='channels_last'
End of explanation
data_in_shape = (3, 6, 6, 3)
L = GlobalAveragePooling3D(data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.GlobalAveragePooling3D.1] input 3x6x6x3, data_format='channels_first'
End of explanation
data_in_shape = (5, 3, 2, 1)
L = GlobalAveragePooling3D(data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.GlobalAveragePooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.GlobalAveragePooling3D.2] input 5x3x2x1, data_format='channels_last'
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
14,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Watershed example
The following example presents the basic functionality of the wmf.Stream and wmf.Basin tools
the topics covered include
Step1: This is how the flow-direction and DEM maps are read for delineating basins and streams
Step2: Stream tracing
Streams are important for determining where flow accumulates and therefore where the basin should be delineated. They act more as a guide than as a final result, and they can be used directly to delineate basins through their structure property, whose first two entries hold the X and Y coordinates
Tracing a stream
Step3: The profile of a stream can be used as a reference for finding delineation points
Step4: Basin delineation
Delineating basins with the Basin object
The basin is delineated from a pair of coordinates, a DEM and a DIR map. Optional parameters such as the name or the threshold used to generate streams can be added. This kind of basin cannot be simulated; for that, the SimuBasin tool must be used
Step5: The last basin has a cell count of 1, which means nothing was delineated and no other cell drains through this one; it is therefore not a basin and must not be used for any calculation. The next line deletes it
Step6: Water balance over basins
By default, the Basin object provides functions for computing geomorphological properties and for estimating discharge with the long-term water-balance method. Its functionality is presented below.
Step7: The figure shows the mean discharge estimated for every element of the basin, including cells where no stream network is assumed to be present.
When the mean discharge is computed, the evapotranspiration over the basin is computed as well; it can be inspected in the variable cuenca.CellETR
Step8: The previous figure was saved to disk via the argument ruta = 'Caldas_ETR.png', in this case inside the current working directory; changing the path changes where the figure is saved.
The module can estimate maximum and minimum discharges through regionalization of extreme flows using the equation
Step9: Each entry in Qmax and Qmin corresponds to a return period Tr [2.33, 5, 10, 25, 50, 100]; these can be changed by passing a different Tr property when the function is invoked.
Step10: Saving to shapefile
Step11: Geomorphology
Here the available geomorphology functions are briefly explained | Python Code:
# Watershed Modelling Framework (WMF) package for working with basins.
from wmf import wmf
Explanation: Watershed example
The following example presents the basic functionality of the wmf.Stream and wmf.Basin tools.
The topics covered include:
Stream tracing.
Stream profiles.
Basin delineation.
Water balances for discharge estimation.
Geomorphological analysis of basins.
End of explanation
# Read the DEM (the flow-direction map is read below)
DEM = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dem_corr.tif',isDEMorDIR=True, dxp=30.0)
DIR = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dirAMVA.tif',isDEMorDIR=True, dxp= 30.0)
wmf.cu.nodata=-9999.0; wmf.cu.dxp=30.0
DIR[DIR<=0]=wmf.cu.nodata.astype(int)
DIR=wmf.cu.dir_reclass(DIR,wmf.cu.ncols,wmf.cu.nrows)
Explanation: This is how the flow-direction and DEM maps are read for delineating basins and streams
End of explanation
st = wmf.Stream(-75.618,6.00,DEM=DEM,DIR=DIR,name ='Rio Medellin')
st.structure
st.Plot_Profile()
Explanation: Stream tracing
Streams are important for determining where flow accumulates and therefore where the basin should be delineated. They act more as a guide than as a final result, and they can be used directly to delineate basins through their structure property, whose first two entries hold the X and Y coordinates.
Tracing a stream
End of explanation
# With the search below we locate the coordinates that satisfy the condition of lying at
# a distance from the outlet between 10000 and 10100 metres.
np.where((st.structure[3]>10000) & (st.structure[3]<10100))
Explanation: The profile of a stream can be used as a reference for finding delineation points
End of explanation
# The coordinates at entry 289 are:
print st.structure[0,289]
print st.structure[1,289]
# The basin can be delineated using the coordinates implicitly (as in this example), or
# explicitly, as is done in the second line of code below.
cuenca = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# In this second line we delineate a basin from coordinates that are not exact and may not
# lie on the stream; this is corrected by passing the stream to the tracer through the stream
# argument, which takes the previously obtained stream object as input.
cuenca2 = wmf.Basin(-75.6422,6.082,DEM,DIR,name='ejemplo',stream=st)
# Error case: in this case the stream argument is not passed, so the basin is delineated at the
# exact coordinates given, which will probably produce an error.
cuenca3 = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# Print the number of cells of each delineated basin, to check that there is indeed a
# difference between them caused by the difference in coordinates.
print cuenca.ncells
print cuenca2.ncells
print cuenca3.ncells
Explanation: Basin delineation
Delineating basins with the Basin object
The basin is delineated from a pair of coordinates, a DEM and a DIR map. Optional parameters such as the name or the threshold used to generate streams can be added. This kind of basin cannot be simulated; for that, the SimuBasin tool must be used.
End of explanation
del(cuenca3)
Explanation: The last basin has a cell count of 1, which means nothing was delineated and no other cell drains through this one; it is therefore not a basin and must not be used for any calculation. The next line deletes it:
End of explanation
# Water balance for a basin assuming an annual precipitation of 2100 mm/year over the whole basin
cuenca.GetQ_Balance(2100)
# The long-term balance variable is computed for every cell of the basin and stored in cuenca.CellQmed
cuenca.Plot_basin(cuenca.CellQmed)
Explanation: Water balance over basins
By default, the Basin object provides functions for computing geomorphological properties and for estimating discharge with the long-term water-balance method. Its functionality is presented below.
End of explanation
# Plot of the evapotranspiration over the Caldas basin
cuenca.Plot_basin(cuenca.CellETR, extra_lat= 0.001, extra_long= 0.001, lines_spaces= 0.02,
ruta = 'Caldas_ETR.png')
Explanation: The figure shows the mean discharge estimated for every element of the basin, including cells where no stream network is assumed to be present.
When the mean discharge is computed, the evapotranspiration over the basin is computed as well; it can be inspected in the variable cuenca.CellETR
End of explanation
# Estimate maximum discharges; Gumbel is used by default, log-normal is also available
Qmax = cuenca.GetQ_Max(cuenca.CellQmed)
Qmax2 = cuenca.GetQ_Max(cuenca.CellQmed, Tr= [3, 15])
# Estimate minimum discharges; Gumbel is used by default, log-normal is also available
Qmin = cuenca.GetQ_Min(cuenca.CellQmed)
Qmin[Qmin<0]=0
Explanation: The previous figure was saved to disk via the argument ruta = 'Caldas_ETR.png', in this case inside the current working directory; changing the path changes where the figure is saved.
The module can estimate maximum and minimum discharges through regionalization of extreme flows using the equations:
$Q_{max}(T_r) = \widehat{Q}_{max} + K_{dist}(T_r) \sigma_{max}$
$Q_{min}(T_r) = \widehat{Q}_{min} - K_{dist}(T_r) \sigma_{min}$
End of explanation
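For illustration only (this helper is not part of the wmf package), the Gumbel frequency factor K_dist(Tr) commonly used in this kind of regionalization can be sketched as:
import numpy as np
def gumbel_K(Tr):
    # standard Gumbel frequency factor (Chow's frequency-factor form)
    return -(np.sqrt(6.0) / np.pi) * (0.5772 + np.log(np.log(Tr / (Tr - 1.0))))
# an annual-maximum series with mean Qm and standard deviation Qs would then give
# Qmax(Tr=100) ~ Qm + gumbel_K(100.0) * Qs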
# Plot of the maximum discharge for a return period of 2.33 years
cuenca.Plot_basin(Qmax[0])
# Plot of the maximum discharge for a return period of 100 years
cuenca.Plot_basin(Qmax[5])
Explanation: Each entry in Qmax and Qmin corresponds to a return period Tr [2.33, 5, 10, 25, 50, 100]; these can be changed by passing a different Tr property when the function is invoked.
End of explanation
cuenca.Save_Basin2Map('Cuenca.kml',DriverFormat='kml')
cuenca.Save_Net2Map('Red.kml',DriverFormat='kml',qmed=cuenca.CellQmed)
Explanation: Saving to shapefile:
Both the basin and the stream network can be saved as shapefiles so they can be viewed in any GIS viewer; they can also be saved in other formats such as kml.
End of explanation
# Compute geomorphology along the channels
cuenca.GetGeo_Cell_Basics()
# Generic geomorphology report; results are stored in cuenca.GeoParameters and cuenca.Tc
cuenca.GetGeo_Parameters()
cuenca.GeoParameters
# Concentration times
cuenca.Tc
cuenca.Plot_Tc()
cuenca.GetGeo_IsoChrones(1.34)
cuenca.Plot_basin(cuenca.CellTravelTime)
cuenca.Plot_Travell_Hist()
cuenca.GetGeo_Ppal_Hipsometric()
cuenca.PlotPpalStream()
cuenca.Plot_Hipsometric()
Explanation: Geomorphology
This section briefly explains the available geomorphology functions
End of explanation |
14,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The OpenFermion Developers
Step2: FQE vs OpenFermion vs Cirq
Step3: The first example we will perform is diagonal Coulomb evolution on the Hartree-Fock state. The diagonal Coulomb operator is defined as
\begin{align}
V = \sum_{\alpha, \beta \in {\uparrow, \downarrow}}\sum_{p,q} V_{pq,pq}n_{p,\alpha}n_{q,\beta}
\end{align}
The number of free parpameters are $\mathcal{O}(N^{2})$ where $N$ is the rank of the spatial basis. The DiagonalCoulomb Hamiltonian takes either a generic 4-index tensor or the $N \times N$ matrix defining $V$. If the 4-index tensor is given the $N \times N$ matrix is constructed along with the diagonal correction. If the goal is to just evolve under $V$ it is recommended the user input the $N \times N$ matrix directly.
All the terms in $V$ commute and thus we can evolve under $V$ exactly by counting the accumulated phase on each bitstring.
To start out let's define a Hartree-Fock wavefunction for 4-orbitals and 2-electrons $S_{z} =0$.
Step4: Now we can define a random 2-electron operator $V$. To define $V$ we need a $4 \times 4$ matrix. We will generate this matrix by making a full random two-electron integral matrix and then just take the diagonal elements
Step5: Evolution under $V$ can be computed by looking at each bitstring, seeing if $n_{p\alpha}n_{q\beta}$ is non-zero and then phasing that string by $V_{pq}$. For the Hartree-Fock state we can easily calculate this phase accumulation. The alpha and beta bitstrings are "0001" and "0001".
Step6: We can now try this out for more than 2 electrons. Let's reinitialize a wavefunction on 6-orbitals with 4-electrons $S_{z} = 0$ to a random state.
Step7: We need to build our Diagoanl Coulomb operator For this bigger system.
Step8: Now we can convert our wavefunction to a cirq wavefunction, evolve under the diagonal_coulomb operator we constructed and then compare the outputs.
Step9: Finally, we can compare against evolving each term of $V$ individually. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The OpenFermion Developers
End of explanation
try:
import fqe
except ImportError:
!pip install fqe --quiet
from itertools import product
import fqe
from fqe.hamiltonians.diagonal_coulomb import DiagonalCoulomb
import numpy as np
import openfermion as of
from scipy.linalg import expm
#Utility function
def uncompress_tei(tei_mat, notation='chemistry'):
    """uncompress chemist notation integrals
tei_tensor[i, k, j, l] = tei_mat[(i, j), (k, l)]
[1, 1, 2, 2] = [1, 1, 2, 2] = [1, 1, 2, 2] = [1, 1, 2, 2]
[i, j, k, l] = [k, l, i, j] = [j, i, l, k]* = [l, k, j, i]*
For real we also have swap of i <> j and k <> l
[j, i, k, l] = [l, k, i, j] = [i, j, l, k] = [k, l, j, i]
tei_mat[(i, j), (k, l)] = int dr1 int dr2 phi_i(dr1) phi_j(dr1) O(r12) phi_k(dr1) phi_l(dr1)
Physics notation is the notation that is used in FQE.
Args:
tei_mat: compressed two electron integral matrix
Returns:
        uncompressed 4-electron integral tensor. No antisymmetry.
    """
if notation not in ['chemistry', 'physics']:
return ValueError("notation can be [chemistry, physics]")
norbs = int(0.5 * (np.sqrt(8 * tei_mat.shape[0] + 1) - 1))
basis = {}
cnt = 0
for i, j in product(range(norbs), repeat=2):
if i >= j:
basis[(i, j)] = cnt
cnt += 1
tei_tensor = np.zeros((norbs, norbs, norbs, norbs))
for i, j, k, l in product(range(norbs), repeat=4):
if i >= j and k >= l:
tei_tensor[i, j, k, l] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[k, l, i, j] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[j, i, l, k] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[l, k, j, i] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[j, i, k, l] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[l, k, i, j] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[i, j, l, k] = tei_mat[basis[(i, j)], basis[(k, l)]]
tei_tensor[k, l, j, i] = tei_mat[basis[(i, j)], basis[(k, l)]]
if notation == 'chemistry':
return tei_tensor
elif notation == 'physics':
return np.asarray(tei_tensor.transpose(0, 2, 1, 3), order='C')
return tei_tensor
Explanation: FQE vs OpenFermion vs Cirq: Diagonal Coulomb Operators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/fqe/tutorials/diagonal_coulomb_evolution"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/diagonal_coulomb_evolution.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/diagonal_coulomb_evolution.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/fqe/tutorials/diagonal_coulomb_evolution.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Special routines are available for evolving under a diagonal Coulomb operator. This notebook describes how to use these built in routines and how they work.
End of explanation
norbs = 4
tedim = norbs * (norbs + 1) // 2
if (norbs // 2) % 2 == 0:
n_elec = norbs // 2
else:
n_elec = (norbs // 2) + 1
sz = 0
fqe_wfn = fqe.Wavefunction([[n_elec, sz, norbs]])
fci_data = fqe_wfn.sector((n_elec, sz))
fci_graph = fci_data.get_fcigraph()
hf_wf = np.zeros((fci_data.lena(), fci_data.lenb()), dtype=np.complex128)
hf_wf[0, 0] = 1 # right most bit is zero orbital.
fqe_wfn.set_wfn(strategy='from_data',
raw_data={(n_elec, sz): hf_wf})
fqe_wfn.print_wfn()
Explanation: The first example we will perform is diagonal Coulomb evolution on the Hartree-Fock state. The diagonal Coulomb operator is defined as
\begin{align}
V = \sum_{\alpha, \beta \in {\uparrow, \downarrow}}\sum_{p,q} V_{pq,pq}n_{p,\alpha}n_{q,\beta}
\end{align}
The number of free parameters is $\mathcal{O}(N^{2})$ where $N$ is the rank of the spatial basis. The DiagonalCoulomb Hamiltonian takes either a generic 4-index tensor or the $N \times N$ matrix defining $V$. If the 4-index tensor is given the $N \times N$ matrix is constructed along with the diagonal correction. If the goal is to just evolve under $V$ it is recommended the user input the $N \times N$ matrix directly.
All the terms in $V$ commute and thus we can evolve under $V$ exactly by counting the accumulated phase on each bitstring.
To start out let's define a Hartree-Fock wavefunction for 4-orbitals and 2-electrons $S_{z} =0$.
End of explanation
tei_compressed = np.random.randn(tedim**2).reshape((tedim, tedim))
tei_compressed = 0.5 * (tei_compressed + tei_compressed.T)
tei_tensor = uncompress_tei(tei_compressed, notation='physics')
diagonal_coulomb = of.FermionOperator()
diagonal_coulomb_mat = np.zeros((norbs, norbs))
for i, j in product(range(norbs), repeat=2):
diagonal_coulomb_mat[i, j] = tei_tensor[i, j, i, j]
for sigma, tau in product(range(2), repeat=2):
diagonal_coulomb += of.FermionOperator(
((2 * i + sigma, 1), (2 * i + sigma, 0), (2 * j + tau, 1),
(2 * j + tau, 0)), coefficient=diagonal_coulomb_mat[i, j])
dc_ham = DiagonalCoulomb(diagonal_coulomb_mat)
Explanation: Now we can define a random 2-electron operator $V$. To define $V$ we need a $4 \times 4$ matrix. We will generate this matrix by making a full random two-electron integral matrix and then just take the diagonal elements
End of explanation
alpha_occs = [list(range(fci_graph.nalpha()))]
beta_occs = [list(range(fci_graph.nbeta()))]
occs = alpha_occs[0] + beta_occs[0]
diag_ele = 0.
for ind in occs:
for jnd in occs:
diag_ele += diagonal_coulomb_mat[ind, jnd]
evolved_phase = np.exp(-1j * diag_ele)
print(evolved_phase)
# evolve FQE wavefunction
evolved_hf_wfn = fqe_wfn.time_evolve(1, dc_ham)
# check they the accumulated phase is equivalent!
assert np.isclose(evolved_hf_wfn.get_coeff((n_elec, sz))[0, 0], evolved_phase)
Explanation: Evolution under $V$ can be computed by looking at each bitstring, seeing if $n_{p\alpha}n_{q\beta}$ is non-zero and then phasing that string by $V_{pq}$. For the Hartree-Fock state we can easily calculate this phase accumulation. The alpha and beta bitstrings are "0001" and "0001".
End of explanation
norbs = 6
tedim = norbs * (norbs + 1) // 2
if (norbs // 2) % 2 == 0:
n_elec = norbs // 2
else:
n_elec = (norbs // 2) + 1
sz = 0
fqe_wfn = fqe.Wavefunction([[n_elec, sz, norbs]])
fqe_wfn.set_wfn(strategy='random')
inital_coeffs = fqe_wfn.get_coeff((n_elec, sz)).copy()
print("Random initial wavefunction")
fqe_wfn.print_wfn()
Explanation: We can now try this out for more than 2 electrons. Let's reinitialize a wavefunction on 6-orbitals with 4-electrons $S_{z} = 0$ to a random state.
End of explanation
tei_compressed = np.random.randn(tedim**2).reshape((tedim, tedim))
tei_compressed = 0.5 * (tei_compressed + tei_compressed.T)
tei_tensor = uncompress_tei(tei_compressed, notation='physics')
diagonal_coulomb = of.FermionOperator()
diagonal_coulomb_mat = np.zeros((norbs, norbs))
for i, j in product(range(norbs), repeat=2):
diagonal_coulomb_mat[i, j] = tei_tensor[i, j, i, j]
for sigma, tau in product(range(2), repeat=2):
diagonal_coulomb += of.FermionOperator(
((2 * i + sigma, 1), (2 * i + sigma, 0), (2 * j + tau, 1),
(2 * j + tau, 0)), coefficient=diagonal_coulomb_mat[i, j])
dc_ham = DiagonalCoulomb(diagonal_coulomb_mat)
Explanation: We need to build our Diagonal Coulomb operator for this bigger system.
End of explanation
cirq_wfn = fqe.to_cirq(fqe_wfn).reshape((-1, 1))
final_cirq_wfn = expm(-1j * of.get_sparse_operator(diagonal_coulomb)) @ cirq_wfn
# recover a fqe wavefunction
from_cirq_wfn = fqe.from_cirq(final_cirq_wfn.flatten(), 1.0E-8)
fqe_wfn = fqe_wfn.time_evolve(1, dc_ham)
print("Evolved wavefunction")
fqe_wfn.print_wfn()
print("From Cirq Evolution")
from_cirq_wfn.print_wfn()
assert np.allclose(from_cirq_wfn.get_coeff((n_elec, sz)),
fqe_wfn.get_coeff((n_elec, sz)))
print("Wavefunctions are equivalent")
Explanation: Now we can convert our wavefunction to a cirq wavefunction, evolve under the diagonal_coulomb operator we constructed and then compare the outputs.
End of explanation
fqe_wfn = fqe.Wavefunction([[n_elec, sz, norbs]])
fqe_wfn.set_wfn(strategy='from_data',
raw_data={(n_elec, sz): inital_coeffs})
for term, coeff in diagonal_coulomb.terms.items():
op = of.FermionOperator(term, coefficient=coeff)
fqe_wfn = fqe_wfn.time_evolve(1, op)
assert np.allclose(from_cirq_wfn.get_coeff((n_elec, sz)),
fqe_wfn.get_coeff((n_elec, sz)))
print("Individual term evolution is equivalent")
Explanation: Finally, we can compare against evolving each term of $V$ individually.
End of explanation |
14,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predictive Delay Analytics
Step1: 1. Data acquisition
First, let's acquire the data formated in '02_data_preparation.ipynb'. The figure below gives a glimpse of what the data set looks like.
Step2: Cleaning the data
We will focus on flights between New York and Chicago. When cleaning the data set, we have to remove the following entries
Step3: We memorize the cleaned dataset. This will save us some processing time. The two boxes below save the dataset and recover it from a 'cache/predictionData/' folder.
Step4: Restricting the dataset
Step5: We are only interested in the carrier that operates from New York to Chicago. Looking at the table, we also notice that Atlantic Southeast Airlines (airline code EV) is only marginally present. So we drop it from the list of carriers we will study in addition to the other carriers that do not operate on the line.
Step6: In case we're doing a general study, the second step after droppping airlines is to drop airport. This is the purpose of the function below.
WARNING \
RUN THIS CELL ONLY ONCE! Because airports are linked to one another, everytime the function restrict_airport is run, some entries are dropped. After some time, there is no more entries.
To understand this, let's imagine that Boston Logan airport has an annual flight with a small airport XYZ. When we run the function the fisrt time, airport XYZ is droppped. Because of that, the total count of flights for Boston Logan airport decreases as well. If we run the function a second time, removing airport XYZ and its flights with Boston Logan has put the number of flights associated with Boston Logan under the threshold!
Step7: Extracting month and day
The date is given as a string in the format "Year-Month-Day". We exctract the month and the weekday in the following.
WARNING \
RUN THIS CELL ONLY ONCE! Because airports are linked to one another,
Step8: Adjusting numerical data
Let's change teh format of all time entries. Instead of having hour
Step9: We need to center and normalize all continuous data for better results. The idea is that if a feature X has a range of variation that is considerably higher than a feature Y, the variations of X may completely mask the variations of Y and therefore make Y useless.
Step10: Encode categorical variables
Many of the features are categorical - for example the origin airport feature contains strings which represent the code of each airport. To make a classification/regression possible, we need to encode these features as series of indicators. More specifically, if the origin airport feature has n possibles values A1, A2, ... , An, we create n indicator functions I(A1), ..., I(An). Indicator function I(Ak) takes the value 1 if the feature origin airport has value Ak.
Step11: Let's save all of our results.
Step12: 1. Random Forest regressor
Our goal is to anticipate the delay of a flight given its features. One way to proceed is to train a random forest regressor. But first, let's look at the baseline predictor, that is the predictor that returns the mean of data set. The measure of accuracy of a predictor will be its mean square error against a test set distinct from the training set.
Step13: Split data into training/test sets
First, let's split the data set into a training set and a test set.
Step14: Useful functions
The functions below satisfy various tasks. The first trains a classifier and estimates its scores on the test set and the training set. The second tries different parameter values for the regressor. The last one wraps everything.
Step15: Parameter dashboard and run cell
We test the following parameters.
the number of trees in the forest
the number of features retained in each tree
the test size
Step16: MSE by size of the forest
Step17: 2. Random forest classifier
We will make prediction on the variable 'ARR_DEL15'. This variable takes the value 1 is the plane is more than 15 minutes late and 0 if not. Let's look at the baseline classifier, that is the classifiers that assign repectively 1 or 0 to 'ARR_DEL15' for every flight.
Here we look at the case whether a flight will be more than 15 minutes late. So we adjust the ARR_DELAY colum to an indicator.
Step18: Prediction functions
Step19: Random forest functions for a classification task
Step20: Classification task
We look at various paramters for the classifier. Note that there are two loss functions available for the random forest classifier, a Gini loss and an entropy loss. Simulations show they behave similarly!
Step21: Analysis
Let's now look at the importance coefficients, that is, the average usage of each feature in the random forest. As we can see, the main factors in the delay of a flight are the age of the aircraft, the departure time and the origin. The fact that the departure and arrival times matter confirms what we observed in the explorative analysis. However, the role of the age of the aircraft was not very marked in the explorative analysis. Lastly, the weekday has a huge influence.
Notice that the airline and the aircraft model play a very limited role in the delay according to this random forest classifier.
Step22: Score by size of the forest
Step23: 3. Predicting flight delay time with Linear Regression
To allow users to enjoy more than a classification experience, we want to give them an expected delay time in minutes.
Step24: First, let us again establish a baseline which has to be beaten by our model. To get a feeling for a good baseline we pick flights from from New York(all airports) to Chicago(all airports).
Step25: 3.1. A first predictor
For a span of days, we are interested in knowing which flight we should take. Therefore let's choose a popular date for Christmas return flights, 21.12.2015, as an example. As we want to use historical data, let's first get clear about what data we have. Is it possible to compare flights over the years?
Step26: As the plot shows, flight numbers are not stable over the years and it is not trivial to find a matching.
Thus comparison by features for individual flights is not really possible.
It seems as if airlines change their flight numbers on a yearly basis.
Thus we need to come up with another idea and use a latent variable based approach instead.
One of the easiest ideas is to average the delay time over each day as a base classifier and refine then.
Step27: To test the quality of this model, we use the last year as test set and the previous as train data. The idea is, that we are always interested in predicting the next year somehow. Thus, if the match for 2014 is good, we expect it to be the same for 2015.
Step28: In the base model, the prediction for 2014 is that we are going to be 27.72 minutes late
Step29: How good did it perform comparing the actual arrival delay?
Step30: Using the root mean squared error as one (of many possible) measures, any model that we develop should beat the benchmark of 44.
3.2 Building a linear regression model for prediction
As flight delay changes over the time of day like explored in the data exploration part, we introduce a new feature which models in a categorical variable 10 minute time windows. I.e. for each window we introduce a latent variable that captures some sort of delay influence of this frame. This is done for both the departure and the arrival time. The model here is first developed for the reduced dataset containing the flights between New York / Chicago only.
Step31: In the next step, before fitting the actual model categorical variables need to be encoded as binary features (they have no order!) and numerical features shall be normalized.
Step32: Dealing with categorical variables
Luckily, sklearn has a routine to do this for us. Yet, as it is only able to handle integers, we first reindex all categorical features and create lookup tables for the values. This turned out to be one of the slowest steps in processing.
Step33: Using sklearn the variables are now encoded. As we have many variables (around 1200 in the final model for the whole year) using sparse matrices was critical for us to fit the model.
Step34: Recombining categorical and numerical features allows to start the usual Machine Learning routine (split into test/train, normalize, train/cross-validate, test)
Step35: Ordinal least squares regression
Step36: In a first model, we use OLS to get a feeling of what is achievable using a reduced dataset. The motivation for this is mainly to see whether this might be used to speedup the whole process. As the RMSE reveals, we did not do better than the base classifier. What happened?
Step37: Looking at the number of variables (74 here), it might be that either the fit is not good enough or we are ignoring the dependency of flight delays within different routes.
Ridge regression
Step38: Looking at the RMSE again is a bit disappointing. We did worse than OLS and the base classifier! This means, it is now time to tackle the unavoidable
Step39: Comparing the RMSEs and Mean absolute error we see that the linear regression model over the whole data clearly outperformed the baseline predictor. Also, more data did not really tremendously improve the mean absolute error but we can see a clear variance reduction in the RMSE. To further improve the model, additional variables should be included. One of the next ideas is weather data. As formatted historic weather data is unfortunately not for free available and scraping it for 100Ks entries per year is not feasible, the variables could not be added to the model. Another interesting idea might be to include some more meta data. I.e. an variable measuring the effect of holidays and variables accounting for winds which could be based by geographic location (i.e. a flight in western direction is longer than one in eastern direction b/c of the passat wind.
3.3 Application
Step40: In the following, we want to give an example on how to perform a prediction using the model from 2010-2014. We choose the route Chicago / New York.
Step41: Especially before Christmas, it is a good question to ask on which day you should fly and which flight to take. Therefore, we consider only flights between 20.12-24.12. Our assumption is that many values we feed in our predictor do not change, i.e. we can use the data from 2014 (stored in BigFlightTable.csv). For a productive system, these queries should be performed using a database.
Step42: We know load our favourite model and setup the categorical variable encoder.
Step43: Using the lookuptables, it is straight-forward to write the prediction function. Note that variabales need to be normalized according to the training data used in the model.
Step44: We are now ready to predict the delay time on our flights!
Step45: What is the best flight on 22nd December?
Using this info, we can get the best flights for the 22nd of December
Step46: On what day is it best to fly in the evening?
Analogously, we now want our model to give us the answer which is the best flight during the busy evening hours (17 | Python Code:
%matplotlib inline
# import required modules for prediction tasks
import numpy as np
import pandas as pd
import math
import random
Explanation: Predictive Delay Analytics
End of explanation
%%time
# reads all predefined months for a year and merge into one data frame
rawData2014 = pd.DataFrame.from_csv('cache/predictionData/complete2014Data.csv')
print rawData2014.columns
rawData2014.head(5)
Explanation: 1. Data acquisition
First, let's acquire the data formatted in '02_data_preparation.ipynb'. The figure below gives a glimpse of what the data set looks like.
End of explanation
#entries to be dropped in the analysis
columns_dropped = ['index', 'TAIL_NUM', 'FL_NUM', 'DEP_TIME', 'DEP_DELAY', 'TAXI_OUT', 'WHEELS_OFF', \
'WHEELS_ON', 'TAXI_IN', 'ARR_TIME', 'CANCELLED', 'CANCELLATION_CODE', 'AIR_TIME', \
'CARRIER_DELAY', 'WEATHER_DELAY', 'NAS_DELAY', 'SECURITY_DELAY', 'LATE_AIRCRAFT_DELAY']
def clean(data, list_col):
'''
Creates a dataset by excluding undesirable columns contained in list_col
Parameters:
-----------
data: pandas.DataFrame
Flight dataframe
list_col: <list 'string'>
        Columns to exclude from the data set
    '''
    # drop cancelled flights (CANCELLED == 1); they carry no delay information
    data.drop(data[data.CANCELLED == 1].index, inplace=True)
    data.drop(list_col, axis=1, inplace=True)
    data.dropna(axis = 0, inplace = True)
    return data
%%time
data2014 = clean(rawData2014.copy(), columns_dropped)
print data2014.columns
Explanation: Cleaning the data
We will focus on flights between New York and Chicago. When cleaning the data set, we have to remove the following entries:
flights that have been cancelled or diverted. We focus on predicting the delay. As a result, we also remove the columns associated with diverted flights.
columns that give the answer. This is the case of many columns related to the arrival of the plane
rows where a value is missing
flights whose destination or origin do not correspond to our study aim
Note that data points have to be cleaned in this order because most flights have empty entries for the 'diverted' columns.
End of explanation
#%%time
# save the data to avoid computing them again
#file_path = "cache/predictionData/predictionData2014.csv"
#data2014.to_csv(path_or_buf= file_path)
#%%time
# recover data2014 from cache/predictionData folder
#file_path = "cache/predictionData/predictionData2014.csv"
#data2014 = pd.read_csv(file_path)
#data2014.drop('Unnamed: 0', axis= 1, inplace = True)
#data2014.head(3)
# test that clean did the job
print "size of raw data set: ", len(rawData2014)
print "number of cancelled: ", len(rawData2014[(rawData2014.CANCELLED == 1)])
print "size of data set: ", len(data2014)
Explanation: We cache the cleaned dataset, which will save us some processing time. The two boxes below save the dataset and recover it from a 'cache/predictionData/' folder.
End of explanation
dexample = data2014[data2014.DEST == 'ORD'][(data2014.ORIGIN == 'JFK') | (data2014.ORIGIN == 'LGA') | (data2014.ORIGIN == 'EWR')].copy()
print len(dexample)
dexample.head(3)
Explanation: Restricting the dataset: Case study NYC - Chicago flights
The dataset has more than 4 million entries, which makes any data manipulation extremely costly - let alone model fitting. We will therefore make some restrictions on the airports and the airlines considered.
One way to do it is to make a case study, an example that we will follow all along. Here, the example looks at a flight from New York to Chicago.
Comment: it is possible to work on a different data set or a different example. However, due to the high complexity of the data set - both by a large number of points and by a large number of high-dimensional categorical variables, it is easier to work on a shrunken version of it.
End of explanation
dexample.groupby('UNIQUE_CARRIER').size()
# In case we want to tackle a broader case, we can use this function. The droplist can be any list of carrier.
def restrict_carrier(data, droplist):
'''
    Drop carriers from the data set according to droplist
Parameters:
-----------
data: pandas.DataFrame
dataframe
droplist: <list 'string'>
        List of carriers to be dropped
'''
for item in droplist:
data.drop(data[data.UNIQUE_CARRIER == item].index, inplace= True)
return
%%time
drop_airline = [ 'AS','DL', 'EV', 'F9', 'FL', 'HA', 'MQ', 'US', 'VX', 'WN']
restrict_carrier(dexample, drop_airline)
Explanation: We are only interested in the carriers that operate from New York to Chicago. Looking at the table, we also notice that Atlantic Southeast Airlines (airline code EV) is only marginally present. So we drop it from the list of carriers we will study, in addition to the other carriers that do not operate on the line.
End of explanation
#remove all airports that have an annual traffic under threshold
def restrict_airport(data, threshold):
'''
    Drop airports whose annual traffic is below a threshold
    Parameters:
    -----------
    data: pandas.DataFrame
        dataframe
    threshold: int
        minimum annual number of flights an airport must have to be kept
'''
dict_count = data.groupby("DEST").agg(['count']).LAT.to_dict()['count']
for key in dict_count:
if dict_count[key] < threshold:
data.drop(data[data.DEST == key].index, inplace=True)
data.drop(data[data.ORIGIN == key].index, inplace=True)
print data.groupby("DEST").agg(['count']).LAT.to_dict()['count']
return
Explanation: In case we're doing a general study, the second step after dropping airlines is to drop airports. This is the purpose of the function below.
WARNING \
RUN THIS CELL ONLY ONCE! Because airports are linked to one another, every time the function restrict_airport is run some entries are dropped; after a while there are no entries left.
To understand this, let's imagine that Boston Logan airport has an annual flight with a small airport XYZ. When we run the function the first time, airport XYZ is dropped. Because of that, the total count of flights for Boston Logan airport decreases as well. If we run the function a second time, removing airport XYZ and its flights with Boston Logan may have pushed the number of flights associated with Boston Logan under the threshold!
End of explanation
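A more robust variant (not in the original notebook) avoids this cascade by computing the airport counts once, on an untouched copy, and filtering in a single pass:
def restrict_airport_once(data, threshold):
    counts = data.groupby('DEST').size().to_dict()
    small = [a for a, n in counts.items() if n < threshold]
    # return a new frame instead of dropping in place, so re-running is harmless
    return data[~data.DEST.isin(small) & ~data.ORIGIN.isin(small)]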
from time import strptime
days = {0:"Mon", 1:"Tues", 2:"Wed", 3:"Thurs", 4:"Fri", 5:"Sat", 6:"Sun"}
months = {1:"Jan", 2:"Feb", 3:"Mar", 4:"Apr", 5:"May", 6:"June", 7:"July", 8:"Aug", 9:"Sep", \
10:"Oct", 11:"Nov", 12:"Dec"}
def adjust_time(data):
'''
Return the month list and the weekday list out of a DataFrame.
Parameters:
-----------
data: pandas.DataFrame
dataframe that contains a colum of flight dates
'''
    # use object dtype so that full strings like "June" are not truncated to a single character
    monlist = np.empty(len(data), dtype=object)
    daylist = np.empty(len(data), dtype=object)
    for i in xrange(len(data)):
        # parse with the month directive %m; the original "%Y-%M-%d" read the month into the minutes field
        date = strptime(data.FL_DATE.iloc[i], "%Y-%m-%d")
        monlist[i] = months[date.tm_mon]
        daylist[i] = days[date.tm_wday]
return monlist, daylist
%%time
# now add the weekday and the month columns created by the adjust_time
monlist, daylist = adjust_time(dexample)
print "OK"
dexample['MONTH'] = pd.Series(monlist, index=dexample.index)
dexample['DAY'] = pd.Series(daylist, index=dexample.index)
if 'FL_DATE' in dexample.columns:
dexample.drop('FL_DATE', axis = 1, inplace= True)
print dexample.columns
Explanation: Extracting month and day
The date is given as a string in the format "Year-Month-Day". We extract the month and the weekday in the following.
WARNING \
RUN THIS CELL ONLY ONCE! It modifies dexample in place (the FL_DATE column is dropped after the month and weekday are extracted).
End of explanation
%%time
ti = lambda x : (x // 100) * 60 + x % 100   # convert scheduled times from HHMM to minutes after midnight
dexample['CRS_ARR_TIME_COR'] = dexample.CRS_ARR_TIME.map(ti)
dexample['CRS_DEP_TIME_COR'] = dexample.CRS_DEP_TIME.map(ti)
dexample.drop(['CRS_DEP_TIME', 'CRS_ARR_TIME'], axis = 1, inplace = True)
Explanation: Adjusting numerical data
Let's change the format of all time entries. Instead of having hour : minute, we convert everything into minutes.
WARNING \
RUN THIS CELL ONLY ONCE!
End of explanation
def normalize(array):
mean = np.mean(array)
std = np.std(array)
return [(x - mean)/std for x in array]
def normalize_data(data, feature_list):
'''
Normalize data.
Parameters:
-----------
data: pandas.DataFrame
dataframe
feature_list: <list 'string'>
List of features to be normalized
'''
for feature in feature_list:
if feature in data.columns:
data[feature + '_NOR'] = normalize(data[feature].values)
data.drop(feature, axis =1, inplace=True)
return
dexample.drop(dexample[dexample.AIRCRAFT_YEAR ==' '].index, inplace = True)
dexample['AIRCRAFT_YEAR_COR'] = dexample.AIRCRAFT_YEAR.map(lambda x: int(x))
dexample.drop('AIRCRAFT_YEAR', axis = 1, inplace = True)
normalize_data(dexample, ['DISTANCE', 'LAT', 'LONG', 'CRS_ARR_TIME_COR', 'CRS_DEP_TIME_COR', 'AIRCRAFT_YEAR_COR'])
dexample.head(3)
Explanation: We need to center and normalize all continuous data for better results. The idea is that if a feature X has a range of variation that is considerably higher than a feature Y, the variations of X may completely mask the variations of Y and therefore make Y useless.
End of explanation
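An equivalent alternative (a sketch, not what the notebook uses) is sklearn's StandardScaler, which also centres each column and scales it to unit variance; the column list here is hypothetical:
from sklearn.preprocessing import StandardScaler
cols = ['DISTANCE', 'LAT', 'LONG']   # hypothetical subset of the continuous columns
dexample[cols] = StandardScaler().fit_transform(dexample[cols])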
encoded_list = ['UNIQUE_CARRIER', 'ORIGIN', 'DEST', 'AIRCRAFT_MFR', 'MONTH','DAY']
dexample = pd.get_dummies(dexample, columns=encoded_list)
dexample.head(3)
Explanation: Encode categorical variables
Many of the features are categorical - for example the origin airport feature contains strings which represent the code of each airport. To make a classification/regression possible, we need to encode these features as series of indicators. More specifically, if the origin airport feature has n possible values A1, A2, ... , An, we create n indicator functions I(A1), ..., I(An). Indicator function I(Ak) takes the value 1 if the feature origin airport has value Ak.
End of explanation
%%time
# save the restricted data to avoid computing them again
file_path = "cache/predictionData/dexample.csv"
dexample.to_csv(path_or_buf= file_path)
#%%time
# recover file
#file_path = "cache/predictionData/dexample.csv"
#dexample= pd.read_csv(file_path)
#dexample.drop('Unnamed: 0', axis= 1, inplace = True)
#print dexample.columns
Explanation: Let's save all of our results.
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
def baseline_predictor(data):
return np.mean(data.ARR_DELAY.values)
print "MSE of the mean predictor:" , \
mean_squared_error(dexample.ARR_DELAY.values, [baseline_predictor(dexample) for x in xrange(len(dexample))])
Explanation: 1. Random Forest regressor
Our goal is to anticipate the delay of a flight given its features. One way to proceed is to train a random forest regressor. But first, let's look at the baseline predictor, that is the predictor that always returns the mean of the data set. The measure of accuracy of a predictor will be its mean square error against a test set distinct from the training set.
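Note that, by construction, the MSE of this constant mean predictor is simply the variance of the target, which gives a quick sanity check:
# should match the MSE of the mean predictor printed below
print np.var(dexample.ARR_DELAY.values)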
End of explanation
def split(data, list_drop, target, test_size):
'''
Splits the data into a training and a test set
    Separates each of the two sets into a feature matrix (all columns except list_drop) and a target vector
Parameters:
-----------
data: pandas.DataFrame
Flight dataframe
list_drop: <list 'string'>
List of columns to exclude from the features set
target: string
        target column along which we build the target set
test_size: float
size of the test set
'''
    #split the dataset into a training set and a test set
    dtrain, dtest = train_test_split(data, test_size = test_size)
Xtrain = dtrain.drop(list_drop, axis=1).values
ytrain = dtrain[target].values
Xtest = dtest.drop(list_drop, axis=1).values
ytest = dtest[target].values
return Xtrain, ytrain, Xtest, ytest
Explanation: Split data into training/test sets
First, let's split the data set into a training set and a test set.
End of explanation
def score_random_forest(Xtrain, ytrain, Xtest, ytest, n_trees=10, max_features='auto'):
'''
Fits a random forest with (Xtrain ,ytrain)
Computes the score on (Xtest, ytest)
Parameters:
-----------
Xtrain: numpy 2D array
Feature training set
ytrain: numpy 1D array
Target training set
Xtest: numpy 2D array
Feature test set
ytest: numpy 1D array
Target test set
n_trees: int
number of trees in the forest
max_features: string or int
number of features used for every tree
Outputs:
--------
    score_train: float
        mean squared error on the train set
    score_test: float
        mean squared error on the test set
    clf: RandomForestRegressor
        the fitted regressor (its feature_importances_ attribute gives the weight of each feature)
'''
clf= RandomForestRegressor(n_estimators=n_trees, max_features= max_features)
clf.fit(Xtrain, ytrain)
score_train = mean_squared_error(clf.predict(Xtrain), ytrain)
score_test = mean_squared_error(clf.predict(Xtest), ytest)
return score_train, score_test, clf
def best_parameters(Xtrain, ytrain, Xtest, ytest, nb_trees, nb_features):
'''
    Fits random forest regressors sequentially
    Adds each test score to a pandas.DataFrame together with the number of trees, the number of features,
    the train score and the fitted regressor
Returns a DataFrame with all scores
Parameters:
-----------
Xtrain: numpy 2D array
Feature training set
ytrain: numpy 1D array
Target training set
Xtest: numpy 2D array
Feature test set
ytest: numpy 1D array
Target test set
n_trees: <list int>
list of numbers of trees in the forest
nb_features: <list int>
list of number of features in the forest
Outputs:
--------
score_tab: pandas.DataFrame
DataFrame of scores with associated parameters
'''
score_tab = pd.DataFrame(columns=['nb_trees', 'nb_features', 'test_score', 'train_score', 'classifier'])
# counter will increment the index in score_tab
counter = 0
for n_estimators in nb_trees:
for max_features in nb_features:
score_train, score_test, classifier = \
score_random_forest(Xtrain, ytrain, Xtest, ytest, n_trees=n_estimators, max_features=max_features)
score_tab.loc[counter] = [n_estimators, max_features, score_test, score_train, classifier]
counter += 1
return score_tab
def classify_random_forest(data, list_drop, target, test_size=0.4, nb_trees=[10], nb_features = ['auto']):
Xtrain, ytrain, Xtest, ytest = split(data, list_drop, target, test_size)
scores = best_parameters(Xtrain, ytrain, Xtest, ytest, nb_trees, nb_features)
return scores
Explanation: Useful functions
The functions below perform various tasks. The first trains a regressor and estimates its scores on the test set and the training set. The second tries different parameter values for the regressor. The last one wraps everything together.
End of explanation
nb_trees = [25, 50, 75, 100, 150, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
nb_features = ['auto', 'log2', 'sqrt']
test_size = 0.4
%%time
randomForest2014 = classify_random_forest(dexample, ['ARR_DELAY'], 'ARR_DELAY', test_size=test_size, nb_trees=nb_trees, nb_features=nb_features)
randomForest2014.head()
# save file to /data/ folder
file_path = "cache/predictionData/randomForest2014.csv"
randomForest2014.to_csv(path_or_buf= file_path)
Explanation: Parameter dashboard and run cell
We test the following parameters.
the number of trees in the forest
the number of features retained in each tree
the test size
End of explanation
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
# MSE on test set
fig = plt.gcf()
fig.set_size_inches(8, 5)
for key, co in zip(['auto', 'sqrt', 'log2'], ['b', 'g', 'y']):
x = randomForest2014[randomForest2014.nb_features == key].nb_trees.values
y = randomForest2014[randomForest2014.nb_features == key].test_score.values
plt.plot(x, y, alpha = 0.5, c =co, label = key)
plt.ylabel("Mean Square Error")
plt.xlabel("Number of trees")
plt.title("MSE on test set by nb of features and nb of trees")
plt.legend(loc = 1)
plt.show()
# MSE on train set
fig = plt.gcf()
fig.set_size_inches(8, 5)
for key, co in zip(['auto', 'sqrt', 'log2'], ['b', 'g', 'y']):
x = randomForest2014[randomForest2014.nb_features == key].nb_trees.values
y = randomForest2014[randomForest2014.nb_features == key].train_score.values
plt.plot(x, y, alpha = 0.5, c =co, label = key)
plt.ylabel("Mean Square Error")
plt.xlabel("Number of trees")
plt.title("MSE on train set by nb of features and nb of trees")
plt.legend(loc = 1)
plt.show()
Explanation: MSE by size of the forest
End of explanation
%%time
dexample['ARR_DELAY_COR'] = dexample.ARR_DELAY.map(lambda x: (x >= 15))
dexample.drop('ARR_DELAY', axis = 1, inplace = True)
file_path = "cache/predictionData/dexample_class.csv"
dexample.to_csv(path_or_buf= file_path)
dexample_class = dexample.copy()
#file_path = "cache/predictionData/dexample_class.csv"
#dexample_class = pd.read_csv(path_or_buf= file_path)
#dexample_class.drop('Unnamed: 0', axis= 1, inplace = True)
dexample_class.head(3)
from __future__ import division
def baseline_class(data, target):
'''
Compute the baseline classifiers along a target variable for a data set data
Parameters:
-----------
data: pandas.DataFrame
dataframe
target: string
        Column of data along which we compute the baseline classifiers
'''
score_baseline_1 = np.size(data[data[target] == 1][target].values) / np.size(data[target].values)
score_baseline_0 = np.size(data[data[target] == 0][target].values) / np.size(data[target].values)
print "baseline classifier everyone to 0: ", int(score_baseline_0*100) , "%"
print "baseline classifier everyone to 1: ", int(score_baseline_1*100) , "%"
return score_baseline_0, score_baseline_1
baseline_class(dexample_class, 'ARR_DELAY_COR')
Explanation: 2. Random forest classifier
We will make predictions on an indicator variable, 'ARR_DELAY_COR'. This variable takes the value 1 if the plane is more than 15 minutes late and 0 if not. Let's look at the baseline classifiers, that is the classifiers that assign respectively 1 or 0 to 'ARR_DELAY_COR' for every flight.
Here we ask whether a flight will be more than 15 minutes late, so we convert the ARR_DELAY column into this indicator.
End of explanation
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
Explanation: Prediction functions
End of explanation
def split(data, list_drop, target, test_size):
'''
Splits the data into a training and a test set
    Separates each of the two sets into a feature matrix (all columns except list_drop) and a target vector
Parameters:
-----------
data: pandas.DataFrame
Flight dataframe
list_drop: <list 'string'>
List of columns to exclude from the features set
target: string
        target column along which we build the target set
test_size: float
size of the test set
'''
    #split the dataset into a training set and a test set
    dtrain, dtest = train_test_split(data, test_size = test_size)
Xtrain = dtrain.drop(list_drop, axis=1).values
ytrain = dtrain[target].values
Xtest = dtest.drop(list_drop, axis=1).values
ytest = dtest[target].values
return Xtrain, ytrain, Xtest, ytest
def score_random_forest_classifier(Xtrain, ytrain, Xtest, ytest, n_trees=10, max_features='auto'):
'''
Fits a random forest with (Xtrain ,ytrain)
Computes the score on (Xtest, ytest)
Parameters:
-----------
Xtrain: numpy 2D array
Feature training set
ytrain: numpy 1D array
Target training set
Xtest: numpy 2D array
Feature test set
ytest: numpy 1D array
Target test set
n_trees: int
number of trees in the forest
max_features: string or int
number of features used for every tree
Outputs:
--------
    score_train: float
        accuracy on the train set
    score_test: float
        accuracy on the test set
    clf: RandomForestClassifier
        the fitted classifier (its feature_importances_ attribute gives the weight of each feature)
'''
clf= RandomForestClassifier(n_estimators=n_trees, max_features= max_features)
clf.fit(Xtrain, ytrain)
score_train = clf.score(Xtrain, ytrain)
score_test = clf.score(Xtest, ytest)
return score_train, score_test, clf
def best_parameters_classifier(Xtrain, ytrain, Xtest, ytest, nb_trees, nb_features):
'''
    Fits random forest classifiers sequentially
    Adds each test score to a pandas.DataFrame together with the number of trees, the number of features,
    the train score and the fitted classifier
Returns a DataFrame with all scores
Parameters:
-----------
Xtrain: numpy 2D array
Feature training set
ytrain: numpy 1D array
Target training set
Xtest: numpy 2D array
Feature test set
ytest: numpy 1D array
Target test set
n_trees: <list int>
list of numbers of trees in the forest
nb_features: <list int>
list of number of features in the forest
Outputs:
--------
score_tab: pandas.DataFrame
DataFrame of scores with associated parameters
'''
score_tab = pd.DataFrame(columns=['nb_trees', 'nb_features', 'test_score', 'train_score', 'classifier'])
# counter will increment the index in score_tab
counter = 0
for n_estimators in nb_trees:
for max_features in nb_features:
score_train, score_test, classifier = \
score_random_forest_classifier(Xtrain, ytrain, Xtest, ytest, n_trees=n_estimators, max_features=max_features)
score_tab.loc[counter] = [n_estimators, max_features, score_test, score_train, classifier]
counter += 1
return score_tab
def classify_random_forest_class(data, list_drop, target, test_size=0.4, nb_trees=[10], nb_features = ['auto']):
Xtrain, ytrain, Xtest, ytest = split(data, list_drop, target, test_size)
scores = best_parameters_classifier(Xtrain, ytrain, Xtest, ytest, nb_trees, nb_features)
return scores
Explanation: Random forest functions for a classification task
End of explanation
nb_trees = [25, 50, 75, 100, 150, 200, 300, 400, 500, 750, 1000, 2000, 5000]
nb_features = ['sqrt', 'log2']
test_size = 0.4
%%time
randomForest2014_class = classify_random_forest_class(dexample, ['ARR_DELAY_COR'], 'ARR_DELAY_COR', test_size=test_size, nb_trees=nb_trees, nb_features=nb_features)
randomForest2014_class.head(9)
# save file to /data/ folder
file_path = "cache/predictionData/randomForest2014_class.csv"
randomForest2014_class.to_csv(path_or_buf= file_path)
Explanation: Classification task
We look at various parameters for the classifier. Note that there are two split criteria (loss functions) available for the random forest classifier, Gini impurity and entropy; simulations show they behave similarly! A sketch of how the criterion would be changed is given below.
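For completeness, switching the split criterion only requires passing it to the constructor; this is a hedged sketch that is not used in the grid search below, which keeps the default Gini criterion:
# illustrative only: an entropy-based forest with otherwise identical settings
clf_entropy = RandomForestClassifier(n_estimators=100, max_features='sqrt', criterion='entropy')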
End of explanation
features = dexample_class.drop('ARR_DELAY_COR', axis =1).columns
coeffs_class = randomForest2014_class.classifier[len(randomForest2014_class)-1].feature_importances_
u = pd.Series(coeffs_class, index=features)
u.sort(inplace=True, ascending=False)
print u
Explanation: Analysis
Let's now look at the importance coefficients, that is, the average contribution of each feature across the trees of the random forest. As we can see, the main factors in the delay of a flight are the age of the aircraft, the departure time and the arrival time. The fact that the departure and arrival times matter confirms what we observed in the explorative analysis. However, the role of the age of the aircraft was not very marked in the explorative analysis. Lastly, the weekday has a huge influence.
Notice that the airline and the aircraft model play a very limited role in the delay according to this random forest classifier.
End of explanation
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
# Score on test set
fig = plt.gcf()
fig.set_size_inches(8, 5)
for key, co in zip(['sqrt', 'log2'], ['b', 'g']):
x = randomForest2014_class[randomForest2014_class.nb_features == key].nb_trees.values
y = randomForest2014_class[randomForest2014_class.nb_features == key].test_score.values
plt.plot(x, y, alpha = 0.5, c =co, label = key)
plt.ylabel("Score")
plt.xlabel("Number of trees")
plt.title("Score on test set by nb of features and nb of trees")
plt.legend(loc = 4)
plt.show()
# Score on train set
fig = plt.gcf()
fig.set_size_inches(8, 5)
for key, co in zip(['sqrt', 'log2'], ['b', 'g']):
x = randomForest2014_class[randomForest2014_class.nb_features == key].nb_trees.values
y = randomForest2014_class[randomForest2014_class.nb_features == key].train_score.values
plt.plot(x, y, alpha = 0.5, c =co, label = key)
plt.ylim(0.935, 0.95)
plt.ylabel("Score")
plt.xlabel("Number of trees")
plt.title("Score on train set by nb of features and nb of trees")
plt.legend(loc = 4)
plt.show()
Explanation: Score by size of the forest
End of explanation
%matplotlib inline
# import required modules for prediction tasks
import numpy as np
import pandas as pd
import math
import random
import requests
import zipfile
import StringIO
import re
import json
import os
import matplotlib
import matplotlib.pyplot as plt
# sklearn functions used for the linear regression model
from sklearn.preprocessing import OneHotEncoder
from scipy import sparse
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import mean_absolute_error
Explanation: 3. Predicting flight delay time with Linear Regression
To offer users more than a classification, we want to give them an expected delay time in minutes.
End of explanation
# first step is to load the actual data and exclude rows that are unnecessary
# a script that produces the csv file can be found in the src folder
# (it might take some while to run, on my macbook up to one hour)
print('loading data...')
bigdf = pd.read_csv('cache/Big5FlightTable.csv')
years = ['2010', '2011', '2012', '2013', '2014']
# specify here which cities should be investigated
city_from = 'New York, NY'
city_to = 'Chicago, IL'
# filter for cities
bigdf = bigdf[(bigdf.ORIGIN_CITY_NAME == city_from) & (bigdf.DEST_CITY_NAME == city_to)]
Explanation: First, let us again establish a baseline which has to be beaten by our model. To get a feeling for a good baseline, we pick flights from New York (all airports) to Chicago (all airports).
End of explanation
query_day = 21
query_month = 12
# how many flights do exist in all years?
flights = []
flightvalues = []
for y in years:
query = list(bigdf[(bigdf.YEAR == int(y)) & (bigdf.MONTH == query_month) & (bigdf.DAY_OF_MONTH == query_day)].FL_NUM.astype(int).unique())
flights.append(query)
flightvalues += query
# build a matrix
data_matrix = np.zeros((len(flightvalues), len(years)))
# build dict
flightdict = dict(zip(flightvalues, np.arange(0, len(flightvalues))))
# fill datamatrix
for i in xrange(len(years)):
for j in flights[i]:
data_matrix[flightdict[j], i] = 1.
# plot matrix
plt.figure(figsize=(10, 6))
plt.imshow(data_matrix, extent=[0,data_matrix.shape[0],0,data_matrix.shape[1] * 5], \
interpolation='none', cmap='gray')
Explanation: 3.1. A first predictor
For a given span of days we are interested in knowing which flight we should take. Therefore, let's choose a popular date for Christmas return flights, 21.12.2015, as an example. As we want to use historical data, let's first get a clear picture of what data we have. Is it possible to compare flights over the years?
End of explanation
dffordate = bigdf[bigdf.MONTH == query_month]
dffordate = dffordate[dffordate.DAY_OF_MONTH == query_day]
dffordate.head()
def predict_base_model(X):
return np.array([dffordate.ARR_DELAY.mean()]*X.shape[0])
Explanation: As the plot shows, flight numbers are not stable over the years and it is not trivial to find a matching.
Thus comparison by features for individual flights is not really possible.
It seems as if airlines change their flight numbers on a yearly basis.
Thus we need to come up with another idea and use a latent variable based approach instead.
One of the easiest ideas is to average the delay time over each day as a base predictor and then refine it.
End of explanation
# build test/train set
df_train = dffordate[dffordate.YEAR != int(years[-1])]
df_test = dffordate[dffordate.YEAR == int(years[-1])]
y_train = df_train.ARR_DELAY
X_train = y_train # here dummy
y_test = df_test.ARR_DELAY
X_test = y_test # here dummy
Explanation: To test the quality of this model, we use the last year as test set and the previous as train data. The idea is, that we are always interested in predicting the next year somehow. Thus, if the match for 2014 is good, we expect it to be the same for 2015.
End of explanation
y_pred = predict_base_model(X_test)
y_pred[0]
Explanation: In the base model, the prediction for 2014 is that we are going to be 27.72 minutes late
End of explanation
def rmse(y, y_pred):
return np.sqrt(((y - y_pred)**2).mean())
def mas(y, y_pred):
return (np.abs(y - y_pred)).mean()
MAS_base = mas(y_test, y_pred)
RMSE_base = rmse(y_test, y_pred)
RMSE_base
Explanation: How well did it perform compared to the actual arrival delays?
End of explanation
%%time
bigdf['HOUR_OF_ARR'] = 0
bigdf['HOUR_OF_DEP'] = 0
for index, row in bigdf.iterrows():
bigdf.set_value(index, 'HOUR_OF_ARR', int(row['ARR_TIME']) / 10)
bigdf.set_value(index, 'HOUR_OF_DEP', int(row['DEP_TIME']) / 10)
Explanation: Using the root mean squared error as one (of many possible) measures, any model that we develop should beat the benchmark of 44.
3.2 Building a linear regression model for prediction
As flight delay changes over the time of day, as explored in the data exploration part, we introduce a new feature which encodes 10-minute time windows as a categorical variable. I.e. for each window we introduce a latent variable that captures the delay influence of that time frame. This is done for both the departure and the arrival time; a worked example of the window index is shown below. The model here is first developed for the reduced dataset containing the flights between New York and Chicago only.
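To make the window encoding concrete (illustrative values only): a time stored as the integer 1435 (14:35) maps to window 143, i.e. all times between 14:30 and 14:39 share one indicator.
# HHMM integer -> 10-minute window index
print int(1435) / 10   # 143, the 14:30-14:39 window
print int(1438) / 10   # 143 as well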
End of explanation
# split data into numerical and categorical features
numericalFeat = bigdf[['DISTANCE', 'AIRCRAFT_AGE']].astype('float')
categoricalFeat = bigdf[['MONTH', 'DAY_OF_MONTH', 'ORIGIN', 'DEST', \
'HOUR_OF_ARR', 'HOUR_OF_DEP', 'UNIQUE_CARRIER', \
'DAY_OF_WEEK', 'AIRCRAFT_MFR']]
Explanation: In the next step, before fitting the actual model categorical variables need to be encoded as binary features (they have no order!) and numerical features shall be normalized.
End of explanation
%%time
# for the next step, all features need to be encoded as integers --> create lookup Tables!
def transformToID(df, col):
vals = df[col].unique()
LookupTable = dict(zip(vals, np.arange(len(vals))))
for key in LookupTable.keys():
        df.loc[df[col] == key, col] = LookupTable[key]
return (LookupTable, df)
mfrDict, categoricalFeat = transformToID(categoricalFeat, 'AIRCRAFT_MFR')
originDict, categoricalFeat = transformToID(categoricalFeat, 'ORIGIN')
destDict, categoricalFeat = transformToID(categoricalFeat, 'DEST')
carrierDict, categoricalFeat = transformToID(categoricalFeat, 'UNIQUE_CARRIER')
categoricalFeat.head()
Explanation: Dealing with categorical variables
Luckily, sklearn has a routine to do this for us. Yet, as it is only able to handle integers, we first reindex all categorical features by creating lookup tables for their values. This turned out to be one of the slowest steps in the processing; a vectorized alternative is sketched below.
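For reference, the same integer re-indexing can be done without the Python loop via pandas' map, which is typically much faster. This is a hedged sketch of an alternative for a single column, not what is used below:
# vectorized alternative to transformToID (illustrative only)
vals = categoricalFeat['ORIGIN'].unique()
lookup = dict(zip(vals, np.arange(len(vals))))
categoricalFeat['ORIGIN'] = categoricalFeat['ORIGIN'].map(lookup)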
End of explanation
import sklearn
from sklearn import linear_model
from sklearn.preprocessing import OneHotEncoder
from scipy import sparse
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import mean_squared_error
%%time
encoder = OneHotEncoder()
categoricals_encoded = encoder.fit_transform(categoricalFeat)
Explanation: Using sklearn the variables are now encoded. As we have many variables (around 1200 in the final model for the whole year) using sparse matrices was critical for us to fit the model.
End of explanation
numericals_sparse = sparse.csr_matrix(numericalFeat)
# get data matrix & response variable
X_all = sparse.hstack((numericals_sparse, categoricals_encoded))
y_all = bigdf['ARR_DELAY'].values
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size = 0.2, random_state = 42)
# we have 2 numerical features (DISTANCE and AIRCRAFT_AGE) in the first two columns
X_train_numericals = X_train[:, 0:2].toarray()
X_test_numericals = X_test[:, 0:2].toarray()
# use sklearn tools to standardize numerical features...
scaler = StandardScaler()
scaler.fit(X_train_numericals) # get std/mean from train set
X_train_numericals = sparse.csr_matrix(scaler.transform(X_train_numericals))
X_test_numericals = sparse.csr_matrix(scaler.transform(X_test_numericals))
# update sets
X_train[:, 0:2] = X_train_numericals
X_test[:, 0:2] = X_test_numericals
Explanation: Recombining the categorical and numerical features allows us to start the usual machine learning routine (split into test/train, normalize, train/cross-validate, test).
End of explanation
clf = sklearn.linear_model.LinearRegression()
# fit the model!
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
rmse(y_test, y_pred)
Explanation: Ordinary least squares regression
End of explanation
X_train.shape, clf.coef_.shape
Explanation: In a first model, we use OLS to get a feeling for what is achievable using a reduced dataset. The motivation for this is mainly to see whether it might be used to speed up the whole process. As the RMSE reveals, we did not do better than the base predictor. What happened?
End of explanation
%%time
# Use ridge regression (i.e. Gaussian prior) and vary the lambda parameter using Grid search
from sklearn.linear_model import SGDRegressor
from sklearn.grid_search import GridSearchCV
SGD_params = {'alpha': 10.0 ** -np.arange(-2,8)}
SGD_model = GridSearchCV(SGDRegressor(random_state = 42), \
SGD_params, scoring = 'mean_absolute_error', cv = 6) # cross validate 6 times
# train the model, this might take some time...
SGD_model.fit(X_train, y_train)
y_pred = SGD_model.predict(X_test)
rmse(y_test, y_pred)
Explanation: Looking at the number of variables (74 here), it might be that either the fit is not good enough or we are ignoring the dependency of flight delays within different routes.
Ridge regression
End of explanation
import json
import seaborn as sns
import matplotlib.ticker as ticker
def load_model(filename):
mdl = None
with open(filename, 'r') as f:
mdl = json.load(f)
return mdl
# load model results
mdl1 = load_model('results/models/2014model.json') # model with 2014 data (1 year)
mdl5 = load_model('results/models/2010_2014model.json') # model with 2010-2014 data ( 5 years)
labels = np.array(['base predictor', '1 year LM', '5 year LM'])
RMSEs = np.array([RMSE_base, mdl1['RMSE'], mdl5['RMSE']])
MASs = np.array([MAS_base, mdl1['MAS'], mdl5['MAS']])
width = .35
xx = np.arange(len(RMSEs))
plt.bar(xx-width, RMSEs, width, label='RMSE', color=sns.color_palette()[0])
plt.bar(xx , MASs, width, label='MAE', color=sns.color_palette()[2])
plt.legend(loc='best')
# Hide major tick labels
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.NullFormatter())
# Customize minor tick labels
ax.xaxis.set_minor_locator(ticker.FixedLocator([i for i in xx]))
ax.xaxis.set_minor_formatter(ticker.FixedFormatter([labels[i] for i in xx]))
Explanation: Looking at the RMSE again is a bit disappointing: we did worse than OLS and the base predictor! This means it is now time to tackle the unavoidable: regressing the whole dataset!
As the data becomes rapidly huge (for 1 year ~ 700MB uncompressed CSV, 5 years ~ 3.5GB, 10 years ~ 7.2GB) the code to perform the actual regression has been developed first in an IPython notebook and then run separately. It can be found in the src folder.
Regression over the whole data
In the following 2 models based on 1, 5 years of data are used (saved in BigFlightTable.csv, Big5FlightTable.csv).
End of explanation
from datetime import timedelta, datetime, tzinfo
from pytz import timezone
import pytz
def convertToUTC(naive, zonestring="America/New_York"):
local = pytz.timezone (zonestring)
local_dt = local.localize(naive, is_dst=None)
return local_dt.astimezone (pytz.utc)
Explanation: Comparing the RMSEs and the mean absolute errors, we see that the linear regression model over the whole data clearly outperformed the baseline predictor. Also, more data did not tremendously improve the mean absolute error, but we can see a clear variance reduction in the RMSE. To further improve the model, additional variables should be included. One of the next ideas is weather data. As formatted historic weather data is unfortunately not freely available, and scraping it for hundreds of thousands of entries per year is not feasible, these variables could not be added to the model. Another interesting idea might be to include some more metadata, i.e. a variable measuring the effect of holidays and variables accounting for winds, which could be based on geographic location (a flight in the western direction is longer than one in the eastern direction because of the trade winds).
3.3 Application: Predicting the best flight for a given date and route
End of explanation
# we are only interested in flights NY to Chicago!
city_to = 'Chicago, IL'
city_from = 'New York, NY'
zone_to = 'America/Chicago'
zone_from = 'America/New_York'
Explanation: In the following, we want to give an example of how to perform a prediction using the model trained on the 2010-2014 data. We choose the route New York / Chicago.
End of explanation
# need to create a lookup table for the values (i.e. flight numbers, cities and so on) and then drop all duplicates!
db = pd.read_csv('cache/BigFlightTable.csv')
# remove all unnecessary columns
db = db[['ORIGIN_CITY_NAME', 'DEST_CITY_NAME', 'AIRCRAFT_AGE', 'DEST', 'ARR_TIME', \
'DEP_TIME', 'UNIQUE_CARRIER', 'DAY_OF_WEEK', 'AIRCRAFT_MFR', 'FL_NUM', 'MONTH', \
'DAY_OF_MONTH', 'DISTANCE', 'ORIGIN']]
print str(db.count()[0]) + ' entries'
db.head()
# drop everything except for the 5 days before christmas! i.e. 20.12, 21.12, 22.12, 23.12, 24.12.
db = db[db.MONTH == 12]
db = db[db.DAY_OF_MONTH <= 24]
db = db[db.DAY_OF_MONTH >= 20]
print '5 days have ' + str(db.count()[0]) + ' flights'
db = db[db.ORIGIN_CITY_NAME == city_from]
db = db[db.DEST_CITY_NAME == city_to]
db.reset_index(inplace=True)
print 'Found ' + str(db.count()[0]) + ' flights from ' + city_from + ' to ' + city_to + ' for 20.12 - 24.12'
Explanation: Especially before Christmas, it is a good question to ask on which day you should fly and which flight to take. Therefore, we consider only flights between 20.12 and 24.12. Our assumption is that many values we feed into our predictor do not change, i.e. we can use the data from 2014 (stored in BigFlightTable.csv). For a production system, these queries should be performed using a database.
End of explanation
mdl = load_model('results/models/2014model.json')
# categorical feature encoder, fitted on the keys
encoder = OneHotEncoder(sparse=True, n_values=mdl['encoder']['values'])
Explanation: We now load our favourite model and set up the categorical variable encoder.
End of explanation
# input is a datarow
# prediction of day in the next year!
def predictDelayTime(row, mdl):
s_mean, s_std, coeff, intercept = mdl['scaler_mean'], mdl['scaler_std'], mdl['coeff'], mdl['intercept']
# read out tables
carrierTable = mdl['CARRIER']
mfrTable = mdl['MANUFACTURER']
destTable = mdl['DEST']
originTable = mdl['ORIGIN']
distance = row['DISTANCE'] # <-- look this up!
aircraft_age = row['AIRCRAFT_AGE'] # <-- look this up!
    # normalize numerical features according to scaler;
    # +1: the prediction is for the following year, so the aircraft is presumably one year older
    distance = (distance - s_mean[0]) / s_std[0]
    aircraft_age = (aircraft_age + 1 - s_mean[1]) / s_std[1]
month = row['MONTH']
day_of_month = row['DAY_OF_MONTH']
origin = row['ORIGIN']
dest = row['DEST']
hour_of_arr = int(row['ARR_TIME']) / 10
hour_of_dep = int(row['DEP_TIME']) / 10
carrier = row['UNIQUE_CARRIER']
day_of_week = datetime(year=2015, month=row.MONTH, day=row.DAY_OF_MONTH).weekday() # <-- get via datetimeobject
mfr = row['AIRCRAFT_MFR']
# for nonindexed categorical features, do lookup!
origin = originTable[origin]
dest = destTable[dest]
mfr = mfrTable[mfr]
carrier = carrierTable[carrier]
# write into df
df = {}
df['MONTH'] = month
df['DAY_OF_MONTH'] = day_of_month
df['ORIGIN'] = origin
df['DEST'] = dest
df['HOUR_OF_ARR'] = hour_of_arr
df['HOUR_OF_DEP'] = hour_of_dep
df['UNIQUE_CARRIER'] = carrier
df['DAY_OF_WEEK'] = day_of_week
df['AIRCRAFT_MFR'] = mfr
df = pd.DataFrame([df])
# order here is important! make sure it is the same as in the model!
categoricalFeat = df[['MONTH', 'DAY_OF_MONTH', 'ORIGIN',
'DEST', 'HOUR_OF_ARR', 'HOUR_OF_DEP',
'UNIQUE_CARRIER', 'DAY_OF_WEEK', 'AIRCRAFT_MFR']].copy() # Categorical features
# construct the data vector for the linear model
categoricals_encoded = encoder.fit_transform(categoricalFeat)
num_features = np.array([distance, aircraft_age])
cat_features = categoricals_encoded.toarray().T.ravel()
w = np.hstack([num_features, cat_features])
y_pred = np.dot(w, coeff) + intercept
return y_pred[0]
Explanation: Using the lookup tables, it is straightforward to write the prediction function. Note that the variables need to be normalized with the scaler statistics of the training data used in the model.
End of explanation
# create for each day info
db['PREDICTED_DELAY'] = 0.
db['FLIGHT_TIME'] = 0
db['PREDICTED_FLIGHT_TIME'] = 0
for index, row in db.iterrows():
print 'processing {idx}'.format(idx=index)
y_pred = predictDelayTime(row, mdl)
db.set_value(index, 'PREDICTED_DELAY', y_pred)
arr_time = datetime(year=2015, month=row['MONTH'], day=row['DAY_OF_MONTH'], \
hour= int(row['ARR_TIME'] / 100), minute=int(row['ARR_TIME'] % 100))
dep_time = datetime(year=2015, month=row['MONTH'], day=row['DAY_OF_MONTH'], \
hour= int(row['DEP_TIME'] / 100), minute=int(row['DEP_TIME'] % 100))
flight_time_in_min = (convertToUTC(arr_time) - convertToUTC(dep_time))
flight_time_in_min = int(flight_time_in_min.total_seconds() / 60)
db.set_value(index, 'FLIGHT_TIME', flight_time_in_min)
db.set_value(index, 'PREDICTED_FLIGHT_TIME', y_pred + flight_time_in_min)
Explanation: We are now ready to predict the delay time on our flights!
End of explanation
db2 = db[db.DAY_OF_MONTH == 22]
dbres2 = db2.sort('PREDICTED_FLIGHT_TIME')
dbres2.to_csv('data/best_flights.csv')
Explanation: What is the best flight on 22nd December?
Using this info, we can get the best flights for the 22nd of December
End of explanation
db3 = db[(db.DEP_TIME > 1700) & (db.DEP_TIME < 2000)]
dbres3 = db3.sort('PREDICTED_FLIGHT_TIME')
dbres3.head()
Explanation: On what day is it best to fly in the evening?
Analogously, we now want our model to tell us on which day the best flight departs during the busy evening hours (17:00-20:00).
End of explanation |
14,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Generate data
Let's first initialize a bundle and change some of the parameter values. We'll then export the computed models as "observables" to use with the rv_geometry estimator.
Step2: Initialize the bundle
To showcase the rv_estimator, we'll start with a fresh default bundle.
Step3: rv_geometry
The rv_geometry estimator is meant to provide an efficient starting point for q, vgamma, asini, esinw and ecosw. Similar to the light curve estimators, it will by default bin the input data if the number of data points is larger than phase_nbins and will expose the analytical (in this case, Keplerian orbit) models that were fit to the data.First we add the solver options via add_solver
Step4: The solution, as expected returns the fitted values and the analytic models we fit to get them, which can be turned off by setting expose_model to False. Let's inspect the fitted twigs and values before adopting the solution
Step5: As we can see all values look okay, and we have asini@binary in the twigs, which means we'll need to flip the asini constraint to be able to set it with adopt_solution
Step6: single-lined RVs
In some cases, only one RV is available, in which case not all parameters can be estimated with rv_geometry. Let's recreate the above example with only providing the primary RV and see how the solution differs.
Step7: If we compare the fitted_twigs from this solution with our two-RV solution, we'll notice two things | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
Explanation: Advanced: RV Estimators
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b = phoebe.default_binary()
# set parameter values
b.set_value('q', value = 0.6)
b.set_value('incl', component='binary', value = 84.5)
b.set_value('ecc', 0.2)
b.set_value('per0', 63.7)
b.set_value('sma', component='binary', value= 7.3)
b.set_value('vgamma', value= -32.84)
# add an rv dataset
b.add_dataset('rv', compute_phases=phoebe.linspace(0,1,101))
#compute the model
b.run_compute()
# extract the arrays from the model that we'll use as observables in the next step and add noise to the rvs
times = b.get_value('times', context='model', component='primary', dataset='rv01')
np.random.seed(0) # to ensure reproducibility with added noise
rvs1 = b.get_value('rvs', component='primary', context='model', dataset='rv01') + np.random.normal(size=times.shape)
rvs2 = b.get_value('rvs', component='secondary', context='model', dataset='rv01') + np.random.normal(size=times.shape)
sigmas_rv = np.ones_like(times) * 2
Explanation: Generate data
Let's first initialize a bundle and change some of the parameter values. We'll then export the computed models as "observables" to use with the rv_geometry estimator.
End of explanation
b = phoebe.default_binary()
b.add_dataset('rv')
b.set_value_all('times', dataset='rv01', value = times)
b.set_value('rvs', component='primary', dataset='rv01', value = rvs1)
b.set_value('rvs', component='secondary', dataset='rv01', value = rvs2)
b.set_value_all('sigmas', dataset='rv01', value = sigmas_rv)
b.run_compute()
_ = b.plot(legend=True, show=True)
Explanation: Initialize the bundle
To showcase the rv_estimator, we'll start with a fresh default bundle.
End of explanation
b.add_solver('estimator.rv_geometry', solver='rvgeom')
print(b.filter(solver='rvgeom'))
b.run_solver('rvgeom', solution='rvgeom_solution')
print(b.filter(solution='rvgeom_solution'))
Explanation: rv_geometry
The rv_geometry estimator is meant to provide an efficient starting point for q, vgamma, asini, esinw and ecosw. Similar to the light curve estimators, it will by default bin the input data if the number of data points is larger than phase_nbins, and it will expose the analytical (in this case, Keplerian orbit) models that were fit to the data. First we add the solver options via add_solver:
End of explanation
print(b.get_value('fitted_twigs', solution='rvgeom_solution'))
print(b.get_value('fitted_values', solution='rvgeom_solution'))
Explanation: The solution, as expected returns the fitted values and the analytic models we fit to get them, which can be turned off by setting expose_model to False. Let's inspect the fitted twigs and values before adopting the solution:
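If the analytic models are not needed, they could presumably be switched off before running the solver. This is only a hedged sketch based on the parameter name mentioned above; the exact qualifier may differ:
# optional: do not expose the analytic Keplerian models in the solution
b.set_value('expose_model', solver='rvgeom', value=False)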
End of explanation
b.flip_constraint('asini@binary', solve_for='sma@binary')
b.adopt_solution('rvgeom_solution')
b.run_compute()
_ = b.plot(x='phases', show=True)
Explanation: As we can see all values look okay, and we have asini@binary in the twigs, which means we'll need to flip the asini constraint to be able to set it with adopt_solution:
End of explanation
b = phoebe.default_binary()
b.add_dataset('rv', component='primary', times=times, rvs = rvs1, sigmas=sigmas_rv)
b.run_compute()
_ = b.plot(legend=True, show=True)
b.add_solver('estimator.rv_geometry', solver='rvgeom')
b.run_solver('rvgeom', solution='rvgeom_solution')
print(b.get_value('fitted_twigs', solution='rvgeom_solution'))
print(b.get_value('fitted_values', solution='rvgeom_solution'))
Explanation: single-lined RVs
In some cases, only one RV is available, in which case not all parameters can be estimated with rv_geometry. Let's recreate the above example with only providing the primary RV and see how the solution differs.
End of explanation
b.flip_constraint('asini@primary', solve_for='sma@binary')
b.adopt_solution('rvgeom_solution')
b.run_compute()
_ = b.plot(x='phases', show=True)
Explanation: If we compare the fitted_twigs from this solution with our two-RV solution, we'll notice two things:
- q is missing from the list
- We have asini@primary instead of asini@binary.
This is because with only one RV we cannot get a reliable estimate of the mass ratio and semi-major axis of the system and instead we revert to what can be estimated from just the one available RV, or in this case the semi-major axis of the primary orbit around the center of mass.
We still need to flip the asini@primary before adopting the solution, with flip_constraint:
End of explanation |
14,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the Simulation Archive to restart a simulation
The Simulation Archive (SA) is a binary file that can be used to restart a simulation. This can be useful when running a long simulation. REBOUND can restart simulation exactly (bit by bit) when using a SA. There are some restriction to when a SA can be used. Please read the corresponding paper (Rein & Tamayo 2017) for details.
We first setup a simulation in the normal way.
Step1: We then initialize the SA and specify the output filename and output cadence. We can choose the output interval to either correspond to constant intervals in walltime (in seconds) or simulation time. Here, we choose walltime. To choose simulation time instead replace the walltime argument with interval.
Step2: Now, we can run the simulation forward in time.
Step3: Depending on how fast your computer is, the above command may take a couple of seconds. Once the simulation is done, we can delete it from memory and load it back in from the SA. You could do this at a later time. Note that this will even work if the SA file was generated on a different computer with a different operating system and even a different version of REBOUND. See Rein & Tamayo (2017) for a full discussion on machine independent code.
Step4: If we want to integrate the simulation further in time and append snapshots to the same SA, then we need to call the automateSimulationArchive method again (this is a fail-safe mechanism to avoid accidentally modifying a SA file). Note that we set the deletefile flag to False. Otherwise, we would create a new empty SA file. This outputs a warning because the file already exists (which is ok since we want to append that file).
Step5: Now, let's integrate the simulation further in time.
Step6: If we repeat the process, one can see that the SA binary file now includes the new snapshots from the restarted simulation. | Python Code:
import rebound
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.dt = 2.*3.1415/365.*6 # 6 days in units where G=1
sim.add(m=1.)
sim.add(m=1e-3,a=1.)
sim.add(m=5e-3,a=2.25)
sim.move_to_com()
Explanation: Using the Simulation Archive to restart a simulation
The Simulation Archive (SA) is a binary file that can be used to restart a simulation. This can be useful when running a long simulation. REBOUND can restart a simulation exactly (bit by bit) when using a SA. There are some restrictions on when a SA can be used. Please read the corresponding paper (Rein & Tamayo 2017) for details.
We first setup a simulation in the normal way.
End of explanation
sim.automateSimulationArchive("simulationarchive.bin", walltime=1.,deletefile=True)
Explanation: We then initialize the SA and specify the output filename and output cadence. We can choose the output interval to either correspond to constant intervals in walltime (in seconds) or simulation time. Here, we choose walltime. To choose simulation time instead replace the walltime argument with interval.
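For example, to write a snapshot every 1000 time units of simulation time instead of every second of walltime, one would call (the interval value here is arbitrary):
# simulation-time cadence instead of walltime cadence
sim.automateSimulationArchive("simulationarchive.bin", interval=1000., deletefile=True)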
End of explanation
sim.integrate(2e5)
Explanation: Now, we can run the simulation forward in time.
End of explanation
sim = None
sim = rebound.Simulation("simulationarchive.bin")
print("Time after loading simulation %.1f" %sim.t)
Explanation: Depending on how fast your computer is, the above command may take a couple of seconds. Once the simulation is done, we can delete it from memory and load it back in from the SA. You could do this at a later time. Note that this will even work if the SA file was generated on a different computer with a different operating system and even a different version of REBOUND. See Rein & Tamayo (2017) for a full discussion on machine independent code.
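If a snapshot other than the most recent one is needed, the archive can also be opened explicitly. This is a hedged sketch; the class and method names follow the REBOUND 3.x API and may differ in other versions:
sa = rebound.SimulationArchive("simulationarchive.bin")
print("Number of snapshots: %d" % len(sa))
sim_early = sa.getSimulation(1e5)   # snapshot closest to t = 1e5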
End of explanation
sim.automateSimulationArchive("simulationarchive.bin", walltime=1.,deletefile=False)
Explanation: If we want to integrate the simulation further in time and append snapshots to the same SA, then we need to call the automateSimulationArchive method again (this is a fail-safe mechanism to avoid accidentally modifying a SA file). Note that we set the deletefile flag to False. Otherwise, we would create a new empty SA file. This outputs a warning because the file already exists (which is ok since we want to append that file).
End of explanation
sim.integrate(sim.t+2e5)
Explanation: Now, let's integrate the simulation further in time.
End of explanation
sim = None
sim = rebound.Simulation("simulationarchive.bin")
print("Time after loading simulation %.1f" %sim.t)
Explanation: If we repeat the process, one can see that the SA binary file now includes the new snapshots from the restarted simulation.
End of explanation |
14,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
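Concretely, since the MNIST pixel values come in as floats in [0, 1], the rescaling used later in the training loop is just a linear map (a toy illustration):
# rescale pixel values from [0, 1] to [-1, 1] to match the tanh output range
x = np.array([0.0, 0.5, 1.0])
print(x*2 - 1)   # -> [-1.  0.  1.]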
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
End of explanation
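As an aside, the three calls above follow one pattern, which can be collected into a small helper; this is only a hedged sketch of the same tf.nn.sigmoid_cross_entropy_with_logits usage, not an extra piece of the network:
def gan_loss(logits, targets_are_real, smooth=0.0):
    # real targets become (1 - smooth) instead of exactly 1.0; fake targets stay 0.0
    if targets_are_real:
        labels = tf.ones_like(logits) * (1 - smooth)
    else:
        labels = tf.zeros_like(logits)
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# e.g. d_loss = gan_loss(d_logits_real, True, smooth=0.1) + gan_loss(d_logits_fake, False)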
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
14,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib 2015
requires the ipython and ipython-notebook packages
execute
Step1: Above commands enable pylab environment => direct access to numpy, scipy and matplotlib. The option 'inline' results in plot outputs to be directly embedded in the Notebook. If this causes problems, remove the option 'inline'.
Step2: A simple plotting example. Maybe some variations
Step3: Alternatively, Matplotlib understands the MATLAB syntax, e.g., for the above command (does not work with 'inline' enabled)
Step4: Available settings for a plot can be found this way (does not work with 'inline' enabled)
Step5: Some more commands for plot formatting
Step6: Figures can be saved in a number of output formats such as Postscript
Step7: Alternatively, you can also save figures in PNG bitmap and PDF vector formats. (Note that some export formats may not be supported on your platform.)
Step8: Get a handle to current figure and close it
Step9: Or in MATLAB style, close all open figures
Step10: Let's do a figure with several subpanels
Step11: Especially for figures with multiple subpanels it may be advisable to increase the figure size somewhat. Do this by using function arguments in the figure() call
Step12: By using Numpy arrays, Matplotlib can conveniently be used as a function plotting program
Step13: Certainly, you can do plots with logarithmic scale
Step14: Let's add grid lines
Step15: Analogously, you can use semilogy() and loglog() for plots with log y-axis and loglog plots.
Step16: Anybody bar charts?
Step17: Let's pimp the plot a little
Step18: For horizontal bar charts, you would use barh().
Where there are bars, there should be pies
Step19: As you will have seen, we retrieved handles to the individual pie slices. Let's do something with them
Step20: Matplotlib also offers quiver() plots which are illustrated in the following example (taken from http
Step21: Polar plots are also nicely illustrated on the very same homepage
Step22: Contour plots are well suited for visualization of three-dimensional data sets
Step23: A similar yet distinct representation is provided by pcolormesh().
Step24: Compare the two figures, spot the similarities and differences.
Matplotlib sports an add-on module for 3D graphics as detailed below. First, we need to import this module.
Step25: Let's switch to the Qt backend for 3D plotting
Step26: Then, we can play around.
Step27: Try moving and rotating the (so far empty) plot in three dimensions. Once you have calmed down, let's populate the plot with some data
Step28: In addition to the above 3D scatter plot, other plot types are supported, such as 3D surface plots
Step29: Try some other colormaps such as cm.bone, cm.spring or cm.cool (once more, these are the MATLAB color schemes).
We do one more example that shows the use of 3D surface and 3D contour plots (as opposed to the 2D contour plots above).
Step30: Let's display the same data in contour representation
Step31: Style sheets
Step32: There are several predefined style sheets for matplotlib. You can show all available styles by typing
Step33: to pick one of them, type e.g.
Step34: and your plots will look similar to those created with ggplot2 in R
Step35: Exercises for Numpy and Matplotlib
Exercise 1
Plot the following functions into individual subplots
Step36: Exercise 2
generate two 1D-arrays $A$ and $B$ of size $N$ containing Gaussian random numbers
Step37: Check whether they are correlated using a scatter plot
Step38: Plot their 2D density as a contour plot (hint | Python Code:
%matplotlib inline
from pylab import *
Explanation: Matplotlib 2015
requires the ipython and ipython-notebook packages
execute: ipython notebook
check out this resource for Matplotlib: http://matplotlib.org/gallery.html
End of explanation
xv=[1,2,3,4]; yv=[5,1,4,0]
plot(xv,yv)
Explanation: Above commands enable pylab environment => direct access to numpy, scipy and matplotlib. The option 'inline' results in plot outputs to be directly embedded in the Notebook. If this causes problems, remove the option 'inline'.
End of explanation
plot(xv,yv,'ro')
myplot=plot(xv,yv,'k--')
setp(myplot,linewidth=3.0,marker='+',markersize=30)
Explanation: A simple plotting example. Maybe some variations:
End of explanation
myplot=plot(xv,yv,'k--')
setp(myplot,'linewidth',3.0,'marker','+','markersize',30)
Explanation: Alternatively, Matplotlib understands the MATLAB syntax, e.g., for the above command (does not work with 'inline' enabled):
End of explanation
setp(myplot)
Explanation: Available settings for a plot can be found this way (does not work with 'inline' enabled):
End of explanation
axis()
axis([0.5,4.5,-0.5,5.5])
ti=title('Very important data')
xl=xlabel('time'); yl=ylabel('value')
setp(xl,fontweight='bold')
Explanation: Some more commands for plot formatting:
End of explanation
savefig('foo.ps', dpi=600, format='ps',orientation='landscape')
Explanation: Figures can be saved in a number of output formats such as Postscript:
End of explanation
savefig('foo.png', dpi=600, format='png',orientation='landscape')
savefig('foo.pdf', dpi=600, format='pdf',orientation='landscape')
Explanation: Alternatively, you can also save figures in PNG bitmap and PDF vector formats. (Note that some export formats may not be supported on your platform.)
End of explanation
myfig=gcf()
close(myfig)
Explanation: Get a handle to current figure and close it:
End of explanation
close('all')
Explanation: Or in MATLAB style, close all open figures:
End of explanation
fig2=figure()
subplot(2,1,1)
plot(xv,yv,'b-')
subplot(2,1,2)
plot(yv,xv,'ro')
close(fig2)
Explanation: Let's do a figure with several subpanels:
End of explanation
fig2=figure(figsize=(10,10))
subplot(2,1,1)
plot(xv,yv,'b-')
subplot(2,1,2)
plot(yv,xv,'ro')
Explanation: Especially for figures with multiple subpanels it may be advisable to increase the figure size somewhat. Do this by using function arguments in the figure() call:
End of explanation
xv=np.arange(-10,10.5,0.5); xv
plot(xv,2*xv**3-5*xv**2+7*xv)
plot(xv,2000*cos(xv),'r--')
text(-10,-2800,'curve A')
text(3,1500,'curve B')
Explanation: By using Numpy arrays, Matplotlib can conveniently be used as a function plotting program:
End of explanation
close('all'); xv_lin=np.arange(-3,3.01,0.02)
xv=10.**xv_lin
semilogx(xv,exp(-xv/0.01)+0.5*exp(-xv/10)+0.2* exp(-xv/200))
Explanation: Certainly, you can do plots with logarithmic scale:
End of explanation
semilogx(xv,exp(-xv/0.01)+0.5*exp(-xv/10)+0.2* exp(-xv/200))
grid(color='k')
Explanation: Let's add grid lines:
End of explanation
semilogy(xv,exp(-xv/0.01)+0.5*exp(-xv/10)+0.2* exp(-xv/200))
Explanation: Analogously, you can use semilogy() and loglog() for plots with log y-axis and loglog plots.
End of explanation
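Since loglog() is only mentioned above, here is a small sketch of a double-logarithmic plot; the power law is an arbitrary example, chosen because it shows up as a straight line on these axes:
xv=np.logspace(-1,3,200)
loglog(xv, 5*xv**-2, 'g-')
grid(True, which='both')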
close('all')
xv=[0.5,1.5,2.5,3.5]; yv=[2,5,1,6]
mybar=bar(xv, yv, width=1, yerr=0.5)
Explanation: Anybody bar charts?
End of explanation
mybar=bar(xv, yv, width=1, yerr=0.5)
xticks(xv, ['A','B','C','D'])
setp(mybar, color='r',
edgecolor='k')
Explanation: Let's pimp the plot a little:
End of explanation
close('all')
figure(figsize=(5,5))
handles=pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D'])
handles
Explanation: For horizontal bar charts, you would use barh().
Where there are bars, there should be pies:
End of explanation
figure(figsize=(5,5))
handles=pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D'])
setp(handles[0][0], color='y')
setp(handles[1][0], text='Blubber')
Explanation: As you will have seen, we retrieved handles to the individual pie slices. Let's do something with them:
End of explanation
close('all')
n=8; X,Y=np.mgrid[0:n,0:n]
T=np.arctan2(Y-n/2.0,X-n/2.0)
R=10+np.sqrt((Y-n/2.0)**2+(X-n/2.0)**2)
U,V=R*np.cos(T),R*np.sin(T)
axes([0.025,0.025,0.95,0.95])
quiver(X,Y,U,V,R,alpha=.5)
quiver(X,Y,U,V, edgecolor='k', facecolor= 'None', linewidth=.5)
show()
Explanation: Matplotlib also offers quiver() plots which are illustrated in the following example (taken from http://www.loria.fr/~rougier/teaching/matplotlib/):
End of explanation
close('all')
ax=axes([0.025,0.025,0.95,0.95],polar=True)
N=20; theta=np.arange(0.0,2*np.pi,2*np.pi/N)
radii=10*np.random.rand(N)
width=np.pi/4*np.random.rand(N)
bars=bar(theta,radii,width=width,bottom=0.0)
for r,bar in zip(radii,bars):
bar.set_facecolor( cm.jet(r/10.))
bar.set_alpha(0.5)
show()
Explanation: Polar plots are also nicely illustrated on the very same homepage:
End of explanation
close('all')
xv=linspace(-10,10,100); yv=xv
X,Y=meshgrid(xv,yv)
Z=exp(-((X-1)**2/2/0.5**2)-((Y+2)**2/2/3**2))
Z=Z+1.5*exp(-((X-5)**2/2/4**2)-((Y-6)**2/2/3**2))
contourf(X,Y,Z,10,alpha=0.5,cmap=cm.hot)
C=contour(X,Y,Z,10,colors='black', linewidth=0.5)
clabel(C,inline=1,fontsize=10)
Explanation: Contour plots are well suited for visualization of three-dimensional data sets:
End of explanation
figure()
pcolormesh(X,Y,Z,alpha=0.5,cmap=cm.hot)
axis([-5,10,-8,10])
Explanation: A similar yet distinct representation is provided by pcolormesh().
End of explanation
from mpl_toolkits.mplot3d import Axes3D
Explanation: Compare the two figures, spot the similarities and differences.
Matplotlib sports an add-on module for 3D graphics as detailed below. First, we need to import this module.
End of explanation
%matplotlib
Explanation: Let's switch to the Qt backend for 3D plotting
End of explanation
close('all')
fig=figure(); ax=Axes3D(fig)
Explanation: Then, we can play around.
End of explanation
close('all')
fig=figure(); ax=Axes3D(fig)
import random as rn
xv=[]; yv=[]; zv=[]
for c in range(100):
xv.append(rn.random()); yv.append(rn.random())
zv.append(rn.random())
ax.scatter(xv,yv,zv)
Explanation: Try moving and rotating the (so far empty) plot in three dimensions. Once you have calmed down, let's populate the plot with some data:
End of explanation
close('all'); fig=figure()
ax=Axes3D(fig)
xv=linspace(-10,10,100); yv=linspace(-10,10,100)
cx,cy=meshgrid(xv,yv)
cz=0.5*cx+exp(-cy**2)
tilt=ax.plot_surface(cx,cy,cz,linewidth=0, cmap=cm.jet)
Explanation: In addition to the above 3D scatter plot, other plot types are supported, such as 3D surface plots:
End of explanation
close('all'); fig=figure()
ax=Axes3D(fig)
xv=linspace(-10,10,100); yv=linspace(-10,10,100)
cx,cy=meshgrid(xv,yv)
cz=0*cx
def gauss2D(x0,y0,sigx=1,sigy=1,height=1):
z=height*exp(-((cx-x0)**2/2/sigx**2)-((cy-y0)**2/2/sigy**2))
return z
cz=cz+gauss2D(-2,3)
cz=cz+gauss2D(2,4,2,3)
ax.plot_surface(cx,cy,cz,linewidth=0,cstride=2, rstride=2,cmap=cm.jet)
Explanation: Try some other colormaps such as cm.bone, cm.spring or cm.cool (once more, these are the MATLAB color schemes).
We do one more example that shows the use of 3D surface and 3D contour plots (as opposed to the 2D contour plots above).
End of explanation
close('all'); fig=figure()
ax=Axes3D(fig)
ax.contour(cx,cy,cz,cstride=2,rstride=2, cmap=cm.jet)
Explanation: Let's display the same data in contour representation:
End of explanation
%matplotlib inline
close('all')
Explanation: Style sheets
End of explanation
style.available
Explanation: There are several predefined style sheets for matplotlib. You can show all available styles by typing
End of explanation
style.use('ggplot')
Explanation: to pick one of them, type e.g.
End of explanation
x = np.linspace(0,10,100)
y = np.sin(x)
plot(x,y)
Explanation: and your plots will look similar to those created with ggplot2 in R
End of explanation
%matplotlib inline
Explanation: Exercises for Numpy and Matplotlib
Exercise 1
Plot the following functions into individual subplots:
$f(x) = e^{-\alpha x^2},\quad g(x) = \alpha x^3-5x^2, \quad h(x) = \mathrm{erf}(x)$
with $\alpha \in {0.1,0.5,1}$
hint: use scipy for the definition of $\mathrm{erf}(x)$
End of explanation
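One possible sketch of a solution for Exercise 1 (layout, ranges and styling are arbitrary choices):
from scipy.special import erf
x=np.linspace(-3,3,200)
fig,axes=subplots(3,1,figsize=(6,10))
for a in [0.1,0.5,1.0]:
    axes[0].plot(x, np.exp(-a*x**2), label=r'$\alpha$=%.1f' % a)
    axes[1].plot(x, a*x**3-5*x**2, label=r'$\alpha$=%.1f' % a)
axes[2].plot(x, erf(x), 'k-')
axes[0].set_title('$f(x)$'); axes[0].legend()
axes[1].set_title('$g(x)$'); axes[1].legend()
axes[2].set_title(r'$h(x)=\mathrm{erf}(x)$')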
N = 1e4
Explanation: Exercise 2
generate two 1D-arrays $A$ and $B$ of size $N$ containing Gaussian random numbers
End of explanation
%matplotlib inline
Explanation: Check whether they are correlated using a scatter plot
End of explanation
%matplotlib
Explanation: Plot their 2D density as a contour plot (hint: the density can be obtained from a histogram)
Plot the density as 3D surface plot
End of explanation |
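A possible sketch for the 2D density part of Exercise 2 (the artificial correlation, bin count and colormap are arbitrary choices; the density comes from np.histogram2d as hinted):
A=np.random.randn(int(N)); B=0.5*A+np.random.randn(int(N))
H,xedges,yedges=np.histogram2d(A,B,bins=40)
xc=0.5*(xedges[:-1]+xedges[1:]); yc=0.5*(yedges[:-1]+yedges[1:])
CX,CY=meshgrid(xc,yc)
figure()
contourf(CX,CY,H.T,15,cmap=cm.hot)
colorbar()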
14,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build and test the environment
This document explains how to set up your environment for the geocomputing course.
1. Install Anaconda
First, install Anaconda for Python 3.5, following the instructions there.
If you wish, you can also install git for your platform, but it's not a requirement.
2. Get the course materials
If you are using git, clone this repo
Step1: 5. Check conda installs
If any of these fail, go out to a terminal and do this, replacing <package> with the name of the package.
source activate geocomp # Or whatever is the name of the environment
conda install <package>
Step2: 6. Check pip installs
If any of these fail, go out to a terminal and do this, replacing <package> with the name of the package.
source activate geocomp # Or whatever is the name of the environment
pip install <package>
Step3: 7. Download data | Python Code:
!python -V
# Should be 3.5
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import ricker
Explanation: Build and test the environment
This document explains how to set up your environment for the geocomputing course.
1. Install Anaconda
First, install Anaconda for Python 3.5, following the instructions there.
If you wish, you can also install git for your platform, but it's not a requirement.
2. Get the course materials
If you are using git, clone this repo: https://github.com/EvanBianco/Practical_Programming_for_Geoscientists
If not, download this ZIP file: https://github.com/EvanBianco/Practical_Programming_for_Geoscientists/archive/master.zip
Put the folder somewhere you will find it again.
This file is in the top level folder, called Build_and_test_environment.ipynb.
3. a. Set up the environment
Do the following in a terminal:
conda config --add channels conda-forge
conda create -n geocomputing python=3.5 anaconda
Start the environment:
source activate geocomputing
Now install packages:
conda install numpy # Ensures latest.
conda install obspy
conda install geopandas
conda install ipyparallel
conda install tqdm
conda install folium
pip install lasio
pip install welly
pip install bruges
All of this should go without trouble.
3. b. Enter the environment and the repo
Now you will need to start running this notebook.
# If you didn't do this already:
git clone https://github.com/EvanBianco/Practical_Programming_for_Geoscientists
Then...
cd geocomputing
jupyter notebook Build_and_test_environment.ipynb
or if you're already running it, restart its kernel... and we can go on to check the environment.
4. Check anaconda basics
End of explanation
import pandas as pd
import requests
import numba
import ipyparallel as ipp
import obspy
import geopandas as gpd # Not a catastrophe if missing.
import folium # Not a catastrophe if missing.
Explanation: 5. Check conda installs
If any of these fail, go out to a terminal and do this, replacing <package> with the name of the package.
source activate geocomp # Or whatever is the name of the environment
conda install <package>
End of explanation
import lasio
import welly
import bruges as b
Explanation: 6. Check pip installs
If any of these fail, go out to a terminal and do this, replacing <package> with the name of the package.
source activate geocomp # Or whatever is the name of the environment
pip install <package>
End of explanation
import os
import requests
files = [
'2D_Land_vibro_data_2ms.tgz',
'3D_gathers_pstm_nmo_X1001.sgy',
'Penobscot_0-1000ms.sgy.gz',
'Penobscot_NumPy.npy.gz',
]
url = "https://s3.amazonaws.com/agilegeo/"
localpath = '' # For CWD.
for file in files:
print(file)
r = requests.get(url+file, stream=True)
chunk_size = 1024 * 1024 # 1MB
with open(os.path.join(localpath, file), 'wb') as fd:
for chunk in r.iter_content(chunk_size):
if chunk:
fd.write(chunk)
Explanation: 7. Download data
End of explanation |
14,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color:gray">ipyrad-analysis toolkit:</span> treeslider
Step1: Load the data
The treeslider() tool takes the .seqs.hdf5 database file from ipyrad as its input file. Select scaffolds by their index (integer) which can be found in the .scaffold_table.
Step2: Quick full example
Here I select the scaffold Qrob_Chr03 (scaffold_idx=2), and run 2Mb windows (window_size) non-overlapping (2Mb slide_size) across the entire scaffold. I use the default inference method "raxml", and modify its default arguments to run 100 bootstrap replicates. More details on modifying raxml params later. I set it to skip windows with <10 SNPs (minsnps), and to filter sites within windows (mincov) to only include those that have coverage across all 9 clades, with samples grouped into clades using an imap dictionary.
Step3: The results table (tree table)
The main result of a tree slider analysis is the tree_table. This is a pandas dataframe that includes information about the size and informativeness of each window in addition to the inferred tree for that window. This table is also saved as a CSV file. You can later re-load this CSV to perform further analysis on the tree results. For example, see the clade_weights tool for how to analyze the support for clades throughout the genome, or see the example tutorial for running ASTRAL species tree or SNAQ species network analyses using the list of trees inferred here.
Step4: Filter and examine the tree table
Some windows in your analysis may not include a tree if for example there was too much missing data or insufficient information in that region. You can use pandas masking like below to filter based on various criteria.
Step5: The tree inference command
You can examine the command that will be called on each genomic window. By modifying the inference_args above we can modify this string. See examples later in this tutorial.
Step6: Run tree inference jobs in parallel
To run the command on every window across all available cores call the .run() command. This will automatically save checkpoints to a file of the tree_table as it runs, and can be restarted later if it is interrupted.
Step7: The tree table
Our goal is to fill the .tree_table, a pandas DataFrame where rows are genomic windows and the information content of each window is recorded, and a newick string tree is inferred and filled in for each. The tree table is also saved as a CSV formatted file in the workdir. You can re-load it later using Pandas. Below I demonstrate how to plot results from the tree_table. To examine how phylogenetic relationships vary across the genome see also the clade_weights() tool, which takes the tree_table as input.
Step8: <h3><span style="color:red">Advanced</span>: Plots tree results </h3>
Step9: Draw cloud tree
Using toytree you can easily draw a cloud tree of overlapping gene trees to visualize discordance. These typically look much better if you root the trees, order tips by their consensus tree order, and do not use edge lengths. See below for an example, and see the toytree documentation.
Step10: <h3><span style="color | Python Code:
# conda install ipyrad -c bioconda
# conda install raxml -c bioconda
# conda install toytree -c eaton-lab
import ipyrad.analysis as ipa
import toytree
Explanation: <span style="color:gray">ipyrad-analysis toolkit:</span> treeslider
<h5><span style="color:red">(Reference only method)</span></h5>
With reference mapped RAD loci you can select windows of loci located close together on scaffolds and automate extracting and filtering and concatenating the RAD data to write to phylip format (see also the window_extracter tool.) The treeslider tool here automates this process across many windows, distributes the tree inference jobs in parallel, and organizes the results.
Key features:
Filter and concatenate ref-mapped RAD loci into alignments.
Group individuals into clades represented by consensus (reduces missing data).
Distribute phylogenetic inference jobs (e.g., raxml) in parallel.
Easily restart from checkpoints if interrupted.
Results written as a tree_table (dataframe).
Can be paired with other tools for further analysis (e.g., see clade_weights).
Required software
End of explanation
# the path to your HDF5 formatted seqs file
data = "/home/deren/Downloads/ref_pop2.seqs.hdf5"
# check scaffold idx (row) against scaffold names
ipa.treeslider(data).scaffold_table.head()
Explanation: Load the data
The treeslider() tool takes the .seqs.hdf5 database file from ipyrad as its input file. Select scaffolds by their index (integer) which can be found in the .scaffold_table.
End of explanation
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="test2",
data="/home/deren/Downloads/ref_pop2.seqs.hdf5",
workdir="analysis-treeslider",
scaffold_idxs=2,
window_size=250000,
slide_size=250000,
inference_method="raxml",
inference_args={"N": 100, "T": 4},
minsnps=10,
consensus_reduce=True,
mincov=5,
imap={
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi-N": ["TXGR3", "TXMD3"],
"fusi-S": ["MXED8", "MXGT4"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
},
)
ts.show_inference_command()
ts.run(auto=True, force=True)
Explanation: Quick full example
Here I select the scaffold Qrob_Chr03 (scaffold_idx=2), and run 2Mb windows (window_size) non-overlapping (2Mb slide_size) across the entire scaffold. I use the default inference method "raxml", and modify its default arguments to run 100 bootstrap replicates. More details on modifying raxml params later. I set it to skip windows with <10 SNPs (minsnps), and to filter sites within windows (mincov) to only include those that have coverage across all 9 clades, with samples grouped into clades using an imap dictionary.
End of explanation
ts.tree_table.head()
Explanation: The results table (tree table)
The main result of a tree slider analysis is the tree_table. This is a pandas dataframe that includes information about the size and informativeness of each window in addition to the inferred tree for that window. This table is also saved as a CSV file. You can later re-load this CSV to perform further analysis on the tree results. For example, see the clade_weights tool for how to analyze the support for clades throughout the genome, or see the example tutorial for running ASTRAL species tree or SNAQ species network analyses using the list of trees inferred here.
End of explanation
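Because the tree table is saved as a CSV in the workdir, it can be re-loaded later without re-running the analysis. A hedged sketch (the exact file name below is an assumption built from the workdir and analysis name used above; check the workdir for the file treeslider actually wrote):
import pandas as pd
# hypothetical path: "<workdir>/<name>.tree_table.csv"
tree_table = pd.read_csv("analysis-treeslider/test2.tree_table.csv", index_col=0)
tree_table.head()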
# example: remove any rows where the tree is NaN
df = ts.tree_table.loc[ts.tree_table.tree.notna()]
mtre = toytree.mtree(df.tree)
mtre.treelist = [i.root("reference") for i in mtre.treelist]
mtre.draw_tree_grid(
nrows=3, ncols=4, start=20,
tip_labels_align=True,
tip_labels_style={"font-size": "9px"},
);
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="test",
data="/home/deren/Downloads/ref_pop2.seqs.hdf5",
workdir="analysis-treeslider",
scaffold_idxs=2,
window_size=1000000,
slide_size=1000000,
inference_method="mb",
inference_args={"N": 0, "T": 4},
minsnps=10,
mincov=9,
consensus_reduce=True,
imap={
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi-N": ["TXGR3", "TXMD3"],
"fusi-S": ["MXED8", "MXGT4"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
},
)
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="test",
data="/home/deren/Downloads/ref_pop2.seqs.hdf5",
workdir="analysis-treeslider",
scaffold_idxs=2,
window_size=2000000,
slide_size=2000000,
inference_method="raxml",
inference_args={"N": 100, "T": 4},
minsnps=10,
mincov=9,
imap={
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi-N": ["TXGR3", "TXMD3"],
"fusi-S": ["MXED8", "MXGT4"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
},
)
Explanation: Filter and examine the tree table
Some windows in your analysis may not include a tree if for example there was too much missing data or insufficient information in that region. You can use pandas masking like below to filter based on various criteria.
End of explanation
# this is the tree inference command that will be used
ts.show_inference_command()
Explanation: The tree inference command
You can examine the command that will be called on each genomic window. By modifying the inference_args above we can modify this string. See examples later in this tutorial.
End of explanation
ts.run(auto=True, force=True)
Explanation: Run tree inference jobs in parallel
To run the command on every window across all available cores call the .run() command. This will automatically save checkpoints to a file of the tree_table as it runs, and can be restarted later if it is interrupted.
End of explanation
# the tree table is automatically saved to disk as a CSV during .run()
ts.tree_table.head()
Explanation: The tree table
Our goal is to fill the .tree_table, a pandas DataFrame where rows are genomic windows and the information content of each window is recorded, and a newick string tree is inferred and filled in for each. The tree table is also saved as a CSV formatted file in the workdir. You can re-load it later using Pandas. Below I demonstrate how to plot results from the tree_table. To examine how phylogenetic relationships vary across the genome see also the clade_weights() tool, which takes the tree_table as input.
End of explanation
# filter to only windows with >50 SNPS
trees = ts.tree_table[ts.tree_table.snps > 50].tree.tolist()
# load all trees into a multitree object
mtre = toytree.mtree(trees)
# root trees and collapse nodes with <50 bootstrap support
mtre.treelist = [
i.root("reference").collapse_nodes(min_support=50)
for i in mtre.treelist
]
# draw the first 12 trees in a grid
mtre.draw_tree_grid(
nrows=3, ncols=4, start=0,
tip_labels_align=True,
tip_labels_style={"font-size": "9px"},
);
Explanation: <h3><span style="color:red">Advanced</span>: Plots tree results </h3>
Examine multiple trees
You can select trees from the .tree column of the tree_table and plot them one by one using toytree, or any other tree drawing tool. Below I use toytree to draw a grid of the first 12 trees.
End of explanation
# filter to only windows with >50 SNPS (this could have been done in run)
trees = ts.tree_table[ts.tree_table.snps > 50].tree.tolist()
# load all trees into a multitree object
mtre = toytree.mtree(trees)
# root trees
mtre.treelist = [i.root("reference") for i in mtre.treelist]
# infer a consensus tree to get best tip order
ctre = mtre.get_consensus_tree()
# draw the first 12 trees in a grid
mtre.draw_cloud_tree(
width=400,
height=400,
fixed_order=ctre.get_tip_labels(),
use_edge_lengths=False,
);
Explanation: Draw cloud tree
Using toytree you can easily draw a cloud tree of overlapping gene trees to visualize discordance. These typically look much better if you root the trees, order tips by their consensus tree order, and do not use edge lengths. See below for an example, and see the toytree documentation.
End of explanation
# select a scaffold idx, start, and end positions
ts = ipa.treeslider(
name="chr1_w500K_s100K",
data=data,
workdir="analysis-treeslider",
scaffold_idxs=[0, 1, 2],
window_size=500000,
slide_size=100000,
minsnps=10,
inference_method="raxml",
inference_args={"m": "GTRCAT", "N": 10, "f": "d", 'x': None},
)
# this is the tree inference command that will be used
ts.show_inference_command()
Explanation: <h3><span style="color:red">Advanced</span>: Modify the raxml command</h3>
In this analysis I entered multiple scaffolds to create windows across each scaffold. I also entered a smaller slide size than window size so that windows are partially overlapping. The raxml command string was modified to perform 10 full searches with no bootstraps.
End of explanation |
14,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing SourceTracker 2 Results
Once you have run sourcetracker2 to produce the mixing proportions of our sources to your sink samples, you'll likely want to visualize the results for understanding your scientific question of interest. Given that many individuals are using IPython notebooks inline for analysis, I'll demonstrate some quick ways to visualize the results in a notebook using pandas.
The results files are simple tab-delimited text documents, which will make importing into R, Excel, MATLAB, or your favorite visualization package very easy!
Step1: The above table shows us that we had 5 sink samples and 4 possible source environments (including the Unknown).
The pandas package has built in plotting features (built upon matplotlib), that allow for quick visualization.
Step2: If the user wanted to plot the standard deviations for the draws for estimating the mixing proportions, simply create a new pandas dataframe with the mixing_proportions_stds.txt, and pass that dataframe into the plotting function with the yerr argument.
Step3: Pandas also allows the user to specify which columns of interest to plot
Step4: Or, if the user wants to use subplots, pandas also allows for that
Step5: Matplotlib options
Because pandas uses matplotlib in the background, you can use the matplotlib API to alter your graphs | Python Code:
# Import packages of interest
# You might need to install these in your local environment
# which can be easily accomplished with $pip (package)
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# Move into our tiny test directory
cd ../data/tiny-test/
# read in the mixing proportions result file to a pandas DataFrame
# sep='\t' to denote tab delimited file
# index_col=0 to denote that pandas should set the index values as the SampleIDs
results = pd.read_csv('mixing_proportions/mixing_proportions.txt', sep='\t', index_col=0)
results
Explanation: Visualizing SourceTracker 2 Results
Once you have run sourcetracker2 to produce the mixing proportions of our sources to your sink samples, you'll likely want to visualize the results for understanding your scientific question of interest. Given that many individuals are using IPython notebooks inline for analysis, I'll demonstrate some quick ways to visualize the results in a notebook using pandas.
The results files are simple tab-delimited text documents, which will make importing into R, Excel, MATLAB, or your favorite visualization package very easy!
End of explanation
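Each row of this table holds the mixing proportions for one sink, so a quick sanity check is that every row (sources plus Unknown) sums to approximately 1:
# each sink's proportions should sum to roughly 1
results.sum(axis=1)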
results.plot(kind='bar', grid=True, figsize=(8,6), ylim=(0,0.5))
Explanation: The above table shows us that we had 5 sink samples and 4 possible source environments (including the Unknown).
The pandas package has built in plotting features (built upon matplotlib), that allow for quick visualization.
End of explanation
stdevs = pd.read_csv('mixing_proportions/mixing_proportions_stds.txt', sep='\t', index_col=0)
stdevs
# Plot mixing proportions with yerr
results.plot(kind='bar', grid=True, figsize=(8,6), ylim=(0,0.5), yerr=stdevs)
Explanation: If the user wanted to plot the standard deviations for the draws for estimating the mixing proportions, simply create a new pandas dataframe with the mixing_proportions_stds.txt, and pass that dataframe into the plotting function with the yerr argument.
End of explanation
# Plotting only the drainwater source
results['drainwater'].plot(kind='bar', grid=True, figsize=(8,6), ylim=(0,0.5), yerr=stdevs, color='pink', title='Mixing Proportions')
Explanation: Pandas also allows the user to specify which columns of interest to plot:
End of explanation
# Plot mixing proportions with yerr
results[['drainwater', 'seawater']].plot(subplots=True, kind='bar', grid=True, figsize=(8,6), ylim=(0,0.5), yerr=stdevs)
Explanation: Or, if the user wants to use subplots, pandas also allows for that:
End of explanation
# set figure in matplotlib
fig, ax = plt.subplots(1,1)
# read in the dataframe
# use 'ax=ax' to assign the dataframe to the matplotlib axes object
results.plot(kind='line', lw=5, ax=ax)
# set options
ax.set_ylabel('Proportions')
ax.set_xlabel('Sample')
ax.set_title('Mixing Proportions')
# move legend
ax.legend(bbox_to_anchor=(1.4, 0.4))
Explanation: Matplotlib options
Because pandas uses matplotlib in the background, you can use the matplotlib API to alter your graphs:
End of explanation |
14,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
medication
The medications table reflects the active medication orders for patients. These are orders but do not necessarily reflect administration to the patient. For example, while existence of data in the infusionDrug table confirms a patient received a continuous infusion, existence of the same data in this table only indicates that the infusion was ordered for the patient. Most orders are fulfilled, but not all. Furthermore, many orders are done pro re nata, or PRN, which means "when needed". Administration of these orders is difficult to quantify.
In the US, all orders must be reviewed by a pharmacist. The majority of hospitals have an HL7 medication interface system in place which automatically synchronizes the orders with eCareManager (the source of this database) as they are verified by the pharmacist in the source pharmacy system. For hospitals without a medication interface, the eICU staff may enter a selection of medications to facilitate population management and completeness for reporting purposes.
Step2: Examine a single patient
Step4: Here we can see that, roughly on ICU admission, the patient had an order for vancomycin, aztreonam, and tobramycin.
Identifying patients admitted on a single drug
Let's look for patients who have an order for vancomycin using exact text matching.
Step6: Exact text matching is fairly weak, as there's no systematic reason to prefer upper case or lower case. Let's relax the case matching.
Step8: HICL codes are used to group together drugs which have the same underlying ingredient (i.e. most frequently this is used to group brand name drugs with the generic name drugs). We can see above the HICL for vancomycin is 10093, so let's try grabbing that.
Step10: No luck! I wonder what we missed? Let's go back to the original query, this time retaining HICL and the name of the drug.
Step11: It appears there are more than one HICL - we can group by HICL in this query to get an idea.
Step13: Unfortunately, we can't be sure that these HICLs always identify only vancomycin. For example, let's look at drugnames for HICL = 1403.
Step15: This HICL seems more focused on the use of creams than on vancomycin. Let's instead inspect the top 3.
Step17: This is fairly convincing that these only refer to vancomycin. An alternative approach is to acquire the code book for HICL codes and look up vancomycin there.
Hospitals with data available | Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
Explanation: medication
The medications table reflects the active medication orders for patients. These are orders but do not necessarily reflect administration to the patient. For example, while existence of data in the infusionDrug table confirms a patient received a continuous infusion, existence of the same data in this table only indicates that the infusion was ordered for the patient. Most orders are fulfilled, but not all. Furthermore, many orders are done pro re nata, or PRN, which means "when needed". Administration of these orders is difficult to quantify.
In the US, all orders must be reviewed by a pharmacist. The majority of hospitals have an HL7 medication interface system in place which automatically synchronizes the orders with eCareManager (the source of this database) as they are verified by the pharmacist in the source pharmacy system. For hospitals without a medication interface, the eICU staff may enter a selection of medications to facilitate population management and completeness for reporting purposes.
End of explanation
patientunitstayid = 237395
query = query_schema + """
select *
from medication
where patientunitstayid = {}
order by drugorderoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
df.columns
# Look at a subset of columns
cols = ['medicationid','patientunitstayid',
        'drugorderoffset','drugstartoffset', 'drugstopoffset',
'drugivadmixture', 'drugordercancelled', 'drugname','drughiclseqno', 'gtc',
'dosage','routeadmin','loadingdose', 'prn']
df[cols].head().T
Explanation: Examine a single patient
End of explanation
drug = 'VANCOMYCIN'
query = query_schema + """
select
  distinct patientunitstayid
from medication
where drugname like '%{}%'
""".format(drug)
df_drug = pd.read_sql_query(query, con)
print('{} unit stays with {}.'.format(df_drug.shape[0], drug))
Explanation: Here we can see that, roughly on ICU admission, the patient had an order for vancomycin, aztreonam, and tobramycin.
Identifying patients admitted on a single drug
Let's look for patients who have an order for vancomycin using exact text matching.
End of explanation
drug = 'VANCOMYCIN'
query = query_schema + """
select
  distinct patientunitstayid
from medication
where drugname ilike '%{}%'
""".format(drug)
df_drug = pd.read_sql_query(query, con)
print('{} unit stays with {}.'.format(df_drug.shape[0], drug))
Explanation: Exact text matching is fairly weak, as there's no systematic reason to prefer upper case or lower case. Let's relax the case matching.
End of explanation
hicl = 10093
query = query_schema + """
select
  distinct patientunitstayid
from medication
where drughiclseqno = {}
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
print('{} unit stays with HICL = {}.'.format(df_hicl.shape[0], hicl))
Explanation: HICL codes are used to group together drugs which have the same underlying ingredient (i.e. most frequently this is used to group brand name drugs with the generic name drugs). We can see above the HICL for vancomycin is 10093, so let's try grabbing that.
End of explanation
drug = 'VANCOMYCIN'
query = query_schema + """
select
  drugname, drughiclseqno, count(*) as n
from medication
where drugname ilike '%{}%'
group by drugname, drughiclseqno
order by n desc
""".format(drug)
df_drug = pd.read_sql_query(query, con)
df_drug.head()
Explanation: No luck! I wonder what we missed? Let's go back to the original query, this time retaining HICL and the name of the drug.
End of explanation
df_drug['drughiclseqno'].value_counts()
Explanation: It appears there are more than one HICL - we can group by HICL in this query to get an idea.
End of explanation
hicl = 1403
query = query_schema + """
select
  drugname, count(*) as n
from medication
where drughiclseqno = {}
group by drugname
order by n desc
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
df_hicl.head()
Explanation: Unfortunately, we can't be sure that these HICLs always identify only vancomycin. For example, let's look at drugnames for HICL = 1403.
End of explanation
for hicl in [4042, 10093, 37442]:
    query = query_schema + """
    select
      drugname, count(*) as n
    from medication
    where drughiclseqno = {}
    group by drugname
    order by n desc
    """.format(hicl)
df_hicl = pd.read_sql_query(query, con)
print('HICL {}'.format(hicl))
print('Number of rows: {}'.format(df_hicl['n'].sum()))
print('Top 5 rows by frequency:')
print(df_hicl.head())
print()
Explanation: This HICL seems more focused on the use of creams than on vancomycin. Let's instead inspect the top 3.
End of explanation
query = query_schema + """
with t as
(
select distinct patientunitstayid
from medication
)
select
  pt.hospitalid
  , count(distinct pt.patientunitstayid) as number_of_patients
  , count(distinct t.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join t
  on pt.patientunitstayid = t.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
Explanation: This is fairly convincing that these only refer to vancomycin. An alternative approach is to acquire the code book for HICL codes and look up vancomycin there.
Hospitals with data available
End of explanation |
14,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detection of statistically significant differences in gene expression levels of cancer patients
The data for this task come from a study carried out at the Stanford School of Medicine. The study attempted to identify a set of genes that would allow breast cancer to be diagnosed more accurately at its earliest stages.
The experiment involved 24 people who did not have breast cancer (normal), 25 people in whom the disease had been diagnosed at an early stage (early neoplasia), and 23 people with pronounced symptoms (cancer).
Step1: The scientists sequenced biological material from the participants in order to understand which of these genes are most active in the cells of sick people.
Sequencing is the measurement of gene activity in the analysed sample by counting the amount of RNA corresponding to each gene.
The data for this assignment contain exactly this quantitative measure of activity for each of the 15748 genes in each of the 72 people who took part in the experiment.
You will need to identify the genes whose activity differs statistically significantly between people at different stages of the disease.
In addition, you will need to assess not only the statistical but also the practical significance of these results, which is often used in studies of this kind.
Each person's diagnosis is stored in the column named "Diagnosis".
Practical significance of a change
The goal of the study is to find genes whose mean expression differs not only statistically significantly but also strongly enough. In expression studies this is commonly measured with a metric called fold change, defined as Fc(C, T) = T/C if T > C and -C/T if T < C.
Step2: Before using the two-sample Student's t-test, let us make sure that the distributions in the samples do not deviate substantially from normality by applying the Shapiro-Wilk test.
Step3: Since the mean p-value is >> 0.05, we will use Student's t-test.
Step4: Part 2
Step5: Part 3 | Python Code:
from __future__ import division
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
gen = pd.read_csv('gene_high_throughput_sequencing.csv')
gen.head()
types, cnts = np.unique(gen.Diagnosis.values, return_counts=True)
_ = sns.barplot(types, cnts)
_ = plt.xlabel('Diagnosis')
_ = plt.ylabel('Count')
Explanation: Detection of statistically significant differences in gene expression levels of cancer patients
The data for this task come from a study carried out at the Stanford School of Medicine. The study attempted to identify a set of genes that would allow breast cancer to be diagnosed more accurately at its earliest stages.
The experiment involved 24 people who did not have breast cancer (normal), 25 people in whom the disease had been diagnosed at an early stage (early neoplasia), and 23 people with pronounced symptoms (cancer).
End of explanation
#Diagnosis types
types
#Split data by groups
gen_normal = gen.loc[gen.Diagnosis == 'normal']
gen_neoplasia = gen.loc[gen.Diagnosis == 'early neoplasia']
gen_cancer = gen.loc[gen.Diagnosis == 'cancer']
Explanation: The scientists sequenced biological material from the participants in order to understand which of these genes are most active in the cells of sick people.
Sequencing is the measurement of gene activity in the analysed sample by counting the amount of RNA corresponding to each gene.
The data for this assignment contain exactly this quantitative measure of activity for each of the 15748 genes in each of the 72 people who took part in the experiment.
You will need to identify the genes whose activity differs statistically significantly between people at different stages of the disease.
In addition, you will need to assess not only the statistical but also the practical significance of these results, which is often used in studies of this kind.
Each person's diagnosis is stored in the column named "Diagnosis".
Practical significance of a change
The goal of the study is to find genes whose mean expression differs not only statistically significantly but also strongly enough. In expression studies this is commonly measured with a metric called fold change. It is defined as follows:
Fc(C, T) = T/C if T > C, and -C/T if T < C,
where C and T are the mean expression levels of a gene in the control and treatment groups, respectively. In essence, fold change shows how many times the two sample means differ.
Part 1: applying Student's t-test
In the first part you need to apply Student's t-test to check the hypothesis that the means in two independent samples are equal. The test has to be applied twice for each gene:
for the normal (control) and early neoplasia (treatment) groups
for the early neoplasia (control) and cancer (treatment) groups
As the answer for this part, report the number of statistically significant differences found with the t-test, i.e. the number of genes whose p-value for this test is below the significance level.
End of explanation
#Shapiro-Wilk test for samples
print('Shapiro-Wilk test for samples')
sw_normal = gen_normal.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_normal_p = [p for _, p in sw_normal]
_, sw_normal_p_corr, _, _ = multipletests(sw_normal_p, method='fdr_bh')
sw_neoplasia = gen_neoplasia.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_neoplasia_p = [p for _, p in sw_neoplasia]
_, sw_neoplasia_p_corr, _, _ = multipletests(sw_neoplasia_p, method='fdr_bh')
sw_cancer = gen_cancer.iloc[:,2:].apply(stats.shapiro, axis=0)
sw_cancer_p = [p for _, p in sw_cancer]
_, sw_cancer_p_corr, _, _ = multipletests(sw_cancer_p, method='fdr_bh')
print('Mean corrected p-value for "normal": %.4f' % sw_normal_p_corr.mean())
print('Mean corrected p-value for "early neoplasia": %.4f' % sw_neoplasia_p_corr.mean())
print('Mean corrected p-value for "cancer": %.4f' % sw_cancer_p_corr.mean())
Explanation: Before using the two-sample Student's t-test, let us make sure that the distributions in the samples do not deviate substantially from normality by applying the Shapiro-Wilk test.
End of explanation
tt_ind_normal_neoplasia = stats.ttest_ind(gen_normal.iloc[:,2:], gen_neoplasia.iloc[:,2:], equal_var = False)
tt_ind_normal_neoplasia_p = tt_ind_normal_neoplasia[1]
tt_ind_neoplasia_cancer = stats.ttest_ind(gen_neoplasia.iloc[:,2:], gen_cancer.iloc[:,2:], equal_var = False)
tt_ind_neoplasia_cancer_p = tt_ind_neoplasia_cancer[1]
tt_ind_normal_neoplasia_p_5 = tt_ind_normal_neoplasia_p[np.where(tt_ind_normal_neoplasia_p < 0.05)].shape[0]
tt_ind_neoplasia_cancer_p_5 = tt_ind_neoplasia_cancer_p[np.where(tt_ind_neoplasia_cancer_p < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % tt_ind_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % tt_ind_neoplasia_cancer_p_5)
with open('answer1.txt', 'w') as fout:
fout.write(str(tt_ind_normal_neoplasia_p_5))
with open('answer2.txt', 'w') as fout:
fout.write(str(tt_ind_neoplasia_cancer_p_5))
Explanation: Since the mean corrected p-value is >> 0.05, normality is not rejected, so we will apply Student's t-test.
End of explanation
#Holm correction
_, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='holm')
_, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='holm')
# Bonferroni correction across the two comparisons (equivalent to testing each at alpha/2 = 0.025)
p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplasia_cancer_p_corr])
_, p_corr_bonf, _, _ = multipletests(p_corr, is_sorted=True, method='bonferroni')
p_corr_bonf_normal_neoplasia_p_5 = p_corr_bonf[0][np.where(p_corr_bonf[0] < 0.05)].shape[0]
p_corr_bonf_neoplasia_cancer_p_5 = p_corr_bonf[1][np.where(p_corr_bonf[1] < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % p_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % p_corr_bonf_neoplasia_cancer_p_5)
def fold_change(C, T, limit=1.5):
'''
C - control sample
T - treatment sample
'''
if T >= C:
fc_stat = T / C
else:
fc_stat = -C / T
return (np.abs(fc_stat) > limit), fc_stat
#Normal vs neoplasia samples
gen_p_corr_bonf_normal_p_5 = gen_normal.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
gen_p_corr_bonf_neoplasia0_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
fc_corr_bonf_normal_neoplasia_p_5 = 0
for norm, neopl in zip(gen_p_corr_bonf_normal_p_5.mean(), gen_p_corr_bonf_neoplasia0_p_5.mean()):
accept, _ = fold_change(norm, neopl)
if accept: fc_corr_bonf_normal_neoplasia_p_5 += 1
#Neoplasia vs cancer samples
gen_p_corr_bonf_neoplasia1_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
gen_p_corr_bonf_cancer_p_5 = gen_cancer.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
fc_corr_bonf_neoplasia_cancer_p_5 = 0
for neopl, canc in zip(gen_p_corr_bonf_neoplasia1_p_5.mean(), gen_p_corr_bonf_cancer_p_5.mean()):
accept, _ = fold_change(neopl, canc)
if accept: fc_corr_bonf_neoplasia_cancer_p_5 += 1
print('Normal vs neoplasia samples fold change above 1.5: %d' % fc_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples fold change above 1.5: %d' % fc_corr_bonf_neoplasia_cancer_p_5)
with open('answer3.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_normal_neoplasia_p_5))
with open('answer4.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_neoplasia_cancer_p_5))
Explanation: Part 2: the Holm correction
For this part of the assignment we need the multitest module from statsmodels.
In this part you need to apply the Holm correction to the two sets of p-values obtained in the previous part. Note that since the correction is applied to each of the two sets of p-values separately, the multiple-testing problem across the two sets remains.
To remove it, it is enough to add a Bonferroni correction on top, i.e. to use a significance level of 0.05 / 2 instead of 0.05 when refining the p-values with the Holm method.
The answer to this part is the number of significant differences in each comparison after the Holm-Bonferroni correction, counted with practical significance taken into account: compute the fold change for every significant difference and report the number of significant differences whose absolute fold change is greater than 1.5.
Note that
the multiple-testing correction has to be applied to all of the p-values, not only to those that are already below the significance threshold;
when the correction is used at the 0.025 significance level, it is the p-values that change, not the confidence level itself (that is, to select significant changes the corrected p-values must be compared against the 0.025 threshold, not 0.05)!
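A toy sketch (made-up p-values, not the assignment data) of the Holm correction combined with the extra factor-of-two Bonferroni adjustment described above:
import numpy as np
from statsmodels.stats.multitest import multipletests
raw_p = np.array([0.001, 0.008, 0.039, 0.041, 0.27])
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.025, method='holm')
# Comparing the Holm-corrected p-values with 0.025 (= 0.05 / 2) is the
# Holm-Bonferroni scheme; it is equivalent to doubling them and comparing with 0.05.
print(p_holm)
print(reject)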
End of explanation
#Benjamini-Hochberg correction
_, tt_ind_normal_neoplasia_p_corr, _, _ = multipletests(tt_ind_normal_neoplasia_p, method='fdr_bh')
_, tt_ind_neoplasia_cancer_p_corr, _, _ = multipletests(tt_ind_neoplasia_cancer_p, method='fdr_bh')
# Bonferroni correction across the two comparisons (equivalent to testing each at alpha/2 = 0.025)
p_corr = np.array([tt_ind_normal_neoplasia_p_corr, tt_ind_neoplasia_cancer_p_corr])
_, p_corr_bonf, _, _ = multipletests(p_corr, is_sorted=True, method='bonferroni')
p_corr_bonf_normal_neoplasia_p_5 = p_corr_bonf[0][np.where(p_corr_bonf[0] < 0.05)].shape[0]
p_corr_bonf_neoplasia_cancer_p_5 = p_corr_bonf[1][np.where(p_corr_bonf[1] < 0.05)].shape[0]
print('Normal vs neoplasia samples p-values number below 0.05: %d' % p_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples p-values number below 0.05: %d' % p_corr_bonf_neoplasia_cancer_p_5)
#Normal vs neoplasia samples
gen_p_corr_bonf_normal_p_5 = gen_normal.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
gen_p_corr_bonf_neoplasia0_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[0] < 0.05)[0]]
fc_corr_bonf_normal_neoplasia_p_5 = 0
for norm, neopl in zip(gen_p_corr_bonf_normal_p_5.mean(), gen_p_corr_bonf_neoplasia0_p_5.mean()):
accept, _ = fold_change(norm, neopl)
if accept: fc_corr_bonf_normal_neoplasia_p_5 += 1
#Neoplasia vs cancer samples
gen_p_corr_bonf_neoplasia1_p_5 = gen_neoplasia.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
gen_p_corr_bonf_cancer_p_5 = gen_cancer.iloc[:,2:].iloc[:, np.where(p_corr_bonf[1] < 0.05)[0]]
fc_corr_bonf_neoplasia_cancer_p_5 = 0
for neopl, canc in zip(gen_p_corr_bonf_neoplasia1_p_5.mean(), gen_p_corr_bonf_cancer_p_5.mean()):
accept, _ = fold_change(neopl, canc)
if accept: fc_corr_bonf_neoplasia_cancer_p_5 += 1
print('Normal vs neoplasia samples fold change above 1.5: %d' % fc_corr_bonf_normal_neoplasia_p_5)
print('Neoplasia vs cancer samples fold change above 1.5: %d' % fc_corr_bonf_neoplasia_cancer_p_5)
with open('answer5.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_normal_neoplasia_p_5))
with open('answer6.txt', 'w') as fout:
fout.write(str(fc_corr_bonf_neoplasia_cancer_p_5))
Explanation: Part 3: the Benjamini-Hochberg correction
This part of the assignment is the same as the second part, except that the Benjamini-Hochberg method is used.
Note that correction methods which control the FDR allow more type I errors and have greater power than methods which control the FWER. Greater power means that these methods make fewer type II errors, i.e. they are better at picking up deviations from H0 when they are present, at the price of rejecting H0 more often when there is actually no difference.
The answer to this part is the number of significant differences in each comparison after the Benjamini-Hochberg correction; as in the second part, count only the differences with abs(fold change) > 1.5.
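A toy comparison (the same made-up p-values as above, not the assignment data) illustrating the higher power of the FDR-controlling correction; on this input Holm rejects one hypothesis while Benjamini-Hochberg rejects two:
import numpy as np
from statsmodels.stats.multitest import multipletests
raw_p = np.array([0.001, 0.008, 0.039, 0.041, 0.27])
rej_holm = multipletests(raw_p, alpha=0.025, method='holm')[0]
rej_bh = multipletests(raw_p, alpha=0.025, method='fdr_bh')[0]
print('Holm rejects: %d, Benjamini-Hochberg rejects: %d' % (rej_holm.sum(), rej_bh.sum()))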
End of explanation |
14,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step13: Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step14: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters
Step15: Example Output
Step16: Example output
Step17: Example output
Step18: Example output
Step19: Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
Step20: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs
Step21: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step22: Example output
Step23: Example Output
Step24: Example Output
Step25: Example output
Step26: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is
Step27: Example output
Step28: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI: Vertex AI Migration: AutoML Image Classification
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
Explanation: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
dataset = aip.ImageDataset.create(
display_name="Flowers" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)
print(dataset.resource_name)
Explanation: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
dag = aip.AutoMLImageTrainingJob(
display_name="flowers_" + TIMESTAMP,
prediction_type="classification",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag)
Explanation: Example Output:
INFO:google.cloud.aiplatform.datasets.dataset:Creating ImageDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create ImageDataset backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/1941426647739662336
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:To use this ImageDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.ImageDataset('projects/759209241365/locations/us-central1/datasets/2940964905882222592')
INFO:google.cloud.aiplatform.datasets.dataset:Importing ImageDataset data: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:Import ImageDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/8100099138168815616
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
projects/759209241365/locations/us-central1/datasets/2940964905882222592
Train a model
training.automl-api
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job.
End of explanation
model = dag.run(
dataset=dataset,
model_display_name="flowers_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
)
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.AutoMLImageTrainingJob object at 0x7f806a6116d0>
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
# Get model resource ID
models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Example output:
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/2109316300865011712?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1284590221056278528
Evaluate the model
projects.locations.models.evaluations.list
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
test_items = !gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 3:
_, test_item_1, test_label_1 = str(test_items[0]).split(",")
_, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
test_item_1, test_label_1 = str(test_items[0]).split(",")
test_item_2, test_label_2 = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
Explanation: Example output:
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
Make batch predictions
predictions.batch-prediction
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
Explanation: Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
End of explanation
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
content: The Cloud Storage path to the image.
mime_type: The content type. In our example, it is a jpeg file.
For example:
{'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
End of explanation
batch_predict_job = model.batch_predict(
job_display_name="flowers_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
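As a small addition to the loop above (a sketch, not part of the original notebook), the highest-scoring class of a parsed result line can be read off by pairing displayNames with confidences:
# 'line' is the parsed JSON dictionary printed by the loop above
scores = list(zip(line["prediction"]["displayNames"], line["prediction"]["confidences"]))
best_label, best_score = max(scores, key=lambda pair: pair[1])
print("Top label: %s (confidence %.4f)" % (best_label, best_score))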
End of explanation
endpoint = model.deploy()
Explanation: Example Output:
{'instance': {'content': 'gs://andy-1234-221921aip-20210802180634/100080576_f52e8ee070_n.jpg', 'mimeType': 'image/jpeg'}, 'prediction': {'ids': ['3195476558944927744', '1636105187967893504', '7400712711002128384', '2789026692574740480', '5501319568158621696'], 'displayNames': ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'], 'confidences': [0.99998736, 8.222247e-06, 3.6782617e-06, 5.3231275e-07, 2.6960555e-07]}}
Make online predictions
predictions.deploy-model-api
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
test_item = !gsutil cat $IMPORT_FILE | head -n1
if len(str(test_item[0]).split(",")) == 3:
_, test_item, test_label = str(test_item[0]).split(",")
else:
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
predictions.online-prediction-automl
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import base64
import tensorflow as tf
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{"content": base64.b64encode(content).decode("utf-8")}]
prediction = endpoint.predict(instances=instances)
print(prediction)
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ 'content': { 'b64': base64_encoded_bytes } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
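As a small follow-up (a sketch, not part of the original notebook), the top class can be pulled out of the response returned by the cell above:
# 'prediction' is the object returned by endpoint.predict() above
pred = prediction.predictions[0]
best_label, best_score = max(zip(pred["displayNames"], pred["confidences"]), key=lambda pair: pair[1])
print("Predicted class: %s (confidence %.4f)" % (best_label, best_score))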
End of explanation
endpoint.undeploy_all()
Explanation: Example output:
Prediction(predictions=[{'ids': ['3195476558944927744', '5501319568158621696', '1636105187967893504', '2789026692574740480', '7400712711002128384'], 'displayNames': ['daisy', 'tulips', 'dandelion', 'sunflowers', 'roses'], 'confidences': [0.999987364, 2.69604527e-07, 8.2222e-06, 5.32310196e-07, 3.6782335e-06]}], deployed_model_id='5949545378826158080', explanations=None)
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
14,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualzations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science
Step1: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
Step8: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this
Step9: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
Step10: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works | Python Code:
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
Explanation: DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)
embed TensorBoard graph visualizations into Jupyter notebooks
produce high-resolution images with tiled computation (example)
use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for GoogLeNet and VGG16 architectures.
End of explanation
!wget -nc https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip -n inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
Explanation: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:
End of explanation
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
Explanation: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
End of explanation
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
Explanation: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
End of explanation
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
Explanation: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
End of explanation
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
Explanation: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
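To see the frequency-splitting idea in isolation, here is a tiny 1-D numpy sketch (an illustration added here, not part of the original notebook): a signal is split into a blurred low-frequency band and a high-frequency residual, the two bands sum back to the original exactly, and the gradient energy of each band can then be normalized separately.
import numpy as np
x = np.random.randn(32)
k = np.array([1., 4., 6., 4., 1.]); k /= k.sum()
lo = np.convolve(x, k, mode='same')   # low-frequency component (blurred signal)
hi = x - lo                           # high-frequency residual
print(np.allclose(lo + hi, x))        # True: the split is exactly invertible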
End of explanation
render_lapnorm(T(layer)[:,:,:,65])
Explanation: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. When running on a GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
End of explanation
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
Explanation: Lower layers produce features of lower complexity.
End of explanation
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
Explanation: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
End of explanation
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
Explanation: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
End of explanation
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
Explanation: Let's load some image and populate it with DogSlugs (in case you've missed them).
End of explanation
render_deepdream(T(layer)[:,:,:,139], img0)
Explanation: Note that results can differ from the Caffe implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
End of explanation |
14,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Abstract
In order to optimize dataframe
Step1: Original Article
https
Step2: First Look
Step3: Under the hood, pandas groups the columns into blocks of values of the same type
Step4: Subtype
Under the hood pandas represents numeric values as NumPy ndarrays and stores them in a continuous block of memory.
This approach
Step5: Optimize Numeric Columns with subtypes
We can use the function pd.to_numeric() to downcast our numeric types
Step6: We can see a drop from 7.9 to 1.5 MB. Now we have 5 uint8 and 1 uint32 instead of 6 int64.
Let's do the same thing with the float columns
Step7: All our float columns were converted from float64 to float32, giving us a 50% reduction in memory usage.
We apply these optimizations to the entire df
Step8: In order to have the best benefit, we have to optimize the object types.
Comparing Numeric and String Storage
The object type represents values using Python string objects, partly due to the lack of support for missing string values in NumPy. Because Python is a high-level, interpreted language, it doesn't have fine-grained control over how values in memory are stored.
This limitation causes strings to be stored in a fragmented way that consumes more memory and is slower to access. Each element in an object column is really a pointer that contains the "address" of the actual value's location in memory.
This is why object columns use a variable amount of memory. While each pointer takes up 8 bytes of memory (on a 64-bit system), each actual string value uses the same amount of memory that the string would use if stored individually in Python. Let's use sys.getsizeof() to prove that out, first by looking at individual strings, and then at items in a pandas series.
Step9: You can see that the size of strings when stored in a pandas series is identical to their usage as separate strings in Python.
Optimizing object types using categoricals
The category type uses integer values under the hood to represent the values in a column, rather than the raw values. Pandas uses a separate mapping dictionary that maps the integer values to the raw ones. This arrangement is useful whenever a column contains a limited set of values. When we convert a column to the category dtype, pandas uses the most space efficient int subtype that can represent all of the unique values in a column.
Step10: A quick glance reveals many columns where there are few unique values relative to the overall ~172,000 games in our data set.
We start by converting day_of_week to category using the .astype() method.
Step11: When we convert columns to category, it's important to be aware of the trade-offs
Step12: Convert date
Step13: We’ll convert using pandas.to_datetime() function, using the format parameter to tell it that our date data is stored YYYY-MM-DD.
Step14: Selecting Types While Reading the Data In
we can specify the optimal column types when we read the data set in. The pandas.read_csv() function has a few different parameters that allow us to do this. The dtype parameter accepts a dictionary that has (string) column names as the keys and NumPy type objects as the values.
Step15: Now we can use the dictionary, along with a few parameters for the date to read in the data with the correct types in a few lines
Step16: Analyzing baseball games
how game length has varied over the years | Python Code:
import os
import pandas as pd
# Load Data
gl = pd.read_csv('..\data\game_logs.csv')
# Available also at https://data.world/dataquest/mlb-game-logs
# Data Preview
gl.head()
Explanation: Abstract
In order to optimize dataframe:
Downcasting numeric columns
python
df_num = df.select_dtypes(include=['int64','float64'])
converted_num = df_num.apply(pd.to_numeric,downcast='unsigned')
Converting string column to categorical type
When we convert columns to category, it's important to be aware of the trade-offs:
- we can't perform numerical computations on category columns
- we should use the category type primarily for object columns where less than 50% of the values are unique
```python
gl_obj = gl.select_dtypes(include=['object']).copy()
converted_obj = pd.DataFrame()
for col in gl_obj.columns:
num_unique_values = len(gl_obj[col].unique())
num_total_values = len(gl_obj[col])
if num_unique_values / num_total_values < 0.5:
converted_obj.loc[:,col] = gl_obj[col].astype('category')
else:
converted_obj.loc[:,col] = gl_obj[col]
```
Converting date
python
df['date'] = pd.to_datetime(date,format='%Y%m%d')
Specify type when load data
python
dtypes = optimized_gl.drop('date',axis=1).dtypes
dtypes_col = dtypes.index
dtypes_type=[i.name for i in dtypes.values]
column_types = dict(zip(dtypes_col, dtypes_type))
read_and_optimized = pd.read_csv('..\data\game_logs.csv',dtype=column_types,parse_dates=['date'],infer_datetime_format=True)
Tip to Reduce memory usage
End of explanation
gl.dtypes.head()
# Select only the column with same type
gl.select_dtypes(include=['object']).head()
#Exact amount of memory usage of df
gl.info(memory_usage='deep')
Explanation: Original Article
https://www.dataquest.io/blog/pandas-big-data/
End of explanation
gl.describe()
# Reference http://www.markhneedham.com/blog/2017/07/05/pandas-find-rows-where-columnfield-is-null/
# Columns with null values
null_columns=gl.columns[gl.isnull().any()]
gl[null_columns].isnull().sum()
# Every row that contains at least one null value
print(gl[gl.isnull().any(axis=1)][null_columns].head())
Explanation: First Look
End of explanation
gl.dtypes.value_counts()
for dtype in gl.dtypes.unique(): #['float','int64','object']:
selected_dtype = gl.select_dtypes(include=[dtype])
mean_usage_b = selected_dtype.memory_usage(deep=True).mean()
mean_usage_mb = mean_usage_b / 1024 ** 2
print("Average memory usage for {} columns: {:03.2f} MB".format(dtype,mean_usage_mb))
Explanation: Under the hood, pandas groups the columns into blocks of values of the same type:
- ObjectBlock, contains string
- FloatBlock (ndarray)
- IntBlock (ndarray)
Because each data type is stored separately, we examine the memory usage by data type.
Average memory usage for data type
End of explanation
import numpy as np
int_types = ["uint8", "int8", "int16"]
for it in int_types:
print(np.iinfo(it))
Explanation: Subtype
Under the hood pandas represents numeric values as NumPy ndarrays and stores them in a continuous block of memory.
This approach:
- consumes less space
- allows us to access quickly
Many types in pandas have multiple subtypes that can use fewer bytes to represent each value. For example, the float type has the float16, float32, and float64 subtypes.
To discover the range of values of a given dtype, we can use
https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.iinfo.html
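The cell above uses numpy.iinfo for the integer subtypes; the float subtypes mentioned here can be inspected the same way with numpy.finfo (a small illustrative check, not from the original article):
```python
import numpy as np

# Range and precision of the float subtypes mentioned above
for ft in ["float16", "float32", "float64"]:
    print(np.finfo(ft))
```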
End of explanation
# We're going to be calculating memory usage a lot,
# so we'll create a function to save us some time!
def mem_usage(pandas_obj):
if isinstance(pandas_obj,pd.DataFrame):
usage_b = pandas_obj.memory_usage(deep=True).sum()
else: # we assume if not a df it's a series
usage_b = pandas_obj.memory_usage(deep=True)
usage_mb = usage_b / 1024 ** 2 # convert bytes to megabytes
return "{:03.2f} MB".format(usage_mb)
mem_usage(gl)
gl_int = gl.select_dtypes(include=['int64'])
converted_int = gl_int.apply(pd.to_numeric,downcast='unsigned')
print(mem_usage(gl_int))
print(mem_usage(converted_int))
compare_ints = pd.concat([gl_int.dtypes,converted_int.dtypes],axis=1)
compare_ints.columns = ['before','after']
compare_ints.apply(pd.Series.value_counts)
Explanation: Optimize Numeric Columns with subtypes
We can use the function pd.to_numeric() to downcast our numeric types
End of explanation
gl_float = gl.select_dtypes(include=['float'])
converted_float = gl_float.apply(pd.to_numeric,downcast='float')
print(mem_usage(gl_float))
print(mem_usage(converted_float))
compare_floats = pd.concat([gl_float.dtypes,converted_float.dtypes],axis=1)
compare_floats.columns = ['before','after']
compare_floats.apply(pd.Series.value_counts)
Explanation: We can see a drop from 7.9 to 1.5 MB. Now we have 5 uint8 and 1 uint32 instead of 6 int64.
Let's do the same thing with the float columns
End of explanation
optimized_gl = gl.copy()
optimized_gl[converted_int.columns] = converted_int
optimized_gl[converted_float.columns] = converted_float
print(mem_usage(gl))
print(mem_usage(optimized_gl))
Explanation: All our float columns were converted from float64 to float32, giving us a 50% reduction in memory usage.
We apply these optimizations to the entire df
End of explanation
from sys import getsizeof
s1 = 'working out'
s2 = 'memory usage for'
s3 = 'strings in python is fun!'
s4 = 'strings in python is fun!'
for s in [s1, s2, s3, s4]:
print(getsizeof(s))
obj_series = pd.Series(['working out',
'memory usage for',
'strings in python is fun!',
'strings in python is fun!'])
obj_series.apply(getsizeof)
Explanation: In order to have the best benefit, we have to optimize the object types.
Comparing Numeric and String Storage
The object type represents values using Python string objects, partly due to the lack of support for missing string values in NumPy. Because Python is a high-level, interpreted language, it doesn't have fine-grained control over how values in memory are stored.
This limitation causes strings to be stored in a fragmented way that consumes more memory and is slower to access. Each element in an object column is really a pointer that contains the "address" of the actual value's location in memory.
This is why object columns use a variable amount of memory. While each pointer takes up 8 bytes of memory (on a 64-bit system), each actual string value uses the same amount of memory that the string would use if stored individually in Python. Let's use sys.getsizeof() to prove that out, first by looking at individual strings, and then at items in a pandas series.
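One quick way to see the pointer-versus-payload split described above is to compare memory_usage with and without deep=True on a small object series (an illustrative check, not from the original article):
```python
import pandas as pd

s = pd.Series(['strings in python is fun!'] * 4)
print(s.memory_usage(deep=False))  # index + the per-element pointers only
print(s.memory_usage(deep=True))   # index + pointers + the string objects themselves
```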
End of explanation
# Where we might be able to reduce memory?
gl_obj = gl.select_dtypes(include=['object']).copy()
gl_obj.describe()
Explanation: You can see that the size of strings when stored in a pandas series is identical to their usage as separate strings in Python.
Optimizing object types using categoricals
The category type uses integer values under the hood to represent the values in a column, rather than the raw values. Pandas uses a separate mapping dictionary that maps the integer values to the raw ones. This arrangement is useful whenever a column contains a limited set of values. When we convert a column to the category dtype, pandas uses the most space efficient int subtype that can represent all of the unique values in a column.
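As a toy illustration of that integer-to-value mapping (a hypothetical example, not from the original article):
```python
import pandas as pd

s = pd.Series(['Mon', 'Tue', 'Mon', 'Wed']).astype('category')
print(s.cat.codes.tolist())               # integer codes used under the hood, e.g. [0, 1, 0, 2]
print(dict(enumerate(s.cat.categories)))  # the mapping from codes back to the raw values
```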
End of explanation
dow = gl_obj.day_of_week
print(dow.head())
dow_cat = dow.astype('category')
print(dow_cat.head())
# We can see the integer values associated to column
dow_cat.head().cat.codes
# We compare the memory usage
print(mem_usage(dow))
print(mem_usage(dow_cat))
Explanation: A quick glance reveals many columns where there are few unique values relative to the overall ~172,000 games in our data set.
We start by converting day_of_week to category using the .astype() method.
End of explanation
converted_obj = pd.DataFrame()
for col in gl_obj.columns:
num_unique_values = len(gl_obj[col].unique())
num_total_values = len(gl_obj[col])
if num_unique_values / num_total_values < 0.5:
converted_obj.loc[:,col] = gl_obj[col].astype('category')
else:
converted_obj.loc[:,col] = gl_obj[col]
print(mem_usage(gl_obj))
print(mem_usage(converted_obj))
compare_obj = pd.concat([gl_obj.dtypes,converted_obj.dtypes],axis=1)
compare_obj.columns = ['before','after']
compare_obj.apply(pd.Series.value_counts)
# Now we combine with the rest of our dataframe (numeric columns)
optimized_gl[converted_obj.columns] = converted_obj
mem_usage(optimized_gl)
Explanation: When we convert columns to category, it's important to be aware of the trade-offs:
- we can't perform numerical computations on category columns
- we should use the category type primarily for object columns where less than 50% of the values are unique
End of explanation
date = optimized_gl.date
print(mem_usage(date))
date.head()
Explanation: Convert date
End of explanation
optimized_gl['date'] = pd.to_datetime(date,format='%Y%m%d')
print(mem_usage(optimized_gl))
optimized_gl.date.head()
Explanation: We’ll convert using pandas.to_datetime() function, using the format parameter to tell it that our date data is stored YYYY-MM-DD.
End of explanation
dtypes = optimized_gl.drop('date',axis=1).dtypes
dtypes.head()
dtypes_col = dtypes.index
dtypes_col
dtypes_type=[i.name for i in dtypes.values]
column_types = dict(zip(dtypes_col, dtypes_type))
#Preview of first 10
{k:v for k,v in list(column_types.items())[:10]}
Explanation: Selecting Types While Reading the Data In
we can specify the optimal column types when we read the data set in. The pandas.read_csv() function has a few different parameters that allow us to do this. The dtype parameter accepts a dictionary that has (string) column names as the keys and NumPy type objects as the values.
End of explanation
read_and_optimized = pd.read_csv('..\data\game_logs.csv',dtype=column_types,parse_dates=['date'],infer_datetime_format=True)
print(mem_usage(read_and_optimized))
read_and_optimized.head()
Explanation: Now we can use the dictionary, along with a few parameters for the date to read in the data with the correct types in a few lines:
End of explanation
import matplotlib.pyplot as plt
optimized_gl['year'] = optimized_gl.date.dt.year
game_lengths = optimized_gl.pivot_table(index='year', values='length_minutes')
game_lengths.reset_index().plot.scatter('year','length_minutes')
plt.show()
Explanation: Analyzing baseball games
how game length has varied over the years
End of explanation |
14,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial on Units and UnitConverters in Parcels
In most applications, Parcels works with spherical meshes, where longitude and latitude are given in degrees, while depth is given in meters. But it is also possible to use flat meshes, where longitude and latitude are given in meters (note that the dimensions are then still called longitude and latitude for consistency reasons).
In all cases, velocities are given in m/s. So Parcels seemlessly converts between meters and degrees, under the hood. For transparency, this tutorial explain how this works.
Let's first import the relevant modules, and create dictionaries for the U, V and temp data arrays, with the velocities 1 m/s and the temperature 20C.
Step1: We can convert these data and dims to a FieldSet object using FieldSet.from_data. We add the argument mesh='spherical' (this is the default option) to signal that all longitudes and latitudes are in degrees.
Plotting the U field indeed shows a uniform 1m/s eastward flow. The .show() method recognises that this is a spherical mesh and hence plots the northwest European coastlines on top.
Step2: However, printing the velocities directly shows something perhaps surprising. Here, we use the square-bracket field-interpolation notation to print the field value at (5W, 40N, 0m depth) at time 0.
Step3: While the temperature field indeed is 20C, as we defined, these printed velocities are much smaller.
This is because Parcels converts under the hood from m/s to degrees/s. This conversion is done with a UnitConverter object, which is stored in the .units attribute of each Field. Below, we print these
Step4: So the U field has a GeographicPolar UnitConverter object, the V field has a Geographic Unitconverter and the temp field has a UnitConverter object.
Indeed, if we multiply the value of the V field with 1852 * 60 (the number of meters in 1 degree of latitude), we get the expected 1 m/s.
Step5: Note that you can also interpolate the Field without a unit conversion, by using the eval() method and setting applyConversion=False, as below
Step6: UnitConverters for mesh='flat'
If longitudes and latitudes are given in meters, rather than degrees, simply add mesh='flat' when creating the FieldSet object.
Step7: Indeed, in this case all Fields have the same default UnitConverter object. Note that the coastlines have also gone in the plot, as .show() recognises that this is a flat mesh.
UnitConverters for Diffusion fields
The units for Brownian diffusion are in $m^2/s$. If (and only if!) the diffusion fields are called Kh_zonal and Kh_meridional, Parcels will automatically assign the correct UnitConverter objects to these fields.
Step8: Here, the unitconverters are GeographicPolarSquare and GeographicSquare, respectively.
Indeed, multiplying with $(1852\cdot60)^2$ returns the original value
Step9: Adding a UnitConverter object to a Field
So, to summarise, here is a table with all the conversions
| Field name | Converter object | Conversion for mesh='spherical'| Conversion for mesh='flat'|
|-------|-----------------|-----------------------------------|---------------------------|
| 'U' | GeographicPolar| $1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180})$ | 1 |
| 'V' | Geographic | $1852 \cdot 60 $ | 1 |
| 'Kh_zonal' | GeographicPolarSquare | $(1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180}))^2$ | 1 |
| 'Kh_meridional' | GeographicSquare | $(1852 \cdot 60)^2 $ | 1 |
| All other fields | UnitConverter | 1 | 1 |
Only four Field names are recognised and assigned an automatic UnitConverter object. This means that things might go very wrong when e.g. a velocity field is not called U or V.
Fortunately, you can always add a UnitConverter later, as explained below
Step10: This value for Ustokes of course is not as expected, since the mesh is spherical and hence this would mean 1 degree/s velocity. Assigning the correct GeographicPolar Unitconverter gives
Step11: Alternatively, the UnitConverter can be set when the FieldSet or Field is created by using the fieldtype argument (use a dictionary in the case of FieldSet construction).
Step12: Using velocities in units other than m/s
Some OGCM store velocity data in units of e.g. cm/s. For these cases, Field objects have a method set_scaling_factor().
If your data is in cm/s and if you want to use the built-in Advection kernels, you will therefore have to use fieldset.U.set_scaling_factor(100) and fieldset.V.set_scaling_factor(100). | Python Code:
%matplotlib inline
from parcels import Field, FieldSet
import numpy as np
xdim, ydim = (10, 20)
data = {'U': np.ones((ydim, xdim), dtype=np.float32),
'V': np.ones((ydim, xdim), dtype=np.float32),
'temp': 20*np.ones((ydim, xdim), dtype=np.float32)}
dims = {'lon': np.linspace(-15, 5, xdim, dtype=np.float32),
'lat': np.linspace(35, 60, ydim, dtype=np.float32)}
Explanation: Tutorial on Units and UnitConverters in Parcels
In most applications, Parcels works with spherical meshes, where longitude and latitude are given in degrees, while depth is given in meters. But it is also possible to use flat meshes, where longitude and latitude are given in meters (note that the dimensions are then still called longitude and latitude for consistency reasons).
In all cases, velocities are given in m/s. So Parcels seamlessly converts between meters and degrees under the hood. For transparency, this tutorial explains how this works.
Let's first import the relevant modules, and create dictionaries for the U, V and temp data arrays, with the velocities 1 m/s and the temperature 20C.
End of explanation
fieldset = FieldSet.from_data(data, dims, mesh='spherical')
fieldset.U.show()
Explanation: We can convert these data and dims to a FieldSet object using FieldSet.from_data. We add the argument mesh='spherical' (this is the default option) to signal that all longitudes and latitudes are in degrees.
Plotting the U field indeed shows a uniform 1m/s eastward flow. The .show() method recognises that this is a spherical mesh and hence plots the northwest European coastlines on top.
End of explanation
print((fieldset.U[0, 0, 40, -5], fieldset.V[0, 0, 40, -5], fieldset.temp[0, 0, 40, -5]))
Explanation: However, printing the velocities directly shows something perhaps surprising. Here, we use the square-bracket field-interpolation notation to print the field value at (5W, 40N, 0m depth) at time 0.
End of explanation
for fld in [fieldset.U, fieldset.V, fieldset.temp]:
print('%s: %s' % (fld.name, fld.units))
Explanation: While the temperature field indeed is 20C, as we defined, these printed velocities are much smaller.
This is because Parcels converts under the hood from m/s to degrees/s. This conversion is done with a UnitConverter object, which is stored in the .units attribute of each Field. Below, we print these
End of explanation
print(fieldset.V[0, 0, 40, -5]* 1852*60)
Explanation: So the U field has a GeographicPolar UnitConverter object, the V field has a Geographic Unitconverter and the temp field has a UnitConverter object.
Indeed, if we multiply the value of the V field with 1852 * 60 (the number of meters in 1 degree of latitude), we get the expected 1 m/s.
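For the zonal (U) component the conversion also includes the cos(latitude) factor, so an analogous check at 40N could look like this (illustrative, using the fieldset and np imported above):
```python
print(fieldset.U[0, 0, 40, -5] * 1852 * 60 * np.cos(40 * np.pi / 180))
```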
End of explanation
print(fieldset.V.eval(0, 0, 40, -5, applyConversion=False))
Explanation: Note that you can also interpolate the Field without a unit conversion, by using the eval() method and setting applyConversion=False, as below
End of explanation
fieldset_flat = FieldSet.from_data(data, dims, mesh='flat')
fieldset_flat.U.show()
for fld in [fieldset_flat.U, fieldset_flat.V, fieldset_flat.temp]:
print('%s: %f %s' % (fld.name, fld[0, 0, 40, -5], fld.units))
Explanation: UnitConverters for mesh='flat'
If longitudes and latitudes are given in meters, rather than degrees, simply add mesh='flat' when creating the FieldSet object.
End of explanation
kh_zonal = 100 # in m^2/s
kh_meridional = 100 # in m^2/s
fieldset.add_field(Field('Kh_zonal', kh_zonal*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
fieldset.add_field(Field('Kh_meridional', kh_meridional*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
for fld in [fieldset.Kh_zonal, fieldset.Kh_meridional]:
print('%s: %e %s' % (fld.name, fld[0, 0, 40, -5], fld.units))
Explanation: Indeed, in this case all Fields have the same default UnitConverter object. Note that the coastlines have also gone in the plot, as .show() recognises that this is a flat mesh.
UnitConverters for Diffusion fields
The units for Brownian diffusion are in $m^2/s$. If (and only if!) the diffusion fields are called Kh_zonal and Kh_meridional, Parcels will automatically assign the correct UnitConverter objects to these fields.
End of explanation
deg_to_m = 1852*60
print(fieldset.Kh_meridional[0, 0, 40, -5]*deg_to_m**2)
Explanation: Here, the unitconverters are GeographicPolarSquare and GeographicSquare, respectively.
Indeed, multiplying with $(1852\cdot60)^2$ returns the original value
End of explanation
fieldset.add_field(Field('Ustokes', np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
print(fieldset.Ustokes[0, 0, 40, -5])
Explanation: Adding a UnitConverter object to a Field
So, to summarise, here is a table with all the conversions
| Field name | Converter object | Conversion for mesh='spherical'| Conversion for mesh='flat'|
|-------|-----------------|-----------------------------------|---------------------------|
| 'U' | GeographicPolar| $1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180})$ | 1 |
| 'V' | Geographic | $1852 \cdot 60 $ | 1 |
| 'Kh_zonal' | GeographicPolarSquare | $(1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180}))^2$ | 1 |
| 'Kh_meridional' | GeographicSquare | $(1852 \cdot 60)^2 $ | 1 |
| All other fields | UnitConverter | 1 | 1 |
Only four Field names are recognised and assigned an automatic UnitConverter object. This means that things might go very wrong when e.g. a velocity field is not called U or V.
Fortunately, you can always add a UnitConverter later, as explained below:
End of explanation
from parcels.tools.converters import GeographicPolar
fieldset.Ustokes.units = GeographicPolar()
print(fieldset.Ustokes[0, 0, 40, -5])
print(fieldset.Ustokes[0, 0, 40, -5]*1852*60*np.cos(40*np.pi/180))
Explanation: This value for Ustokes of course is not as expected, since the mesh is spherical and hence this would mean 1 degree/s velocity. Assigning the correct GeographicPolar Unitconverter gives
End of explanation
fieldset.add_field(Field('Ustokes2', np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid, fieldtype='U'))
print(fieldset.Ustokes2[0, 0, 40, -5])
Explanation: Alternatively, the UnitConverter can be set when the FieldSet or Field is created by using the fieldtype argument (use a dictionary in the case of FieldSet construction).
End of explanation
fieldset.add_field(Field('Ucm', 0.01*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
fieldset.Ucm.set_scaling_factor(100)
print(fieldset.Ucm[0, 0, 40, -5])
Explanation: Using velocities in units other than m/s
Some OGCM store velocity data in units of e.g. cm/s. For these cases, Field objects have a method set_scaling_factor().
If your data is in cm/s and if you want to use the built-in Advection kernels, you will therefore have to use fieldset.U.set_scaling_factor(100) and fieldset.V.set_scaling_factor(100).
End of explanation |
14,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 M.Z. Jorisch
<h1 align="center">Orbital</h1>
<h1 align="center">Perturbations</h1>
In this lesson, we will discuss the orbits of bodies in space, and how those bodies can be affected by others as they fly by. We will look at Encke's method, which was created by Johann Franz Encke in 1851, and uses a second order ODE to describe the true orbit of a body when affected by the pull of an additional body flying by.
In traditional orbital dynamics, the standard two-body problem is used to describe two bodies in motion with one orbiting the other. This fails to take into account the effect of outside bodies on the orbital trajectory and can produce an orbit very different from the "true" orbit.
These orbits play a large role in our daily lives. There are numerous satellites currently orbiting Earth, which are used for communications, GPS, as well as other data grabbers. These satellites can have slight changes to their orbits around Earth caused by other satellites, planets, moons, or comets that need to be taken into consideration when designing orbital parameters for them.
We will compare the traditional two-body motion and Encke's method to see how much the orbits vary over time.
For our example, we will use Mars and Jupiter orbiting the Sun, with Jupiter being the disturbing body. These two planets were chosen for their proximity to one another and for Jupiter's large mass (Over 300 times greater than Earth!)
<h2 align="center">Encke's Method</h2>
<h4 align="center">Figure 1. Visualization of Encke's Method ([Analytical Mechanics of Aerospace Systems Pg 342](http
Step3: The Kepler_eqn function uses the eccentricity, $e$, and mean anomaly, $M$, and then uses the iterative method to spit out our value of $E$.
The eccentric anomaly $E$ is the basis for the elliptical trajectory and is the value that changes at each time step, in turn changing the radial and velocity vectors.
Now let's define a function, that given our orbital parameters, will give us our trajectory at a time $t$.
The orbital elements that we will need to input into the function are
Step5: Now that we have a function to solve for the orbital trajectory of an elliptical orbit (which Mars and Jupiter both have) we have to create a function for the disturbing acceleration caused by the third body, Jupiter.
The equation for $a_d$ as mentioned earlier, is
Step6: Now that we have our Kepler function, our elliptical orbit function, and our acceleration function, we can set up our initial conditions. After that we will be able to get our final radial and velocity vectors for the osculating and perturbed orbits in addition to Jupiter's orbit.
<h3 align="center">Initial Conditions</h3>
Step7: Next we will have a for-loop that will give us our solution at every time step for each of our orbits. In this case, we will use a final time of 4000 days. Each time step will be one day (in seconds), and we will be able to see the trajectories over that time period.
The loop uses a crude method of integration (Tewari pg 166) to solve for $\gamma$ and $\delta$, the difference between the osculating and perturbed radial and velocity vectors respectively.
The osculating orbit is used here as our base orbit. The orbit for Jupiter is calculated as well and used with the osculating orbit to get our disturbing acceleration. The acceleration is then used to find $\gamma$, which in turn is used to calculate $\delta$.
$\delta$ is added to the radial vector of the osculating orbit at every time step to give us our perturbed orbit. The same is done with $\gamma$ for the velocity vector for the perturbed orbit. The solutions are then entered into arrays so that we can plot them!
Step8: We mentioned earlier that sometimes the term $1 - \frac{r_osc ^3}{r^3}$ can cause issues because towards the beginning of orbit the two terms are approximately equal and can cause the solution to blow up, but this can be solved as follows
Step9: Looking at our first plot we can see that there is a change in Mars' orbit due to the gravitational pull from Jupiter as it flies by!
<h3 align="center">Dig Deeper</h3>
See what happens when you change the orbital parameters! What happens with different planets or with a satellite orbiting earth? What about when both planets don't start at the zero point of the y axis?
<h2 align="center">References</h2>
NASA Mars Fact Sheet
NASA Jupiter Fact Sheet
Standard Gravitational Parameter Wikipedia
Perturbation (Astronomy)
Battin, Richard H., AIAA Education Series, An Introduction to the Mathematics and Methods of Astrodynamics, AIAA, 1999
Schaub, Hanspeter & John. L Junkins, AIAA Education Series, Analytical Mechanics of Aerospace Systems, AIAA, 2000
Tewari, Ashish, Atmospheric and Space Flight Dynamics | Python Code:
from matplotlib import pyplot
import numpy
from numpy import linalg
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
def Kepler_eqn(e, M):
    '''Takes the eccentricity and mean anomaly of an orbit to solve Kepler's equation
    Parameters:
    ----------
    e : float
        eccentricity of orbit
    M : float
        Mean anomaly of orbit
    Returns:
    -------
    E : float
        Eccentric anomaly
    '''
    E = M + e * numpy.sin(M) # eccentric anomaly (initial guess)
fofE = E - e * numpy.sin(E) - M #eccentric anomaly as a function of E
fdotE = 1 - e * numpy.cos(E) #derivative with respect to E of fofE
dE = - fofE / fdotE # change in E
Enew = E + dE
tolerance = 1e-2
while abs(fofE) > tolerance:
E = M + e * numpy.sin(Enew)
        fofE = E - e * numpy.sin(E) - M # f(E) = E - e*sin(E) - M (keep the eccentricity factor, as above)
fdotE = 1 - e * numpy.cos(E)
dE = - fofE / fdotE
Enew = E + dE
return E
#Based on code from Ashish Tewari
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 M.Z. Jorisch
<h1 align="center">Orbital</h1>
<h1 align="center">Perturbations</h1>
In this lesson, we will discuss the orbits of bodies in space, and how those bodies can be affected by others as they fly by. We will look at Encke's method, which was created by Johann Franz Encke in 1851, and uses a second order ODE to describe the true orbit of a body when affected by the pull of an additional body flying by.
In traditional orbital dynamics, the standard two-body problem is used to describe two bodies in motion with one orbiting the other. This fails to take into account the effect of outside bodies on the orbital trajectory and can produce an orbit very different from the "true" orbit.
These orbits play a large role in our daily lives. There are numerous satellites currently orbiting Earth, which are used for communications, GPS, as well as other data grabbers. These satellites can have slight changes to their orbits around Earth caused by other satellites, planets, moons, or comets that need to be taken into consideration when designing orbital parameters for them.
We will compare the traditional two-body motion and Encke's method to see how much the orbits vary over time.
For our example, we will use Mars and Jupiter orbiting the Sun, with Jupiter being the disturbing body. These two planets were chosen for their proximity to one another and for Jupiter's large mass (Over 300 times greater than Earth!)
<h2 align="center">Encke's Method</h2>
<h4 align="center">Figure 1. Visualization of Encke's Method ([Analytical Mechanics of Aerospace Systems Pg 342](http://www.control.aau.dk/~jan/undervisning/MechanicsI/mechsys/chapter10.pdf))</h4>
Encke's method uses the difference between the standard Keplerian orbit and the true orbit affected by a third body, and can be represented by the following equations.
The Keplerian orbit or the osculating orbit is represented by the equation:
$$\frac{d^2 \vec{r}_{osc}}{dt^2} = - \frac{\mu}{r_{osc}^3} \vec{r}_{osc}$$
The perturbed orbit is represented by a similar equation:
$$\frac{d^2 \vec{r}}{dt^2} = - \frac{\mu}{r^3} \vec{r} + \vec{a}_d$$
When looking at the two equations, the only difference between the osculating orbit and the perturbed orbit is the term $\vec{a}_d$, which is the acceleration vector caused by the third body flyby.
The acceleration vector can be found using the following:
$$\vec{a}_d = \frac{1}{m_2} \vec{f}_{d_2} - \frac{1}{m_1} \vec{f}_{d_1}$$
The two accelerations cancel out many times. (Schaub pg 257)
This leads to:
$$\vec{a}_d = \frac{1}{m_2} \frac{G m_2 m_3}{|\vec{r}_{23}|^3} \vec{r}_{23} - \frac{1}{m_1} \frac{G m_1 m_3}{|\vec{r}_{13}|^3} \vec{r}_{13}$$
Where $m_1$ is the mass of the central body, $m_2$ is the mass of the body orbiting around $m_1$, and $m_3$ is the mass of the disturbing body.
Initially, at time $t_0 = 0$, the osculating and perturbed orbits are equal. The change occurs at a time $t = t_0 + \Delta t$.
Let's define the difference between the radius of the osculating and perturbed orbits as $\delta$ and the difference between the velocities of the two orbits as $\gamma$
Therefore at time $t$, which we just defined, the radial and velocity components are:
$$\vec{\delta}(t) = \vec{r}(t) - \vec{r}_{osc}(t)$$
$$\vec{\gamma}(t) = \vec{v}(t) - \vec{v}_{osc}(t)$$
We have some initial conditions as well. As mentioned before, the orbits are equal at $t_0$, which gives us $\vec{\delta} (t_0) = 0$. The velocity difference at $t_0$ is also zero: $\frac{d \vec{\delta} (t_0)}{dt} = \vec{\gamma} (t_0) = 0$
If we subtract our two initial equations we get:
$$\frac{d^2 \vec{\delta}}{dt^2} = \vec{a}_d + \mu \left( \frac{\vec{r}_{osc}}{r_{osc}^3} - \frac{\vec{r}}{r^3} \right)$$
This can be simplified to:
$$\frac{d^2 \vec{\delta}}{dt^2} + \frac{\mu}{r_{osc} ^3} \vec{\delta} = \frac{\mu}{r_{osc} ^3} \left( 1 - \frac{r_{osc} ^3}{r^3} \right) \vec{r} + \vec{a}_d$$
Our term $1 - \frac{r_{osc} ^3}{r^3}$ can be an issue because at the beginning of flight $r_{osc}$ and $r$ are approximately equal. That can't be too good, can it? We'll take a look at that a little later on.
We can find the radial and velocity components using the initial values of the radius and velocity along with the Lagrangian coefficients in terms of the eccentric anomaly $E$, an angular parameter that locates a point on the ellipse (measured via the orbit's auxiliary circle).
$$\vec{r} = F \vec{r}_0 + G \vec{v}_0$$
$$\vec{v} = \dot{F} \vec{r}_0 + \dot{G} \vec{v}_0$$
Where
$$F = 1 + \frac{a}{r_0} \left[ \cos(E - E_0) - 1 \right]$$
$$G = \frac{a \alpha_0}{\mu} \left[ 1 - \cos(E - E_0) \right] + r_0 \sqrt{\frac{a}{\mu}} \sin(E - E_0)$$
$$\dot{F} = - \frac{\sqrt{\mu a}}{r r_0} \sin(E - E_0)$$
$$\dot{G} = 1 + \frac{a}{r} \left[ \cos(E - E_0) - 1 \right]$$
(Tewari pg 104)
The eccentric anomaly, $E$, is equal to $M + e sin(E)$. $M$ is the mean anomaly and $e$ is the eccentricity.
<h4 align="center">Figure 2. Orbital anomalies for elliptic motion ([AIAA pg158])
As you can see from the equation, the eccentric anomaly is a function of itself. In order to calculate it we will have to start with a guess and then iterate until the difference between our new value of E and the guess is within a certain tolerance. This is based on Newton's approximation using a Taylor series expansion of $f(E) = E -e sin E - M$
The expansion is: $f(E + \Delta E) = \sum_{k = 0}^{\infty} f^{(k)}(E) \frac{(\Delta E)^k}{k!}$, in which $f^{(k)}(E) \equiv \frac{d^k f(E)}{dE^k}$. The first two terms of the Taylor series can be used for Newton's approximation:
$$f(E + \Delta E) \approx f(E) + f^{(1)} (E) (\Delta E)$$
The following method and code for the eccentric anomaly approximation, are based upon those from Ashish Tewari's book, _Atmospheric and Space Flight Dynamics_. The initial guess we will start with uses the mean anomaly $M$ to give $E$:
$$E = M + e \sin M$$
$\Delta E$ will be calculated using $-\frac{f(E)}{f^{(1)} (E)} = \frac{-E + e sin E + M}{1 - e cos E}$ so $f(E + \Delta E)$ is equal to $0$. After that, the value of E is updated, where $E = E + \Delta E$. This operation is repeated until a small enough difference is found, and our $E$ value is found.
Let's get to the coding!
End of explanation
def ellip_orb(a, Period, mu, e, t0, r0, v0, t):
    '''Calculates the orbital position for an elliptical orbit
    Parameters:
    ----------
    a : float
        Semi-major axis
    Period : float
        Period of planetary orbit
    mu : float
        Gravitational parameter
    t0 : float
        Initial time t = 0
    r0 : array of float
        Initial positional array
    v0 : array of float
        Initial velocity array
    t : float
        time
    Returns:
    -------
    r : array of float
        Array of radius at each time t
    v : array of float
        Array of velocity at each time t
    '''
r0_norm = numpy.linalg.norm(r0) # Normalized initial radius
v0_norm = numpy.linalg.norm(v0) # Normalized initial velocity
    alpha = numpy.dot(r0, v0) # Constant alpha_0 = r0 . v0 (dot product) used for the Lagrangian coefficients
theta0 = numpy.pi # Initial true anomaly
n = 2 * numpy.pi / (Period) # Mean motion, given the period
E0 = 2 * numpy.arctan(numpy.sqrt((1 - e) / (1 + e)) * numpy.tan(0.5 * theta0)) # Initial eccentric anomaly
tau = t0 + (- E0 + e * numpy.sin(E0)) / n # t - tau is time since Perigee
M = n * (t - tau) # Mean anomaly
E = Kepler_eqn(e, M) # Eccentric anomaly found through Kepler's equation
r_leg = a * (1 - e * numpy.cos(E)) # Radius used for legrangian coefficients
    # Lagrangian coefficients
    F = 1 + a * (numpy.cos(E - E0) - 1) / r0_norm # F = 1 + (a/r0)[cos(E - E0) - 1]
G = a * alpha * (1 - numpy.cos(E - E0)) / mu + r0_norm * numpy.sqrt(a / mu) * numpy.sin(E - E0)
F_dot = - numpy.sqrt(mu * a) * (numpy.sin(E - E0)) / (r_leg * r0_norm)
G_dot = 1 + a * (numpy.cos(E - E0) - 1) / r_leg
r = numpy.zeros_like(r0)
v = numpy.zeros_like(v0)
r = F * r0 + G * v0 # Radial value of orbit for specified time
v = F_dot * r0 + G_dot * v0 # Velocity value of orbit for specified time
return r, v
Explanation: The Kepler_eqn function uses the eccentricity, $e$, and mean anomaly, $M$, and then uses the iterative method to spit out our value of $E$.
The eccentric anomaly $E$ is the basis for the elliptical trajectory and is the value that changes at each time step, in turn changing the radial and velocity vectors.
Now let's define a function, that given our orbital parameters, will give us our trajectory at a time $t$.
The orbital elements that we will need to input into the function are:
$a$, the semi-major axis, half of the longest diameter of the orbital ellipse
$P$, the period, the time to complete one orbit
$\mu$, the gravitational parameter, the gravitational constant $G$ times the mass of the body,
$e$, the eccentricity
$t_0$, an initial time
$r_0$, an initial radial vector
$v_0$, an initial velocity vector
$t$, a time, where the new radial and velocity vectors will be found.
Using these initial values, we have equations that are set up within the function to give us our radial and velocity vectors. We also have to calculate our Lagrangian coefficients, described earlier: $F$, $G$, $\dot{F}$, and $\dot{G}$.
We will use the following equations:
$\alpha_0 = \vec{r}_0 \cdot \vec{v}_0$ : a constant (the dot product of the initial position and velocity) used in the coefficient equations
$\theta _0 = \pi$: our initial true anomaly (We are starting at the semi-major axis where it is equal to $\pi$)
$n = \frac{2 \pi}{P}$: the mean motion, which is $2 \pi$ over the period
$E_0 = 2 \tan ^{-1} (\sqrt{\frac{1 - e}{1 + e}} \tan (0.5 \theta _0))$: initial eccentric anomaly
$\tau = t_0 + \frac{- E_0 + e \sin (E_0)}{n}$: where $t - \tau$ is the time since the closest point of orbit
$M = n (t - \tau)$ : the mean anomaly which is used for the eccentric anomaly iteration in Kepler's equation
End of explanation
def acceleration_d(m1, m2, m3, r, r3):
    '''Calculates the acceleration due to the disturbing orbit
    Parameters:
    ----------
    m1 : float
        Mass of central body
    m2 : float
        Mass of second body
    m3 : float
        Mass of third (disturbing) body
    r : array of float
        Radial distance between body two and one
    r3: array of float
        Radial distance between body three and one
    Returns:
    -------
    a_d : array of float
        Acceleration due to the disturbing orbit
    '''
a_d = numpy.zeros((2, 1))
G = 6.674e-11 # Gravitational constant
r13 = r3 # Radial distance between Jupiter and the Sun
r23 = r - r3 # Radial distance between Jupiter and Mars
r23_norm = numpy.linalg.norm(r23) # Normalized radius between Jupiter and Mars
r13_norm = numpy.linalg.norm(r13) # Normalized radius between Jupiter and the Sun
a_d = (((1 / m2) * ((G* m2 * m3)/ (r23_norm ** 3))) * r23) - (((1 / m1) * ((G * m1 * m3) / (r13_norm ** 3))) * r13)
return a_d
Explanation: Now that we have a function to solve for the orbital trajectory of an elliptical orbit (which Mars and Jupiter both have) we have to create a function for the disturbing acceleration caused by the third body, Jupiter.
The equation for $a_d$ as mentioned earlier, is:
$$\vec{a}_d = \frac{1}{m_2} \frac{G m_2 m_3}{|\vec{r}_{23}|^3} \vec{r}_{23} - \frac{1}{m_1} \frac{G m_1 m_3}{|\vec{r}_{13}|^3} \vec{r}_{13}$$
Our inputs for this function will be the mass of the Sun ($m_1$), the mass of Mars ($m_2$), the mass of Jupiter ($m_3$), and the radial vectors of Mars and Jupiter with respect to the Sun.
End of explanation
mu3 = 1.2669e17 # Standard gravitational parameter of Jupiter in m^3 / s^2
m3 = 1.8983e27 # Mass of Jupiter in kg
e3 = .0489 # Eccentricity of Jupiter
a3 = 778000000. # Semi-major Axis of Jupiter in km
Period3 = 4332.589 * 3600 * 24 # Period of Jupiter Orbit in seconds
mu = 4.2828e13 # Standard gravitational parameter of Mars in m^3 / s^2
m2 = 6.4174e23 # Mass of Mars in kg
e = .0934 # Eccentricity of Mars
a = 228000000. # Semi-major Axis of Mars in km
Period = 686.980 * 3600 * 24 # Period of Mars Orbit in seconds
mu1 = 1.3271e20 # Standard gravitational parameters of the Sun in m^3 / s^2
m1 = 1.989e30 # Mass of the Sun in kg
dt = 24 * 3600 # Time step
tfinal = 4000 * dt # Final time
N = int(tfinal / dt) + 1 # Number of time steps
t = numpy.linspace(0,tfinal,N) # Time array
r0 = numpy.array([228000000., 0.]) # Initial radius of Mars
v0 = numpy.array([-21.84, -10.27]) # Initial velocity
r3_0 = numpy.array([778000000., 0.]) # Initial radius of Jupiter
v3_0 = numpy.array([-13.04, -.713]) # Initial velocity of Jupiter
# Set arrays for radial and velocity components
r = numpy.empty((N, 2))
v = numpy.empty((N, 2))
gamma = numpy.empty((N, 2))
delta = numpy.empty((N, 2))
r_n = numpy.empty((N, 2))
v_n = numpy.empty((N, 2))
a_d = numpy.empty((N, 2))
r_osc = numpy.empty((N, 2))
r_osc_n = numpy.empty((N, 2))
v_osc = numpy.empty((N, 2))
v_osc_n = numpy.empty((N, 2))
r3_n = numpy.empty((N, 2))
Explanation: Now that we have our Kepler function, our elliptical orbit function, and our acceleration function, we can set up our initial conditions. After that we will be able to get our final radial and velocity vectors for the osculating and perturbed orbits in addition to Jupiter's orbit.
<h3 align="center">Initial Conditions</h3>
End of explanation
for i,ts in enumerate(t):
delta = numpy.zeros_like(r0)
gamma = numpy.zeros_like(r0)
r_osc, v_osc = ellip_orb(a, Period, mu1, e, t[0], r0, v0, ts) # Trajectory of the osculating orbit of Mars
r_osc_norm = numpy.linalg.norm(r_osc) # Normalized osculating orbit of Mars
r0_norm = numpy.linalg.norm(r0) # Normalized initial orbit of Mars
r3, v3 = ellip_orb(a3, Period3, mu3, e3, t[0], r3_0, v3_0, ts) # Trajectory of Jupiter
a_d = acceleration_d(m1, m2, m3, r_osc, r3) # Acceleration due to Jupiter
gamma = mu3 * (dt) * ((1 - (r_osc_norm / r0_norm) ** 3) / r_osc_norm ** 3) + a_d * (dt) # Difference in velocity between osculating orbit and perturbed
delta = gamma * (dt) # Difference between osculating orbit and perturbed orbit radius
r = r_osc + delta # Perturbed orbital radius
v = v_osc + gamma # Perturbed orbital velocity
r_osc_n[i,:] = r_osc # Value of osculating orbital radius for every time step
v_osc_n[i,:] = v_osc # Value of osculating orbital velocity for every time step
r3_n[i,:] = r3 # Value of Jupiter's radius for every time step
r_n[i,:] = r # Value of the perturbed orbital radius for every time step
v_n[i,:] = v # Value of the perturbed orbital velocity for every time step
Explanation: Next we will have a for-loop that will give us our solution at every time step for each of our orbits. In this case, we will use a final time of 4000 days. Each time step will be one day (in seconds), and we will be able to see the trajectories over that time period.
The loop uses a crude method of integration (Tewari pg 166) to solve for $\gamma$ and $\delta$, the differences between the perturbed and osculating velocity and radial vectors, respectively.
The osculating orbit is used here as our base orbit. The orbit for Jupiter is calculated as well and used with the osculating orbit to get our disturbing acceleration. The acceleration is then used to find $\gamma$, which in turn is used to calculate $\delta$.
$\delta$ is added to the radial vector of the osculating orbit at every time step to give us our perturbed orbit. The same is done with $\gamma$ for the velocity vector for the perturbed orbit. The solutions are then entered into arrays so that we can plot them!
End of explanation
x = numpy.linspace(t[0], t[-1], N)
pyplot.figure(figsize = (10,10))
pyplot.grid(True)
pyplot.xlabel(r'X Distance (km)', fontsize = 18)
pyplot.ylabel(r'Y Distance (km)', fontsize = 18)
pyplot.title('Trajectory of Osc vs Perturbed Orbit, Flight Time = %.2f days' %(tfinal / dt), fontsize=14)
pyplot.plot(r_n[:,0], r_n[:,1])
pyplot.plot(r_osc_n[:,0], r_osc_n[:,1])
pyplot.legend(['Perturbed Orbit', 'Osculating Orbit']);
pyplot.figure(figsize = (10,10))
pyplot.grid(True)
pyplot.xlabel(r'X Distance (km)', fontsize = 18)
pyplot.ylabel(r'Y Distance (km)', fontsize = 18)
pyplot.title('Trajectory of Osc, Perturbed and Jupiter Orbit, Flight Time = %.2f days' %(tfinal / dt), fontsize=14)
pyplot.plot(r_n[:,0], r_n[:,1])
pyplot.plot(r_osc_n[:,0], r_osc_n[:,1])
pyplot.plot(r3_n[:,0], r3_n[:,1])
pyplot.legend(['Perturbed Orbit', 'Osculating Orbit', 'Jupiter\'s Orbit']);
Explanation: We mentioned earlier that the term $1 - \frac{r_{osc}^3}{r^3}$ can cause issues because towards the beginning of the orbit $r_{osc}$ and $r$ are approximately equal and can cause the solution to blow up, but this can be solved as follows:
$$1 - \frac{r_{osc}^3}{r^3} = -B \frac{3 + 3B + B^2}{1 + (1 + B)^{\frac{3}{2}}}$$
Where $$B = \frac{\vec{\delta} \cdot (\vec{\delta} - 2 \vec{r})}{r^2}$$
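A minimal sketch of how that substitution could be coded, implementing the expression exactly as written above (illustrative; it is not used in the integration loop of this notebook):
```python
def encke_one_minus_ratio(delta, r):
    '''Numerically stable evaluation of 1 - (r_osc/r)**3 via the B substitution.'''
    B = numpy.dot(delta, delta - 2 * r) / numpy.dot(r, r)
    return -B * (3 + 3 * B + B**2) / (1 + (1 + B)**1.5)
```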
Now that we have our solutions arrays, we can plot our answers. Each array contains an x and y component for each time step. We will look at a plot of the osculating orbit and the perturbed orbit of Mars, and in a separate plot, we will look at those two orbits along with Jupiter's orbit.
End of explanation
from IPython.core.display import HTML
css_file = '../numericalmoocstyle.css'
HTML(open(css_file, "r").read())
Explanation: Looking at our first plot we can see that there is a change in Mars' orbit due to the gravitational pull from Jupiter as it flies by!
<h3 align="center">Dig Deeper</h3>
See what happens when you change the orbital parameters! What happens with different planets or with a satellite orbiting earth? What about when both planets don't start at the zero point of the y axis?
<h2 align="center">References</h2>
NASA Mars Fact Sheet
NASA Jupiter Fact Sheet
Standard Gravitational Parameter Wikipedia
Perturbation (Astronomy)
Battin, Richard H., AIAA Education Series, An Introduction to the Mathematics and Methods of Astrodynamics, AIAA, 1999
Schaub, Hanspeter & John. L Junkins, AIAA Education Series, Analytical Mechanics of Aerospace Systems, AIAA, 2000
Tewari, Ashish, Atmospheric and Space Flight Dynamics: Modeling and Simulation with MATLAB and Simulink, Birkhauser, 2007
End of explanation |
14,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporation there is for the empirical data
Using parameters inferred from empirical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show a similar distribution to the empirical data
Input parameters
phyloseq.bulk file
taxon mapping file
list of genomes
fragments simulated for all genomes
bulk community richness
workflow
Creating a community file from OTU abundances in bulk soil samples
phyloseq.bulk --> OTU table --> filter to sample --> community table format
Fragment simulation
simulated_fragments --> parse out fragments for target OTUs
simulated_fragments --> parse out fragments from random genomes to obtain richness of interest
combine fragment python objects
Convert fragment lists to kde object
Add diffusion
Make incorp config file
Add isotope incorporation
Calculating BD shift from isotope incorp
Simulating gradient fractions
Simulating OTU table
Simulating PCR
Subsampling from the OTU table
Init
Step1: Nestly
assuming fragments already simulated
Step2: Checking amplicon fragment BD distribution
'Raw' fragments
Step3: fragments w/ diffusion + DBL
Step4: BD min/max
what is the min/max BD that we care about?
Step5: Plotting number of taxa in each fraction
Empirical data (fullCyc)
Step6: w/ simulated data
Step7: Total sequence count
Step8: Plotting Shannon diversity for each
Step9: min/max abundances of taxa
Step10: Plotting rank-abundance of heavy fractions
In heavy fractions, is DBL resulting in approx. equal abundances among taxa?
Step11: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range as the empirical data?
Simulated
Step12: Empirical
Step13: BD span of just overlapping taxa
Taxa overlapping between the empirical data and the genomes in the dataset
These taxa should have the same relative abundances in both datasets.
The comm file was created from the empirical dataset phyloseq file.
Step14: Check
Are all target (overlapping) taxa the same relative abundances in both datasets?
Step15: Correlation between relative abundance and BD_range diff
Are low-abundance taxa more variable in their BD span?
Step16: Plotting abundance distributions
Step17: --OLD--
Determining the pre-fractionation abundances of taxa in each gradient fraction
empirical data
low-abundant taxa out at the tails?
OR broad distributions of high abundant taxa
Step18: Plotting the abundance distribution of top 10 most abundant taxa (bulk samples) | Python Code:
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
Explanation: Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
Don't know how much true incorporation there is for the empirical data
Using parameters inferred from empirical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters
Determining whether simulated taxa show a similar distribution to the empirical data
Input parameters
phyloseq.bulk file
taxon mapping file
list of genomes
fragments simulated for all genomes
bulk community richness
workflow
Creating a community file from OTU abundances in bulk soil samples
phyloseq.bulk --> OTU table --> filter to sample --> community table format
Fragment simulation
simulated_fragments --> parse out fragments for target OTUs
simulated_fragments --> parse out fragments from random genomes to obtain richness of interest
combine fragment python objects
Convert fragment lists to kde object
Add diffusion
Make incorp config file
Add isotope incorporation
Calculating BD shift from isotope incorp
Simulating gradient fractions
Simulating OTU table
Simulating PCR
Subsampling from the OTU table
Init
End of explanation
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'Day1_default_run')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'
targetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_taxa.txt'
physeqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq_bulkCore = 'bulk-core'
physeq_SIP_core = 'SIP-core_unk'
prefrac_comm_abundance = ['1e9']
richness = 2503 # chao1 estimate for bulk Day 1
seq_per_fraction = ['lognormal', 9.432, 0.5, 10000, 30000] # dist, mean, scale, min, max
bulk_days = [1]
nprocs = 24
# building tree structure
nest = nestly.Nest()
## varying params
nest.add('abs', prefrac_comm_abundance)
## set params
nest.add('bulk_day', bulk_days, create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [nprocs], create_dir=False)
nest.add('richness', [richness], create_dir=False)
nest.add('subsample_dist', [seq_per_fraction[0]], create_dir=False)
nest.add('subsample_mean', [seq_per_fraction[1]], create_dir=False)
nest.add('subsample_scale', [seq_per_fraction[2]], create_dir=False)
nest.add('subsample_min', [seq_per_fraction[3]], create_dir=False)
nest.add('subsample_max', [seq_per_fraction[4]], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('targetFile', [targetFile], create_dir=False)
nest.add('physeqDir', [physeqDir], create_dir=False)
nest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
#-- making DNA pool similar to gradient of interest
echo '# Creating comm file from phyloseq'
phyloseq2comm.r {physeqDir}{physeq_bulkCore} -s 12C-Con -d {bulk_day} > {physeq_bulkCore}_comm.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm.txt
echo '## Adding target taxa to comm file'
comm_add_target.r {physeq_bulkCore}_comm.txt {targetFile} > {physeq_bulkCore}_comm_target.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm_target.txt
echo '# Adding extra richness to community file'
printf "1\t{richness}\n" > richness_needed.txt
comm_add_richness.r -s {physeq_bulkCore}_comm_target.txt richness_needed.txt > {physeq_bulkCore}_comm_all.txt
### renaming comm file for downstream pipeline
cat {physeq_bulkCore}_comm_all.txt > {physeq_bulkCore}_comm_target.txt
rm -f {physeq_bulkCore}_comm_all.txt
echo '## parsing out genome fragments to make simulated DNA pool resembling the gradient of interest'
## all OTUs without an associated reference genome will be assigned a random reference (of the reference genome pool)
### this is done through --NA-random
SIPSim fragment_KDE_parse {fragFile} {physeq_bulkCore}_comm_target.txt \
--rename taxon_name --NA-random > fragsParsed.pkl
echo '#-- SIPSim pipeline --#'
echo '# converting fragments to KDE'
SIPSim fragment_KDE \
fragsParsed.pkl \
> fragsParsed_KDE.pkl
echo '# adding diffusion'
SIPSim diffusion \
fragsParsed_KDE.pkl \
--np {np} \
> fragsParsed_KDE_dif.pkl
echo '# adding DBL contamination'
SIPSim DBL \
fragsParsed_KDE_dif.pkl \
--np {np} \
> fragsParsed_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
fragsParsed_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm {physeq_bulkCore}_comm_target.txt \
--np {np} \
> fragsParsed_KDE_dif_DBL_inc.pkl
#echo '# calculating BD shift from isotope incorporation'
#SIPSim BD_shift \
# fragsParsed_KDE_dif_DBL.pkl \
# fragsParsed_KDE_dif_DBL_inc.pkl \
# --np {np} \
# > fragsParsed_KDE_dif_DBL_inc_BD-shift.txt
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
{physeq_bulkCore}_comm_target.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
fragsParsed_KDE_dif_DBL_inc.pkl \
{physeq_bulkCore}_comm_target.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d Day1_default_run --log-file log.txt -j 1
Explanation: Nestly
assuming fragments already simulated
End of explanation
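For readers unfamiliar with nestly, the calls above work roughly as follows (a minimal, hypothetical sketch; the parameter names and directory here are illustrative only): each add() with create_dir=True adds one directory level per value, parameters added with create_dir=False are only recorded in the control file written into each leaf directory, and nestrun later substitutes the {parameter} placeholders in the template script within each directory.
import nestly

nest = nestly.Nest()
nest.add('abs', ['1e8', '1e9'])                 # one output directory per value
nest.add('percIncorp', [0], create_dir=False)   # recorded in the control file only
nest.build('example_buildDir')                  # creates example_buildDir/1e8/ and example_buildDir/1e9/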
workDir1 = os.path.join(workDir, 'Day1_default_run/1e9/')
!cd $workDir1; \
SIPSim KDE_info \
-s fragsParsed_KDE.pkl \
> fragsParsed_KDE_info.txt
%%R -i workDir1
inFile = file.path(workDir1, 'fragsParsed_KDE_info.txt')
df = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df %>% head(n=3)
%%R -w 600 -h 300
ggplot(df, aes(median)) +
geom_histogram(binwidth=0.001) +
labs(x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Checking amplicon fragment BD distribution
'Raw' fragments
End of explanation
workDir1 = os.path.join(workDir, 'Day1_default_run/1e9/')
!cd $workDir1; \
SIPSim KDE_info \
-s fragsParsed_KDE_dif_DBL.pkl \
    > fragsParsed_KDE_dif_DBL_info.txt
%%R -i workDir1
inFile = file.path(workDir1, 'fragsParsed_KDE_dif_DBL_info.txt')
df = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df %>% head(n=3)
%%R -w 600 -h 300
ggplot(df, aes(median)) +
geom_histogram(binwidth=0.001) +
labs(x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: fragments w/ diffusion + DBL
End of explanation
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
Explanation: BD min/max
what is the min/max BD that we care about?
End of explanation
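The same window can be sketched in plain Python; this only restates the arithmetic used in the R cell above (GC is percent G+C, and 0.036 is the assumed maximum BD shift from full 13C labeling).
def gc_to_bd(gc_percent):
    # standard linear conversion from percent G+C to buoyant density (g/ml)
    return gc_percent / 100.0 * 0.098 + 1.66

min_GC, max_GC = 13.5, 80.0
max_13C_shift_in_BD = 0.036

min_BD = gc_to_bd(min_GC)                        # ~1.673
max_BD = gc_to_bd(max_GC) + max_13C_shift_in_BD  # ~1.774
print('Min BD:', round(min_BD, 3), 'Max BD:', round(max_BD, 3))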
%%R
# simulated OTU table file
OTU.table.dir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/Day1_default_run/1e9/'
OTU.table.file = 'OTU_abs1e9_PCR_sub.txt'
#OTU.table.file = 'OTU_abs1e9_sub.txt'
#OTU.table.file = 'OTU_abs1e9.txt'
%%R -i physeqDir -i physeq_SIP_core -i bulk_days
# bulk core samples
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP.core = readRDS(F)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' &
physeq.SIP.core.m$Day %in% bulk_days,
physeq.SIP.core) %>%
filter_taxa(function(x) sum(x) > 0, TRUE)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core
%%R -w 800 -h 300
## dataframe
df.EMP = physeq.SIP.core %>% otu_table %>%
as.matrix %>% as.data.frame
df.EMP$OTU = rownames(df.EMP)
df.EMP = df.EMP %>%
gather(sample, abundance, 1:(ncol(df.EMP)-1))
df.EMP = inner_join(df.EMP, physeq.SIP.core.m, c('sample' = 'X.Sample'))
df.EMP.nt = df.EMP %>%
group_by(sample) %>%
mutate(n_taxa = sum(abundance > 0)) %>%
ungroup() %>%
distinct(sample) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
## plotting
p = ggplot(df.EMP.nt, aes(Buoyant_density, n_taxa)) +
geom_point(color='blue') +
geom_line(color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
Explanation: Plotting number of taxa in each fraction
Empirical data (fullCyc)
End of explanation
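The simulated OTU table written by the pipeline above is a plain tab-delimited file, so the same per-fraction taxon counts can also be computed with pandas; a small sketch (assuming the working directory contains the simulated table used below):
import pandas as pd

otu = pd.read_csv('OTU_abs1e9_PCR_sub.txt', sep='\t')   # long-format simulated OTU table
n_taxa = (otu[otu['count'] > 0]
          .groupby(['library', 'BD_mid'])['taxon']
          .nunique()
          .reset_index(name='n_taxa'))
print(n_taxa.head())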
%%R -w 800 -h 300
# loading file
F = file.path(workDir1, OTU.table.file)
df.SIM = read.delim(F, sep='\t')
## edit table
df.SIM.nt = df.SIM %>%
filter(count > 0) %>%
group_by(library, BD_mid) %>%
summarize(n_taxa = n()) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
## plot
p = ggplot(df.SIM.nt, aes(BD_mid, n_taxa)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
%%R -w 800 -h 300
# normalized by max number of taxa
## edit table
df.SIM.nt = df.SIM.nt %>%
group_by() %>%
mutate(n_taxa_norm = n_taxa / max(n_taxa))
df.EMP.nt = df.EMP.nt %>%
group_by() %>%
mutate(n_taxa_norm = n_taxa / max(n_taxa))
## plot
p = ggplot(df.SIM.nt, aes(BD_mid, n_taxa_norm)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
scale_y_continuous(limits=c(0, 1)) +
labs(x='Buoyant density', y='Number of taxa\n(fraction of max)') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
Explanation: w/ simulated data
End of explanation
%%R -w 800 -h 300
# simulated
df.SIM.s = df.SIM %>%
group_by(library, BD_mid) %>%
summarize(total_abund = sum(count)) %>%
rename('Day' = library, 'Buoyant_density' = BD_mid) %>%
ungroup() %>%
mutate(dataset='simulated')
# emperical
df.EMP.s = df.EMP %>%
group_by(Day, Buoyant_density) %>%
summarize(total_abund = sum(abundance)) %>%
ungroup() %>%
mutate(dataset='emperical')
# join
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.SIM.s = df.EMP.s = ""
# plot
ggplot(df.j, aes(Buoyant_density, total_abund, color=dataset)) +
geom_point() +
geom_line() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Total sequences per sample') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Total sequence count
End of explanation
%%R
shannon_index_long = function(df, abundance_col, ...){
# calculating shannon diversity index from a 'long' formated table
## community_col = name of column defining communities
## abundance_col = name of column defining taxon abundances
df = df %>% as.data.frame
cmd = paste0(abundance_col, '/sum(', abundance_col, ')')
df.s = df %>%
group_by_(...) %>%
mutate_(REL_abundance = cmd) %>%
mutate(pi__ln_pi = REL_abundance * log(REL_abundance),
shannon = -sum(pi__ln_pi, na.rm=TRUE)) %>%
ungroup() %>%
dplyr::select(-REL_abundance, -pi__ln_pi) %>%
distinct_(...)
return(df.s)
}
%%R
# calculating shannon
df.SIM.shan = shannon_index_long(df.SIM, 'count', 'library', 'fraction') %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.EMP.shan = shannon_index_long(df.EMP, 'abundance', 'sample') %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
%%R -w 800 -h 300
# plotting
p = ggplot(df.SIM.shan, aes(BD_mid, shannon)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
scale_y_continuous(limits=c(4, 7.5)) +
labs(x='Buoyant density', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
Explanation: Plotting Shannon diversity for each
End of explanation
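For reference, the Shannon index computed by the R helper above is simply H = -sum(p * ln p) over the relative abundances within a sample; a minimal numpy sketch with arbitrary example counts:
import numpy as np

def shannon_index(counts):
    # Shannon diversity, ignoring zero-count taxa
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

print(shannon_index([10, 5, 1, 0, 3]))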
%%R -h 300 -w 800
# simulated
df.SIM.s = df.SIM %>%
filter(rel_abund > 0) %>%
group_by(BD_mid) %>%
summarize(min_abund = min(rel_abund),
max_abund = max(rel_abund)) %>%
ungroup() %>%
rename('Buoyant_density' = BD_mid) %>%
mutate(dataset = 'simulated')
# emperical
df.EMP.s = df.EMP %>%
group_by(Buoyant_density) %>%
mutate(rel_abund = abundance / sum(abundance)) %>%
filter(rel_abund > 0) %>%
summarize(min_abund = min(rel_abund),
max_abund = max(rel_abund)) %>%
ungroup() %>%
mutate(dataset = 'emperical')
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
# plotting
ggplot(df.j, aes(Buoyant_density, max_abund, color=dataset, group=dataset)) +
geom_point() +
geom_line() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Maximum relative abundance') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: min/max abundances of taxa
End of explanation
%%R -w 900
# simulated
df.SIM.s = df.SIM %>%
select(BD_mid, rel_abund) %>%
rename('Buoyant_density' = BD_mid) %>%
mutate(dataset='simulated')
# emperical
df.EMP.s = df.EMP %>%
group_by(Buoyant_density) %>%
mutate(rel_abund = abundance / sum(abundance)) %>%
ungroup() %>%
filter(rel_abund > 0) %>%
select(Buoyant_density, rel_abund) %>%
mutate(dataset='emperical')
# join
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density > 1.73) %>%
mutate(Buoyant_density = round(Buoyant_density, 3),
Buoyant_density_c = as.character(Buoyant_density))
df.j$Buoyant_density_c = reorder(df.j$Buoyant_density_c, df.j$Buoyant_density)
ggplot(df.j, aes(Buoyant_density_c, rel_abund)) +
geom_boxplot() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Maximum relative abundance') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=60, hjust=1),
legend.position = 'none'
)
Explanation: Plotting rank-abundance of heavy fractions
In heavy fractions, is DBL resulting in approx. equal abundances among taxa?
End of explanation
%%R
# loading comm file
F = file.path(workDir1, 'bulk-core_comm_target.txt')
df.comm = read.delim(F, sep='\t') %>%
dplyr::select(library, taxon_name, rel_abund_perc) %>%
rename('bulk_abund' = rel_abund_perc) %>%
mutate(bulk_abund = bulk_abund / 100)
## joining
df.SIM.j = inner_join(df.SIM, df.comm, c('library' = 'library',
'taxon' = 'taxon_name')) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.j %>% head(n=3)
Explanation: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range as the empirical data?
Simulated
End of explanation
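The quantity examined below is just the span of buoyant densities in which each OTU is detected, expressed as a percentage of the total BD range surveyed; a pandas sketch of that calculation on the simulated table (column names as used above):
import pandas as pd

otu = pd.read_csv('OTU_abs1e9_PCR_sub.txt', sep='\t')
detected = otu[otu['count'] > 0]
total_range = detected['BD_mid'].max() - detected['BD_mid'].min()

bd_span = (detected.groupby('taxon')['BD_mid']
           .agg(['min', 'max'])
           .assign(BD_range=lambda d: d['max'] - d['min'])
           .assign(BD_range_perc=lambda d: d['BD_range'] / total_range * 100))
print(bd_span.sort_values('BD_range_perc').head())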
%%R
bulk_days = c(1)
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
dplyr::select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
df.EMP.j = inner_join(df.EMP, bulk.otu, c('OTU' = 'OTU')) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.EMP.j %>% head(n=3)
%%R -h 400
# filtering & combining emperical w/ simulated data
## emperical
max_BD_range = max(df.EMP.j$Buoyant_density) - min(df.EMP.j$Buoyant_density)
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
mutate(dataset = 'emperical')
## simulated
max_BD_range = max(df.SIM.j$BD_mid) - min(df.SIM.j$BD_mid)
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
group_by(taxon) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(BD_mid),
max_BD = max(BD_mid),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
rename('OTU' = taxon) %>%
mutate(dataset = 'simulated')
## join
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
filter(BD_range_perc > 0,
mean_rel_abund > 0)
## plotting
ggplot(df.j, aes(mean_rel_abund, BD_range_perc, color=dataset)) +
geom_point(alpha=0.5, shape='O') +
#stat_density2d() +
#scale_fill_gradient(low='white', high='red', na.value='grey50') +
#scale_x_log10(limits=c(min(df.j$mean_rel_abund, na.rm=T), 1e-2)) +
#scale_y_continuous(limits=c(90, 100)) +
scale_x_log10() +
scale_y_continuous() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
#geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
Explanation: Empirical
End of explanation
%%R -i targetFile
df.target = read.delim(targetFile, sep='\t')
df.target %>% nrow %>% print
df.target %>% head(n=3)
%%R
# filtering to just target taxa
df.j.t = df.j %>%
filter(OTU %in% df.target$OTU)
## plotting
ggplot(df.j.t, aes(mean_rel_abund, BD_range_perc, color=dataset)) +
geom_point(alpha=0.5, shape='O') +
scale_x_log10() +
scale_y_continuous() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
Explanation: BD span of just overlapping taxa
Taxa overlapping between the empirical data and the genomes in the dataset
These taxa should have the same relative abundances in both datasets.
The comm file was created from the empirical dataset phyloseq file.
End of explanation
%%R -w 380 -h 350
# formatting data
df.1 = df.j.t %>%
filter(dataset == 'simulated') %>%
select(OTU, mean_rel_abund, BD_range, BD_range_perc)
df.2 = df.j.t %>%
filter(dataset == 'emperical') %>%
select(OTU, mean_rel_abund, BD_range, BD_range_perc)
df.12 = inner_join(df.1, df.2, c('OTU' = 'OTU')) %>%
mutate(BD_diff_perc = BD_range_perc.x - BD_range_perc.y)
df.12 %>% head
## plotting
p1 = ggplot(df.12, aes(mean_rel_abund.x, mean_rel_abund.y)) +
geom_point(alpha=0.5) +
scale_x_log10() +
scale_y_log10() +
    labs(x='Relative abundance (simulated)', y='Relative abundance (empirical)') +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
p1
Explanation: Check
Are all target (overlapping) taxa the same relative abundances in both datasets?
End of explanation
%%R -w 500 -h 350
ggplot(df.12, aes(mean_rel_abund.x, BD_diff_perc)) +
geom_point() +
scale_x_log10() +
labs(x='Relative abundance',
         y='Difference in % of gradient spanned\n(simulated vs empirical)',
title='Overlapping taxa') +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
Explanation: Correlation between relative abundance and BD_range diff
Are low-abundance taxa more variable in their BD span?
End of explanation
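If a single summary number is wanted for the relationship plotted below, a rank correlation between pre-fractionation abundance and the BD-span difference is one option; a sketch with scipy on toy stand-in values (the real columns would come from the df.12 table above):
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
mean_rel_abund = 10 ** rng.uniform(-5, -1, size=50)   # stand-in for mean_rel_abund.x
bd_diff_perc = rng.normal(0, 10, size=50)             # stand-in for BD_diff_perc

rho, p = stats.spearmanr(mean_rel_abund, bd_diff_perc)
print('Spearman rho = {:.2f}, p = {:.3f}'.format(rho, p))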
%%R
## emperical
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
dplyr::select(OTU, sample, abundance, Buoyant_density, bulk_abund) %>%
mutate(dataset = 'emperical')
## simulated
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
dplyr::select(taxon, fraction, count, BD_mid, bulk_abund) %>%
rename('OTU' = taxon,
'sample' = fraction,
'Buoyant_density' = BD_mid,
'abundance' = count) %>%
mutate(dataset = 'simulated')
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
group_by(sample) %>%
mutate(rel_abund = abundance / sum(abundance))
df.j %>% head(n=3) %>% as.data.frame
%%R -w 800 -h 400
# plotting absolute abundances of subsampled
## plot
p = ggplot(df.j, aes(Buoyant_density, abundance, fill=OTU)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Subsampled community\n(absolute abundance)') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
p
%%R -w 800 -h 400
# plotting relative abundances of subsampled
p = ggplot(df.j, aes(Buoyant_density, rel_abund, fill=OTU)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Subsampled community\n(relative abundance)') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
p
Explanation: Plotting abundance distributions
End of explanation
%%R
physeq.SIP.core.n = transform_sample_counts(physeq.SIP.core, function(x) x/sum(x))
physeq.SIP.core.n
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of SIP OTU table
SIP.otu = physeq.SIP.core.n %>% otu_table %>% as.data.frame
ncol = ncol(SIP.otu)
SIP.otu$OTU = rownames(SIP.otu)
SIP.otu = SIP.otu %>%
gather(sample, abundance, 1:ncol)
SIP.otu = inner_join(physeq.SIP.core.m, SIP.otu, c('X.Sample' = 'sample')) %>%
select(-core_dataset, -Sample_location, -Sample_date, -Sample_treatment,
-Sample_subtreatment, -library, -Sample_type)
SIP.otu %>% head(n=3)
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
SIP.otu = inner_join(SIP.otu, bulk.otu, c('OTU' = 'OTU'))
SIP.otu %>% head(n=3)
%%R -w 900 -h 900
# for each gradient, plotting gradient rel_abund vs bulk rel_abund
ggplot(SIP.otu, aes(bulk_abund, abundance)) +
geom_point(alpha=0.2) +
geom_point(shape='O', alpha=0.6) +
facet_wrap(~ Buoyant_density) +
labs(x='Pre-fractionation relative abundance',
y='Fraction relative abundance') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 900 -h 900
# for each gradient, plotting gradient rel_abund vs bulk rel_abund
ggplot(SIP.otu, aes(bulk_abund, abundance)) +
geom_point(alpha=0.2) +
geom_point(shape='O', alpha=0.6) +
scale_x_continuous(limits=c(0,0.01)) +
scale_y_continuous(limits=c(0,0.01)) +
facet_wrap(~ Buoyant_density) +
labs(x='Pre-fractionation relative abundance',
y='Fraction relative abundance') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=90, hjust=1, vjust=0.5)
)
Explanation: --OLD--
Determining the pre-fractionation abundances of taxa in each gradient fraction
empirical data
are low-abundance taxa dropping out at the tails?
OR do high-abundance taxa simply have broad distributions?
End of explanation
%%R -w 500 -h 300
# checking bulk rank-abundance
tmp = bulk.otu %>%
mutate(rank = row_number(-bulk_abund))
ggplot(tmp, aes(rank, bulk_abund)) +
geom_point()
%%R -w 900
top.n = filter(tmp, rank <= 10)
SIP.otu.f = SIP.otu %>%
filter(OTU %in% top.n$OTU)
ggplot(SIP.otu.f, aes(Buoyant_density, abundance, group=OTU, fill=OTU)) +
#geom_point() +
#geom_line() +
geom_area(position='dodge', alpha=0.4) +
labs(y='Relative abundance', x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 600 -h 400
# Number of gradients that each OTU is found in
max_BD_range = max(SIP.otu$Buoyant_density) - min(SIP.otu$Buoyant_density)
SIP.otu.f = SIP.otu %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(bulk_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup()
ggplot(SIP.otu.f, aes(bulk_abund, BD_range_perc, group=OTU)) +
geom_point() +
scale_x_log10() +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Plotting the abundance distribution of top 10 most abundant taxa (bulk samples)
End of explanation |
14,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use decision optimization to help a sports league schedule its games
This tutorial includes everything you need to set up decision optimization engines, build mathematical programming models, and arrive at a good working schedule for a sports league's games.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: Step 2
Step3: Use basic HTML and a stylesheet to format the data.
Step4: Now you will import the pandas library. Pandas is an open source Python library for data analysis. It uses two data structures, Series and DataFrame, which are built on top of NumPy.
A Series is a one-dimensional object similar to an array, list, or column in a table. It will assign a labeled index to each item in the series. By default, each item receives an index label from 0 to N, where N is the length of the series minus one.
A DataFrame is a tabular data structure comprised of rows and columns, similar to a spreadsheet, database table, or R's data.frame object. Think of a DataFrame as a group of Series objects that share an index (the column names).
In the example, each division (the AFC and the NFC) is part of a DataFrame.
Step5: The following display function is a tool to show different representations of objects. When you issue the display(teams) command, you are sending the output to the notebook so that the result is stored in the document.
Step6: Step 3
Step7: Step 4
Step8: Express the business constraints
For each week and each team, there is a constraint that the team cannot play itself. Also, the variables must be constrained to be symmetric.
If team t plays team t2 in week w, then team t2 must play team t in week w.
In constraint programming, you can use a decision variable to index an array by using an element expression.
Step9: Each week, every team must be assigned to at most one game. To model this, you use the specialized alldifferent constraint.
for a given week w, the values of play[t][w] must be unique for all teams t.
Step10: One set of constraints is used to ensure that the solution satisfies the number of intradivisional and interdivisional games that each team must play.
A pair of teams cannot play each other on consecutive weeks.
Each team must play at least a certain number of intradivisional games, nbFirstHalfGames, in the first half of the season.
Step11: Express the objective
The objective function for this example is designed to force intradivisional games to occur as late in the season as possible. The incentive for intradivisional games increases by week. There is no incentive for interdivisional games.
Use an indicator matrix, intraDivisionalPair, to specify whether a pair of teams is in the same division or not. For each pair which is intradivisional, the incentive, or gain, is a power function of the week.
These cost functions are used to create an expression that models the overall cost. The cost here is halved as the incentive for each game gets counted twice.
Step12: Solve the model
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
You will get the best solution found after n seconds, thanks to the TimeLimit parameter.
Step13: Step 5
Step14: Run an example analysis
Determine when the last 10 final replay games will occur | Python Code:
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
Explanation: Use decision optimization to help a sports league schedule its games
This tutorial includes everything you need to set up decision optimization engines, build mathematical programming models, and arrive at a good working schedule for a sports league's games.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO add-on in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Download the library
Step 2: Model the Data
Step 3: Prepare the data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization solve service
Step 5: Investigate the solution and run an example analysis
Summary
Describe the business problem: Games Scheduling in the National Football League
A sports league with two divisions must schedule games so that each team plays every team within its division a given number of times, and each team plays teams in the other division a given number of times.
A team plays exactly one game each week.
A pair of teams cannot play each other on consecutive weeks.
While a third of a team's intradivisional games must be played in the first half of the season, the preference is for intradivisional games to be held as late as possible in the season.
To model this preference, there is an incentive for intradivisional games that increases each week as a square of the week.
An opponent must be assigned to each team each week to maximize the total of the incentives.
This is a type of discrete optimization problem that can be solved by using either Integer Programming (IP) or Constraint Programming (CP).
Integer Programming is the class of problems defined as the optimization of a linear function, subject to linear constraints over integer variables.
Constraint Programming problems generally have discrete decision variables, but the constraints can be logical, and the arithmetic expressions are not restricted to being linear.
For the purposes of this tutorial, we will illustrate a solution with constraint programming (CP).
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
# Teams in 1st division
TEAM_DIV1 = ["Baltimore Ravens","Cincinnati Bengals", "Cleveland Browns","Pittsburgh Steelers","Houston Texans",
"Indianapolis Colts","Jacksonville Jaguars","Tennessee Titans","Buffalo Bills","Miami Dolphins",
"New England Patriots","New York Jets","Denver Broncos","Kansas City Chiefs","Oakland Raiders",
"San Diego Chargers"]
# Teams in 2nd division
TEAM_DIV2 = ["Chicago Bears","Detroit Lions","Green Bay Packers","Minnesota Vikings","Atlanta Falcons",
"Carolina Panthers","New Orleans Saints","Tampa Bay Buccaneers","Dallas Cowboys","New York Giants",
"Philadelphia Eagles","Washington Redskins","Arizona Cardinals","San Francisco 49ers",
"Seattle Seahawks","St. Louis Rams"]
from collections import namedtuple
NUMBER_OF_MATCHES_TO_PLAY = 2  # Number of matches to play between two teams in the league
T_SCHEDULE_PARAMS = (namedtuple("TScheduleParams",
["nbTeamsInDivision",
"maxTeamsInDivision",
"numberOfMatchesToPlayInsideDivision",
"numberOfMatchesToPlayOutsideDivision"
]))
# Schedule parameters: depending on their values, you may overreach the Community Edition of CPLEX
SCHEDULE_PARAMS = T_SCHEDULE_PARAMS(5, # nbTeamsInDivision
10, # maxTeamsInDivision
NUMBER_OF_MATCHES_TO_PLAY, # numberOfMatchesToPlayInsideDivision
NUMBER_OF_MATCHES_TO_PLAY # numberOfMatchesToPlayOutsideDivision
)
Explanation: Step 2: Model the data
In this scenario, the data is simple. The number of teams in each division, and the number of games each team must play inside and outside its division, are set by the schedule parameters defined below.
Use a Python module, Collections, which implements some data structures that will help solve some problems. Named tuples help to define the meaning of each position in a tuple, which makes the code more readable and self-documenting. You can use named tuples in any place where you use tuples.
In this example, you create a namedtuple to hold the schedule parameters. You are also defining some of the other parameters.
End of explanation
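If namedtuple is unfamiliar, the fields defined above are simply accessed by name; a tiny illustration (the Point type here is only an example, while the last line uses the SCHEDULE_PARAMS object created above):
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(3, 4)
print(p.x, p.y)                              # fields accessed by name, not position
print(SCHEDULE_PARAMS.nbTeamsInDivision)     # -> 5, from the cell above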
CSS = """
body {
    margin: 0;
    font-family: Helvetica;
}
table.dataframe {
    border-collapse: collapse;
    border: none;
}
table.dataframe tr {
    border: none;
}
table.dataframe td, table.dataframe th {
    margin: 0;
    border: 1px solid white;
    padding-left: 0.25em;
    padding-right: 0.25em;
}
table.dataframe th:not(:empty) {
    background-color: #fec;
    text-align: left;
    font-weight: normal;
}
table.dataframe tr:nth-child(2) th:empty {
    border-left: none;
    border-right: 1px dashed #888;
}
table.dataframe td {
    border: 2px solid #ccf;
    background-color: #f4f4ff;
}
table.dataframe thead th:first-child {
    display: none;
}
table.dataframe tbody th {
    display: none;
}
"""
from IPython.core.display import HTML
HTML('<style>{}</style>'.format(CSS))
Explanation: Use basic HTML and a stylesheet to format the data.
End of explanation
import pandas as pd
team1 = pd.DataFrame(TEAM_DIV1)
team2 = pd.DataFrame(TEAM_DIV2)
team1.columns = ["AFC"]
team2.columns = ["NFC"]
teams = pd.concat([team1,team2], axis=1)
Explanation: Now you will import the pandas library. Pandas is an open source Python library for data analysis. It uses two data structures, Series and DataFrame, which are built on top of NumPy.
A Series is a one-dimensional object similar to an array, list, or column in a table. It will assign a labeled index to each item in the series. By default, each item receives an index label from 0 to N, where N is the length of the series minus one.
A DataFrame is a tabular data structure comprised of rows and columns, similar to a spreadsheet, database table, or R's data.frame object. Think of a DataFrame as a group of Series objects that share an index (the column names).
In the example, each division (the AFC and the NFC) is part of a DataFrame.
End of explanation
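A tiny illustration of the Series/DataFrame relationship described above (pandas is already imported as pd in the cell above; the values are arbitrary):
s = pd.Series(['Bears', 'Lions', 'Packers'])   # a single labeled column of values
print(s.index.tolist())                        # default integer index: [0, 1, 2]

small_df = pd.DataFrame({'NFC': s,
                         'AFC': pd.Series(['Ravens', 'Bengals', 'Browns'])})
print(small_df)                                # two Series sharing the same index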
from IPython.display import display
display(teams)
Explanation: The following display function is a tool to show different representations of objects. When you issue the display(teams) command, you are sending the output to the notebook so that the result is stored in the document.
End of explanation
import numpy as np
NB_TEAMS = 2 * SCHEDULE_PARAMS.nbTeamsInDivision
TEAMS = range(NB_TEAMS)
# Calculate the number of weeks necessary
NB_WEEKS = (SCHEDULE_PARAMS.nbTeamsInDivision - 1) * SCHEDULE_PARAMS.numberOfMatchesToPlayInsideDivision \
+ SCHEDULE_PARAMS.nbTeamsInDivision * SCHEDULE_PARAMS.numberOfMatchesToPlayOutsideDivision
# Weeks to schedule
WEEKS = tuple(range(NB_WEEKS))
# Season is split into two halves
FIRST_HALF_WEEKS = tuple(range(NB_WEEKS // 2))
NB_FIRST_HALS_WEEKS = NB_WEEKS // 3
Explanation: Step 3: Prepare the data
Given the number of teams in each division and the number of intradivisional and interdivisional games to be played, you can calculate the total number of teams and the number of weeks in the schedule, assuming every team plays exactly one game per week.
The season is split into halves, and the number of the intradivisional games that each team must play in the first half of the season is calculated.
End of explanation
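With the parameters above, each team plays every other team in its division twice (4 x 2 = 8 weeks) and every team in the other division twice (5 x 2 = 10 weeks), giving 18 weeks in total, so the first half of the season is 9 weeks long.
# Quick check of the schedule-length arithmetic from the cell above
print(NB_WEEKS)                 # (5 - 1) * 2 + 5 * 2 = 18
print(len(FIRST_HALF_WEEKS))    # 18 // 2 = 9
print(NB_FIRST_HALS_WEEKS)      # 18 // 3 = 6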
from docplex.cp.model import *
mdl = CpoModel(name="SportsScheduling")
# Variables of the model
plays = {}
for i in range(NUMBER_OF_MATCHES_TO_PLAY):
for t1 in TEAMS:
for t2 in TEAMS:
if t1 != t2:
plays[(t1, t2, i)] = integer_var(1, NB_WEEKS, name="team1_{}_team2_{}_match_{}".format(t1, t2, i))
Explanation: Step 4: Set up the prescriptive model
Define the decision variables
You can model a solution to the problem by assigning an opponent to each team for each week.
Therefore, the main decision variables in this model are indexed on the teams and weeks and take a value in 1..nbTeams.
The value at the solution of the decision variable ( plays[t][w] ) indicates that team t plays in week w.
End of explanation
# Constraints of the model
for t1 in TEAMS:
for t2 in TEAMS:
if t2 != t1:
for i in range(NUMBER_OF_MATCHES_TO_PLAY):
                mdl.add(plays[(t1, t2, i)] == plays[(t2, t1, i)]) ### symmetrical match: t1->t2 == t2->t1 for the i-th match
Explanation: Express the business constraints
For each week and each team, there is a constraint that the team cannot play itself. Also, the variables must be constrained to be symmetric.
If team t plays team t2 in week w, then team t2 must play team t in week w.
In constraint programming, you can use a decision variable to index an array by using an element expression.
End of explanation
for t1 in TEAMS:
mdl.add(all_diff([plays[(t1, t2, i)] for t2 in TEAMS if t2 != t1 for i in
range(NUMBER_OF_MATCHES_TO_PLAY)])) ### team t1 must play one match per week
Explanation: Each week, every team must be assigned to at most one game. To model this, you use the specialized alldifferent constraint.
for a given week w, the values of play[t][w] must be unique for all teams t.
End of explanation
# Function that returns 1 if the two teams are in same division, 0 if not
def intra_divisional_pair(t1, t2):
return int((t1 <= SCHEDULE_PARAMS.nbTeamsInDivision and t2 <= SCHEDULE_PARAMS.nbTeamsInDivision) or
(t1 > SCHEDULE_PARAMS.nbTeamsInDivision and t2 > SCHEDULE_PARAMS.nbTeamsInDivision))
# Some intradivisional games should be in the first half
mdl.add(sum([intra_divisional_pair(t1, t2) * allowed_assignments(plays[(t1, t2, i)], FIRST_HALF_WEEKS)
for t1 in TEAMS for t2 in [a for a in TEAMS if a != t1]
for i in range(NUMBER_OF_MATCHES_TO_PLAY)]) >= NB_FIRST_HALS_WEEKS)
Explanation: One set of constraints is used to ensure that the solution satisfies the number of intradivisional and interdivisional games that each team must play.
A pair of teams cannot play each other on consecutive weeks.
Each team must play at least a certain number of intradivisional games, nbFirstHalfGames, in the first half of the season.
End of explanation
# Objective of the model is to schedule intradivisional games to be played late in the schedule
sm = []
for t1 in TEAMS:
for t2 in TEAMS:
if t1 != t2:
if not intra_divisional_pair(t1, t2):
for i in range(NUMBER_OF_MATCHES_TO_PLAY):
sm.append(plays[(t1, t2, i)])
mdl.add(maximize(sum(sm)))
Explanation: Express the objective
The objective function for this example is designed to force intradivisional games to occur as late in the season as possible. The incentive for intradivisional games increases by week. There is no incentive for interdivisional games.
Use an indicator matrix, intraDivisionalPair, to specify whether a pair of teams is in the same division or not. For each pair which is intradivisional, the incentive, or gain, is a power function of the week.
These cost functions are used to create an expression that models the overall cost. The cost here is halved as the incentive for each game gets counted twice.
End of explanation
n = 25
msol = mdl.solve(TimeLimit=n)
Explanation: Solve the model
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
You will get the best solution found after n seconds, thanks to the TimeLimit parameter.
End of explanation
if msol:
abb = [list() for i in range(NB_WEEKS)]
for t1 in TEAMS:
for t2 in TEAMS:
if t1 != t2:
for i in range(NUMBER_OF_MATCHES_TO_PLAY):
x = abb[msol.get_value(plays[(t1, t2, i)])-1]
x.append((TEAM_DIV1[t1], TEAM_DIV2[t2], "Home" if i == 1 else "Back", intra_divisional_pair(t1, t2)))
matches = [(week, t1, t2, where, intra) for week in range(NB_WEEKS) for (t1, t2, where, intra) in abb[week]]
matches_bd = pd.DataFrame(matches)
nfl_finals = [("2014", "Patriots", "Seahawks"),("2013", "Seahawks", "Broncos"),
("2012", "Ravens", "Patriots"),("2011", "Giants", "Patriots "),
("2010", "Packers", "Steelers"),("2009", "Saints", "Colts"),
("2008", "Steelers", "Cardinals"),("2007", "Giants", "Patriots"),
("2006", "Colts", "Bears"),("2005", "Steelers", "Seahawks"),
("2004", "Patriots", "Eagles")]
winners_bd = pd.DataFrame(nfl_finals)
winners_bd.columns = ["year", "team1", "team2"]
display(winners_bd)
else:
print("No solution found")
Explanation: Step 5: Investigate the solution and then run an example analysis
End of explanation
if msol:
months = ["January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"]
report = []
for t in nfl_finals:
for m in matches:
if t[1] in m[1] and t[2] in m[2]:
report.append((m[0], months[m[0]//4], m[1], m[2], m[3]))
if t[2] in m[1] and t[1] in m[2]:
report.append((m[0], months[m[0]//4], m[1], m[2], m[3]))
matches_bd = pd.DataFrame(report)
matches_bd.columns = ["week", "Month", "Team1", "Team2", "location"]
try: #pandas >= 0.17
display(matches_bd[matches_bd['location'] != "Home"].sort_values(by='week').drop(labels=['week', 'location'], axis=1))
except:
display(matches_bd[matches_bd['location'] != "Home"].sort('week').drop(labels=['week', 'location'], axis=1))
Explanation: Run an example analysis
Determine when the last 10 final replay games will occur:
End of explanation |
14,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top
Step1: The dataset above shows how much funding (i.e., 'Expenditures' column) the state gave to individuals (for training purposes), including also individuals' age, gender, and ethnicity.
Step2: Discrimination by Ethnicity
Analyze the data set and determine whether or not discrimination among Hispanic and White but not Hispanic groups exists by examining the Expenditures. Feel free to use the dataframes defined in the cell below.
Step3: After analyzing this dataset, was there discrimination in the expenditures across different ethnicities?
Step4: Clue
Pandas supports grouping by multiple columns | Python Code:
# Run the following to import necessary packages and import dataset
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
datafile = "dataset/funding.csv"
df = pd.read_csv(datafile)
df.drop('Dummy', axis=1, inplace=True)
df.head(n=5) # Print n number of rows from top of dataset
ls = df['Age'].tolist()
df.describe(include='all')
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Funding" data-toc-modified-id="Funding-1">Funding</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Instructions-/-Notes:" data-toc-modified-id="Instructions-/-Notes:-1.0.1">Instructions / Notes:</a></span></li></ul></li><li><span><a href="#This-dataset-shows-how-much-funding-the-state-gave-to-individuals-and-tracks-individuals'-age,-gender,-and-ethnicity." data-toc-modified-id="This-dataset-shows-how-much-funding-the-state-gave-to-individuals-and-tracks-individuals'-age,-gender,-and-ethnicity.-1.1">This dataset shows how much funding the state gave to individuals and tracks individuals' age, gender, and ethnicity.</a></span><ul class="toc-item"><li><span><a href="#The-dataset-above-shows-how-much-funding-(i.e.,-'Expenditures'-column)-the-state-gave-to-individuals-(for-training-purposes),-including-also-individuals'-age,-gender,-and-ethnicity." data-toc-modified-id="The-dataset-above-shows-how-much-funding-(i.e.,-'Expenditures'-column)-the-state-gave-to-individuals-(for-training-purposes),-including-also-individuals'-age,-gender,-and-ethnicity.-1.1.1">The dataset above shows how much funding (i.e., 'Expenditures' column) the state gave to individuals (for training purposes), including also individuals' age, gender, and ethnicity.</a></span></li></ul></li></ul></li><li><span><a href="#Discrimination-by-Ethnicity" data-toc-modified-id="Discrimination-by-Ethnicity-2">Discrimination by Ethnicity</a></span><ul class="toc-item"><li><span><a href="#Clue" data-toc-modified-id="Clue-2.1">Clue</a></span><ul class="toc-item"><li><span><a href="#Pandas-supports-grouping-by-multiple-columns:-https://stackoverflow.com/questions/17679089/pandas-dataframe-groupby-two-columns-and-get-counts" data-toc-modified-id="Pandas-supports-grouping-by-multiple-columns:-https://stackoverflow.com/questions/17679089/pandas-dataframe-groupby-two-columns-and-get-counts-2.1.1">Pandas supports grouping by multiple columns: <a href="https://stackoverflow.com/questions/17679089/pandas-dataframe-groupby-two-columns-and-get-counts" target="_blank">https://stackoverflow.com/questions/17679089/pandas-dataframe-groupby-two-columns-and-get-counts</a></a></span></li></ul></li></ul></li></ul></div>
Funding
Instructions / Notes:
Read these carefully
Read and execute each cell in order, without skipping forward
You may create new Jupyter notebook cells to use for e.g. testing, debugging, exploring, etc.- this is encouraged in fact!- just make sure that your final answer dataframes and answers use the set variables outlined below
Have fun!
This dataset shows how much funding the state gave to individuals and tracks individuals' age, gender, and ethnicity.
End of explanation
# Example dataframe query showing there is no discrimination by gender.
df.groupby(['Gender'], sort=True).agg({'Expenditures': [np.mean]})
Explanation: The dataset above shows how much funding (i.e., 'Expenditures' column) the state gave to individuals (for training purposes), including also individuals' age, gender, and ethnicity.
End of explanation
w = "White not Hispanic"
h = "Hispanic"
is_hispanic = df['Ethnicity'] == h
is_white = df['Ethnicity'] == w
df1 = df[is_hispanic | is_white] # filters by two ethnicity groups
dfh = df[is_hispanic]
dfw = df[is_white]
df1.head(5)
df1.groupby(['Ethnicity', 'Age']).agg({'Expenditures': [np.mean]})
# Write your query below and set `df_answer' to the dataframe
df_answer = None
print(df_answer)
Explanation: Discrimination by Ethnicity
Analyze the data set and determine whether or not discrimination among Hispanic and White but not Hispanic groups exists by examining the Expenditures. Feel free to use the dataframes defined in the cell below.
End of explanation
# Write answer below by setting discrimination to True or False
discrimination = None
Explanation: After analyzing this dataset, was there discrimination in the expenditures across different ethnicities?
End of explanation
df_answer_clue = None
print(df_answer_clue)
discrimination_clue = None
Explanation: Clue
Pandas supports grouping by multiple columns: https://stackoverflow.com/questions/17679089/pandas-dataframe-groupby-two-columns-and-get-counts
If this clue changes your answer, try again below. Otherwise, if you are confident in your answer above, leave the following untouched.
End of explanation |
14,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 11 - Regression and Classification
In previous weeks we have looked at the steps needed in preparing different types of data for use by machine learning algorithms.
Step1: All the different models in scikit-learn follow a consistent structure.
The class is passed any parameters needed at initialization. In this case none are needed.
The fit method takes the features and the target as the parameters X and y.
The predict method takes an array of features and returns the predicted values
These are the basic components with additional methods added when needed. For example, classifiers also have
A predict_proba method that gives the probability that a sample belongs to each of the classes.
A predict_log_proba method that gives the log of the probability that a sample belongs to each of the classes.
Evaluating models
Before we consider whether we have a good model, or which model to choose, we must first decide on how we will evaluate our models.
Metrics
As part of our evaluation having a single number with which to compare models can be very useful. Choosing a metric that is as close a representation of our goal as possible enables many models to be automatically compared. This can be important when choosing model parameters or comparing different types of algorithm.
Even if we have a metric we feel is reasonable it can be worthwhile considering in detail the predictions made by any model. Some questions to ask
Step2: Although this single number might seem unimpressive, metrics are a key component for model evaluation. As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.
Step3: Training, validation, and test datasets
When evaluating different models the approach taken above is not going to work. Particularly for models with high variance, that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.
Step4: Both these models appear to give perfect solutions but all they do is map our test samples back to the training samples and return the associated value.
To understand how our model truly performs we need to evaluate the performance on previously unseen samples. The general approach is to divide a dataset into training, validation and test datasets. Each model is trained on the training dataset. Multiple models can then be compared by evaluating the model against the validation dataset. There is still the potential of choosing a model that performs well on the validation dataset by chance so a final check is made against a test dataset.
This unfortunately means that part of our, often expensively gathered, data can't be used to train our model. Although it is important to leave out a test dataset an alternative approach can be used for the validation dataset. Rather than just building one model we can build multiple models, each time leaving out a different validation dataset. Our validation score is then the average across each of the models. This is known as cross-validation.
Scikit-learn provides classes to support cross-validation but a simple solution can also be implemented directly. Below we will separate out a test dataset to evaluate the nearest neighbor model.
Step5: Model types
Scikit-learn includes a variety of different models. The most commonly used algorithms probably include the following
Step6: There is an expanded example in the documentation.
There are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps these general classes will be used much more often.
Step7: Exercises
Load the handwritten digits dataset and choose an appropriate metric
Divide the data into a training and test dataset
Build a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance
Choose another classification algorithm and apply it to the digits dataset.
Use grid search to find the optimal parameters for the chosen algorithm.
Comparing the true values with the predictions from the best model identify the numbers that are most commonly confused.
Step8: Choose a metric | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
X = diabetes.data
y = diabetes.target
print(X.shape, y.shape)
from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
Explanation: Week 11 - Regression and Classification
In previous weeks we have looked at the steps needed in preparing different types of data for use by machine learning algorithms.
End of explanation
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = linear_model.LinearRegression()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
from sklearn import metrics
metrics.mean_squared_error(y, clf.predict(X))
Explanation: All the different models in scikit-learn follow a consistent structure.
The class is passed any parameters needed at initialization. In this case none are needed.
The fit method takes the features and the target as the parameters X and y.
The predict method takes an array of features and returns the predicted values
These are the basic components with additional methods added when needed. For example, classifiers also have
A predict_proba method that gives the probability that a sample belongs to each of the classes.
A predict_log_proba method that gives the log of the probability that a sample belongs to each of the classes.
Evaluating models
Before we consider whether we have a good model, or which model to choose, we must first decide on how we will evaluate our models.
Metrics
As part of our evaluation having a single number with which to compare models can be very useful. Choosing a metric that is as close a representation of our goal as possible enables many models to be automatically compared. This can be important when choosing model parameters or comparing different types of algorithm.
Even if we have a metric we feel is reasonable it can be worthwhile considering in detail the predictions made by any model. Some questions to ask:
Is the model sufficiently sensitive for our use case?
Is the model sufficiently specific for our use case?
Is there any systemic bias?
Does the model perform equally well over the distribution of features?
How does the model perform outside the range of the training data?
Is the model overly dependent on one or two samples in the training dataset?
The metric we decide to use will depend on the type of problem we have (regression or classification) and what aspects of the prediction are most important to us. For example, a decision we might have to make is between:
A model with intermediate errors for all samples
A model with low errors for the majority of samples but with a small number of samples that have large errors.
For these two situations in a regression task we might choose mean_squared_error and mean_absolute_error respectively, since squaring the errors penalizes a few large errors far more heavily than many moderate ones.
There are lists for regression metrics and classification metrics.
We can apply the mean_squared_error metric to the linear regression model on the diabetes dataset:
End of explanation
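As a quick illustration of the trade-off described above, both metrics can be compared on the same toy predictions; squaring the errors makes mean_squared_error penalize one large error far more than many moderate ones:
from sklearn import metrics

y_true = [10, 10, 10, 10]
y_intermediate = [12, 8, 12, 8]    # moderate error on every sample
y_mostly_good = [10, 10, 10, 2]    # perfect except for one large error

for name, y_pred in [('intermediate errors', y_intermediate),
                     ('one large error', y_mostly_good)]:
    print(name,
          'MSE:', metrics.mean_squared_error(y_true, y_pred),
          'MAE:', metrics.mean_absolute_error(y_true, y_pred))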
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = linear_model.LinearRegression()
clf.fit(X, y)
error = metrics.mean_squared_error(y, clf.predict(X))
rounds = 1000
np.random.seed(0)
errors = []
for i in range(rounds):
y_shuffle = y.copy()
np.random.shuffle(y_shuffle)
clf_shuffle = linear_model.LinearRegression()
clf_shuffle.fit(X, y_shuffle)
errors.append(metrics.mean_squared_error(y_shuffle, clf_shuffle.predict(X)))
better_models_by_chance = len([i for i in errors if i <= error])
if better_models_by_chance > 0:
print('Probability of observing a mean_squared_error of {0} by chance is {1}'.format(error,
better_models_by_chance / rounds))
else:
print('Probability of observing a mean_squared_error of {0} by chance is <{1}'.format(error,
1 / rounds))
Explanation: Although this single number might seem unimpressive, metrics are a key component for model evaluation. As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.
End of explanation
from sklearn import tree
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = tree.DecisionTreeRegressor()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
metrics.mean_squared_error(y, clf.predict(X))
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = neighbors.KNeighborsRegressor(n_neighbors=1)
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
metrics.mean_squared_error(y, clf.predict(X))
Explanation: Training, validation, and test datasets
When evaluating different models the approach taken above is not going to work. In particular, for models with high variance that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.
End of explanation
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
clf = neighbors.KNeighborsRegressor(1)
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
metrics.mean_squared_error(y_test, clf.predict(X_test))
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
clf = linear_model.LinearRegression()
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
metrics.mean_squared_error(y_test, clf.predict(X_test))
Explanation: Both these models appear to give perfect solutions but all they do is map our test samples back to the training samples and return the associated value.
To understand how our model truly performs we need to evaluate the performance on previously unseen samples. The general approach is to divide a dataset into training, validation and test datasets. Each model is trained on the training dataset. Multiple models can then be compared by evaluating the model against the validation dataset. There is still the potential of choosing a model that performs well on the validation dataset by chance so a final check is made against a test dataset.
This unfortunately means that part of our, often expensively gathered, data can't be used to train our model. Although it is important to leave out a test dataset an alternative approach can be used for the validation dataset. Rather than just building one model we can build multiple models, each time leaving out a different validation dataset. Our validation score is then the average across each of the models. This is known as cross-validation.
Scikit-learn provides classes to support cross-validation but a simple solution can also be implemented directly. Below we will separate out a test dataset to evaluate the nearest neighbor model.
End of explanation
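# A brief sketch of the built-in cross-validation support mentioned above,
# assuming the X_train/y_train split and the KNeighborsRegressor from the earlier cells:
from sklearn import cross_validation, neighbors
cv_scores = cross_validation.cross_val_score(
    neighbors.KNeighborsRegressor(n_neighbors=5), X_train, y_train, cv=5)
print(cv_scores, cv_scores.mean())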
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
X = diabetes.data
y = diabetes.target
print(X.shape, y.shape)
from sklearn import linear_model
clf = linear_model.LassoCV(cv=20)
clf.fit(X, y)
print('Alpha chosen was ', clf.alpha_)
plt.plot(y, clf.predict(X), 'k.')
Explanation: Model types
Scikit-learn includes a variety of different models. The most commonly used algorithms probably include the following:
Regression
Support Vector Machines
Nearest neighbors
Decision trees
Ensembles & boosting
Regression
We have already seen several examples of regression. The basic form is:
$$f(X) = \beta_{0} + \sum_{j=1}^p X_j\beta_j$$
Each feature is multiplied by a coefficient and the sum is returned. For classification, this value is then transformed to limit it to the range 0 to 1.
Support Vector Machines
Support vector machines attempt to project samples into a higher dimensional space such that they can be divided by a hyperplane. A good explanation can be found in this article.
Nearest neighbors
Nearest neighbor methods identify a number of samples from the training set that are close to the new sample and then return the average or most common value depending on the task.
Decision trees
Decision trees attempt to predict the value of a new sample by learning simple rules from the training samples.
Ensembles & boosting
Ensembles are combinations of other models. Combining different models together can improve performance by boosting generalizability. An average or most common value from the models is returned.
Boosting builds one model and then attempts to reduce the errors with the next model. At each stage the bias in the model is reduced. In this way many weak predictors can be combined into one much more powerful predictor.
I often begin with an ensemble or boosting approach as they typically give very good performance without needing to be carefully optimized. Many of the other algorithms are sensitive to their parameters.
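As a rough sketch of that kind of starting point (default parameters on the diabetes data, purely illustrative, and evaluated here on the training data only):
from sklearn import datasets, ensemble, metrics
diabetes = datasets.load_diabetes()
clf = ensemble.GradientBoostingRegressor()
clf.fit(diabetes.data, diabetes.target)
print(metrics.mean_squared_error(diabetes.target, clf.predict(diabetes.data)))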
Parameter selection
Many of the models require several different parameters to be specified. Their performance is typically heavily influenced by these parameters and choosing the best values is vital in developing the best model.
Some models have alternative implementations that handle parameter selection in an efficient way.
End of explanation
from sklearn import grid_search
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
knn = neighbors.KNeighborsRegressor()
parameters = {'n_neighbors':[1,2,3,4,5,6,7,8,9,10]}
clf = grid_search.GridSearchCV(knn, parameters)
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
print(metrics.mean_squared_error(y_test, clf.predict(X_test)))
clf.get_params()
Explanation: There is an expanded example in the documentation.
There are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps these general classes will be used much more often.
End of explanation
from sklearn import datasets, metrics, ensemble, cross_validation
import numpy as np
np.random.seed(0)
digits = datasets.load_digits()
X = digits.data
y = digits.target
print(X.shape, y.shape)
Explanation: Exercises
Load the handwritten digits dataset and choose an appropriate metric
Divide the data into a training and test dataset
Build a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance
Choose another classification algorithm and apply it to the digits dataset.
Use grid search to find the optimal parameters for the chosen algorithm.
Comparing the true values with the predictions from the best model, identify the numbers that are most commonly confused.
End of explanation
split = np.random.random(y.shape) > 0.3
X_train = X[split]
X_test = X[np.logical_not(split)]
y_train = y[split]
y_test = y[np.logical_not(split)]
scores = []
cv = 10
for _ in range(cv):
split = np.random.random(y_train.shape) > 1.0 / cv  # float division so the threshold is a fraction under Python 2 as well
X_train_train = X_train[split]
y_train_train = y_train[split]
X_val = X_train[np.logical_not(split)]
y_val = y_train[np.logical_not(split)]
rfc = ensemble.RandomForestClassifier(n_estimators=100)
rfc.fit(X_train_train, y_train_train)
scores.append(metrics.accuracy_score(y_val, rfc.predict(X_val)))
print(scores, np.array(scores).mean())
# use cv method from sklearn
rfc = ensemble.RandomForestClassifier(n_estimators=100)
scores = cross_validation.cross_val_score(rfc,
digits.data,
digits.target,
cv=10)
print(scores)
# support vector machines
from sklearn import svm
from sklearn import grid_search
clf = svm.SVC()
parameters = {'C': [1, 0.1, 0.001, 0.0001, 0.00001],
'kernel':['linear', 'poly', 'rbf', 'sigmoid']}
clf = grid_search.GridSearchCV(clf, parameters)
clf.fit(X_train, y_train)
metrics.accuracy_score(y_test, clf.predict(X_test))
metrics.confusion_matrix(y_test, clf.predict(X_test))
Explanation: Choose a metric: ROC curve
Then build a random forest classifier on the training dataset
End of explanation |
14,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word prediction based on Quadgram
This program reads the corpus line by line, so it is slower than the version that reads the corpus
in one go. It reads the corpus one line at a time and loads it into memory.
Import corpus
Step1: Do preprocessing
Step2: Tokenize and load the corpus data
Step3: Find the probability
Step4: Driver function for doing the prediction
Step5: main function | Python Code:
#import the modules necessary
from nltk.util import ngrams
from collections import defaultdict
import nltk
import string
import time
start_time = time.time()
Explanation: Word prediction based on Quadgram
This program reads the corpus line by line, so it is slower than the version that reads the corpus
in one go. It reads the corpus one line at a time and loads it into memory.
Import corpus
End of explanation
#returns: string
#arg: string
#remove punctuations and make the string lowercase
def removePunctuations(sen):
#split the string into word tokens
temp_l = sen.split()
i = 0
#changes the word to lowercase and removes punctuations from it
for word in temp_l :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
temp_l[i] = word.lower()
i=i+1
#the tokens are re-joined here because a word like "here---so" becomes "here   so" after
#punctuation removal, and the caller's split() will then separate it into "here" and "so"
content = " ".join(temp_l)
return content
Explanation: Do preprocessing:
Remove the punctuations and lowercase the tokens
End of explanation
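# Quick sanity check of the helper above (illustrative input only). Extra whitespace
# may remain in the returned string, so callers split() the result again.
print(removePunctuations("Here---so, WE go!").split())
# ['here', 'so', 'we', 'go']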
#returns : void
#arg: string,dict,dict,dict
#loads the corpus for the dataset and makes the frequency count of quadgram and trigram strings
def loadCorpus(file_path,tri_dict,quad_dict,vocab_dict):
w1 = '' #for storing the 3rd last word to be used for next token set
w2 = '' #for storing the 2nd last word to be used for next token set
w3 = '' #for storing the last word to be used for next token set
token = []
#open the corpus file and read it line by line
with open(file_path,'r') as file:
for line in file:
#split the line into tokens
token = line.split()
i = 0
#for each word in the token list ,remove pucntuations and change to lowercase
for word in token :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
token[i] = word.lower()
i=i+1
#make the token list into a string
content = " ".join(token)
token = content.split()
#word_len = word_len + len(token)
if not token:
continue
#since we are reading line by line, some combinations of words might get missed for pairing
#for trigram
#first add the previous words
if w2!= '':
token.insert(0,w2)
if w3!= '':
token.insert(1,w3)
#tokens for trigrams
temp1 = list(ngrams(token,3))
#insert the 3rd last word from previous line for quadgram pairing
if w1!= '':
token.insert(0,w1)
#add new words to the vocabulary dict and count their frequencies
for word in token:
if word not in vocab_dict:
vocab_dict[word] = 1
else:
vocab_dict[word]+= 1
#tokens for quadgrams
temp2 = list(ngrams(token,4))
#count the frequency of the trigram sentences
for t in temp1:
sen = ' '.join(t)
tri_dict[sen] += 1
#count the frequency of the quadgram sentences
for t in temp2:
sen = ' '.join(t)
quad_dict[sen] += 1
#then take out the last 3 words
n = len(token)
#store the last few words for the next sentence pairing
w1 = token[n -3]
w2 = token[n -2]
w3 = token[n -1]
Explanation: Tokenize and load the corpus data
End of explanation
#returns : float
#arg : string sentence,string word,dict,dict
def findprobability(s,w,tri_dict,quad_dict):
c1 = 0 # for count of sentence 's' with word 'w'
c2 = 0 # for count of sentence 's'
s1 = s + ' ' + w
if s1 in quad_dict:
c1 = quad_dict[s1]
if s in tri_dict:
c2 = tri_dict[s]
if c2 == 0:
return 0
return c1/c2
Explanation: Find the probability
End of explanation
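# A tiny illustration of findprobability with hand-built toy counts
# (not taken from the corpus); 1 quadgram occurrence out of 2 trigram occurrences:
toy_tri = defaultdict(int, {'the cat sat': 2})
toy_quad = defaultdict(int, {'the cat sat on': 1})
print(findprobability('the cat sat', 'on', toy_tri, toy_quad))  # 0.5 with Python 3 division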
def doPrediction(sen,tri_dict,quad_dict,vocab_dict):
sen = removePunctuations(sen)
max_prob = 0
#'apple' is just a default prediction used when no probable word is found
#the quadgram counts are then used to guess the word that should follow the given trigram
right_word = 'apple'
for word in vocab_dict:
prob = findprobability(sen,word,tri_dict,quad_dict)
if prob > max_prob:
max_prob = prob
right_word = word
print('Word Prediction is :',right_word)
Explanation: Driver function for doing the prediction
End of explanation
def main():
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
quad_dict = defaultdict(int) #for keeping count of sentences of three words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
#load the corpus for the dataset
loadCorpus('corpusfile.txt',tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time: %s seconds ---" % (time.time() - start_time))
cond = False
#take input
while(cond == False):
sen = input('Enter the string\n')
sen = removePunctuations(sen)
temp = sen.split()
if len(temp) < 3:
print("Please enter atleast 3 words !")
else:
cond = True
temp = temp[-3:]
sen = " ".join(temp)
start_time1 = time.time()
doPrediction(sen,tri_dict,quad_dict,vocab_dict)
print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time1))
if __name__ == '__main__':
main()
Explanation: main function
End of explanation |
14,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anomaly detection
Anomaly detection is a machine learning task that consists in spotting so-called outliers.
“An outlier is an observation in a data set which appears to be inconsistent with the remainder of that set of data.”
Johnson 1992
“An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.”
Outlier/Anomaly
Hawkins 1980
Types of anomaly detection setups
Supervised AD
Labels available for both normal data and anomalies
Similar to rare class mining / imbalanced classification
Semi-supervised AD (Novelty Detection)
Only normal data available to train
The algorithm learns on normal data only
Unsupervised AD (Outlier Detection)
no labels, training set = normal + abnormal data
Assumption: anomalies are very rare
Step1: Let's first get familiar with different unsupervised anomaly detection approaches and algorithms. In order to visualise the output of the different algorithms we consider a toy data set consisting in a two-dimensional Gaussian mixture.
Generating the data set
Step2: Anomaly detection with density estimation
Step3: now with One-Class SVM
The problem with density-based estimators is that they tend to become inefficient as the dimensionality of the data increases. This is the so-called curse of dimensionality, which affects density estimation algorithms in particular. The one-class SVM algorithm can be used in such cases.
Step4: Support vectors - Outliers
The so-called support vectors of the one-class SVM form the outliers
Step5: Only the support vectors are involved in the decision function of the One-Class SVM.
Plot the level sets of the One-Class SVM decision function as we did for the true density.
Emphasize the Support vectors.
Step6: <div class="alert alert-success">
<b>EXERCISE</b>
Step7: Isolation Forest
Isolation Forest is an anomaly detection algorithm based on trees. The algorithm builds a number of random trees, and the rationale is that if a sample is isolated it should end up alone in a leaf after very few random splits. Isolation Forest builds a score of abnormality based on the depth of the tree at which samples end up.
Step8: <div class="alert alert-success">
<b>EXERCISE</b>
Step9: Illustration on Digits data set
We will now apply the IsolationForest algorithm to spot digits written in an unconventional way.
Step10: The digits data set consists in images (8 x 8) of digits.
Step11: To use the images as a training set we need to flatten the images.
Step12: Let's focus on digit 5.
Step13: Let's use IsolationForest to find the top 5% most abnormal images.
Let's plot them !
Step14: Compute the level of "abnormality" with iforest.decision_function. The lower, the more abnormal.
Step15: Let's plot the strongest inliers
Step16: Let's plot the strongest outliers
Step17: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
Explanation: Anomaly detection
Anomaly detection is a machine learning task that consists in spotting so-called outliers.
“An outlier is an observation in a data set which appears to be inconsistent with the remainder of that set of data.”
Johnson 1992
“An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.”
Outlier/Anomaly
Hawkins 1980
Types of anomaly detection setups
Supervised AD
Labels available for both normal data and anomalies
Similar to rare class mining / imbalanced classification
Semi-supervised AD (Novelty Detection)
Only normal data available to train
The algorithm learns on normal data only
Unsupervised AD (Outlier Detection)
no labels, training set = normal + abnormal data
Assumption: anomalies are very rare
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_features=2, centers=3, n_samples=500,
random_state=42)
X.shape
plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.show()
Explanation: Let's first get familiar with different unsupervised anomaly detection approaches and algorithms. In order to visualise the output of the different algorithms we consider a toy data set consisting in a two-dimensional Gaussian mixture.
Generating the data set
End of explanation
from sklearn.neighbors.kde import KernelDensity
# Estimate density with a Gaussian kernel density estimator
kde = KernelDensity(kernel='gaussian')
kde = kde.fit(X)
kde
kde_X = kde.score_samples(X)
print(kde_X.shape) # contains the log-likelihood of the data; the smaller it is, the rarer the sample
from scipy.stats.mstats import mquantiles
alpha_set = 0.95
tau_kde = mquantiles(kde_X, 1. - alpha_set)
n_samples, n_features = X.shape
X_range = np.zeros((n_features, 2))
X_range[:, 0] = np.min(X, axis=0) - 1.
X_range[:, 1] = np.max(X, axis=0) + 1.
h = 0.1 # step size of the mesh
x_min, x_max = X_range[0]
y_min, y_max = X_range[1]
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
grid = np.c_[xx.ravel(), yy.ravel()]
Z_kde = kde.score_samples(grid)
Z_kde = Z_kde.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_kde, levels=tau_kde, colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={tau_kde[0]: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.legend()
plt.show()
Explanation: Anomaly detection with density estimation
End of explanation
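# A small follow-up using kde_X and tau_kde from above: samples whose log-likelihood
# falls below the quantile threshold are the ones flagged as outliers.
outliers_kde = X[kde_X < tau_kde[0]]
print(len(outliers_kde))  # roughly (1 - alpha_set) * n_samples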
from sklearn.svm import OneClassSVM
nu = 0.05 # theory says it should be an upper bound of the fraction of outliers
ocsvm = OneClassSVM(kernel='rbf', gamma=0.05, nu=nu)
ocsvm.fit(X)
X_outliers = X[ocsvm.predict(X) == -1]
Z_ocsvm = ocsvm.decision_function(grid)
Z_ocsvm = Z_ocsvm.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_ocsvm, levels=[0], colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={0: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(X_outliers[:, 0], X_outliers[:, 1], color='red')
plt.legend()
plt.show()
Explanation: now with One-Class SVM
The problem with density-based estimators is that they tend to become inefficient as the dimensionality of the data increases. This is the so-called curse of dimensionality, which affects density estimation algorithms in particular. The one-class SVM algorithm can be used in such cases.
End of explanation
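# Sketch: the fitted model can also score previously unseen points
# (the query points below are hypothetical, chosen only for illustration).
X_new = np.array([[0., 0.], [10., 10.]])
print(ocsvm.predict(X_new))            # +1 = inlier, -1 = outlier
print(ocsvm.decision_function(X_new))  # signed distance to the decision boundary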
X_SV = X[ocsvm.support_]
n_SV = len(X_SV)
n_outliers = len(X_outliers)
print('{0:.2f} <= {1:.2f} <= {2:.2f}?'.format(1./n_samples*n_outliers, nu, 1./n_samples*n_SV))
Explanation: Support vectors - Outliers
The so-called support vectors of the one-class SVM form the outliers
End of explanation
plt.figure()
plt.contourf(xx, yy, Z_ocsvm, 10, cmap=plt.cm.Blues_r)
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.scatter(X_SV[:, 0], X_SV[:, 1], color='orange')
plt.show()
Explanation: Only the support vectors are involved in the decision function of the One-Class SVM.
Plot the level sets of the One-Class SVM decision function as we did for the true density.
Emphasize the Support vectors.
End of explanation
# %load solutions/22_A-anomaly_ocsvm_gamma.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
**Change** the `gamma` parameter and see it's influence on the smoothness of the decision function.
</li>
</ul>
</div>
End of explanation
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(n_estimators=300, contamination=0.10)
iforest = iforest.fit(X)
Z_iforest = iforest.decision_function(grid)
Z_iforest = Z_iforest.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_iforest,
levels=[iforest.threshold_],
colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15,
fmt={iforest.threshold_: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.legend()
plt.show()
Explanation: Isolation Forest
Isolation Forest is an anomaly detection algorithm based on trees. The algorithm builds a number of random trees, and the rationale is that if a sample is isolated it should end up alone in a leaf after very few random splits. Isolation Forest builds a score of abnormality based on the depth of the tree at which samples end up.
End of explanation
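# Sketch: the contamination parameter sets the expected fraction of outliers,
# so roughly that fraction of the training samples is labelled -1 by predict().
flagged = iforest.predict(X) == -1
print(flagged.mean())  # close to the contamination value (0.10 here)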
# %load solutions/22_B-anomaly_iforest_n_trees.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Illustrate graphically the influence of the number of trees on the smoothness of the decision function?
</li>
</ul>
</div>
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
Explanation: Illustration on Digits data set
We will now apply the IsolationForest algorithm to spot digits written in an unconventional way.
End of explanation
images = digits.images
labels = digits.target
images.shape
i = 102
plt.figure(figsize=(2, 2))
plt.title('{0}'.format(labels[i]))
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Explanation: The digits data set consists in images (8 x 8) of digits.
End of explanation
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
data.shape
X = data
y = digits.target
X.shape
Explanation: To use the images as a training set we need to flatten the images.
End of explanation
X_5 = X[y == 5]
X_5.shape
fig, axes = plt.subplots(1, 5, figsize=(10, 4))
for ax, x in zip(axes, X_5[:5]):
img = x.reshape(8, 8)
ax.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Let's focus on digit 5.
End of explanation
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(contamination=0.05)
iforest = iforest.fit(X_5)
Explanation: Let's use IsolationForest to find the top 5% most abnormal images.
Let's plot them !
End of explanation
iforest_X = iforest.decision_function(X_5)
plt.hist(iforest_X);
Explanation: Compute the level of "abnormality" with iforest.decision_function. The lower, the more abnormal.
End of explanation
X_strong_inliers = X_5[np.argsort(iforest_X)[-10:]]
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in zip(range(len(X_strong_inliers)), axes.ravel()):
ax.imshow(X_strong_inliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Let's plot the strongest inliers
End of explanation
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
X_outliers = X_5[iforest.predict(X_5) == -1]
for i, ax in zip(range(len(X_outliers)), axes.ravel()):
ax.imshow(X_outliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Let's plot the strongest outliers
End of explanation
# %load solutions/22_C-anomaly_digits.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Rerun the same analysis with all the other digits
</li>
</ul>
</div>
End of explanation |
14,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Library Bioinformatics Service
Jupyter Notebook Tutorial
This tutorial was built in a Jupyter notebook!
Various formats of this tutorial can be accessed at https://github.com/oxpeter/library_bioinformatics_service/tree/master/Jupyter
Step1: This is a text cell
It is formatted using markdown syntax.
(to edit a markdown cell, just double click on the text. Don't forget to 'execute' the cell afterwards to implement the formatting)
Markdown cells within a notebook have a number of advantages
Step2: Running code in different languages
The code in this notebook is executed by the designated "kernel" loaded at creation. In this case, the IPython kernel was loaded. All code entered will therefore be interpreted by this kernel and run as python code. However, when the kernel is IPython, you have access to "cell magic" (using the % syntax), where it is possible to have cells run by a different interpreter.
Using a single % will run the magic on that line only.
Starting a cell with %% will run the magic on the entire cell.
Step3: Sharing variables between languages
It is even possible to capture the variables from each cell/interpreter/language, and pass them into others
Step4: Other useful IPython cell magic
IPython magics don't only let you use other language interpreters. | Python Code:
# this is a code cell with no output
a=120
# this is a code cell with output
# all output to stdout / stderr will be displayed below the cell.
print(a)
Explanation: Library Bioinformatics Service
Jupyter Notebook Tutorial
This tutorial was built in a Jupyter notebook!
Various formats of this tutorial can be accessed at https://github.com/oxpeter/library_bioinformatics_service/tree/master/Jupyter
Created by Peter Oxley for the library bioinformatics service, May 2017
Installation of Jupyter notebooks is recommended via Anaconda
End of explanation
import numpy as np
import pandas as pd
# notice that this cell doesn't execute when you press enter.
# Only by pressing shift-enter or alt-enter, or clicking on the 'run' icon.
# this cell does not generate any output to stdout or stderr,
# so nothing is shown after executing the cell.
s1 = np.random.normal(0,1,1000) # generate a random sample with normal distribution (mean 0, sd 1, 1000 samples)
s2 = np.random.normal(2,4,1000)
df = pd.DataFrame({"s1":s1, "s2":s2})
# this cell outputs to stdout,
# which is printed immediately following the cell:
df.info()
# table output is formatted to make it easy to view:
df.T.head()
Explanation: This is a text cell
It is formatted using markdown syntax.
(to edit a markdown cell, just double click on the text. Don't forget to 'execute' the cell afterwards to implement the formatting)
Markdown cells within a notebook have a number of advantages:
1. Easy to type
2. Easy to read
3. Great for discussion of code:
* Choice of analysis
* Choice of parameters
* Implications of results
* Introduction/methods/conclusions/references...
Cells are switched between code and markdown by using the menu
Cell > Cell Type > Markdown
Or by using the dropdown box in the icon bar.
You can even create links!
End of explanation
%%bash
# this cell is run in a bash shell created specially for the following code.
echo "Hello, world"
# it is also possible to invoke bash commands using the ```!``` syntax:
!ls -al | head -n 8 | tail -n 2
%%html
<body>
<h2>This is an html interpreted header</h2>
<a href="library.med.cornell.edu">This is an html link</a>
</body>
Explanation: Running code in different languages
The code in this notebook is executed by the designated "kernel" loaded at creation. In this case, the IPython kernel was loaded. All code entered will therefore be interpreted by this kernel and run as python code. However, when the kernel is IPython, you have access to "cell magic" (using the % syntax), where it is possible to have cells run by a different interpreter.
Using a single % will run the magic on that line only.
Starting a cell with %% will run the magic on the entire cell.
End of explanation
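# A quick way to discover what else is available: %lsmagic lists every line (%)
# and cell (%%) magic registered with the current kernel.
%lsmagic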
# capturing the output of the bash ls command:
directory_contents = !ls -la
directory_contents
%%bash -s "$a"
# The above line puts the variable a into the bash shell as a positional parameter.
# Be aware of any characters (eg. quotation marks) in the python variable -
# these will need to be escaped before being passed to the bash cell.
echo $1
# an alternative to send variables into bash:
!echo {a * 2}
# R requires a few extra steps to access
# rpy2 provides access to R from within Python
# (you can read more here: http://rpy2.readthedocs.io)
# after installing rpy2 - we load the extension into the kernel:
%load_ext rpy2.ipython
# now we can access the installed version of R
iris_dataset = %R iris
iris_dataset.describe()
%%R -i df
# the above line sets R as the interpreter for this cell,
# and imports the variable df (it will be referenced in this cell using the same name)
# Now we can manipulate and graph the dataframe using R functions:
require(ggplot2)
ggplot(data=df) + geom_point(aes(x=s1, y=s2))
Explanation: Sharing variables between languages
It is even possible to capture the variables from each cell/interpreter/language, and pass them into others:
End of explanation
# change the current working directory
%cd jupyterhub/
# list the variables currently available to the kernel
%who
# list the variables and their string representation
%whos
%%time
for i in range(10):
!sleep 1
%%timeit
np.random.normal(0,1,1000).sum()
# to capture plot output and display it inline:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
df['s2'].hist();
plt.show()
# using the question mark will bring up any help documentation
?pd.DataFrame
Explanation: Other useful IPython cell magic
IPython magics don't only let you use other language interpreters.
End of explanation |
14,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see
minimum_norm_estimates.
Step1: Source estimation methods such as MNE require a noise estimate from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see minimum_norm_estimates.
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the
same as the end of the recording, see :func:mne.compute_raw_covariance).
Step3: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with
Step4: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
Step5: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
Step6: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization_math), especially if only few samples are
available. Unfortunately it is not easy to tell the effective number of
samples, hence, to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization
Step7: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the :term:global field power (GFP) <GFP> is 1.
Step8: This plot displays both the whitened evoked signals for each channel and
the whitened
Step9: This will plot the whitened evoked for the optimal estimator and display the | Python Code:
import os.path as op
import mne
from mne.datasets import sample
Explanation: Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see
minimum_norm_estimates.
End of explanation
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053'] # bads + 1 more
Explanation: Source estimation methods such as MNE require a noise estimate from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see minimum_norm_estimates.
End of explanation
raw_empty_room.info['bads'] = [
bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
[pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])
noise_cov = mne.compute_raw_covariance(
raw_empty_room, tmin=0, tmax=None)
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the
same as the end of the recording, see :func:mne.compute_raw_covariance).
Keep in mind that you want to match your empty room dataset to your
actual MEG data, processing-wise. Ensure that filters
are all the same and if you use ICA, apply it to your empty-room and subject
data equivalently. In this case we did not filter the data and
we don't use ICA. However, we do have bad channels and projections in
the MEG data, and, hence, we want to make sure they get stored in the
covariance object.
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed
verbose='error') # and ignore the warning about aliasing
Explanation: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with :func:mne.write_cov. Later you can read it back
using :func:mne.read_cov.
You can also use the pre-stimulus baseline to estimate the noise covariance.
First we have to construct the epochs. When computing the covariance, you
should use baseline correction when constructing the epochs. Otherwise the
covariance matrix will be inaccurate. In MNE this is done by default, but
just to be sure, we define it here manually.
End of explanation
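# Sketch of the save/load round trip mentioned above (hypothetical output path;
# MNE expects covariance file names to end in -cov.fif):
mne.write_cov('sample_audvis-cov.fif', noise_cov)
noise_cov_loaded = mne.read_cov('sample_audvis-cov.fif')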
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)
Explanation: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
End of explanation
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info, proj=True)
Explanation: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
End of explanation
noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto',
rank=None)
Explanation: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization_math), especially if only few samples are
available. Unfortunately it is not easy to tell the effective number of
samples, hence, to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization:
End of explanation
evoked = epochs.average()
evoked.plot_white(noise_cov_reg, time_unit='s')
Explanation: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the :term:global field
power (GFP) <GFP> is 1 (calculation of the GFP should take into account the
true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors):
End of explanation
noise_covs = mne.compute_covariance(
epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True,
rank=None)
evoked.plot_white(noise_covs, time_unit='s')
Explanation: This plot displays both the whitened evoked signals for each channel and
the whitened :term:GFP. The numbers in the GFP panel represent the
estimated rank of the data, which amounts to the effective degrees of freedom
by which the squared sum across sensors is divided when computing the
whitened :term:GFP. The whitened :term:GFP also helps detecting spurious
late evoked components which can be the consequence of over- or
under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https://goo.gl/ElWrxe>.
For expert use cases or debugging the alternative estimators can also be
compared (see
sphx_glr_auto_examples_visualization_plot_evoked_whitening.py) and
sphx_glr_auto_examples_inverse_plot_covariance_whitening_dspm.py):
End of explanation
evoked_meg = evoked.copy().pick('meg')
noise_cov['method'] = 'empty_room'
noise_cov_baseline['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_baseline, noise_cov], time_unit='s')
Explanation: This will plot the whitened evoked for the optimal estimator and display the
:term:GFPs <GFP> for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance, hacking the "method" option so that their types
are shown in the legend of the plot.
End of explanation |
14,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of how to use the BCES fitting code
BCES python module available on Github.
Step1: Example 1
In this example, the data contains uncertainties on both $x$ and $y$; no correlation between uncertainties. These are real astronomical data for blazars from this paper.
Step2: The regression line is $y = Ax + B$. covab is the resulting covariance matrix which can be used to draw confidence regions.
Step3: Selecting the fitting method
Select the desired BCES method by setting the variable bcesMethod. The available methods are
Step4: Confidence band
Suppose you want to include in the plot a visual estimate of the uncertainty on the fit. This is called the confidence band. For example, the $3\sigma$ confidence interval is 99.7% sure to contain the best-fit regression line. Note that this is not the same as saying it will contain 99.7% of the data points. For more information, check this out.
In order to plot the confidence band, you will need to install the nmmn package and another dependency
Step5: Now we estimate the $3\sigma$ confidence band using one of the methods in the nmmn.stats module. If you want the $1\sigma$ band instead, just change the 7th argument of confbandnl to 0.68.
Step6: Finally, the plot with the confidence band displayed in orange. Even at $3\sigma$, it is still very narrow for this dataset.
Step7: Example 2
Fake data with random uncertainties in $x$ and $y$
Prepares fake data
Step8: The integer corresponds to the desired BCES method for plotting (3-ort, 0-y|x, 1-x|y, don't use bissector)
Step9: Confidence band
Again, make sure you install the nmmn package before proceeding.
Step10: Now we estimate the $2\sigma$ confidence band using one of the methods in the nmmn.stats module.
Step11: Finally, the plot where the confidence band is displayed in orange. As you can see, it is very narrow. | Python Code:
%pylab inline
cd '/Users/nemmen/Dropbox/codes/python/bces'
import bces.bces as BCES
Explanation: Examples of how to use the BCES fitting code
BCES python module available on Github.
End of explanation
data=load('data.npz')
xdata=data['x']
ydata=data['y']
errx=data['errx']
erry=data['erry']
cov=data['cov']
Explanation: Example 1
In this example, the data contains uncertainties on both $x$ and $y$; no correlation between uncertainties. These are real astronomical data for blazars from this paper.
End of explanation
# number of bootstrapping trials
nboot=10000
%%time
# Performs the BCES fit in parallel
a,b,erra,errb,covab=BCES.bcesp(xdata,errx,ydata,erry,cov,nboot)
Explanation: The regression line is $y = Ax + B$. covab is the resulting covariance matrix which can be used to draw confidence regions.
End of explanation
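# Print the fitted slope and intercept for each BCES line; the method order
# follows the bces module (see the table below): y|x, x|y, bissector, orthogonal.
for i, method in enumerate(['y|x', 'x|y', 'bissector', 'orthogonal']):
    print('%-10s A = %.3f +/- %.3f, B = %.3f +/- %.3f'
          % (method, a[i], erra[i], b[i], errb[i]))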
bcesMethod=0
errorbar(xdata,ydata,xerr=errx,yerr=erry,fmt='o')
x=numpy.linspace(xdata.min(),xdata.max())
plot(x,a[bcesMethod]*x+b[bcesMethod],'-k',label="BCES $y|x$")
legend()
xlabel('$x$')
ylabel('$y$')
Explanation: Selecting the fitting method
Select the desired BCES method by setting the variable bcesMethod. The available methods are:
| Value | Method | Description |
|---|---| --- |
| 0 | $y|x$ | Assumes $x$ as the independent variable |
| 1 | $x|y$ | Assumes $y$ as the independent variable |
| 2 | bissector | Line that bisects the $y|x$ and $x|y$. Do not use this method, cf. Hogg, D. et al. 2010, arXiv:1008.4686. |
| 3 | orthogonal | Orthogonal least squares: line that minimizes orthogonal distances. Should be used when it is not clear which variable should be treated as the independent one |
As usual, please read the original BCES paper to understand what these different lines mean.
End of explanation
# array with best-fit parameters
fitm=numpy.array([ a[bcesMethod],b[bcesMethod] ])
# covariance matrix of parameter uncertainties
covm=numpy.array([ (erra[bcesMethod]**2,covab[bcesMethod]), (covab[bcesMethod],errb[bcesMethod]**2) ])
# convenient function for a line
def func(x): return x[1]*x[0]+x[2]
Explanation: Confidence band
Suppose you want to include in the plot a visual estimate of the uncertainty on the fit. This is called the confidence band. For example, the $3\sigma$ confidence interval is 99.7% sure to contain the best-fit regression line. Note that this is not the same as saying it will contain 99.7% of the data points. For more information, check this out.
In order to plot the confidence band, you will need to install the nmmn package and another dependency:
pip install nmmn numdifftools
After installing the package, follow the instructions below to plot the confidence band of your fit.
First we define convenient arrays that encapsulate the fit parameters and their uncertainties—including the covariance.
End of explanation
import nmmn.stats
# Gets lower and upper bounds on the confidence band
lcb,ucb,x=nmmn.stats.confbandnl(xdata,ydata,func,fitm,covm,2,0.997,x)
Explanation: Now we estimate the $3\sigma$ confidence band using one of the methods in the nmmn.stats module. If you want the $1\sigma$ band instead, just change the 7th argument of confbandnl to 0.68.
End of explanation
errorbar(xdata,ydata,xerr=errx,yerr=erry,fmt='o')
plot(x,a[bcesMethod]*x+b[bcesMethod],'-k',label="BCES $y|x$")
fill_between(x, lcb, ucb, alpha=0.3, facecolor='orange')
legend(loc='best')
xlabel('$x$')
ylabel('$y$')
title("Data, fit and confidence band")
Explanation: Finally, the plot with the confidence band displayed in orange. Even at $3\sigma$, it is still very narrow for this dataset.
End of explanation
x=np.arange(1,20)
y=3*x + 4
xer=np.sqrt((x- np.random.normal(x))**2)
yer=np.sqrt((y- np.random.normal(y))**2)
y=numpy.random.normal(y)
x=numpy.random.normal(x)
# simple linear regression
(aa,bb)=np.polyfit(x,y,deg=1)
yfit=x*aa+bb
# BCES fit
cov=zeros(len(x)) # no correlation between error measurements
nboot=10000 # number of bootstrapping trials
a,b,aerr,berr,covab=BCES.bcesp(x,xer,y,yer,cov,nboot)
Explanation: Example 2
Fake data with random uncertainties in $x$ and $y$
Prepares fake data
End of explanation
bcesMethod=3
ybces=a[bcesMethod]*x+b[bcesMethod]
errorbar(x,y,xer,yer,fmt='o',ls='None')
plot(x,yfit,label='Simple regression')
plot(x,ybces,label='BCES orthogonal')
legend()
xlabel('$x$')
ylabel('$y$')
Explanation: The integer corresponds to the desired BCES method for plotting (3-ort, 0-y|x, 1-x|y, don't use bissector)
End of explanation
# array with best-fit parameters
fitm=numpy.array([ a[bcesMethod],b[bcesMethod] ])
# covariance matrix of parameter uncertainties
covm=numpy.array([ (aerr[bcesMethod]**2,covab[bcesMethod]), (covab[bcesMethod],berr[bcesMethod]**2) ])
Explanation: Confidence band
Again, make sure you install the nmmn package before proceeding.
End of explanation
# Gets lower and upper bounds on the confidence band
lcb,ucb,xcb=nmmn.stats.confbandnl(x,y,func,fitm,covm,2,0.954,x)
Explanation: Now we estimate the $2\sigma$ confidence band using one of the methods in the nmmn.stats module.
End of explanation
errorbar(x,y,xerr=xer,yerr=yer,fmt='o')
plot(xcb,a[bcesMethod]*xcb+b[bcesMethod],'-k',label="BCES orthogonal")
fill_between(xcb, lcb, ucb, alpha=0.3, facecolor='orange')
legend(loc='best')
xlabel('$x$')
ylabel('$y$')
title("Data, fit and confidence band")
Explanation: Finally, the plot where the confidence band is displayed in orange. As you can see, it is very narrow.
End of explanation |
14,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, some code. Scroll down.
Step4: Functionality that could be implemented in SparseBinaryMatrix
Step10: This SetMemory docstring is worth reading
Step11: Experiment code
Train an array of columns to recognize these objects, then show it Object 1. It will randomly move its sensors to different feature-locations on the object. It will never put two sensors on the same feature-location at the same time.
Step12: Initialize some feature-locations and objects
Create 8 objects, each with 7 feature-locations. Each object is 1 different from each other object.
Step13: We're testing L2 in isolation, so these "A", "B", etc. patterns are L4 representations, i.e. "feature-locations".
Test
Step14: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
Step15: Test
Step16: Test
Step17: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
Step18: Can I watch? | Python Code:
import itertools
import random
from collections import deque
from copy import deepcopy
import numpy
from nupic.bindings.math import SparseBinaryMatrix, GetNTAReal
Explanation: First, some code. Scroll down.
End of explanation
def makeSparseBinaryMatrix(numRows, numCols):
Construct a SparseBinaryMatrix.
There is a C++ constructor that does this, but it's currently not available
to Python callers.
matrix = SparseBinaryMatrix(numCols)
matrix.resize(numRows, numCols)
return matrix
def rightVecSumAtNZ_sparse(sparseMatrix, sparseBinaryArray):
Like rightVecSumAtNZ, but it supports sparse binary arrays.
@param sparseBinaryArray (sequence)
A sorted list of indices.
Note: this Python implementation doesn't require the list to be sorted, but
an eventual C implementation would.
denseArray = numpy.zeros(sparseMatrix.nCols(), dtype=GetNTAReal())
denseArray[sparseBinaryArray] = 1
return sparseMatrix.rightVecSumAtNZ(denseArray)
def setOuterToOne(sparseMatrix, rows, cols):
Equivalent to:
SparseMatrix.setOuter(rows, cols,
numpy.ones((len(rows),len(cols)))
But it works with the SparseBinaryMatrix. If this functionality is added to
the SparseBinaryMatrix, it will have the added benefit of not having to
construct a big array of ones.
for rowNumber in rows:
sparseRow = sorted(set(sparseMatrix.getRowSparse(rowNumber)).union(cols))
sparseMatrix.replaceSparseRow(rowNumber, sparseRow)
Explanation: Functionality that could be implemented in SparseBinaryMatrix
End of explanation
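# A tiny smoke test of the helpers above (arbitrary small dimensions): connect cells
# 0 and 2 to inputs 1 and 3, then count each cell's connected synapses that overlap
# a sparse input. Expected result (assuming the nupic bindings behave as used above):
m = makeSparseBinaryMatrix(4, 6)
setOuterToOne(m, [0, 2], [1, 3])
print(rightVecSumAtNZ_sparse(m, [1, 3]))  # -> [ 2.  0.  2.  0.]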
class SetMemory(object):
Uses proximal synapses, distal dendrites, and inhibition to implement "set
memory" with neurons. Set Memory can recognize a set via a series of
inputs. It associates an SDR with each set, growing proximal synapses from
each cell in the SDR to each proximal input. When the SetMemory receives an
ambiguous input, it activates a union of these SDRs. As it receives other
inputs, each SDR stays active only if it has both feedforward and lateral
support. Each SDR has lateral connections to itself, so an SDR has lateral
support if it was active in the previous time step. Over time, the union is
narrowed down to a single SDR.
Requiring feedforward and lateral support is functionally similar to computing
the intersection of the feedforward support and the previous active cells.
The advantages of this approach are:
1. Better noise robustness. If a cell is randomly inactive, it's not excluded
the next time step.
2. It doesn't require any new neural phenomena. It accomplishes all this
through distal dendrites and inhibition.
3. It combines well with other parallel layers. A cell can grow one distal
dendrite segment for each layer and connect each to an object SDR, and use
the number of active dendrite segments to drive inhibition.
This doesn't model:
- Synapse permanences. When it grows a synapse, it's immediately connected.
- Subsampling. When growing synapses to active cells, it simply grows
synapses to every one.
These aren't needed for this experiment.
def __init__(self,
layerID,
feedforwardID,
lateralIDs,
layerSizes,
sdrSize,
minThresholdProximal,
minThresholdDistal):
@param layerID
The layer whose activity this SetMemory should update.
@param feedforwardID
The layer that this layer might form feedforward connections to.
@param lateralIDs (iter)
The layers that this layer might form lateral connections to.
If this layer will form internal lateral connections, this list must include
this layer's layerID.
@param layerSizes (dict)
A dictionary from layerID to number of cells. It must contain a size for
layerID, feedforwardID, and each of the lateralIDs.
@param sdrSize (int)
The number of cells in an SDR.
@param minThresholdProximal (int)
The number of active feedforward synapses required for a cell to have
"feedforward support".
@param minThresholdDistal (int)
The number of active distal synapses required for a segment to be active.
self.layerID = layerID
self.feedforwardID = feedforwardID
self.sdrSize = sdrSize
self.minThresholdProximal = minThresholdProximal
self.minThresholdDistal = minThresholdDistal
# Matrix of connected synapses. Permanences aren't modelled.
self.proximalConnections = makeSparseBinaryMatrix(layerSizes[layerID],
layerSizes[feedforwardID])
# Synapses to lateral layers. Each matrix represents one segment per cell.
# A cell won't grow more than one segment to another layer. If the cell
# appears in multiple object SDRs, it will connect its segments to a union
# of object SDRs.
self.lateralConnections = dict(
(lateralID, makeSparseBinaryMatrix(layerSizes[layerID],
layerSizes[lateralID]))
for lateralID in lateralIDs)
self.numCells = layerSizes[layerID]
self.isReset = True
def learningCompute(self, activity):
Chooses active cells using the previous active cells and the reset signal.
Grows proximal synapses to the feedforward layer's current active cells, and
grows lateral synapses to the each lateral layer's previous active cells.
Reads:
- activity[0][feedforwardID]["activeCells"]
- activity[1][lateralID]["activeCells"] for each lateralID
Writes to:
- activity[0][layerID]["activeCells"]
- The feedforward connections matrix
- The lateral connections matrices
# Select active cells
if self.isReset:
activeCells = sorted(random.sample(xrange(self.numCells), self.sdrSize))
self.isReset = False
else:
activeCells = activity[1][self.layerID]["activeCells"]
# Lateral learning
if len(activity) > 1:
for lateralID, connections in self.lateralConnections.iteritems():
setOuterToOne(connections, activeCells,
activity[1][lateralID]["activeCells"])
# Proximal learning
setOuterToOne(self.proximalConnections, activeCells,
activity[0][self.feedforwardID]["activeCells"])
# Write the activity
activity[0][self.layerID]["activeCells"] = activeCells
def inferenceCompute(self, activity):
Chooses active cells using feedforward and lateral input.
Reads:
- activity[0][feedforwardID]["activeCells"]
- activity[1][lateralID]["activeCells"] for each lateralID
Writes to:
- activity[0][layerID]["activeCells"]
# Calculate feedforward support
overlaps = rightVecSumAtNZ_sparse(self.proximalConnections,
activity[0][self.feedforwardID]["activeCells"])
feedforwardSupportedCells = set(
numpy.where(overlaps >= self.minThresholdProximal)[0])
# Calculate lateral support
numActiveSegmentsByCell = numpy.zeros(self.numCells)
if self.isReset:
# Don't activate any segments
self.isReset = False
elif len(activity) >= 2:
for lateralID, connections in self.lateralConnections.iteritems():
overlaps = rightVecSumAtNZ_sparse(connections,
activity[1][lateralID]["activeCells"])
numActiveSegmentsByCell[overlaps >= self.minThresholdDistal] += 1
# Inference
activeCells = []
# First, activate cells that have feedforward support
orderedCandidates = sorted((cell for cell in feedforwardSupportedCells),
key=lambda x: numActiveSegmentsByCell[x],
reverse=True)
for _, cells in itertools.groupby(orderedCandidates,
lambda x: numActiveSegmentsByCell[x]):
activeCells.extend(cells)
if len(activeCells) >= self.sdrSize:
break
# If necessary, activate cells that were previously active and have lateral
# support
if len(activeCells) < self.sdrSize and len(activity) >= 2:
prevActiveCells = activity[1][self.layerID]["activeCells"]
orderedCandidates = sorted((cell for cell in prevActiveCells
if cell not in feedforwardSupportedCells
and numActiveSegmentsByCell[cell] > 0),
key=lambda x: numActiveSegmentsByCell[x],
reverse=True)
for _, cells in itertools.groupby(orderedCandidates,
lambda x: numActiveSegmentsByCell[x]):
activeCells.extend(cells)
if len(activeCells) >= self.sdrSize:
break
# Write the activity
activity[0][self.layerID]["activeCells"] = sorted(activeCells)
def reset(self):
Signal that we're now going to observe a different set.
With learning, this signals that we're going to observe a never-before-seen
set.
With inference, this signals to start inferring a new object, ignoring
recent inputs.
self.isReset = True
Explanation: This SetMemory docstring is worth reading
End of explanation
LAYER_4_SIZE = 2048 * 8
def createFeatureLocationPool(size=10):
duplicateFound = False
for _ in xrange(5):
candidateFeatureLocations = [frozenset(random.sample(xrange(LAYER_4_SIZE), 40))
for featureNumber in xrange(size)]
# Sanity check that they're pretty unique.
duplicateFound = False
for pattern1, pattern2 in itertools.combinations(candidateFeatureLocations, 2):
if len(pattern1 & pattern2) >= 5:
duplicateFound = True
break
if not duplicateFound:
break
if duplicateFound:
raise ValueError("Failed to generate unique feature-locations")
featureLocationPool = {}
for i, featureLocation in enumerate(candidateFeatureLocations):
if i < 26:
name = chr(ord('A') + i)
else:
name = "Feature-location %d" % i
featureLocationPool[name] = featureLocation
return featureLocationPool
def experiment(objects, numColumns, selectRandom=True):
#
# Initialize
#
layer2IDs = ["Column %d Layer 2" % i for i in xrange(numColumns)]
layer4IDs = ["Column %d Layer 4" % i for i in xrange(numColumns)]
layerSizes = dict((layerID, 4096) for layerID in layer2IDs)
layerSizes.update((layerID, LAYER_4_SIZE) for layerID in layer4IDs)
layer2s = dict((l2, SetMemory(layerID=l2,
feedforwardID=l4,
lateralIDs=layer2IDs,
layerSizes=layerSizes,
sdrSize=40,
minThresholdProximal=20,
minThresholdDistal=20))
for l2, l4 in zip(layer2IDs, layer4IDs))
#
# Learn
#
layer2ObjectSDRs = dict((layerID, {}) for layerID in layer2IDs)
activity = deque(maxlen=2)
step = dict((layerID, {})
for layerID in itertools.chain(layer2IDs, layer4IDs))
for objectName, objectFeatureLocations in objects.iteritems():
for featureLocationName in objectFeatureLocations:
l4ActiveCells = sorted(featureLocationPool[featureLocationName])
for _ in xrange(2):
activity.appendleft(deepcopy(step))
# Compute Layer 4
for layerID in layer4IDs:
activity[0][layerID]["activeCells"] = l4ActiveCells
activity[0][layerID]["featureLocationName"] = featureLocationName
# Compute Layer 2
for setMemory in layer2s.itervalues():
setMemory.learningCompute(activity)
for layerID, setMemory in layer2s.iteritems():
layer2ObjectSDRs[layerID][objectName] = activity[0][layerID]["activeCells"]
setMemory.reset()
#
# Infer
#
objectName = "Object 1"
objectFeatureLocations = objects[objectName]
# Start fresh for inference. No max length because we're also using it as a log.
activity = deque()
success = False
for attempt in xrange(60):
if selectRandom:
featureLocationNames = random.sample(objectFeatureLocations, numColumns)
else:
# Naively move the sensors to touch every point as soon as possible.
start = (attempt * numColumns) % len(objectFeatureLocations)
end = start + numColumns
featureLocationNames = list(objectFeatureLocations)[start:end]
overflow = end - len(objectFeatureLocations)
if overflow > 0:
featureLocationNames += list(objectFeatureLocations)[0:overflow]
# Give the feedforward input 3 times so that the lateral inputs have time to spread.
for _ in xrange(3):
activity.appendleft(deepcopy(step))
# Compute Layer 4
for layerID, name in zip(layer4IDs, featureLocationNames):
activity[0][layerID]["activeCells"] = sorted(featureLocationPool[name])
activity[0][layerID]["featureLocationName"] = name
# Compute Layer 2
for setMemory in layer2s.itervalues():
setMemory.inferenceCompute(activity)
if all(activity[0][layer2]["activeCells"] == layer2ObjectSDRs[layer2][objectName]
for layer2 in layer2IDs):
success = True
print "Converged after %d touches" % (attempt + 1)
break
if not success:
print "Failed to converge after %d touches" % (attempt + 1)
return (objectName, activity, layer2ObjectSDRs)
Explanation: Experiment code
Train an array of columns to recognize these objects, then show it Object 1. It will randomly move its sensors to different feature-locations on the object. It will never put two sensors on the same feature-location at the same time.
End of explanation
featureLocationPool = createFeatureLocationPool(size=8)
objects = {"Object 1": set(["A", "B", "C", "D", "E", "F", "G"]),
"Object 2": set(["A", "B", "C", "D", "E", "F", "H"]),
"Object 3": set(["A", "B", "C", "D", "E", "G", "H"]),
"Object 4": set(["A", "B", "C", "D", "F", "G", "H"]),
"Object 5": set(["A", "B", "C", "E", "F", "G", "H"]),
"Object 6": set(["A", "B", "D", "E", "F", "G", "H"]),
"Object 7": set(["A", "C", "D", "E", "F", "G", "H"]),
"Object 8": set(["B", "C", "D", "E", "F", "G", "H"])}
Explanation: Initialize some feature-locations and objects
Create 8 objects, each with 7 feature-locations. Each object is 1 different from each other object.
End of explanation
results = experiment(objects, numColumns=1)
Explanation: We're testing L2 in isolation, so these "A", "B", etc. patterns are L4 representations, i.e. "feature-locations".
Test: Can one column infer an object?
End of explanation
results = experiment(objects, numColumns=1, selectRandom=False)
Explanation: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
End of explanation
results = experiment(objects, numColumns=7)
Explanation: Test: Do columns block each other from spreading knowledge?
End of explanation
for numColumns in xrange(1, 8):
print "With %d columns:" % numColumns
results = experiment(objects, numColumns)
print
Explanation: Test: How does number of columns affect recognition time?
Move sensors randomly.
End of explanation
for numColumns in xrange(1, 8):
print "With %d columns:" % numColumns
results = experiment(objects, numColumns, selectRandom=False)
print
Explanation: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
End of explanation
(testObject,
activity,
layer2ObjectSDRs) = results
for t, step in enumerate(reversed(activity)):
print "Step %d" % t
for column in xrange(len(step) / 2):
layer2ID = "Column %d Layer 2" % column
layer4ID = "Column %d Layer 4" % column
featureLocationName = step[layer4ID]["featureLocationName"]
activeCells = set(step[layer2ID]["activeCells"])
layer2Contents = {}
for objectName, objectCells in layer2ObjectSDRs[layer2ID].iteritems():
containsRatio = len(activeCells & set(objectCells)) / float(len(objectCells))
if containsRatio >= 0.20:
layer2Contents[objectName] = containsRatio
print "Column %d: Input: %s, Active cells: %d %s" % (column,
featureLocationName,
len(activeCells),
layer2Contents)
print
Explanation: Can I watch?
End of explanation |
14,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attention on MNIST (Saliency and grad-CAM)
Lets build the mnist model and train it for 5 epochs. It should get to about ~99% test accuracy.
Step1: Saliency
To visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since gradient of output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph.
If this swapping is not done, the results might be suboptimal. We will start by swapping out 'softmax' for 'linear' and compare what happens if we dont do this at the end.
Lets pick an input over which we want to show the attention.
Step2: Time for saliency visualization.
Step3: To used guided saliency, we need to set backprop_modifier='guided'. For rectified saliency or deconv saliency, use backprop_modifier='relu'. Lets try these options quickly and see how they compare to vanilla saliency.
Step4: Both of them look a lot better than vanilla saliency! This in inline with observation in the paper.
We can also visualize negative gradients to see the parts of the image that contribute negatively to the output by using grad_modifier='negate'.
Step5: Lets try all the classes and show original inputs and their heatmaps side by side. We cannot overlay the heatmap on original image since its grayscale.
We will also compare the outputs of guided and rectified or deconv saliency.
Step6: Guided saliency seems to give the best results.
grad-CAM - vanilla, guided, rectified
These should contain more detail since they use Conv or Pooling features that contain more spatial detail which is lost in Dense layers. The only additional detail compared to saliency is the penultimate_layer_idx. This specifies the pre-layer whose gradients should be used. See this paper for technical details
Step7: In this case it appears that saliency is better than grad-CAM as penultimate MaxPooling2D layer has (12, 12) spatial resolution which is relatively large as compared to input of (28, 28). Is is likely that the conv layer hasnt captured enough high level information and most of that is likely within dense_4 layer.
Here is the model summary for reference.
Step8: Visualization without swapping softmax
As alluded at the beginning of the tutorial, we want to compare and see what happens if we didnt swap out softmax for linear activation. Lets try this with guided saliency which gave us the best results so far. | Python Code:
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Attention on MNIST (Saliency and grad-CAM)
Lets build the mnist model and train it for 5 epochs. It should get to about ~99% test accuracy.
End of explanation
class_idx = 0
indices = np.where(y_test[:, class_idx] == 1.)[0]
# pick some random input from here.
idx = indices[0]
# Lets sanity check the picked image.
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
plt.imshow(x_test[idx][..., 0])
Explanation: Saliency
To visualize activation over final dense layer outputs, we need to switch the softmax activation out for linear since the gradient of an output node will depend on all the other node activations. Doing this in keras is tricky, so we provide utils.apply_modifications to modify network parameters and rebuild the graph.
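Concretely (a note added here, not from the original guide): with softmax outputs p_i = exp(z_i) / sum_j exp(z_j), the gradient is dp_i/dz_j = p_i * (delta_ij - p_j), so the gradient at any output node mixes in every other node's activation; with a linear output, the gradient of node i depends only on its own pre-activation.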
If this swapping is not done, the results might be suboptimal. We will start by swapping out 'softmax' for 'linear' and compare what happens if we don't do this at the end.
Lets pick an input over which we want to show the attention.
End of explanation
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])
# Plot with 'jet' colormap to visualize as a heatmap.
plt.imshow(grads, cmap='jet')
Explanation: Time for saliency visualization.
End of explanation
for modifier in ['guided', 'relu']:
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
plt.figure()
plt.title(modifier)
plt.imshow(grads, cmap='jet')
Explanation: To use guided saliency, we need to set backprop_modifier='guided'. For rectified saliency or deconv saliency, use backprop_modifier='relu'. Let's try these options quickly and see how they compare to vanilla saliency.
End of explanation
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx],
backprop_modifier='guided', grad_modifier='negate')
plt.imshow(grads, cmap='jet')
Explanation: Both of them look a lot better than vanilla saliency! This is in line with the observations in the paper.
We can also visualize negative gradients to see the parts of the image that contribute negatively to the output by using grad_modifier='negate'.
End of explanation
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
f, ax = plt.subplots(1, 4)
ax[0].imshow(x_test[idx][..., 0])
for i, modifier in enumerate([None, 'guided', 'relu']):
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
if modifier is None:
modifier = 'vanilla'
ax[i+1].set_title(modifier)
ax[i+1].imshow(grads, cmap='jet')
Explanation: Let's try all the classes and show the original inputs and their heatmaps side by side. We cannot overlay the heatmap on the original image since it's grayscale.
We will also compare the outputs of guided and rectified or deconv saliency.
End of explanation
from vis.visualization import visualize_cam
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
f, ax = plt.subplots(1, 4)
ax[0].imshow(x_test[idx][..., 0])
for i, modifier in enumerate([None, 'guided', 'relu']):
grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
if modifier is None:
modifier = 'vanilla'
ax[i+1].set_title(modifier)
ax[i+1].imshow(grads, cmap='jet')
Explanation: Guided saliency seems to give the best results.
grad-CAM - vanilla, guided, rectified
These should contain more detail since they use Conv or Pooling features that contain more spatial detail which is lost in Dense layers. The only additional detail compared to saliency is the penultimate_layer_idx. This specifies the pre-layer whose gradients should be used. See this paper for technical details: https://arxiv.org/pdf/1610.02391v1.pdf
By default, if penultimate_layer_idx is not defined, it searches for the nearest pre layer. For our architecture, that would be the MaxPooling2D layer after all the Conv layers. Lets look at all the visualizations like before.
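If you want to pin the penultimate layer yourself rather than rely on that search, a minimal sketch looks like this (my addition, not a cell from the original notebook; the layer name 'max_pooling2d_1' is an assumption -- check model.summary() for the actual auto-generated name):
penultimate_idx = utils.find_layer_idx(model, 'max_pooling2d_1')
grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
                      seed_input=x_test[idx], penultimate_layer_idx=penultimate_idx)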
End of explanation
model.summary()
Explanation: In this case it appears that saliency is better than grad-CAM, as the penultimate MaxPooling2D layer has (12, 12) spatial resolution, which is relatively large compared to the input of (28, 28). It is likely that the conv layer hasn't captured enough high-level information and most of that is likely within the dense_4 layer.
Here is the model summary for reference.
End of explanation
# Swap linear back with softmax
model.layers[layer_idx].activation = activations.softmax
model = utils.apply_modifications(model)
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier='guided')
f, ax = plt.subplots(1, 2)
ax[0].imshow(x_test[idx][..., 0])
ax[1].imshow(grads, cmap='jet')
Explanation: Visualization without swapping softmax
As alluded at the beginning of the tutorial, we want to compare and see what happens if we didn't swap out softmax for linear activation. Let's try this with guided saliency, which gave us the best results so far.
End of explanation |
14,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive mapping
Alongside static plots, geopandas can create interactive maps based on the folium library.
Creating maps for interactive exploration mirrors the API of static plots in an explore() method of a GeoSeries or GeoDataFrame.
Loading some example data
Step1: The simplest option is to use GeoDataFrame.explore()
Step2: Interactive plotting offers largely the same customisation as static one plus some features on top of that. Check the code below which plots a customised choropleth map. You can use "BoroName" column with NY boroughs names as an input of the choropleth, show (only) its name in the tooltip on hover but show all values on click. You can also pass custom background tiles (either a name supported by folium, a name recognized by xyzservices.providers.query_name(), XYZ URL or xyzservices.TileProvider object), specify colormap (all supported by matplotlib) and specify black outline.
<div class="alert alert-info">
Note
Note that the GeoDataFrame needs to have a CRS set if you want to use background tiles.
</div>
Step3: The explore() method returns a folium.Map object, which can also be passed directly (as you do with ax in plot()). You can then use folium functionality directly on the resulting map. In the example below, you can plot two GeoDataFrames on the same map and add layer control using folium. You can also add additional tiles allowing you to change the background directly in the map. | Python Code:
import geopandas
nybb = geopandas.read_file(geopandas.datasets.get_path('nybb'))
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
cities = geopandas.read_file(geopandas.datasets.get_path('naturalearth_cities'))
Explanation: Interactive mapping
Alongside static plots, geopandas can create interactive maps based on the folium library.
Creating maps for interactive exploration mirrors the API of static plots in an explore() method of a GeoSeries or GeoDataFrame.
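The mirroring is essentially one-to-one; a small illustrative pair (my example, not from the original guide):
nybb.plot(column="BoroName")     # static matplotlib choropleth
nybb.explore(column="BoroName")  # same argument, interactive folium map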
Loading some example data:
End of explanation
nybb.explore()
Explanation: The simplest option is to use GeoDataFrame.explore():
End of explanation
nybb.explore(
column="BoroName", # make choropleth based on "BoroName" column
tooltip="BoroName", # show "BoroName" value in tooltip (on hover)
popup=True, # show all values in popup (on click)
tiles="CartoDB positron", # use "CartoDB positron" tiles
cmap="Set1", # use "Set1" matplotlib colormap
style_kwds=dict(color="black") # use black outline
)
Explanation: Interactive plotting offers largely the same customisation as its static counterpart, plus some features on top of that. Check the code below, which plots a customised choropleth map. You can use the "BoroName" column with NY borough names as the input for the choropleth, show (only) its name in the tooltip on hover but show all values on click. You can also pass custom background tiles (either a name supported by folium, a name recognized by xyzservices.providers.query_name(), an XYZ URL or an xyzservices.TileProvider object), specify a colormap (any supported by matplotlib) and specify a black outline.
<div class="alert alert-info">
Note
Note that the GeoDataFrame needs to have a CRS set if you want to use background tiles.
</div>
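A quick way to check this before calling explore() -- a hedged sketch; the EPSG codes below are placeholders, not values taken from this guide:
print(nybb.crs)                    # must not be None when background tiles are used
# nybb = nybb.set_crs(epsg=2263)   # only if the CRS is missing from the data
# nybb = nybb.to_crs(epsg=4326)    # reproject if a different CRS is needed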
End of explanation
import folium
m = world.explore(
column="pop_est", # make choropleth based on "BoroName" column
scheme="naturalbreaks", # use mapclassify's natural breaks scheme
legend=True, # show legend
k=10, # use 10 bins
legend_kwds=dict(colorbar=False), # do not use colorbar
name="countries" # name of the layer in the map
)
cities.explore(
m=m, # pass the map object
color="red", # use red color on all points
marker_kwds=dict(radius=10, fill=True), # make marker radius 10px with fill
tooltip="name", # show "name" column in the tooltip
tooltip_kwds=dict(labels=False), # do not show column label in the tooltip
name="cities" # name of the layer in the map
)
folium.TileLayer('Stamen Toner', control=True).add_to(m) # use folium to add alternative tiles
folium.LayerControl().add_to(m) # use folium to add layer control
m # show map
Explanation: The explore() method returns a folium.Map object, which can also be passed directly (as you do with ax in plot()). You can then use folium functionality directly on the resulting map. In the example below, you can plot two GeoDataFrames on the same map and add layer control using folium. You can also add additional tiles allowing you to change the background directly in the map.
End of explanation |
14,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build and test a Nearest Neighbors classifier.
Load the relevant packages.
Step1: Load the Iris data to use for experiments. The data include 50 observations of each of 3 types of irises (150 total). Each observation includes 4 measurements
Step2: Create a distance function that returns the distance between 2 observations.
Step3: This is just an example of code for Euclidean distance. In order to create a robust production-quality code, you have to be prepared for situations where len(v1) != len (v2)
Step4: Ok now let's create a class that implements a Nearest Neighbors classifier. We'll model it after the sklearn classifier implementations, with fit() and predict() methods.
http
Step5: Run an experiment with the classifier.
Step6: Let's try and see what happens if we do not set the seed for the random number generator. When no seed is given, RNG usually sets the seed to the numeric interpretation of UTC (Coordinated Universal Time). Let us do just that (change the second argument in range() function before you run this loop) | Python Code:
# This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
Explanation: Build and test a Nearest Neighbors classifier.
Load the relevant packages.
End of explanation
# Load the data, which is included in sklearn.
iris = load_iris()
print 'Iris target names:', iris.target_names
print 'Iris feature names:', iris.feature_names
X, Y = iris.data, iris.target
# Shuffle the data, but make sure that the features and accompanying labels stay in sync.
np.random.seed(0) # To ensure repeatability of results
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, Y = X[shuffle], Y[shuffle]
# Split into train and test.
train_data, train_labels = X[:100], Y[:100]
test_data, test_labels = X[100:], Y[100:]
Explanation: Load the Iris data to use for experiments. The data include 50 observations of each of 3 types of irises (150 total). Each observation includes 4 measurements: sepal and petal width and height. The goal is to predict the iris type from these measurements.
http://en.wikipedia.org/wiki/Iris_flower_data_set
End of explanation
## Note: the assumption is len (v1) == len (v2)
def EuclideanDistance(v1, v2):
sum = 0.0
for index in range(len(v1)):
sum += (v1[index] - v2[index]) ** 2
return sum ** 0.5
Explanation: Create a distance function that returns the distance between 2 observations.
End of explanation
dists = []
for i in range(len(train_data) - 1):
for j in range(i + 1, len(train_data)):
dist = EuclideanDistance(train_data[i], train_data[j])
dists.append(dist)
fig = plt.hist(dists, 100) ## Play with different values of the parameter; see how the view changes
Explanation: This is just an example of code for Euclidean distance. In order to create robust production-quality code, you have to be prepared for situations where len(v1) != len(v2): missing data; bad data; wrong data types, etc. Make sure your functions are always prepared for such scenarios: as much as 50% of a data scientist's job is cleaning up the data. A great overview of "what can go wrong with data" is, e.g., in this book: http://www.amazon.com/Bad-Data-Handbook-Cleaning-Back/dp/1449321887.
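One way to harden the helper along those lines -- my own sketch, not a cell from the original notebook:
def SafeEuclideanDistance(v1, v2):
    # Reject missing, empty, or mismatched inputs before doing any arithmetic.
    if v1 is None or v2 is None:
        raise ValueError("both inputs must be provided")
    if len(v1) != len(v2) or len(v1) == 0:
        raise ValueError("inputs must be non-empty and of equal length")
    total = 0.0
    for a, b in zip(v1, v2):
        total += (float(a) - float(b)) ** 2
    return total ** 0.5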
Just for fun, let's compute all the pairwise distances in the training data and plot a histogram.
End of explanation
class NearestNeighbors:
# Initialize an instance of the class.
def __init__(self, metric=EuclideanDistance):
self.metric = metric
# No training for Nearest Neighbors. Just store the data.
def fit(self, train_data, train_labels):
self.train_data = train_data
self.train_labels = train_labels
# Make predictions for each test example and return results.
def predict(self, test_data):
results = []
for item in test_data:
results.append(self._predict_item(item))
return results
# Private function for making a single prediction.
def _predict_item(self, item):
best_dist, best_label = 1.0e10, None
for i in range(len(self.train_data)):
dist = self.metric(self.train_data[i], item)
if dist < best_dist:
best_label = self.train_labels[i]
best_dist = dist
return best_label
Explanation: Ok now let's create a class that implements a Nearest Neighbors classifier. We'll model it after the sklearn classifier implementations, with fit() and predict() methods.
http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier
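As a cross-check against the reference implementation linked above (an aside I'm adding, not part of the original notebook):
from sklearn.neighbors import KNeighborsClassifier
sk_clf = KNeighborsClassifier(n_neighbors=1)
sk_clf.fit(train_data, train_labels)
print 'sklearn 1-NN accuracy: %3.2f' % sk_clf.score(test_data, test_labels)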
End of explanation
clf = NearestNeighbors()
clf.fit(train_data, train_labels)
preds = clf.predict(test_data)
correct, total = 0, 0
for pred, label in zip(preds, test_labels):
if pred == label: correct += 1
total += 1
print 'total: %3d correct: %3d accuracy: %3.2f' %(total, correct, 1.0*correct/total)
Explanation: Run an experiment with the classifier.
End of explanation
X, Y = iris.data, iris.target
import time
Now = time.time()
print long (Now.real * 100)
# Now run the same code. Shuffle the data, but make sure that the features and accompanying labels stay in sync.
for i in range (0, 1):
myseed = long(Now.real)+i
np.random.seed(myseed) # To ensure repeatability of results
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, Y = X[shuffle], Y[shuffle]
# Split into train and test.
train_data, train_labels = X[:100], Y[:100]
test_data, test_labels = X[100:], Y[100:]
clf.fit(train_data, train_labels)
preds = clf.predict(test_data)
correct, total = 0, 0
for pred, label in zip(preds, test_labels):
if pred == label: correct += 1
total += 1
print 'seed: %ld total: %3d correct: %3d accuracy: %3.2f' %(myseed, total, correct, 1.0*correct/total)
Explanation: Let's try and see what happens if we do not set the seed for the random number generator. When no seed is given, the RNG usually sets the seed to the numeric interpretation of UTC (Coordinated Universal Time). Let us do just that (change the second argument of the range() function before you run this loop):
End of explanation |
14,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle Galaxy Zoo Competition
Step1: First Model
2 layer CNN
Step2: To Do | Python Code:
%matplotlib inline
path = "data/galaxy/sample/"
#path = "data/galaxy/"
train_path = path + 'train/'
valid_path = path + 'valid/'
test_path = path + 'test/'
results_path = path + 'results/'
model_path = path + 'model/'
from utils import *
batch_size = 32
num_epoch = 1
import pandas as pd
df = pd.read_csv(path+ "train.csv")
df_val = pd.read_csv(path+ "valid.csv")
# custom iterator for regression
import Iterator; reload(Iterator)
from Iterator import DirectoryIterator
imgen = image.ImageDataGenerator()
# imgen = image.ImageDataGenerator(samplewise_center=0,
# rotation_range=360,
# width_shift_range=0.05,
# height_shift_range=0.05,
# zoom_range=[0.9,1.2],
# horizontal_flip=True,
# channel_shift_range=0.1,
# dim_ordering='tf')
batches = DirectoryIterator(train_path, imgen,
class_mode=None,
dataframe=df,
batch_size=4,
target_size=(128,128))
val_imgen = image.ImageDataGenerator()
val_batches = DirectoryIterator(valid_path, val_imgen,
class_mode=None,
dataframe=df_val,
batch_size=4,
target_size=(128,128))
imgs, target = next(batches)
imgs[0].shape
plots(imgs)
Explanation: Kaggle Galaxy Zoo Competition
End of explanation
def conv1():
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,128,128)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(37)
])
model.compile(Adam(lr=0.0001), loss='mse')
return model
model = conv1()
model.summary()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
model.save_weights(model_path+'conv1.h5')
train_files = batches.filenames
train_out = model.predict_generator(batches, batches.nb_sample)
features = list(df.columns.values)
train_ids = [os.path.splitext(f) for f in train_files]
submission = pd.DataFrame(train_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in train_ids])
submission.head()
df.loc[df['GalaxyID'] == 924379]
val_files = val_batches.filenames
val_out = model.predict_generator(val_batches, val_batches.nb_sample)
features = list(df_val.columns.values)
val_ids = [os.path.splitext(f) for f in val_files]
submission = pd.DataFrame(val_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in val_ids])
submission.head()
df_val.loc[df_val['GalaxyID'] == 546684]
test_batches = get_batches(test_path, batch_size=64, target_size=(128,128))
test_files = test_batches.filenames
test_out = model.predict_generator(test_batches, test_batches.nb_sample)
save_array(results_path+'test_out.dat', test_out)
features = list(df.columns.values)
test_ids = [os.path.splitext(f) for f in test_files]
submission = pd.DataFrame(test_out, columns=features[2:])
submission.insert(0, 'GalaxyID', [int(a[0][7:]) for a in test_ids])
submission.head()
subm_name = results_path+'subm.csv'
submission.to_csv(subm_name, index=False)
FileLink(subm_name)
Explanation: First Model
2 layer CNN
End of explanation
imgen_aug = image.ImageDataGenerator(horizontal_flip=True)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(rotation_range=360)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(width_shift_range=0.05)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(channel_shift_range=20)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
imgen_aug = image.ImageDataGenerator(horizontal_flip=True,
rotation_range=180,
width_shift_range=0.05,
channel_shift_range=20)
batches = DirectoryIterator(train_path, imgen_aug,
class_mode=None,
dataframe=df,
batch_size=4)
model = conv1()
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Explanation: To Do:
Data Augmentation to reduce overfitting
Custom output layer for output question constraints
Dropout on dense layers (need all the data) - see the sketch at the end of this list
Larger network, different arch
Data Augmentation
TODO: Crop images
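A possible shape for the dropout item above -- a hedged sketch reusing this notebook's conv1() layout, not code that was actually run here:
def conv1_dropout(p=0.5):
    model = Sequential([
        BatchNormalization(axis=1, input_shape=(3,128,128)),
        Convolution2D(32,3,3, activation='relu'),
        BatchNormalization(axis=1),
        MaxPooling2D((3,3)),
        Flatten(),
        Dense(200, activation='relu'),
        BatchNormalization(),
        Dropout(p),          # extra regularisation on the dense layer
        Dense(37)
    ])
    model.compile(Adam(lr=0.0001), loss='mse')
    return model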
End of explanation |
14,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ilastik for 10/31/16 Week
From last week, I was able to generate 3D TIFF slices and image classifiers on Fear199 downsampled data. However, my problems were that
Step1: In a nutshell, method 1 generates an array with shape (x, y, z) -- specifically, (540, 717, 1358). The method 2 generates a numpy array with shape (z, y, x) -- specifically, (1358, 717, 540). Since we want the first column to be z slices, the original method was granting me x-slices (hence the cigar-tube dimensions).
In order to interconvert, we can either just use the rawData approach after directly calling from ndstore, or we can take our numpy array after loading from nibabel and use numpy's swapaxes method to just swap two of the dimensions (shown below).
Step2: Task 2 | Python Code:
## Script used to download nii run on Docker
from ndreg import *
import matplotlib
import ndio.remote.neurodata as neurodata
import nibabel as nb
inToken = "Fear199"
nd = neurodata()
print(nd.get_metadata(inToken)['dataset']['voxelres'].keys())
inImg = imgDownload(inToken, resolution=5)
imgWrite(inImg, "./Fear199.nii")
## Method 1:
import os
import numpy as np
from PIL import Image
import nibabel as nib
import scipy.misc
TokenName = 'Fear199.nii'
img = nib.load(TokenName)
## Convert into np array (or memmap in this case)
data = img.get_data()
print data.shape
print type(data)
## Method 2:
rawData = sitk.GetArrayFromImage(inImg) ## convert the SimpleITK image to a normal numpy ndarray
print type(rawData)
Explanation: Ilastik for 10/31/16 Week
From last week, I was able to generate 3D TIFF slices and image classifiers on Fear199 downsampled data. However, my problems were that:
1) The TIFF slices were odd, cigar-shaped tubes.
2) I was unable to generate a significant classifier using the existing data because of the weird image layout.
3) I had trouble loading in the TIFF stack despite having generated one via ImageJ
What I did this week was:
1) Figure out why my original data was the odd cigar-shaped data.
2) Correctly generate a subset of TIFF slices for Fear199.
3) Generate a pixel-based object classifier.
What I need help with/still need to learn:
1) How to interpret/better validate my classifier results (currently have hdf5/TIFF output, how can I validate this?)
2) How to apply this to density mapping
Task 1: Why was my original data cigar-shaped?
When downloading the image from ndreg, there were two different approaches to generating the numpy array. I've shown both below:
End of explanation
## if we have (i, j, k), we want (k, j, i) (converts nibabel format to sitk format)
new_im = data.swapaxes(0, 2) # just swap i and k (using the nibabel array from Method 1)
Explanation: In a nutshell, method 1 generates an array with shape (x, y, z) -- specifically, (540, 717, 1358). Method 2 generates a numpy array with shape (z, y, x) -- specifically, (1358, 717, 540). Since we want the first column to be z slices, the original method was giving me x-slices (hence the cigar-tube dimensions).
In order to interconvert, we can either just use the rawData approach after directly calling from ndstore, or we can take our numpy array after loading from nibabel and use numpy's swapaxes method to just swap two of the dimensions (shown below).
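A quick sanity check on those shapes (my addition, not in the original notebook):
print data.shape     # (540, 717, 1358) from nibabel: (x, y, z)
print new_im.shape   # (1358, 717, 540) after swapaxes: (z, y, x)
print rawData.shape  # (1358, 717, 540) straight from the SimpleITK image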
End of explanation
plane = 0;
for plane in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 100, 101, 102, 103, 104):
output = np.asarray(rawData[plane])
## Save as TIFF for Ilastik
scipy.misc.toimage(output).save('RAWoutfile' + TokenName + 'ITK' + str(plane) + '.tiff')
Explanation: Task 2: Generating raw TIFF slices.
Now that I have appropriate coordinates, I generated a subset of TIFF slices to run the training module for the image classifier. Using the script here:
End of explanation |
14,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
of building a notebook-friendly object into the output of the data API
Author
Step1: Authorization
In the vanilla notebook, you need to manually set an auth. token. You'll need your own value for this, of course.
Get this from the running narrative, e.g. write a narrative code cell that has
Step2: Find and load an object
Open the workspace (1019) and get a Rhodobacter assembly from it
Step3: Get the contigs for the assembly
This takes a while because the current implementation loads the whole assembly, not just the 300 or so strings with the contig values.
Step4: View the contigs
The Contigs object wraps the list of contigs as a Pandas DataFrame (with the qgrid output enabled), so as you can see the plot() function is immediately available. The list of strings in the raw contig IDs is parsed to a set of columns and values for the DataFrame.
The default display is the nice sortable, scrollable, etc. table from the qgrid package. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import qgrid
qgrid.nbinstall()
from biokbase import data_api
from biokbase.data_api import display
display.nbviewer_mode(True)
Explanation: Example
of building a notebook-friendly object into the output of the data API
Author: Dan Gunter
Initialization
Imports
Set up matplotlib, the qgrid (nice table), and import biokbase
End of explanation
import os
os.environ['KB_AUTH_TOKEN'] = open('/tmp/kb_auth_token.txt').read().strip()
Explanation: Authorization
In the vanilla notebook, you need to manually set an auth. token. You'll need your own value for this, of course.
Get this from the running narrative, e.g. write a narrative code cell that has:
import os; print(os.environ['KB_AUTH_TOKEN'])
End of explanation
b = data_api.browse(1019)
x = b[0].object # Assembly object
Explanation: Find and load an object
Open the workspace (1019) and get a Rhodobacter assembly from it
End of explanation
cid_strings = x.get_contig_ids() # 1 min
cids = display.Contigs(cid_strings)
Explanation: Get the contigs for the assembly
This takes a while because the current implementation loads the whole assembly, not just the 300 or so strings with the contig values.
End of explanation
from biokbase import data_api
from biokbase.data_api import display
list(b)
rg = b[0]
rgo = rg.object
type(rgo)
Explanation: View the contigs
The Contigs object wraps the list of contigs as a Pandas DataFrame (with the qgrid output enabled), so as you can see the plot() function is immediately available. The list of strings in the raw contig IDs is parsed to a set of columns and values for the DataFrame.
The default display is the nice sortable, scrollable, etc. table from the qgrid package.
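So, once the object exists, inspecting it is a one-liner -- a hedged example; the useful plot arguments depend on which columns Contigs parses out of the IDs:
cids          # renders the qgrid table in the notebook
cids.plot()   # pandas-style plot over the parsed contig columns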
End of explanation |
14,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating Feature Descriptions
As features become more complicated, their names can become harder to understand. Both the describe_feature function and the graph_feature function can help explain what a feature is and the steps Featuretools took to generate it. Additionally, the describe_feature function can be augmented by providing custom definitions and templates to improve the resulting descriptions.
Step1: By default, describe_feature uses the existing column and DataFrame names and the default primitive description templates to generate feature descriptions.
Step2: Improving Descriptions
While the default descriptions can be helpful, they can also be further improved by providing custom definitions of columns and features, and by providing alternative templates for primitive descriptions.
Feature Descriptions
Custom feature definitions will get used in the description in place of the automatically generated description. This can be used to better explain what a ColumnSchema or feature is, or to provide descriptions that take advantage of a user's existing knowledge about the data or domain.
Step3: For example, the above replaces the column name, "join_date", with a more descriptive definition of what that column represents in the dataset. Descriptions can also be set directly on a column in a DataFrame by going through the Woodwork typing information to access the description attribute present on each ColumnSchema
Step4: Descriptions must be set for a column in a DataFrame before the feature is created in order for descriptions to propagate. Note that if a description is both set directly on a column and passed to describe_feature with feature_descriptions, the description in the feature_descriptions parameter will take presedence.
Feature descriptions can also be provided for generated features.
Step5: Here, we create and pass in a custom description of the intermediate feature SUM(transactions.amount). The description for MEAN(sessions.SUM(transactions.amount)), which is built on top of SUM(transactions.amount), uses the custom description in place of the automatically generated one. Feature descriptions can be passed in as a dictionary that maps the custom descriptions to either the feature object itself or the unique feature name in the form "[dataframe_name]
Step6: In this example, we override the default template of 'the sum of {}' with our custom template 'the total of {}'. The description uses our custom template instead of the default.
Multi-output primitives can use a list of primitive description templates to differentiate between the generic multi-output feature description and the feature slice descriptions. The first primitive template is always the generic overall feature. If only one other template is provided, it is used as the template for all slices. The slice number converted to the "nth" form is available through the nth_slice keyword.
Step7: Notice how the multi-output feature uses the first template for its description. Each slice of this feature will use the second slice template
Step8: Alternatively, instead of supplying a single template for all slices, templates can be provided for each slice to further customize the output. Note that in this case, each slice must get its own template. | Python Code:
import featuretools as ft
es = ft.demo.load_mock_customer(return_entityset=True)
feature_defs = ft.dfs(entityset=es,
target_dataframe_name="customers",
agg_primitives=["mean", "sum", "mode", "n_most_common"],
trans_primitives=["month", "hour"],
max_depth=2,
features_only=True)
Explanation: Generating Feature Descriptions
As features become more complicated, their names can become harder to understand. Both the describe_feature function and the graph_feature function can help explain what a feature is and the steps Featuretools took to generate it. Additionally, the describe_feature function can be augmented by providing custom definitions and templates to improve the resulting descriptions.
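graph_feature works the same way on a feature object -- a one-line example I'm adding (it needs the graphviz extra installed, which this guide does not cover):
ft.graph_feature(feature_defs[9])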
End of explanation
feature_defs[9]
ft.describe_feature(feature_defs[9])
feature_defs[14]
ft.describe_feature(feature_defs[14])
Explanation: By default, describe_feature uses the existing column and DataFrame names and the default primitive description templates to generate feature descriptions.
End of explanation
feature_descriptions = {'customers: join_date': 'the date the customer joined'}
ft.describe_feature(feature_defs[9], feature_descriptions=feature_descriptions)
Explanation: Improving Descriptions
While the default descriptions can be helpful, they can also be further improved by providing custom definitions of columns and features, and by providing alternative templates for primitive descriptions.
Feature Descriptions
Custom feature definitions will get used in the description in place of the automatically generated description. This can be used to better explain what a ColumnSchema or feature is, or to provide descriptions that take advantage of a user's existing knowledge about the data or domain.
End of explanation
join_date_column_schema = es['customers'].ww.columns['join_date']
join_date_column_schema.description = 'the date the customer joined'
es['customers'].ww.columns['join_date'].description
feature = ft.TransformFeature(es['customers'].ww['join_date'], ft.primitives.Hour)
feature
ft.describe_feature(feature)
Explanation: For example, the above replaces the column name, "join_date", with a more descriptive definition of what that column represents in the dataset. Descriptions can also be set directly on a column in a DataFrame by going through the Woodwork typing information to access the description attribute present on each ColumnSchema:
End of explanation
feature_descriptions = {
'sessions: SUM(transactions.amount)': 'the total transaction amount for a session'}
feature_defs[14]
ft.describe_feature(feature_defs[14], feature_descriptions=feature_descriptions)
Explanation: Descriptions must be set for a column in a DataFrame before the feature is created in order for descriptions to propagate. Note that if a description is both set directly on a column and passed to describe_feature with feature_descriptions, the description in the feature_descriptions parameter will take precedence.
Feature descriptions can also be provided for generated features.
End of explanation
primitive_templates = {'sum': 'the total of {}'}
feature_defs[6]
ft.describe_feature(feature_defs[6], primitive_templates=primitive_templates)
Explanation: Here, we create and pass in a custom description of the intermediate feature SUM(transactions.amount). The description for MEAN(sessions.SUM(transactions.amount)), which is built on top of SUM(transactions.amount), uses the custom description in place of the automatically generated one. Feature descriptions can be passed in as a dictionary that maps the custom descriptions to either the feature object itself or the unique feature name in the form "[dataframe_name]: [feature_name]", as shown above.
Primitive Templates
Primitive descriptions are generated using primitive templates. By default, these are defined using the description_template attribute on the primitive. Primitives without a template default to using the name attribute of the primitive if it is defined, or the class name if it is not. Primitive description templates are string templates that take input feature descriptions as the positional arguments. These can be overwritten by mapping primitive instances or primitive names to custom templates and passing them into describe_feature through the primitive_templates argument.
End of explanation
feature = feature_defs[5]
feature
primitive_templates = {
'n_most_common': [
'the 3 most common elements of {}', # generic multi-output feature
'the {nth_slice} most common element of {}']} # template for each slice
ft.describe_feature(feature, primitive_templates=primitive_templates)
Explanation: In this example, we override the default template of 'the sum of {}' with our custom template 'the total of {}'. The description uses our custom template instead of the default.
Multi-output primitives can use a list of primitive description templates to differentiate between the generic multi-output feature description and the feature slice descriptions. The first primitive template is always the generic overall feature. If only one other template is provided, it is used as the template for all slices. The slice number converted to the "nth" form is available through the nth_slice keyword.
End of explanation
ft.describe_feature(feature[0], primitive_templates=primitive_templates)
ft.describe_feature(feature[1], primitive_templates=primitive_templates)
ft.describe_feature(feature[2], primitive_templates=primitive_templates)
Explanation: Notice how the multi-output feature uses the first template for its description. Each slice of this feature will use the second slice template:
End of explanation
primitive_templates = {
'n_most_common': [
'the 3 most common elements of {}',
'the most common element of {}',
'the second most common element of {}',
'the third most common element of {}']}
ft.describe_feature(feature, primitive_templates=primitive_templates)
ft.describe_feature(feature[0], primitive_templates=primitive_templates)
ft.describe_feature(feature[1], primitive_templates=primitive_templates)
ft.describe_feature(feature[2], primitive_templates=primitive_templates)
Explanation: Alternatively, instead of supplying a single template for all slices, templates can be provided for each slice to further customize the output. Note that in this case, each slice must get its own template.
End of explanation |