11,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CLIXO Ontology Tree Generator
This is a notebook to generate a tree data file from the original table and annotations.
This is the final version of the script, creating a Cytoscape.js file with gene counts.
Requirements
DAG file for CLIXO
Term to gene assignment file
GO alignment file
CLIXO TERM COUNT = 4805
Step1: Build Base CyJS Network
Step2: Layout with networkx | Python Code:
# Load data sets
import pandas as pd
treeSourceUrl = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.small_parent_tree'
geneCountFile = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.term_sizes'
alignmentFile = './data/alignments_FDR_0.1_t_0.1'
geneAssignment = './data/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.mapping'
# Load the tree data
treeColNames = ['parent', 'child', 'type', 'in_tree']
tree = pd.read_csv(treeSourceUrl, delimiter='\t', names=treeColNames)
tree.tail()
assignment = pd.read_csv(geneAssignment, sep='\t', names=['gene', 'clixo'])
print(assignment['clixo'].unique().shape)
assignment.head()
al = pd.read_csv(alignmentFile, sep='\t', names=['clixo', 'go', 'similarity', 'fdr', 'genes'])
al.head()
mapping = {}
for row in al.itertuples():
entry = {
'go': row[2],
'score': row[3],
        'fdr': row[4]
}
mapping[str(row[1])] = entry
geneCounts = pd.read_csv(geneCountFile, names=['clixo', 'count'], sep='\t')
term2count = {}
for row in geneCounts.itertuples():
term2count[str(row[1])] = row[2].item()
# Get unique terms
clixo_terms = set()
for row in tree.itertuples():
etype = row[3]
if not etype.startswith('gene'):
clixo_terms.add(str(row[1]))
clixo_terms.add(str(row[2]))
print(len(clixo_terms))
Explanation: CLIXO Ontology Tree Generator
This is a notebook to generate a tree data file from the original table and annotations.
This is the final version of the script, creating a Cytoscape.js file with gene counts.
Requirements
DAG file for CLIXO
Term to gene assignment file
GO alignment file
CLIXO TERM COUNT = 4805
End of explanation
import json
clixoTree = {
'data': {
'name': 'CLIXO Tree'
},
'elements': {
'nodes': [],
'edges': []
}
}
print(json.dumps(clixoTree, indent=4))
def get_node(id, count):
node = {
'data': {
'id': id,
'geneCount': count
}
}
return node
def get_edge(source, target):
edge = {
'data': {
'source': target,
'target': source
}
}
return edge
edges = []
PREFIX = 'CLIXO:'
for row in tree.itertuples():
etype = row[3]
in_tree = row[4]
if etype.startswith('gene') or in_tree == 'NOT_TREE':
continue
source = PREFIX + str(row[1])
child = PREFIX + str(row[2])
edges.append(get_edge(source, child))
print(len(edges))
nodes = []
for id in clixo_terms:
node = get_node(PREFIX + id, term2count[id])
nodes.append(node)
print(len(nodes))
clixoTree['elements']['nodes'] = nodes
clixoTree['elements']['edges'] = edges
with open('./data/clixo-tree.cyjs', 'w') as outfile:
json.dump(clixoTree, outfile)
Explanation: Build Base CyJS Network
End of explanation
import networkx as nx
DG=nx.DiGraph()
for node in nodes:
DG.add_node(node['data']['id'])
for edge in edges:
DG.add_edge(edge['data']['source'], edge['data']['target'])
import matplotlib.pyplot as plt
nx.draw_circular(DG)
# pos = nx.nx_pydot.pydot_layout(DG)
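# Hedged alternative (not from the original notebook): the pydot layout above is
# commented out, so compute positions with networkx's spring_layout instead and
# attach them to the CyJS nodes. The 1000x scale factor is an arbitrary choice.
pos = nx.spring_layout(DG)
for node in clixoTree['elements']['nodes']:
    x, y = pos[node['data']['id']]
    node['position'] = {'x': float(x) * 1000, 'y': float(y) * 1000}
with open('./data/clixo-tree-layout.cyjs', 'w') as outfile:
    json.dump(clixoTree, outfile)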
Explanation: Layout with networkx
End of explanation |
11,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What a Bayes classifier is
A Bayes classifier is a model based on Bayesian probability. It is a generative classification algorithm used for classification problems. The most common variants are the naive Bayes classifier and the Gaussian naive Bayes classifier: "naive" because it assumes the predictor variables are mutually independent, and "Gaussian" because it assumes that, within each class, every predictor follows a Gaussian (normal) distribution with its own parameters.
Bayes' formula
Bayes classifiers come from Bayes' formula, i.e. the conditional-probability formula; in the discrete case it reads $P(Y|X)=\frac{P(X|Y)*P(Y)}{P(X)}$. In practice, we use the observed frequencies of
$\forall Y_k \in Y, \vec{x} = (X_{1,j_1},X_{2,j_2},...,X_{d,j_d}) \in X$, where $X_{i,j}$ is the $j$-th value of the $i$-th predictor,
to approximate the conditional probability $P(X|Y)$ and thus compute the probability that a point with feature vector $\vec{x}$ belongs to class $Y_k$.
Naive Bayes (discrete, NB)
For the simplest naive Bayes classifier, suppose there are just two predictors X1 and X2; then "naive Bayes" is
$P(y|x1,x2)=\frac{P(x1,x2|y)P(y)}{P(x1,x2)}=\frac{P(x1|x2,y)P(x2|y)P(y)}{P(x1|x2)P(x2)}=\frac{P(x1|y)P(x2|y)P(y)}{P(x1)*P(x2)}$
where x1, x2 are particular values of the predictors X1, X2 and y is a particular value of Y.
Thus, given the values x1, x2 of X1 and X2, we obtain $P(y|x1,x2), \forall y, y\in Y$. We can either read off the probability that a data point belongs to each class, or take $ \arg \max \limits_y P(y|x1,x2)$, i.e. the class with the highest probability.
Gaussian naive Bayes (continuous, GNB)
Besides discrete data, Bayes classifiers can also handle continuous data; this variant is the "Gaussian naive Bayes classifier". It likewise assumes the predictors are mutually independent, and additionally that within each class every predictor is normally distributed. For every i, k we estimate $\mu_{i,k}$ and $\sigma_{i,k}$. Again assuming only two predictors,
then
$P(Y_k|X_1,X_2)=\frac{P(X_1|Y_k)P(X_2|Y_k)P(Y_k)}{P(X_1)P(X_2)}$
where
$P(X_1|Y_k) = \frac{1}{\sqrt{2 \pi}\,\sigma_{1,k}} \exp\left(-\frac{(X_1 - \mu_{1,k})^2}{2\sigma_{1,k}^2}\right)$
$P(X_2|Y_k) = \frac{1}{\sqrt{2 \pi}\,\sigma_{2,k}} \exp\left(-\frac{(X_2 - \mu_{2,k})^2}{2\sigma_{2,k}^2}\right)$
In practice we sometimes also assume that $\sigma_{i,k}$ does not depend on k, i.e. $P(X_i|Y_k) \sim N(\mu_{i,k}, \sigma_i)$.
Algorithm steps
As the above shows, the naive Bayes classifier only needs $P(x|y)$ for every $x=(X_{1,j_1},X_{2,j_2},...,X_{d,j_d}) \in X$ and every $Y_k \in Y$, which we obtain directly from the frequency of each feature value.
The Gaussian naive Bayes classifier needs $\mu,\sigma$ for every predictor and every $Y_k \in Y$. Taking the log of the maximum-likelihood objective shows that, for class $Y_k$ and predictor $X_1$, $\mu = mean(S), \sigma = std(S)$, where S is the set of $X_1$ values over all samples belonging to class $Y_k$.
Complexity and convergence
In the discrete case we just compute a probability for every value of every predictor and every class, $P(X_i = X_{i,j}|Y = Y_k)$, so the complexity is $O(NK\sum\limits_{i=1}^d{|Xi|})$, where N is the number of samples, K the number of classes, d the number of predictors, and |Xi| the number of possible values of predictor Xi.
In the continuous case we only need $\mu_{i,k},\sigma_{i,k}$ for every predictor and every class, so the complexity is $O(NKd)$.
Convergence here means the number of samples needed to converge to the asymptotic parameter values (the values obtained with an infinitely large sample); Ng & Jordan showed that GNB needs only $O(\log d)$ samples.
Strengths and weaknesses
Naive (Gaussian) Bayes classifiers are simple yet effective; with well-chosen predictors they perform well in many application areas, such as text categorization and medical diagnosis. Because the predictors do not interact, adding variables does not cause an explosion in complexity.
In the discrete case, however, there is an OOV (out-of-vocabulary) problem: the model is helpless when it meets a previously unseen feature value. For example, if the training set contains no sample with $X_{i,j},Y_k$, the classifier will never assign a data point exhibiting feature $X_{i,j}$ to class $Y_k$. Hence various smoothing methods exist, e.g. adding, $\forall i,j,k$, one pseudo-sample with $X = X_{i,j},Y =Y_k$.
In addition, the independence assumption is very strong and often unrealistic; each predictor is assumed to affect the outcome independently, so the method depends heavily on predictor selection. On small datasets, GNB also often predicts worse than other classifiers such as logistic regression.
Relationship to other algorithms
The parameters of discrete and continuous naive Bayes classifiers can be combined into the parameters of a logistic regression; when the independence assumption holds and there are infinitely many samples, naive Bayes converges to the same classifier as logistic regression. (https
Step1: Data acquisition
Step2: Data preprocessing
Since both the features and the label are categorical, they must be encoded as integers representing the categories before they can be used for model training.
Step3: Splitting the dataset
Step4: Training the model
Step5: Model evaluation | Python Code:
import requests
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
Explanation: What a Bayes classifier is
A Bayes classifier is a model based on Bayesian probability. It is a generative classification algorithm used for classification problems. The most common variants are the naive Bayes classifier and the Gaussian naive Bayes classifier: "naive" because it assumes the predictor variables are mutually independent, and "Gaussian" because it assumes that, within each class, every predictor follows a Gaussian (normal) distribution with its own parameters.
Bayes' formula
Bayes classifiers come from Bayes' formula, i.e. the conditional-probability formula; in the discrete case it reads $P(Y|X)=\frac{P(X|Y)*P(Y)}{P(X)}$. In practice, we use the observed frequencies of
$\forall Y_k \in Y, \vec{x} = (X_{1,j_1},X_{2,j_2},...,X_{d,j_d}) \in X$, where $X_{i,j}$ is the $j$-th value of the $i$-th predictor,
to approximate the conditional probability $P(X|Y)$ and thus compute the probability that a point with feature vector $\vec{x}$ belongs to class $Y_k$.
Naive Bayes (discrete, NB)
For the simplest naive Bayes classifier, suppose there are just two predictors X1 and X2; then "naive Bayes" is
$P(y|x1,x2)=\frac{P(x1,x2|y)P(y)}{P(x1,x2)}=\frac{P(x1|x2,y)P(x2|y)P(y)}{P(x1|x2)P(x2)}=\frac{P(x1|y)P(x2|y)P(y)}{P(x1)*P(x2)}$
where x1, x2 are particular values of the predictors X1, X2 and y is a particular value of Y.
Thus, given the values x1, x2 of X1 and X2, we obtain $P(y|x1,x2), \forall y, y\in Y$. We can either read off the probability that a data point belongs to each class, or take $ \arg \max \limits_y P(y|x1,x2)$, i.e. the class with the highest probability.
Gaussian naive Bayes (continuous, GNB)
Besides discrete data, Bayes classifiers can also handle continuous data; this variant is the "Gaussian naive Bayes classifier". It likewise assumes the predictors are mutually independent, and additionally that within each class every predictor is normally distributed. For every i, k we estimate $\mu_{i,k}$ and $\sigma_{i,k}$. Again assuming only two predictors,
then
$P(Y_k|X_1,X_2)=\frac{P(X_1|Y_k)P(X_2|Y_k)P(Y_k)}{P(X_1)P(X_2)}$
where
$P(X_1|Y_k) = \frac{1}{\sqrt{2 \pi}\,\sigma_{1,k}} \exp\left(-\frac{(X_1 - \mu_{1,k})^2}{2\sigma_{1,k}^2}\right)$
$P(X_2|Y_k) = \frac{1}{\sqrt{2 \pi}\,\sigma_{2,k}} \exp\left(-\frac{(X_2 - \mu_{2,k})^2}{2\sigma_{2,k}^2}\right)$
In practice we sometimes also assume that $\sigma_{i,k}$ does not depend on k, i.e. $P(X_i|Y_k) \sim N(\mu_{i,k}, \sigma_i)$.
Algorithm steps
As the above shows, the naive Bayes classifier only needs $P(x|y)$ for every $x=(X_{1,j_1},X_{2,j_2},...,X_{d,j_d}) \in X$ and every $Y_k \in Y$, which we obtain directly from the frequency of each feature value.
The Gaussian naive Bayes classifier needs $\mu,\sigma$ for every predictor and every $Y_k \in Y$. Taking the log of the maximum-likelihood objective shows that, for class $Y_k$ and predictor $X_1$, $\mu = mean(S), \sigma = std(S)$, where S is the set of $X_1$ values over all samples belonging to class $Y_k$.
Complexity and convergence
In the discrete case we just compute a probability for every value of every predictor and every class, $P(X_i = X_{i,j}|Y = Y_k)$, so the complexity is $O(NK\sum\limits_{i=1}^d{|Xi|})$, where N is the number of samples, K the number of classes, d the number of predictors, and |Xi| the number of possible values of predictor Xi.
In the continuous case we only need $\mu_{i,k},\sigma_{i,k}$ for every predictor and every class, so the complexity is $O(NKd)$.
Convergence here means the number of samples needed to converge to the asymptotic parameter values (the values obtained with an infinitely large sample); Ng & Jordan showed that GNB needs only $O(\log d)$ samples.
Strengths and weaknesses
Naive (Gaussian) Bayes classifiers are simple yet effective; with well-chosen predictors they perform well in many application areas, such as text categorization and medical diagnosis. Because the predictors do not interact, adding variables does not cause an explosion in complexity.
In the discrete case, however, there is an OOV (out-of-vocabulary) problem: the model is helpless when it meets a previously unseen feature value. For example, if the training set contains no sample with $X_{i,j},Y_k$, the classifier will never assign a data point exhibiting feature $X_{i,j}$ to class $Y_k$. Hence various smoothing methods exist, e.g. adding, $\forall i,j,k$, one pseudo-sample with $X = X_{i,j},Y =Y_k$.
In addition, the independence assumption is very strong and often unrealistic; each predictor is assumed to affect the outcome independently, so the method depends heavily on predictor selection. On small datasets, GNB also often predicts worse than other classifiers such as logistic regression.
Relationship to other algorithms
The parameters of discrete and continuous naive Bayes classifiers can be combined into the parameters of a logistic regression; when the independence assumption holds and there are infinitely many samples, naive Bayes converges to the same classifier as logistic regression. (https://www.cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf)
Using the relevant sklearn interfaces
sklearn provides three naive Bayes classifier interfaces:
naive_bayes.MultinomialNB([alpha, …]): multinomial naive Bayes classifier
handles discrete data
naive_bayes.BernoulliNB([alpha, binarize, …]): Bernoulli naive Bayes classifier
similar to MultinomialNB and also intended for discrete data; the difference is that MultinomialNB works with occurrence counts, whereas BernoulliNB is designed for binary/boolean features
naive_bayes.GaussianNB([priors]): Gaussian naive Bayes
handles continuous features
Example: classifying the Car Evaluation Data Set
This is a fairly classic dataset. All of its features are discrete and the label has 4 classes. We download it, do some light processing to turn it into training data, and then run the analysis.
End of explanation
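# Hedged illustration (not part of the original notebook): the explanation above
# also covers the Gaussian variant, but the code below only uses MultinomialNB.
# The iris data is an assumed stand-in for a continuous-feature dataset.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
iris = load_iris()
cont = pd.DataFrame(iris.data, columns=iris.feature_names)
cont["label"] = iris.target
# The per-class mu_{i,k} and sigma_{i,k} described in the algorithm steps above.
print(cont.groupby("label").mean())
print(cont.groupby("label").std())
gnb = GaussianNB().fit(iris.data, iris.target)
print(gnb.score(iris.data, iris.target))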
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data").text
row_name = ['buying','maint','doors','persons','lug_boot','safety','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
dataset[:10]
Explanation: Data acquisition
End of explanation
encs = {}
for i in row_name:
enc = LabelEncoder()
enc.fit(dataset[i])
dataset[i] = enc.transform(dataset[i])
encs[i]=enc
dataset[:10]
dataset.groupby("label").count()
Explanation: Data preprocessing
Since both the features and the label are categorical, they must be encoded as integers representing the categories before they can be used for model training.
End of explanation
train_set,validation_set = train_test_split(dataset)
train_set.groupby("label").count()
validation_set.groupby("label").count()
Explanation: Splitting the dataset
End of explanation
nb = MultinomialNB()
nb.fit(train_set[['buying','maint','doors','persons','lug_boot','safety']],train_set["label"])
pre = nb.predict(validation_set[['buying','maint','doors','persons','lug_boot','safety']])
Explanation: Training the model
End of explanation
print(classification_report(validation_set["label"],pre))
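# Illustrative follow-up (not in the original notebook): map the integer
# predictions back to the original class names via the stored label encoder.
print(encs["label"].inverse_transform(pre)[:10])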
Explanation: Model evaluation
End of explanation |
11,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(Run the last cell first in order to enable custom formatting)
Learning Pandas
AMCDawes
Dec 2015
Some parts of our CCDimage code would be much improved by the use of pandas. In particular, some statistics and other analysis would be very straightforward. This is a place for my notes as I learn pandas.
Step1: Our data is currently represented as a stack of 2D arrays (so a 3D "array"). In that implementation the third index is essentially a shot number and the first two are pixel row and pixel column. We'd like to make a dataframe of this data and then work with it in pandas instead of in an array. Another data type we have is the $K_p$ values (essentially the FFT output). In this case, it is a 1D array for each shot of data. The pandas version would be a dataframe with each element being a 1D array of $K_p$ values.
To test this, let's create a random dummy set that is 20 shots of 400 $K_p$ values
Step2: So we see a few nice features
Step3: Now we can start to use these data structures in the CCDimage code. It is important to note, that the values can be complex too!
Step4: slicing a dataframe
Step5: Select a few columns
Step6: Plotting
It's easy to plot using the built-in method .plot() from pandas. There are also ways to access the data and plot using matplotlib
Step7: More stats
Step8: Apply
How to change values in the columns | Python Code:
# standard imports:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# use inline plots:
%matplotlib inline
# use ggplot style:
matplotlib.style.use('ggplot')
Explanation: (Run the last cell first in order to enable custom formatting)
Learning Pandas
AMCDawes
Dec 2015
Some parts of our CCDimage code would be much improved by the use of pandas. In particular, some statistics and other analysis would be very straightforward. This is a place for my notes as I learn pandas.
End of explanation
# create an example dataframe:
df = pd.DataFrame(np.random.randn(20,400), index=np.arange(20), columns=1.23*np.arange(400))
df
Explanation: Our data is currently represented as a stack of 2D arrays (so a 3D "array"). In that implementation the third index is essentially a shot number and the first two are pixel row and pixel column. We'd like to make a dataframe of this data and then work with it in pandas instead of in an array. Another data type we have is the $K_p$ values (essentially the FFT output). In this case, it is a 1D array for each shot of data. The pandas version would be a dataframe with each element being a 1D array of $K_p$ values.
To test this, let's create a random dummy set that is 20 shots of 400 $K_p$ values:
End of explanation
# nested string list comprehension:
rows = ["r{}s{}".format(i,j) for i in np.arange(4) for j in np.arange(5)]
# could also be a generator:
row_gen = ("r{}s{}".format(i,j) for i in np.arange(4) for j in np.arange(5))
for i in row_gen:
print(i)
df2 = pd.DataFrame(np.random.randn(20,400), index=rows, columns=1.23*np.arange(400))
df2
Explanation: So we see a few nice features:
- we can label the columns with useful things, this could be the actual mode index $p$ for the columns
- we can also label rows, this could be shot number, or even an alphanumeric code for shot/run (can it be alphanumeric?)
To show this, we'll create a list of the row labels using a list comprehension. Note, this could also be done with a generator, but that is a more advanced pythonism
End of explanation
# complex example:
df3 = pd.DataFrame(np.random.randn(20,400) + 0.1*np.random.randn(20,400)*1j, index=rows, columns=1.23*np.arange(400))
df3
Explanation: Now we can start to use these data structures in the CCDimage code. It is important to note, that the values can be complex too!
End of explanation
df3["r3s1":"r3s4"]
Explanation: slicing a dataframe:
We make frequent use of array slicing in python to access portions of our data array or to stack and analyze different parts of the data. Now we'll tinker on the dataframe to find out the equivalent methods.
First, the [] operation slices by rows (can use their names):
End of explanation
df4 = df3[df3.columns[4:10]]
Explanation: Select a few columns:
End of explanation
onerow = df3.loc["r3s2"].apply(np.real) # take the real part to avoid issues plotting
onerow.plot()
# can also use the regular old plot calls from matplotlib
plt.plot(np.imag(df3.loc["r3s2"]))
plt.plot(np.real(df3.loc["r3s2"]))
plt.hexbin(np.real(df3.loc["r3s2"]),np.imag(df3.loc["r3s2"]))
Explanation: Plotting
It's easy to plot using the built-in method .plot() from pandas. There are also ways to access the data and plot using matplotlib
End of explanation
from pandas.tools.plotting import scatter_matrix
Explanation: More stats:
We can do a number of cool things with the data frame. For example, plotting different modes against each other using the scatter_matrix function.
End of explanation
# take the real part or abs of the columns (save as a different dataframes)
df4real = df4.apply(np.real)
df4abs = df4.apply(np.abs)
# the diagonal will be the kernel density estimate (kde)
scatter_matrix(df4abs, figsize=(8, 8), diagonal='kde')
# Format the notebook using style by Lorena Barba (http://lorenabarba.com/)
# run this cell first to use the nice format.
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Apply
How to change values in the columns:
End of explanation |
11,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Battleship!
A variation on the classic game. A ship of random length will be placed on a board whose size is determined by you, the player. You may also select the number of turns you would like to use to try to sink it. After each guess, the board will be updated to show your hits and misses. If you hit all of the blocks in the ship before you run out of turns, you win!
To play the game, execute the code in cells 1, 2, and 3. If you do not yet have the ipythonblocks module installed, execute cell 4 first.
The original code from the Codecademy lesson is included at the bottom so you can see how the project evolved from that.
Step1:
Step2: As an assignment
This project can be assigned after a student completes the Codecademy Python lesson and becomes familiar with the module ipythonblocks .
Open Codecademy and grab your Battleship! code from Lesson 13-18.
Copy it to a new notebook called Battleship and use ipythonblocks to make it more visual.
Use a light blue for the background, red for a hit and a different color of your choice for misses.
Once it is working | Python Code:
from random import randint
from ipythonblocks import BlockGrid
from IPython.display import clear_output
def place_ship(boardsize):
'''
Place a ship randomly on the board
of size boardsize x boardsize.
Randomly decide whether it is vertical
or horizontal, what length ship, the
placement of the first block, and check
that it will fit on the board. If not,
adjust the placement.
'''
#Select vertical or horizontal ship
shipdir = randint(0,1)
#random ship length
shiplength = randint(1,boardsize-2)
shipstart = randint(0,boardsize-1)
shipend = shipstart - 1 + shiplength
shipdiff = boardsize - 1 - shipend
if shipdiff < 0:
shipstart += shipdiff
shipend += shipdiff
#print shipdir, shiplength, shipstart, shipend, shipdiff
shipside = randint(0,boardsize-1)
ship = []
    if shipdir == 1:
for i in range(shiplength):
ship.append((shipstart+i,shipside))
else:
for i in range(shiplength):
ship.append((shipside,shipstart+i))
return ship
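# Quick illustrative check of the helper (not part of the original game flow):
# each entry of the returned list is a (row, col) block occupied by the ship.
print place_ship(6)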
BLACK = (0,0,0)
RED = (255,0,0)
ORANGE = (255,150,0)
OCEAN = (13, 123, 234)
print "Let's play Battleship!"
boardsize = input("How big should the ocean be? (Enter an integer from 4 to 20)")
nturns = input("How many shots would you like?")
board = BlockGrid(boardsize, boardsize, fill=OCEAN)
board.show()
ship = place_ship(boardsize)
nhits = 0
for turn in range(nturns):
#print ship
guess_row = input("Guess Row:")
guess_col = input("Guess Col:")
if (guess_row < 0 or guess_row >= board.height) or (guess_col < 0 or guess_col >= board.width):
clear_output()
board.show()
print "Oops, that's not even in the ocean."
else:
guessblock = board[guess_row,guess_col]
if (guess_row,guess_col) in ship:
clear_output()
board[guess_row,guess_col] = RED
board.show()
print "It's a hit!"
nhits += 1
elif (guessblock.red,guessblock.green,guessblock.blue) == BLACK or (guessblock.red,guessblock.green,guessblock.blue) == RED:
clear_output()
board.show()
print "You guessed that one already."
else:
clear_output()
board[guess_row,guess_col] = BLACK
board.show()
print "You missed my battleship!"
if nhits == len(ship):
clear_output()
board[guess_row,guess_col] = RED
board.show()
print "Congratulations! You sunk my battleship!"
break
if turn+1 == nturns:
clear_output()
#Reveal location of ship in yellow
for block in ship:
current = board[block[0],block[1]]
if (current.red,current.green,current.blue) == RED:
pass
else:
board[block[0],block[1]] = ORANGE
board.show()
print "Sorry, you are out of turns. Game Over."
else:
print "Remaining turns: ",nturns-1-turn
Explanation: Battleship!
A variation on the classic game. A ship of random length will be placed on a board whose size is determined by you, the player. You may also select the number of turns you would like to use to try to sink it. After each guess, the board will be updated to show your hits and misses. If you hit all of the blocks in the ship before you run out of turns, you win!
To play the game, execute the code in cells 1, 2, and 3. If you do not yet have the ipythonblocks module installed, execute cell 4 first.
The original code from the Codecademy lesson is included at the bottom so you can see how the project evolved from that.
End of explanation
#Upgrade the Python package installer, pip, then install ipythonblocks, if not already present.
!pip install --upgrade pip
!pip install ipythonblocks
Explanation:
End of explanation
from random import randint
board = []
for x in range(5):
board.append(["O"] * 5)
def print_board(board):
for row in board:
print " ".join(row)
print "Let's play Battleship!"
print_board(board)
def random_row(board):
return randint(0, len(board) - 1)
def random_col(board):
return randint(0, len(board[0]) - 1)
ship_row = random_row(board)
ship_col = random_col(board)
# Everything from here on should go in your for loop!
# Be sure to indent four spaces!
for turn in range(4):
guess_row = input("Guess Row:")
guess_col = input("Guess Col:")
if guess_row == ship_row and guess_col == ship_col:
print "Congratulations! You sunk my battleship!"
break
else:
if (guess_row < 0 or guess_row > 4) or (guess_col < 0 or guess_col > 4):
print "Oops, that's not even in the ocean."
elif(board[guess_row][guess_col] == "X"):
print "You guessed that one already."
else:
print "You missed my battleship!"
board[guess_row][guess_col] = "X"
# Print (turn + 1) here!
if turn == 3:
print "Ship was located at row " + str(ship_row) + ", col " + str(ship_col)
print "Game Over"
else:
print turn+1
print_board(board)
Explanation: As an assignment
This project can be assigned after a student completes the Codecademy Python lesson and becomes familiar with the module ipythonblocks .
Open Codecademy and grab your Battleship! code from Lesson 13-18.
Copy it to a new notebook called Battleship and use ipythonblocks to make it more visual.
Use a light blue for the background, red for a hit and a different color of your choice for misses.
Once it is working:
Expand it to a larger field
Make the battleship longer
Provide more tries for the player to find it and completely sink it.
If the player doesn't sink the ship before the end of the game, reveal the playing board to them so they can see where it was.
Document your code with a markdown cell that explains how to run it and what the options are.
Demonstrate the game in another cell to show that it works as expected. You do not have to provide a demo for each possible outcome. Just one demo will do.
Get creative. You could allow ships to lie diagonally, make the player say how many tries they would like to use to sink it, add more ships, etc.
Codecademy Battleship Program
End of explanation |
11,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Write a function
Step1: V-1 - used numpy to sum; soon realized numpy does not work on Codility. I have usually needed more time when working on solutions, especially with a new technique.
Step2: Solution 2
Step3: Solution 3
Step4: solution 4
Step5: Solution 5
Step6: Solution 6 - reduced the asymptotic time complexity to O(n) by maintaining running left/right sums.
Step7: Version 7 - Improving time complexity - O(n)
Step8: [Result | Python Code:
Ax = [-1, 3, -4, 5, 1, -6, 2, 1]
Explanation: Write a function:
def solution(A)
that, given a zero-indexed array A consisting of N integers, returns any of its equilibrium indices. The function should return −1 if no equilibrium index exists.
For example, given array A shown above, the function may return 1, 3 or 7, as explained above.
Assume that:
N is an integer within the range [0..100,000];
each element of array A is an integer within the range [−2,147,483,648..2,147,483,647].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
Time taken to get up to V6 - about 6 hours
End of explanation
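# Quick brute-force check of the definition (illustrative only, O(n^2); the
# solutions below are the versions actually worked through and submitted).
# Index i is an equilibrium index when sum(A[:i]) == sum(A[i+1:]).
equilibria = [i for i in range(len(Ax)) if sum(Ax[:i]) == sum(Ax[i + 1:])]
print equilibria  # expected: [1, 3, 7] for the example array above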
def solution_1(A):
addition_list = list()
list_index = 1
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
# print i, " ", int_in_arr
if int_in_arr >= -2 and int_in_arr <= 647 and type(int_in_arr) is int:
if i == 0: continue
addition_list.append(int_in_arr + addition_list[list_index - 1])
print "i: ", i, "\n"
#print A[0:i], "\n"
# print numpy.sum(A[0:i])
#print A[i + 1:], "\n"
# print addition_list
list_index += 1
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
print A
print "\n", addition_list
last_list_index = i
list_index = 1
while list_index != last_list_index:
print addition_list[last_list_index]
if A[list_index-1] == (addition_list[last_list_index] - addition_list[list_index]):
return list_index
except (ValueError, RuntimeError) as err:
print err.args
test = solution_1(Ax)
Explanation: V-1 - used numpy to sum; soon realized numpy does not work on Codility. I have usually needed more time when working on solutions, especially with a new technique.
End of explanation
def solution_2(A):
addition_list = list()
list_index = 1
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
# print i, " ", int_in_arr
if int_in_arr >= -2 and int_in_arr <= 647 and type(int_in_arr) is int:
if i == 0: continue
addition_list.append(int_in_arr + addition_list[list_index - 1])
#print "i: ", i, "\n"
#print A[0:i], "\n"
# print math.sum(A[0:i])
#print A[i + 1:], "\n"
# print addition_list
list_index += 1
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
#print A
#print "\n", addition_list
last_list_index = i
list_index = 1
while list_index != last_list_index:
print addition_list[last_list_index]
if A[list_index-1] == (addition_list[last_list_index] - addition_list[list_index]):
return list_index
except (ValueError, RuntimeError) as err:
return err
print solution_2(Ax)
Explanation: Solution 2
End of explanation
def solution_3(A):
addition_list = list()
list_index = 0
addition_list.append(A[0])
try:
if len(A) >= 0 and len(A) <= 100000:
for i, int_in_arr in enumerate(A):
if type(int_in_arr) is int:
if i == 0: continue
if sum(A[:i]) == (sum(A[:]) - sum(A[:i+1])):
return i
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_3(Ax)
Explanation: Solution 3
End of explanation
def solution_4(A):
addition_list = list()
list_index = 0
addition_list.append(A[0])
try:
if 0 == sum(A[1:]):
return 0
elif len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
elif len(A) >= 0 and len(A) <= 100000:
for i in xrange(len(A)):
#if type(int_in_arr) is int:
if i == 0: continue
if sum(A[:i]) == (sum(A[:]) - sum(A[:i+1])):
return i
else:
raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
raise ValueError("array indices out of range ", len(A))
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_4(Ax)
Explanation: solution 4
End of explanation
def solution_5(A):
try:
if 0 == sum(A[1:]):
return 0
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
elif len(A) >= 0 and len(A) <= 100000:
left_sum = A[0]
right_sum = sum(A[1:])
#print "left_sum: " , left_sum
#print "right sum: ", right_sum, "\n"
for i,val in enumerate(A):
#if type(int_in_arr) is int:
if i == 0: continue
#print A
#print "i: ", i
#print "val: ", val
#print "left sum: ", left_sum
#print "right sum: ", right_sum
#print "\n\n"
right_sum -= val
if left_sum == right_sum:
#print "found match"
return i
left_sum += val
#else:
#raise ValueError("one of the array element: integer out of range", int_in_arr)
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_5(Ax)
Explanation: Solution 5
End of explanation
def solution_6(A):
left_sum = 0
right_sum = sum(A[1:])
len_arr = len(A)
try:
if len_arr <= 1:
if len_arr == 1:
return 0
else:
return -1
if left_sum == right_sum:
return 0
#if sum(A[:-1]) == 0:
#return len_arr-1
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
if len(A) >= 0 and len(A) <= 100000:
for i,val in enumerate(A):
if i == 0: continue
right_sum -= val
left_sum += A[i-1]
if left_sum == right_sum:
return i
if i >= len_arr:
return -1
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_6(Ax)
Explanation: Solution 6 - reduced the asymptotic time complexity to O(n) by maintaining running left/right sums.
End of explanation
def solution_7(A):
left_sum = 0
right_sum = sum(A[1:])
len_arr = len(A)
try:
if len_arr <= 1:
if len_arr == 1:
return 0
else:
return -1
if left_sum == right_sum:
return 0
if sum(A[:-1]) == 0:
return len_arr-1
if len(A) == abs(sum(A[:])):
if len(A) % 2:
return len(A)/2
else:
return -1
if len(A) >= 0 and len(A) <= 100000:
left_sum = A[0]
for i,val in enumerate(A[1:]):
right_sum -= val
#left_sum += val
if left_sum == right_sum:
return i+1
left_sum +=val
if i >= len_arr:
return -1
else:
return -1
except (ValueError, RuntimeError) as err:
return err
print solution_7(Ax)
Explanation: Version 7 - Improving time complexity - O(n)
End of explanation
print "The end"
Explanation: Result: https://codility.com/demo/results/demo9VD6VE-DCH/
End of explanation |
11,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Online Learning
DavisSML
Step1: Loss
Step6: Exercise 8.2
Look at LROnline.py and determine what the decay argument is doing. Play with the arguments and see when you achieve convergence and when you do not.
Perceptron
Recall SVM for $y_i \in {-1,1}$,
$$
\min_\theta \frac 1n \sum_i (1 - y_i x_i^\top \theta)_+ + \lambda \| \theta \|^2.
$$
Then subdifferential of $(1 - y x^\top\theta)_+$ is
${- y x}$ if $1 - y x^\top \theta > 0$
$[0,-yx]$ if $1 - y x^\top \theta = 0$
${0}$ if $1 - y x^\top \theta < 0$
Choose subgradient $0$ when we can.
Perceptron
Our subgradient of $\ell(\theta; x, y) = (1 - y x^\top\theta)_+ + \lambda \| \theta \|^2$ is
$-yx + \lambda \theta$ if $1 - y x^\top \theta > 0$
$\lambda \theta$ otherwise
SGD makes update
$$
\theta \gets (1 - \lambda \eta) \theta + \eta y_t x_t 1{1 - y x^\top \theta > 0}
$$
Perceptron
Recall that as $\lambda \rightarrow 0$ the margin is more narrow, equivalent to reducing 1 in $1 - y x^\top \theta < 0$.
In the limit as $\lambda \rightarrow 0$ and with $\eta = 1$,
$$
\theta \gets \theta + y_t x_t 1{y x^\top \theta \le 0}
$$
which is Rosenblatt's perceptron.
The update for the intercept is simpler
$$
\theta_0 \gets \theta_0 + y_t 1{y x^\top \theta \le 0}
$$ | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## open wine data
wine = pd.read_csv('../../data/winequality-red.csv',delimiter=';')
Y = wine.values[:,-1]
X = wine.values[:,:-1]
n,p = X.shape
X = n**0.5 * (X - X.mean(axis=0)) / X.std(axis=0)
## Look at LROnline.py
from LROnline import *
learner = LROnline(p,loss='sqr',decay=-1.)
help(learner.update_beta) # why we do docstrings
yx_it = zip(Y,X) # iterator giving data
y,x = next(yx_it) # first datum
learner.beta, y, x # init beta, first datum
learner.update_beta(x,y) # return loss
learner.beta, y, x # new beta, first datum
losses = [learner.update_beta(x,y) for y,x in yx_it] # run online learning
plt.plot(losses)
_ = plt.title('Losses with sqr error gradient descent')
Explanation: Online Learning
DavisSML: Lecture 8
Prof. James Sharpnack
Online Learning
Data is streaming, and we need to predict the new points sequentially
For each t:
- $x_t$ is revealed
- learner predicts $\hat y_t$
- $y_t$ is revealed and loss $\ell(\hat y_t,y_t)$ is incurred
- learner updates parameters based on the experience
Naive "batch" method
For each t:
- Learner fits on ${x_i,y_i}_{i=1}^{t-1}$
- Learner predicts $\hat y_t$ from $x_t$
If complexity of fit is $O(A_t)$ time then overall takes
$$
O\left(\sum_{t=1}^T A_t\right)
$$
time, for $A_t = t$ (linear time) then $O(\sum_{t=1}^T A_t) = O(T^2)$
Recall Risk and Empirical Risk
Given a loss $\ell(\theta; X,Y)$, for parameters $\theta$, the risk is
$$
R(\theta) = \mathbb E \ell(\theta; X,Y).
$$
And given training data ${x_i,y_i}{i=1}^{n}$ (drawn iid to $X,Y$), then the empirical risk is
$$
R_n(\theta) = \frac 1n \sum{i=1}^n \ell(\theta; x_i, y_i).
$$
Notice that $\mathbb E R_n(\theta) = R(\theta)$ for fixed $\theta$.
For a class of parameters $\Theta$, the empirical risk minimizer (ERM) is the
$$
\hat \theta = \arg \min_{\theta \in \Theta} R_n(\theta)
$$
(may not be unique).
Ideal gradient descent
Suitable for uncontrained/regularized form. Risk is
$$
R(\theta) = \mathbb E \ell(\theta; X,Y).
$$
Suppose that we had access to $R(\theta)$ the true risk. Then to minimize $R$ we could do gradient descent,
$$
\theta \gets \theta - \eta \nabla R(\theta)
$$
To do this we only need access to $\nabla R(\theta)$
ERM for convex opt
Gradient for empirical risk:
$$
\nabla R_n(\theta) = \frac 1n \sum_{i=1}^n \nabla \ell(\theta; x_i, y_i)
$$
and
$$
\mathbb E \nabla \ell(\theta; x_i, y_i) = \nabla \mathbb E \ell(\theta; x_i, y_i) = \nabla R(\theta)
$$
So, gradient descent for ERM moves $\theta$ in direction of $- \nabla R_n(\theta)$
$$
\theta \gets \theta - \eta \nabla R_n(\theta)
$$
where
$$
\mathbb E \nabla R_n(\theta) = \nabla R(\theta)
$$
Minibatch gradient descent
A minibatch is a random subsample of data $(x_1,y_1), \ldots, (x_m,y_m)$ in the full training data.
Then the minibatch gradient is
$$
\nabla R_m(\theta) = \frac 1m \sum_{i=1}^m \nabla \ell(\theta; x_i, y_i)
$$
we also have that
$$
\mathbb E \nabla R_m(\theta) = \nabla R(\theta)
$$
the downside is that $R_m(\theta)$ is noisier.
Stochastic gradient descent
Assumes that $(x_t,y_t)$ are drawn iid from some population. SGD uses a minibatch size of $m=1$.
For each t:
- $x_t$ is revealed
- learner predicts $\hat y_t$ with $f_\theta$
- $y_t$ is revealed and loss $\ell(\hat y_t,y_t)$ is incurred
- learner updates parameters with update,
$$
\theta \gets \theta - \eta \nabla \ell(\theta; x_t,y_t)
$$
Loss: $$\ell(\hat y_i,y_i) = \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i \right)^2$$
Gradient: $$\frac{\partial}{\partial \beta_j} \ell(\hat y_i,y_i) = 2 \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) x_{i,j} = \delta_i x_{i,j}$$
$$\frac{\partial}{\partial \beta_0} \ell(\hat y_i,y_i) = 2 \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) = \delta_i$$
$$ \delta_i = 2 \left(\hat y_i - y_i \right)$$
Update: $$\beta \gets \beta - \eta \delta_i x_i$$
$$\beta_0 \gets \beta_0 - \eta \delta_i$$
Exercise 8.1
Suppose $t$ is drawn uniformly at random from $1,\ldots,n$. What is $\mathbb E_t \nabla \ell(\theta; x_t, y_t)$ where the expectation is taken only with respect to the random draw of $t$?
For the cell above, let $\beta, \beta_0$ be fixed. Suppose that $y_i = \beta_0^ + x_i^\top \beta^ + \epsilon_i$ where $\epsilon_i$ is zero mean and independent of $x_i$ (this is called exogeneity). What is the expected gradients for a random draw of $x_i,y_i$,
$$ \mathbb E \delta_i x_i = ?$$
$$ \mathbb E \delta_i = ?$$
Try to get these expressions as reduced as possible.
Exercise 8.1 Answers
$$ \mathbb E_t \nabla \ell(\theta; x_t, y_t) = \frac 1n \sum_{i=1}^n \nabla \ell(\theta; x_i, y_i) = \nabla R_n(\theta)$$
Because $\hat y_i = \beta_0 + \beta^\top x_i$, $$\mathbb E \delta_i = 2 \mathbb E (\beta_0 + \beta^\top x_i - y_i) = 2 (\beta - \beta^)^\top \mathbb E [x_i] + 2(\beta_0 - \beta_0^).$$
Also,
$$ \delta_i x_i = 2(\beta_0 + \beta^\top x_i - y_i) x_i = 2(\beta_0 - \beta_0^ + \beta^\top x_i - \beta^{,\top} x_i - \epsilon_i) x_i$$
So,
$$ \mathbb E \delta_i x_i = 2 \mathbb E (\beta_0 - \beta_0^ + \beta^\top x_i - \beta^{,\top} x_i) x_i + 2 \mathbb E \epsilon_i x_i = 2 \left( \mathbb E [x_i x_i^\top] (\beta - \beta^) + (\beta_0 - \beta_0^) \mathbb E [x_i] \right)$$
by the exogeneity.
End of explanation
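# Hedged sketch (not from the lecture code): one pass of minibatch gradient
# descent for the squared loss, using the wine X, Y arrays loaded above.
# The batch size and learning rate are arbitrary choices.
def minibatch_gd_sqr(X, Y, eta=0.001, batch_size=32):
    beta = np.zeros(X.shape[1])
    beta_zero = 0.
    for start in range(0, X.shape[0], batch_size):
        xb, yb = X[start:start + batch_size], Y[start:start + batch_size]
        resid = xb @ beta + beta_zero - yb
        # gradient of the mean squared loss over the minibatch
        beta -= eta * 2 * xb.T @ resid / len(yb)
        beta_zero -= eta * 2 * resid.mean()
    return beta, beta_zero

beta_mb, beta_zero_mb = minibatch_gd_sqr(X, Y)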
learner = LROnline(p,loss='abs',decay=-1.)
losses = [learner.update_beta(x,y) for y,x in zip(Y,X)]
plt.plot(losses)
_ = plt.title('Losses with abs error SGD')
Explanation: Loss: $$\ell(\hat y_i,y_i) = \left| \beta_0 + \sum_j \beta_j x_{i,j} - y_i \right|$$
(sub-)Gradient: $$\frac{\partial}{\partial \beta_j} \ell(\hat y_i,y_i) = {\rm sign} \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) x_{i,j} = \delta_i x_{i,j}$$
$$\frac{\partial}{\partial \beta_0} \ell(\hat y_i,y_i) = {\rm sign} \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) = \delta_i$$
$$ \delta_i = {\rm sign} \left(\hat y_i - y_i \right)$$
Update: $$\beta \gets \beta - \eta \delta_i x_i$$
$$\beta_0 \gets \beta_0 - \eta \delta_i$$
End of explanation
class Perceptron:
    """Rosenblatt's perceptron, online learner.

    Attributes:
        eta: learning rate
        beta: coefficient vector
        p: dimension of X
        beta_zero: intercept
    """
def __init__(self,eta,dim,
beta_init=None,beta_zero_init=None):
        """Initialize and set beta."""
self.eta = eta
self.p = dim
if beta_init:
self.beta = beta_init
else:
self.beta = np.zeros(dim)
if beta_zero_init:
self.beta_zero = beta_zero_init
else:
self.beta_zero = 0.
...
class Perceptron:
...
def predict(self,x):
        """Predict y with x."""
s = x @ self.beta + self.beta_zero
yhat = 2*(s > 0) - 1
return yhat
def update_beta(self,x,y):
        """Single-step update; returns the 0/1 loss."""
yhat = self.predict(x)
if yhat != y:
self.beta += self.eta * y * x
self.beta_zero += self.eta * y
return yhat != y
perc = Perceptron(eta=1., dim=p)  # eta = 1, matching the derivation below
loss = []
t_iter = 40
for t, (x, y) in enumerate(zip(X, Y)):
    loss.append(perc.update_beta(x, y))
Explanation: Exercise 8.2
Look at LROnline.py and determine what the decay argument is doing. Play with the arguments and see when you achieve convergence and when you do not.
Perceptron
Recall SVM for $y_i \in {-1,1}$,
$$
\min_\theta \frac 1n \sum_i (1 - y_i x_i^\top \theta)_+ + \lambda \| \theta \|^2.
$$
Then subdifferential of $(1 - y x^\top\theta)_+$ is
${- y x}$ if $1 - y x^\top \theta > 0$
$[0,-yx]$ if $1 - y x^\top \theta = 0$
${0}$ if $1 - y x^\top \theta < 0$
Choose subgradient $0$ when we can.
Perceptron
Our subgradient of $\ell(\theta; x, y) = (1 - y x^\top\theta)_+ + \lambda \| \theta \|^2$ is
$-yx + \lambda \theta$ if $1 - y x^\top \theta > 0$
$\lambda \theta$ otherwise
SGD makes update
$$
\theta \gets (1 - \lambda \eta) \theta + \eta y_t x_t 1{1 - y x^\top \theta > 0}
$$
Perceptron
Recall that as $\lambda \rightarrow 0$ the margin is more narrow, equivalent to reducing 1 in $1 - y x^\top \theta < 0$.
In the limit as $\lambda \rightarrow 0$ and with $\eta = 1$,
$$
\theta \gets \theta + y_t x_t 1{y x^\top \theta \le 0}
$$
which is Rosenblatt's perceptron.
The update for the intercept is simpler
$$
\theta_0 \gets \theta_0 + y_t 1{y x^\top \theta \le 0}
$$
End of explanation |
11,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'SibSp',["Age > 30"])
Explanation: Answer: 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'SibSp', [ "Age < 16"])
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to to examine various survival statistics.
Hint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 16 and passenger['SibSp'] < 2:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
11,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 11
Step1: Load time series data
Step2: There are a few supported file formats. AT2 files can be loaded as follows
Step3: Create site profile
A simple two-layer profile: a nonlinear (Darendeli) soil layer over an elastic rock half-space.
Step4: Create the site response calculator
Step5: Specify the output
Step6: Perform the calculation
Compute the response of the site, and store the state within the calculation object, which is then used along with the output collection to compute the desired outputs. Also, extract the computed properties for comparison.
Step7: Plot the final properties
Step8: Plot the outputs
Create a few plots of the output. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pysra
%matplotlib inline
# Increased figure sizes
plt.rcParams["figure.dpi"] = 120
Explanation: Example 11 : Time series SRA using FDM
Time series analysis producing acceleration transfer functions and spectral ratios.
End of explanation
fname = "data/NIS090.AT2"
with open(fname) as fp:
next(fp)
description = next(fp).strip()
next(fp)
parts = next(fp).split()
time_step = float(parts[1])
accels = [float(p) for l in fp for p in l.split()]
ts = pysra.motion.TimeSeriesMotion(fname, description, time_step, accels)
ts.accels
Explanation: Load time series data
End of explanation
ts = pysra.motion.TimeSeriesMotion.load_at2_file(fname)
ts.accels
fig, ax = plt.subplots()
ax.plot(ts.times, ts.accels)
ax.set(xlabel="Time (sec)", ylabel="Accel (g)")
fig.tight_layout();
Explanation: There are a few supported file formats. AT2 files can be loaded as follows:
End of explanation
profile = pysra.site.Profile(
[
pysra.site.Layer(
pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=200),
30,
400,
),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
)
Explanation: Create site profile
A simple two-layer profile: a nonlinear (Darendeli) soil layer over an elastic rock half-space.
End of explanation
calcs = [
("EQL", pysra.propagation.EquivalentLinearCalculator()),
(
"FDM (KA)",
pysra.propagation.FrequencyDependentEqlCalculator(use_smooth_spectrum=True),
),
(
"FDM (ZR)",
pysra.propagation.FrequencyDependentEqlCalculator(use_smooth_spectrum=False),
),
]
Explanation: Create the site response calculator
End of explanation
freqs = np.logspace(-1, np.log10(50.0), num=500)
outputs = pysra.output.OutputCollection(
[
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.ResponseSpectrumRatioOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.AccelTransferFunctionOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
),
pysra.output.AccelTransferFunctionOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
ko_bandwidth=30,
),
]
)
Explanation: Specify the output
End of explanation
properties = {}
for name, calc in calcs:
calc(ts, profile, profile.location("outcrop", index=-1))
outputs(calc, name)
properties[name] = {
key: getattr(profile[0], key) for key in ["shear_mod_reduc", "damping"]
}
Explanation: Perform the calculation
Compute the response of the site, and store the state within the calculation object, which is then used along with the output collection to compute the desired outputs. Also, extract the computed properties for comparison.
End of explanation
for key in properties["EQL"].keys():
fig, ax = plt.subplots()
for i, (k, p) in enumerate(properties.items()):
if k == "EQL":
ax.axhline(p[key], label=k, color=f"C{i}")
else:
ax.plot(ts.freqs, p[key], label=k, color=f"C{i}")
ax.set(
ylabel={"damping": "Damping (dec)", "shear_mod_reduc": r"$G/G_{max}$"}[key],
xlabel="Frequency (Hz)",
xscale="log",
)
ax.legend()
Explanation: Plot the final properties
End of explanation
for output in outputs:
fig, ax = plt.subplots()
for name, refs, values in output.iter_results():
ax.plot(refs, values, label=name)
ax.set(xlabel=output.xlabel, xscale="log", ylabel=output.ylabel)
ax.legend()
fig.tight_layout();
Explanation: Plot the outputs
Create a few plots of the output.
End of explanation |
11,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Raykar(RGZ)
Step1: It seems that higher values of $\alpha$ are correlated with lower values of $\beta$, and vice versa. This seems to make some intuitive sense.
Raykar-estimated $\vec \alpha$ and $\vec \beta$
Here, I retrieve the $\vec \alpha$ and $\vec \beta$ estimated by the Raykar et al. algorithm and compare to the approximated values found previously. I will average the values approximated across all splits trialled. | Python Code:
from pprint import pprint
import crowdastro.crowd.util
from crowdastro.crowd.raykar import RaykarClassifier
import crowdastro.experiment.experiment_rgz_raykar as rgzr
from crowdastro.experiment.results import Results
import crowdastro.plot
import h5py
import matplotlib.pyplot as plt
import numpy
import sklearn.metrics
%matplotlib inline
CROWDASTRO_PATH = '../data/crowdastro.h5' # Generated by the crowdastro pipeline.
RESULTS_PATH = '../data/results_rgz_raykar.h5' # Generated by crowdastro.experiment.experiment_rgz_raykar.
with h5py.File(CROWDASTRO_PATH, 'r') as crowdastro_h5:
norris_labels = crowdastro_h5['/wise/cdfs/norris_labels'].value
crowd_labels = numpy.ma.MaskedArray(
crowdastro_h5['/wise/cdfs/rgz_raw_labels'],
mask=crowdastro_h5['/wise/cdfs/rgz_raw_labels_mask'])
top_10 = rgzr.top_n_accurate_targets(crowdastro_h5, n_annotators=10)
approx_alphas = []
approx_betas = []
for t in range(top_10.shape[0]):
cm = sklearn.metrics.confusion_matrix(norris_labels[~top_10[t].mask],
top_10[t][~top_10[t].mask])
alpha = cm[1, 1] / cm.sum(axis=1)[1]
beta = cm[0, 0] / cm.sum(axis=1)[0]
approx_alphas.append(alpha)
approx_betas.append(beta)
print('approximate alpha:')
pprint(approx_alphas)
print('approximate beta:')
pprint(approx_betas)
crowdastro.plot.vertical_scatter(['$\\alpha$', '$\\beta$'], [approx_alphas, approx_betas], line=True)
plt.show()
Explanation: Raykar(RGZ): $\vec \alpha$ and $\vec \beta$
This notebook approximates the values of $\vec \alpha$ and $\vec \beta$ for crowd labellers on the Radio Galaxy Zoo galaxy classification task, and compares these to the values of $\vec \alpha$ and $\vec \beta$ estimated by the Raykar et al. algorithm.
Approximate $\vec \alpha$ and $\vec \beta$
Here, I approximate $\vec \alpha$ and $\vec \beta$ by comparing annotator accuracy to the Norris et al. label set. $\vec \alpha$ is the sensitivity, and $\vec \beta$ is the specificity.
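In terms of the confusion matrix computed above (rows indexed by the Norris label, columns by the crowd label), these quantities are just the standard sensitivity and specificity:
$$\alpha_t = \frac{\mathrm{TP}_t}{\mathrm{TP}_t + \mathrm{FN}_t}, \qquad \beta_t = \frac{\mathrm{TN}_t}{\mathrm{TN}_t + \mathrm{FP}_t},$$
which is exactly what cm[1, 1] / cm.sum(axis=1)[1] and cm[0, 0] / cm.sum(axis=1)[0] evaluate for each annotator $t$.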
End of explanation
results = Results.from_path(RESULTS_PATH)
raykar_alphas = []
raykar_betas = []
raykar_classifiers = []
for split in range(results.n_splits):
rc = results.get_model('Raykar(Top-10-accurate)', split)
rc = RaykarClassifier.unserialise(rc)
raykar_alphas.append(rc.a_)
raykar_betas.append(rc.b_)
raykar_classifiers.append(rc)
raykar_alphas = numpy.mean(raykar_alphas, axis=0)
raykar_betas = numpy.mean(raykar_betas, axis=0)
print('raykar alpha:')
pprint(list(raykar_alphas))
print('raykar beta:')
pprint(list(raykar_betas))
crowdastro.plot.vertical_scatter(['$\\alpha$', '$\\beta$'], [raykar_alphas, raykar_betas], line=True)
plt.ylim(0, 0.005)
plt.show()
Explanation: It seems that higher values of $\alpha$ are correlated with lower values of $\beta$, and vice versa. This seems to make some intuitive sense.
Raykar-estimated $\vec \alpha$ and $\vec \beta$
Here, I retrieve the $\vec \alpha$ and $\vec \beta$ estimated by the Raykar et al. algorithm and compare to the approximated values found previously. I will average the values approximated across all splits trialled.
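A direct way to compare the two sets of estimates is to scatter them against each other. This is only a sketch and assumes the annotator ordering of raykar_alphas/raykar_betas matches that of approx_alphas/approx_betas (i.e. the same top-10 annotators in the same order):
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].scatter(approx_alphas, raykar_alphas)  # sensitivity: approximate vs Raykar-estimated
axes[0].set(xlabel='approximate $\\alpha$', ylabel='Raykar $\\alpha$')
axes[1].scatter(approx_betas, raykar_betas)  # specificity: approximate vs Raykar-estimated
axes[1].set(xlabel='approximate $\\beta$', ylabel='Raykar $\\beta$')
plt.tight_layout()
plt.show()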
End of explanation |
11,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
print(test_labels)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
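As a quick numerical sanity check (the values below are purely illustrative), a raw pixel value of 128 should map to roughly 0.1 + 128 * 0.8 / 255 ≈ 0.50:
print(normalize_grayscale(np.array([0, 128, 255])))  # expected approximately [0.1, 0.5016, 0.9]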
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# Problem 2 - Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Problem 2 - Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
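In equation form, the single-layer network and loss used here are
$$\hat{y} = \operatorname{softmax}(xW + b), \qquad L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{10} y_{ik}\,\log \hat{y}_{ik},$$
with $W \in \mathbb{R}^{784 \times 10}$ and $b \in \mathbb{R}^{10}$.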
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
11,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Volatile Moves in Python and Pandas 1
In the following chart, volatile moves are highlighted as an example
Step1: Each row represents the price for a given day, namely the highest (High), the lowest (Low), the opening (Open - start of the day) and the closing (Close - end of the day). A volatile move for a given day is then visible in the chart at first glance as a markedly large candle (see chart 1). To be able to mark such candles automatically in my analytical software (Python - pandas), I define a volatile candle with a rule such as
Step2: Now I know the exact price change for each day. To be able to compare the sizes without caring whether the price fell or rose on a given day, I apply the absolute value.
Step3: Identifying the volatile bar
Volatile candles are identified using the rolling functionality and the apply function. Rolling makes it possible to split a pandas DataFrame into smaller "windows", which are passed one by one to the apply function as its argument. This means that in the code below, the is_bigger function is evaluated for every row of the data stored in spy_data. The rows parameter successively receives a slice of the data containing 4 rows (the currently evaluated row plus the 3 preceding ones). The result of is_bigger indicates whether the currently evaluated row is more volatile than the previous 4.
Step4: Which candles are more volatile than the previous 4 can be displayed with a simple selection where the VolBar column == 1. | Python Code:
import pandas as pd
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2015, 1, 1)
end = datetime.datetime(2018, 8, 31)
spy_data = web.DataReader('SPY', 'yahoo', start, end)
spy_data = spy_data.drop(['Volume', 'Adj Close'], axis=1)  # we will not need the 'Volume' and 'Adj Close' columns
spy_data.tail()
Explanation: Analysis of Volatile Moves in Python and Pandas 1
In the following chart, volatile moves are highlighted as an example:
A volatile move is one way to observe and analyse market moves that are driven by emotions. When the price of some commodity changes quickly, it triggers strong emotions in traders. If, for example, property prices rise by 100% within a year and keep trending upwards, everybody notices, and those who were hesitating to buy hurry to purchase at the current price, because if they waited the price could end up really high and they might no longer be able to afford the property. And the faster the price rises, the more people rush to buy. The question is:
"Is it better to buy along with them, or is it better to sell them the asset in question?"
How can a volatile move be identified using Python and pandas?
I will work with freely available EOD (End Of Day) data for SPY (an ETF tracking the main US stock index, the S&P 500), which can be downloaded from Yahoo Finance using pandas-datareader. An online chart is available at https://finance.yahoo.com/chart/SPY.
End of explanation
spy_data['C-O'] = spy_data['Close'] - spy_data['Open']
spy_data.tail()
Explanation: Each row represents the price for a given day, namely the highest (High), the lowest (Low), the opening (Open - start of the day) and the closing (Close - end of the day). A volatile move for a given day is then visible in the chart at first glance as a markedly large candle (see chart 1). To be able to mark such candles automatically in my analytical software (Python - pandas), I define a volatile candle with a rule such as:
The size of the price change must be greater than in the 4 previous candles
To determine this, I need to compute the size of the distance $Close-Open$ for each candle. Pandas makes this very easy:
End of explanation
spy_data['Abs(C-O)'] = spy_data['C-O'].abs()
spy_data.tail()
Explanation: Now I know the exact price change for each day. To be able to compare the sizes without caring whether the price fell or rose on a given day, I apply the absolute value.
End of explanation
def is_bigger(rows):
result = rows[-1] > rows[:-1].max()  # the last value is greater than the maximum of the preceding ones
return result
spy_data['VolBar'] = spy_data['Abs(C-O)'].rolling(4).apply(is_bigger,raw=True)
spy_data.tail(10)
Explanation: Identifying the volatile bar
Volatile candles are identified using the rolling functionality and the apply function. Rolling makes it possible to split a pandas DataFrame into smaller "windows", which are passed one by one to the apply function as its argument. This means that in the following code, the is_bigger function is evaluated for every row of the data stored in spy_data. The rows parameter successively receives a slice of the data containing 4 rows (the currently evaluated row plus the 3 preceding ones). The result of is_bigger indicates whether the currently evaluated row is more volatile than the previous 4.
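A quick convenience check of how many bars were flagged (the first 3 rows are NaN because their window is incomplete, and sum() skips them):
spy_data['VolBar'].sum()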
End of explanation
spy_data[spy_data['VolBar'] == 1].tail()
Explanation: Which candles are more volatile than the previous 4 can be displayed with a simple selection where the VolBar column == 1.
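A minimal sketch of how these bars could also be highlighted on a chart (matplotlib is an extra assumption here; it is not used elsewhere in this example):
import matplotlib.pyplot as plt
vol_days = spy_data[spy_data['VolBar'] == 1]
plt.figure(figsize=(10, 4))
plt.plot(spy_data.index, spy_data['Close'], label='SPY close')
plt.scatter(vol_days.index, vol_days['Close'], color='red', label='volatile bar')
plt.legend()
plt.show()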
End of explanation |
11,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. The questions tend to have a financial theme to them, but don't look too deeply into these tasks themselves; many of them don't hold any significance and are meaningless. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
Task #1
Given price = 300, use Python to figure out the square root of the price.
Step1: Task #2
Given the string
Step2: Task #3
Given the variables
Step3: Task #4
Given the variable of a nested dictionary with nested lists
Step4: Task #5
Given strings with this form where the last source value is always separated by two dashes --
"PRICE
Step5: Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
Step6: Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
Step7: Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float. | Python Code:
price = 300
import math
math.sqrt( price )
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. The questions tend to have a financial theme to them, but don't look too deeply into these tasks themselves; many of them don't hold any significance and are meaningless. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
Task #1
Given price = 300, use Python to figure out the square root of the price.
End of explanation
stock_index = "SP500"
stock_index[2:]
Explanation: Task #2
Given the string:
stock_index = "SP500"
Grab '500' from the string using indexing.
End of explanation
stock_index = "SP500"
price = 300
print('The {quote} is at {price} today'.format(quote=stock_index,price=price))
Explanation: Task #3
Given the variables:
stock_index = "SP500"
price = 300
Use .format() to print the following string:
The SP500 is at 300 today.
End of explanation
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
stock_info['sp500']['yesterday']
stock_info['info'][1][2]
Explanation: Task #4
Given the variable of a nested dictionary with nested lists:
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
Use indexing and key calls to grab the following items:
Yesterday's SP500 price (250)
The number 365 nested inside a list nested inside the 'info' key.
End of explanation
def source_finder(str):
index = str.find('--')
return str[index + 2:]
source_finder("PRICE:345.324:SOURCE--QUANDL")
Explanation: Task #5
Given strings with this form where the last source value is always separated by two dashes --
"PRICE:345.324:SOURCE--QUANDL"
Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL"
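As a side note, the same result can be obtained with a one-liner based on str.split, shown here only as an alternative:
"PRICE:345.324:SOURCE--QUANDL".split('--')[-1]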
End of explanation
def price_finder(str):
return str.upper().find('PRICE') != -1
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
price_finder("There are no prize is 300")
Explanation: Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
End of explanation
def count_price(str):
return str.upper().count('PRICE')
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
s = 'ANOTHER pRiCe striNG should reTURN 1'
count_price(s)
Explanation: Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
End of explanation
def avg_price(prices):
return sum(prices) / len(prices)
avg_price([3,4,5])
Explanation: Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float.
End of explanation |
11,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Datasets
We introduce several datasets used as running examples in TSA.
Step2: Dataset 1
This dataset is derived from a time series of daily GBP/USD exchange rates, $(S_t)_{t=0,1,\ldots,n}$, $n = 945$, from 1981.10.01 to 1985.06.28, both inclusive. Logarithmic (continuously compounded) daily returns were computed, scaled by 100, and the resulting time series was mean-adjusted
Step3: Dataset 2
This dataset is derived from a time series of daily closing prices of the Standard & Poor's (S&P) 500, a stock market index based on the market capitalizations of 500 large companies with common stock listed on the New York Stock Exchange (NYSE) or NASDAQ. The data was provided by Yahoo! Finance service. The closing prices were adjusted for all applicable splits and dividend distributions by Yahoo! in adherence to Center for Research in Security Prices (CRSP) standards. From the prices, $(S_t){t=0,\ldots,n}$, $n = 2022$, for the dates from 1980.01.02 to 1987.12.31, both inclusive, we obtained the time series of logarithmic (continuously compounded) daily returns, scaled by 100, and the resulting time series was mean-adjusted | Python Code:
# Copyright (c) Thalesians Ltd, 2019. All rights reserved
# Copyright (c) Paul Alexander Bilokon, 2019. All rights reserved
# Author: Paul Alexander Bilokon <[email protected]>
# Version: 1.0 (2019.04.23)
# Email: [email protected]
# Platform: Tested on Windows 10 with Python 3.6
Explanation:
End of explanation
import pandas as pd
Explanation: Datasets
We introduce several datasets used as running examples in TSA.
End of explanation
df1 = pd.read_csv('../../../../data/dataset-1.csv')
df1.head()
y1 = df1['daily_log_return'].values
Explanation: Dataset 1
This dataset is derived from a time series of daily GBP/USD exchange rates, $(S_t)_{t=0,1,\ldots,n}$, $n = 945$, from 1981.10.01 to 1985.06.28, both inclusive. Logarithmic (continuously compounded) daily returns were computed, scaled by 100, and the resulting time series was mean-adjusted:
$$X_t = 100 \cdot \left[ \ln S_t - \ln S_{t-1} - \frac{1}{n} \sum_{u=1}^n (\ln S_u - \ln S_{u-1}) \right], \quad t = 1, 2, \ldots, n.$$
This dataset has been extensively studied in the literature [HRS94, SP97, KSC98, DK00, MY00]. We obtained it as part of the course materials for [Mey10], which are publicly available for download. It is not clear which fixing was used as the daily exchange rate to generate the dataset. We attempted to reconstruct the dataset using a time series of WM/Reuters fixes and noticed significant differences. Meyer's time series was also longer than that provided by Reuters by eight points. We chose to use Meyer's dataset without modifications for the sake of reproducibility.
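For reference, the transformation above can be reproduced from a raw price series along these lines; prices here is a hypothetical NumPy array holding $S_0, \ldots, S_n$:
import numpy as np
log_returns = 100.0 * np.diff(np.log(prices))  # scaled log returns
x = log_returns - log_returns.mean()           # mean adjustment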
End of explanation
df2 = pd.read_csv('../../../../data/dataset-2.csv')
df2.head()
y2 = df2['daily_log_return'].values
Explanation: Dataset 2
This dataset is derived from a time series of daily closing prices of the Standard & Poor's (S&P) 500, a stock market index based on the market capitalizations of 500 large companies with common stock listed on the New York Stock Exchange (NYSE) or NASDAQ. The data was provided by the Yahoo! Finance service. The closing prices were adjusted for all applicable splits and dividend distributions by Yahoo! in adherence to Center for Research in Security Prices (CRSP) standards. From the prices, $(S_t)_{t=0,\ldots,n}$, $n = 2022$, for the dates from 1980.01.02 to 1987.12.31, both inclusive, we obtained the time series of logarithmic (continuously compounded) daily returns, scaled by 100, and the resulting time series was mean-adjusted:
$$X_t = 100 \cdot \left[ \ln S_t - \ln S_{t-1} - \frac{1}{n} \sum_{u=1}^n (\ln S_u - \ln S_{u-1}) \right], \quad t = 1, 2, \ldots, n.$$
This is one of the time series used in [Yu05]. We generated the time series ourselves as we didn't have access to the author's input data. The number of data points in our time series matches that reported in [Yu05, p.172]. We were also able to reproduce some of the results mentioned in that paper very closely using our time series.
End of explanation |
11,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AI Platform
Step1: Step 1
Step2: Inspect what the data looks like by looking at the first couple of rows
Step8: Step 2
Step11: The second file, called model.py, defines the input function and the model architecture. In this example, we use the tf.data API for the data pipeline and create the model using the Keras Sequential API. We define a DNN with an input layer and 3 additional layers using the ReLU activation function. Since the task is binary classification, the output layer uses the sigmoid activation.
Step14: The last file, called task.py, trains on the data loaded and preprocessed in util.py. Using the tf.distribute.MirroredStrategy() scope, it is possible to train in a distributed fashion. The trained model is then saved in the TensorFlow SavedModel format.
Step15: Step 2.2
Step16: Check if the output has been written to the output folder
Step17: Step 2.3
Step18: Check the numerical representation of the features by printing the preprocessed data
Step19: Notice that categorical fields, like occupation, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like age, have been scaled to a z-score. Some fields have been dropped from the original data.
Export the prediction input to a newline-delimited JSON file
Step20: Inspect the .json file
Step21: Step 2.4
Step22: Since the model's last layer uses a sigmoid function for its activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K").
Step 3
Step23: Step 3.1
Step24: Set the TRAIN_DATA and EVAL_DATA variables to point to the files
Step25: Use gsutil again to copy the JSON test file test.json to your Cloud Storage bucket
Step26: Set the TEST_JSON variable to point to that file
Step27: Go back to the lab instructions and check your progress by testing the completed tasks
Step28: Set an environment variable with the jobId generated above
Step29: You can monitor the progress of your training job by watching the logs on the command line by running
Step30: Set the environment variable MODEL_BINARIES to the full path of your exported trained model binaries $OUTPUT_PATH/keras_export/.
You'll deploy this trained model.
Run the following command to create a version v1 of your model
Step31: It may take several minutes to deploy your trained model. When done, you can see a list of your models using the models list command
Step32: Go back to the lab instructions and check your progress by testing the completed tasks | Python Code:
import os
Explanation: AI Platform: Qwik Start
This lab gives you an introductory, end-to-end experience of training and prediction on AI Platform. The lab will use a census dataset to:
Create a TensorFlow 2.x training application and validate it locally.
Run your training job on a single worker instance in the cloud.
Deploy a model to support prediction.
Request an online prediction and see the response.
End of explanation
%%bash
mkdir data
gsutil -m cp gs://cloud-samples-data/ml-engine/census/data/* data/
%%bash
export TRAIN_DATA=$(pwd)/data/adult.data.csv
export EVAL_DATA=$(pwd)/data/adult.test.csv
Explanation: Step 1: Get your training data
The relevant data files, adult.data and adult.test, are hosted in a public Cloud Storage bucket.
You can read the files directly from Cloud Storage or copy them to your local environment. For this lab you will download the samples for local training, and later upload them to your own Cloud Storage bucket for cloud training.
Run the following command to download the data to a local file directory and set variables that point to the downloaded data files:
End of explanation
%%bash
head data/adult.data.csv
Explanation: Inspect what the data looks like by looking at the first couple of rows:
End of explanation
%%bash
mkdir -p trainer
touch trainer/__init__.py
%%writefile trainer/util.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = (
'https://storage.googleapis.com/cloud-samples-data/ai-platform/census'
'/data')
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
# These are the features in the dataset.
# Dataset information: https://archive.ics.uci.edu/ml/datasets/census+income
_CSV_COLUMNS = [
'age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
'income_bracket'
]
# This is the label (target) we want to predict.
_LABEL_COLUMN = 'income_bracket'
# These are columns we will not use as features for training. There are many
# reasons not to use certain attributes of data for training. Perhaps their
# values are noisy or inconsistent, or perhaps they encode bias that we do not
# want our model to learn. For a deep dive into the features of this Census
# dataset and the challenges they pose, see the Introduction to ML Fairness
# Notebook: https://colab.research.google.com/github/google/eng-edu/blob
# /master/ml/cc/exercises/intro_to_fairness.ipynb
UNUSED_COLUMNS = ['fnlwgt', 'education', 'gender']
_CATEGORICAL_TYPES = {
'workclass': pd.api.types.CategoricalDtype(categories=[
'Federal-gov', 'Local-gov', 'Never-worked', 'Private', 'Self-emp-inc',
'Self-emp-not-inc', 'State-gov', 'Without-pay'
]),
'marital_status': pd.api.types.CategoricalDtype(categories=[
'Divorced', 'Married-AF-spouse', 'Married-civ-spouse',
'Married-spouse-absent', 'Never-married', 'Separated', 'Widowed'
]),
'occupation': pd.api.types.CategoricalDtype([
'Adm-clerical', 'Armed-Forces', 'Craft-repair', 'Exec-managerial',
'Farming-fishing', 'Handlers-cleaners', 'Machine-op-inspct',
'Other-service', 'Priv-house-serv', 'Prof-specialty', 'Protective-serv',
'Sales', 'Tech-support', 'Transport-moving'
]),
'relationship': pd.api.types.CategoricalDtype(categories=[
'Husband', 'Not-in-family', 'Other-relative', 'Own-child', 'Unmarried',
'Wife'
]),
'race': pd.api.types.CategoricalDtype(categories=[
'Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White'
]),
'native_country': pd.api.types.CategoricalDtype(categories=[
'Cambodia', 'Canada', 'China', 'Columbia', 'Cuba', 'Dominican-Republic',
'Ecuador', 'El-Salvador', 'England', 'France', 'Germany', 'Greece',
'Guatemala', 'Haiti', 'Holand-Netherlands', 'Honduras', 'Hong',
'Hungary',
'India', 'Iran', 'Ireland', 'Italy', 'Jamaica', 'Japan', 'Laos',
'Mexico',
'Nicaragua', 'Outlying-US(Guam-USVI-etc)', 'Peru', 'Philippines',
'Poland',
'Portugal', 'Puerto-Rico', 'Scotland', 'South', 'Taiwan', 'Thailand',
'Trinadad&Tobago', 'United-States', 'Vietnam', 'Yugoslavia'
]),
'income_bracket': pd.api.types.CategoricalDtype(categories=[
'<=50K', '>50K'
])
}
def _download_and_clean_file(filename, url):
Downloads data from url, and makes changes to match the CSV format.
The CSVs may use spaces after the comma delimters (non-standard) or include
rows which do not represent well-formed examples. This function strips out
some of these problems.
Args:
filename: filename to save url to
url: URL of resource to download
temp_file, _ = urllib.request.urlretrieve(url)
with tf.io.gfile.GFile(temp_file, 'r') as temp_file_object:
with tf.io.gfile.GFile(filename, 'w') as file_object:
for line in temp_file_object:
line = line.strip()
line = line.replace(', ', ',')
if not line or ',' not in line:
continue
if line[-1] == '.':
line = line[:-1]
line += '\n'
file_object.write(line)
tf.io.gfile.remove(temp_file)
def download(data_dir):
Downloads census data if it is not already present.
Args:
data_dir: directory where we will access/save the census data
tf.io.gfile.makedirs(data_dir)
training_file_path = os.path.join(data_dir, TRAINING_FILE)
if not tf.io.gfile.exists(training_file_path):
_download_and_clean_file(training_file_path, TRAINING_URL)
eval_file_path = os.path.join(data_dir, EVAL_FILE)
if not tf.io.gfile.exists(eval_file_path):
_download_and_clean_file(eval_file_path, EVAL_URL)
return training_file_path, eval_file_path
def preprocess(dataframe):
Converts categorical features to numeric. Removes unused columns.
Args:
dataframe: Pandas dataframe with raw data
Returns:
Dataframe with preprocessed data
dataframe = dataframe.drop(columns=UNUSED_COLUMNS)
# Convert integer valued (numeric) columns to floating point
numeric_columns = dataframe.select_dtypes(['int64']).columns
dataframe[numeric_columns] = dataframe[numeric_columns].astype('float32')
# Convert categorical columns to numeric
cat_columns = dataframe.select_dtypes(['object']).columns
dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.astype(
_CATEGORICAL_TYPES[x.name]))
dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.cat.codes)
return dataframe
def standardize(dataframe):
Scales numerical columns using their means and standard deviation to get
z-scores: the mean of each numerical column becomes 0, and the standard
deviation becomes 1. This can help the model converge during training.
Args:
dataframe: Pandas dataframe
Returns:
Input dataframe with the numerical columns scaled to z-scores
dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))
# Normalize numeric columns.
for column, dtype in dtypes:
if dtype == 'float32':
dataframe[column] -= dataframe[column].mean()
dataframe[column] /= dataframe[column].std()
return dataframe
def load_data():
Loads data into preprocessed (train_x, train_y, eval_y, eval_y)
dataframes.
Returns:
A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are
Pandas dataframes with features for training and train_y and eval_y are
numpy arrays with the corresponding labels.
# Download Census dataset: Training and eval csv files.
training_file_path, eval_file_path = download(DATA_DIR)
# This census data uses the value '?' for missing entries. We use
# na_values to
# find ? and set it to NaN.
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv
# .html
train_df = pd.read_csv(training_file_path, names=_CSV_COLUMNS,
na_values='?')
eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values='?')
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
# Split train and eval data with labels. The pop method copies and removes
# the label column from the dataframe.
train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)
eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)
# Join train_x and eval_x to normalize on overall means and standard
# deviations. Then separate them again.
all_x = pd.concat([train_x, eval_x], keys=['train', 'eval'])
all_x = standardize(all_x)
train_x, eval_x = all_x.xs('train'), all_x.xs('eval')
# Reshape label columns for use with tf.data.Dataset
train_y = np.asarray(train_y).astype('float32').reshape((-1, 1))
eval_y = np.asarray(eval_y).astype('float32').reshape((-1, 1))
return train_x, train_y, eval_x, eval_y
Explanation: Step 2: Run a local training job
A local training job loads your Python training program and starts a training process in an environment that's similar to that of a live Cloud AI Platform cloud training job.
Step 2.1: Create files to hold the Python program
To do that, let's create three files. The first, called util.py, will contain utility methods for cleaning and preprocessing the data, as well as performing any feature engineering needed by transforming and normalizing the data.
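As an optional quick check of these utilities once the files below are written (note that it downloads the census data into a temporary directory):
from trainer import util
train_x, train_y, eval_x, eval_y = util.load_data()
print(train_x.shape, train_y.shape, eval_x.shape, eval_y.shape)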
End of explanation
%%writefile trainer/model.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
def input_fn(features, labels, shuffle, num_epochs, batch_size):
Generates an input function to be used for model training.
Args:
features: numpy array of features used for training or inference
labels: numpy array of labels for each example
shuffle: boolean for whether to shuffle the data or not (set True for
training, False for evaluation)
num_epochs: number of epochs to provide the data for
batch_size: batch size for training
Returns:
A tf.data.Dataset that can provide data to the Keras model for training or
evaluation
if labels is None:
inputs = features
else:
inputs = (features, labels)
dataset = tf.data.Dataset.from_tensor_slices(inputs)
if shuffle:
dataset = dataset.shuffle(buffer_size=len(features))
# We call repeat after shuffling, rather than before, to prevent separate
# epochs from blending together.
dataset = dataset.repeat(num_epochs)
dataset = dataset.batch(batch_size)
return dataset
def create_keras_model(input_dim, learning_rate):
Creates Keras Model for Binary Classification.
The single output node + Sigmoid activation makes this a Logistic
Regression.
Args:
input_dim: How many features the input has
learning_rate: Learning rate for training
Returns:
The compiled Keras model (still needs to be trained)
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform',
input_shape=(input_dim,)),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Custom Optimizer:
# https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer
optimizer = tf.keras.optimizers.RMSprop(lr=learning_rate)
# Compile Keras model
model.compile(
loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
return model
Explanation: The second file, called model.py, defines the input function and the model architecture. In this example, we use the tf.data API for the data pipeline and create the model using the Keras Sequential API. We define a DNN with an input layer and 3 additional layers using the ReLU activation function. Since the task is binary classification, the output layer uses the sigmoid activation.
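As an optional check, the model can be instantiated on its own and inspected; input_dim=11 below is simply the number of feature columns implied by util.py (15 CSV columns minus 3 unused columns and the label), and learning_rate matches the default in task.py:
from trainer import model
keras_model = model.create_keras_model(input_dim=11, learning_rate=0.01)
keras_model.summary()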
End of explanation
%%writefile trainer/task.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os
from . import model
from . import util
import tensorflow as tf
def get_args():
Argument parser.
Returns:
Dictionary of arguments.
parser = argparse.ArgumentParser()
parser.add_argument(
'--job-dir',
type=str,
required=True,
help='local or GCS location for writing checkpoints and exporting '
'models')
parser.add_argument(
'--num-epochs',
type=int,
default=20,
help='number of times to go through the data, default=20')
parser.add_argument(
'--batch-size',
default=128,
type=int,
help='number of records to read during each training step, default=128')
parser.add_argument(
'--learning-rate',
default=.01,
type=float,
help='learning rate for gradient descent, default=.01')
parser.add_argument(
'--verbosity',
choices=['DEBUG', 'ERROR', 'FATAL', 'INFO', 'WARN'],
default='INFO')
args, _ = parser.parse_known_args()
return args
def train_and_evaluate(args):
Trains and evaluates the Keras model.
Uses the Keras model defined in model.py and trains on data loaded and
preprocessed in util.py. Saves the trained model in TensorFlow SavedModel
format to the path defined in part by the --job-dir argument.
Args:
args: dictionary of arguments - see get_args() for details
train_x, train_y, eval_x, eval_y = util.load_data()
# dimensions
num_train_examples, input_dim = train_x.shape
num_eval_examples = eval_x.shape[0]
# Create the Keras Model
keras_model = model.create_keras_model(
input_dim=input_dim, learning_rate=args.learning_rate)
# Pass a numpy array by passing DataFrame.values
training_dataset = model.input_fn(
features=train_x.values,
labels=train_y,
shuffle=True,
num_epochs=args.num_epochs,
batch_size=args.batch_size)
# Pass a numpy array by passing DataFrame.values
validation_dataset = model.input_fn(
features=eval_x.values,
labels=eval_y,
shuffle=False,
num_epochs=args.num_epochs,
batch_size=num_eval_examples)
# Setup Learning Rate decay.
lr_decay_cb = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: args.learning_rate + 0.02 * (0.5 ** (1 + epoch)),
verbose=True)
# Setup TensorBoard callback.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
os.path.join(args.job_dir, 'keras_tensorboard'),
histogram_freq=1)
# Train model
keras_model.fit(
training_dataset,
steps_per_epoch=int(num_train_examples / args.batch_size),
epochs=args.num_epochs,
validation_data=validation_dataset,
validation_steps=1,
verbose=1,
callbacks=[lr_decay_cb, tensorboard_cb])
export_path = os.path.join(args.job_dir, 'keras_export')
tf.keras.models.save_model(keras_model, export_path)
print('Model exported to: {}'.format(export_path))
if __name__ == '__main__':
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
args = get_args()
tf.compat.v1.logging.set_verbosity(args.verbosity)
train_and_evaluate(args)
Explanation: The last file, called task.py, trains on the data loaded and preprocessed in util.py. Using the tf.distribute.MirroredStrategy() scope, it is possible to train in a distributed fashion. The trained model is then saved in the TensorFlow SavedModel format.
End of explanation
%%bash
MODEL_DIR=output
gcloud ai-platform local train \
--module-name trainer.task \
--package-path trainer/ \
--job-dir $MODEL_DIR \
-- \
--train-files $TRAIN_DATA \
--eval-files $EVAL_DATA \
--train-steps 1000 \
--eval-steps 100
Explanation: Step 2.2: Run a training job locally using the Python training program
NOTE: When you run the same training job on AI Platform later in the lab, you'll see that the command is not much different from the one above.
Specify an output directory and set a MODEL_DIR variable to hold the trained model, then run the training job locally by running the following command (by default, verbose logging is turned off. You can enable it by setting the --verbosity tag to DEBUG):
End of explanation
%%bash
ls output/keras_export/
Explanation: Check if the output has been written to the output folder:
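Beyond listing the files, one optional way to confirm that the export is a valid SavedModel is to load it back:
import tensorflow as tf
reloaded = tf.keras.models.load_model('output/keras_export')
reloaded.summary()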
End of explanation
from trainer import util
_, _, eval_x, eval_y = util.load_data()
prediction_input = eval_x.sample(5)
prediction_targets = eval_y[prediction_input.index]
Explanation: Step 2.3: Prepare input for prediction
To receive valid and useful predictions, you must preprocess input for prediction in the same way that training data was preprocessed. In a production system, you may want to create a preprocessing pipeline that can be used identically at training time and prediction time.
For this exercise, use the training package's data-loading code to select a random sample from the evaluation data. This data is in the form that was used to evaluate accuracy after each epoch of training, so it can be used to send test predictions without further preprocessing.
Run the following snippet of code to preprocess the raw data from the adult.test.csv file. Here, we are grabbing 5 examples to run predictions on:
End of explanation
print(prediction_input)
Explanation: Check the numerical representation of the features by printing the preprocessed data:
End of explanation
import json
with open('test.json', 'w') as json_file:
for row in prediction_input.values.tolist():
json.dump(row, json_file)
json_file.write('\n')
Explanation: Notice that categorical fields, like occupation, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like age, have been scaled to a z-score. Some fields have been dropped from the original data.
Export the prediction input to a newline-delimited JSON file:
End of explanation
%%bash
cat test.json
Explanation: Inspect the .json file:
End of explanation
%%bash
gcloud ai-platform local predict \
--model-dir output/keras_export/ \
--json-instances ./test.json
Explanation: Step 2.4: Use your trained model for prediction
Once you've trained your TensorFlow model, you can use it for prediction on new data. In this case, you've trained a census model to predict income category given some information about a person.
Run the following command to run prediction on the test.json file we created above:
Note: If you get a "Bad magic number in .pyc file" error, go to the terminal and run:
cd ../../usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/
sudo rm *.pyc
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
PROJECT = "YOUR_PROJECT_NAME" # Replace with your project name
BUCKET_NAME=PROJECT+"-aiplatform"
REGION="us-central1"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET_NAME"] = BUCKET_NAME
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
os.environ["PYTHONVERSION"] = "3.7"
Explanation: Since the model's last layer uses a sigmoid function for its activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K").
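In other words, turning a returned probability into a class label is just a threshold at 0.5 (a small illustrative helper, not part of the lab code):
```python
def to_income_label(probability, threshold=0.5):
    # Map a sigmoid output to the income category predicted by this model.
    return '>50K' if probability >= threshold else '<=50K'

print(to_income_label(0.83))  # '>50K'
print(to_income_label(0.12))  # '<=50K'
```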
Step 3: Run your training job in the cloud
Now that you've validated your model by running it locally, you will now get practice training using Cloud AI Platform.
Note: The initial job request will take several minutes to start, but subsequent jobs run more quickly. This enables quick iteration as you develop and validate your training job.
First, set the following variables:
End of explanation
%%bash
if ! gsutil ls | grep -q gs://${BUCKET_NAME}; then
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
fi
gsutil cp -r data gs://$BUCKET_NAME/data
Explanation: Step 3.1: Set up a Cloud Storage bucket
The AI Platform services need to access Cloud Storage (GCS) to read and write data during model training and batch prediction.
Create a bucket using BUCKET_NAME as the name for the bucket and copy the data into it.
End of explanation
%%bash
export TRAIN_DATA=gs://$BUCKET_NAME/data/adult.data.csv
export EVAL_DATA=gs://$BUCKET_NAME/data/adult.test.csv
Explanation: Set the TRAIN_DATA and EVAL_DATA variables to point to the files:
End of explanation
%%bash
gsutil cp test.json gs://$BUCKET_NAME/data/test.json
Explanation: Use gsutil again to copy the JSON test file test.json to your Cloud Storage bucket:
End of explanation
%%bash
export TEST_JSON=gs://$BUCKET_NAME/data/test.json
Explanation: Set the TEST_JSON variable to point to that file:
End of explanation
%%bash
JOB_ID=census_$(date -u +%y%m%d_%H%M%S)
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID
gcloud ai-platform jobs submit training $JOB_ID \
--job-dir $OUTPUT_PATH \
--runtime-version $TFVERSION \
--python-version $PYTHONVERSION \
--module-name trainer.task \
--package-path trainer/ \
--region $REGION \
-- \
--train-files $TRAIN_DATA \
--eval-files $EVAL_DATA \
--train-steps 1000 \
--eval-steps 100 \
--verbosity DEBUG
Explanation: Go back to the lab instructions and check your progress by testing the completed tasks:
- "Set up a Google Cloud Storage".
- "Upload the data files to your Cloud Storage bucket".
Step 3.2: Run a single-instance trainer in the cloud
With a validated training job that runs in both single-instance and distributed mode, you're now ready to run a training job in the cloud. For this example, we will be requesting a single-instance training job.
Use the default BASIC scale tier to run a single-instance training job. The initial job request can take a few minutes to start, but subsequent jobs run more quickly. This enables quick iteration as you develop and validate your training job.
Select a name for the initial training run that distinguishes it from any subsequent training runs. For example, we can use date and time to compose the job id.
Specify a directory for output generated by AI Platform by setting an OUTPUT_PATH variable to include when requesting training and prediction jobs. The OUTPUT_PATH represents the fully qualified Cloud Storage location for model checkpoints, summaries, and exports. You can use the BUCKET_NAME variable you defined in a previous step. It's a good practice to use the job name as the output directory.
Run the following command to submit a training job in the cloud that uses a single process. This time, set the --verbosity tag to DEBUG so that you can inspect the full logging output and retrieve accuracy, loss, and other metrics. The output also contains a number of other warning messages that you can ignore for the purposes of this sample:
End of explanation
os.environ["JOB_ID"] = "YOUR_JOB_ID" # Replace with your job id
Explanation: Set an environment variable with the jobId generated above:
End of explanation
os.environ["MODEL_NAME"] = "census"
%%bash
gcloud ai-platform models create $MODEL_NAME --regions=$REGION
Explanation: You can monitor the progress of your training job by watching the logs on the command line by running:
gcloud ai-platform jobs stream-logs $JOB_ID
Or monitor it in the Console at AI Platform > Jobs. Wait until your AI Platform training job is done. It is finished when you see a green check mark by the jobname in the Cloud Console, or when you see the message Job completed successfully from the Cloud Shell command line.
Wait for the job to complete before proceeding to the next step.
Go back to the lab instructions and check your progress by testing the completed task:
- "Run a single-instance trainer in the cloud".
Step 3.3: Deploy your model to support prediction
By deploying your trained model to AI Platform to serve online prediction requests, you get the benefit of scalable serving. This is useful if you expect your trained model to be hit with many prediction requests in a short period of time.
Note: You will get Using endpoint [https://ml.googleapis.com/] output after running the next cells. If you try to open that link, you will see 404 error message. You have to ignore it and move forward.
Create an AI Platform model:
End of explanation
%%bash
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID
MODEL_BINARIES=$OUTPUT_PATH/keras_export/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--runtime-version $TFVERSION \
--python-version $PYTHONVERSION \
--region=global
Explanation: Set the environment variable MODEL_BINARIES to the full path of your exported trained model binaries $OUTPUT_PATH/keras_export/.
You'll deploy this trained model.
Run the following command to create a version v1 of your model:
End of explanation
%%bash
gcloud ai-platform models list --region=global
Explanation: It may take several minutes to deploy your trained model. When done, you can see a list of your models using the models list command:
End of explanation
%%bash
gcloud ai-platform predict \
--model $MODEL_NAME \
--version v1 \
--json-instances ./test.json \
--region global
Explanation: Go back to the lab instructions and check your progress by testing the completed tasks:
- "Create an AI Platform model".
- "Create a version v1 of your model".
Step 3.4: Send an online prediction request to your deployed model
You can now send prediction requests to your deployed model. The following command sends a prediction request using the test.json.
The response includes the probabilities of each label (>50K and <=50K) based on the data entry in test.json, thus indicating whether the predicted income is greater than or less than 50,000 dollars.
End of explanation |
11,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H1>Covariance and correlation</H1>
Step1: <H2>Covariance</H2>
<P>Measures how two variables vary in tandem from their means. To measure the covariance we take a variable that consists of a multidimensional vector and convert it into deviations from the mean, giving us a deviation vector for each variable. To compute the covariance between two variables we simply compute the dot product of the two deviation vectors and divide by the sample size.</P>
$\operatorname{cov} (X,Y)=\frac{1}{n}\sum_{i=1}^n (x_i-E(X))(y_i-E(Y)).$
Step2: <P>Covariance is hard to quantify on its own, because a small covariance (close to zero) means that there is not much of a relationship between the variables, but the raw value does not tell us how strong a relationship is. This is where correlation comes in.
</P>
<H2>Correlation</H2>
<P>
Because correlation simply divides the covariance by the standard deviations of both variables, it normalizes the measure.
This means that a correlation of 1 is a perfect positive correlation, a correlation of -1 is a perfect inverse correlation, and zero means
no correlation at all.</P>
Remember that correlation does not imply causation!
Step3: Covariance is sensitive to the units used in the variables, which makes it difficult to interpret. Correlation normalizes everything by their standard deviations, giving you an easier to understand value that ranges from -1 (for a perfect inverse correlation) to 1 (for a perfect positive correlation) | Python Code:
%pylab inline
Explanation: <H1>Covariance and correlation</H1>
End of explanation
# generate two random variables
x = np.random.normal(3.0, 1.0, 1000)
y = np.random.normal(50.0, 10.0, 1000)
# calculate the deviation-from-the-mean vectors
x_var = [i - x.mean() for i in x]
y_var = [i - y.mean() for i in y]
# compute the dot product of the two deviation vectors
# n-1 because we're going for sample covariances
# we would use n if we want to compute the covariance of the population
n = len(x)
print np.dot(x_var,y_var)/( n-1 ) # np.cov(x,y)
# plot both variables
plt.scatter(x,y, color='gray');plt.xlabel('X'); plt.ylabel('Y')
Explanation: <H2>Covariance</H2>
<P>Measures how two variables vary in tandem from their means. To measure the covariance we take a variable that consists of a multidimensional vector and convert it into deviations from the mean, giving us a deviation vector for each variable. To compute the covariance between two variables we simply compute the dot product of the two deviation vectors and divide by the sample size.</P>
$\operatorname{cov} (X,Y)=\frac{1}{n}\sum_{i=1}^n (x_i-E(X))(y_i-E(Y)).$
End of explanation
y = np.random.normal(50.0, 10.0, 1000)/x
plt.scatter(x,y, color='gray'); plt.xlabel('X'); plt.ylabel('Y');
Explanation: <P>Covariance is hard to quantify on its own, because a small covariance (close to zero) means that there is not much of a relationship between the variables, but the raw value does not tell us how strong a relationship is. This is where correlation comes in.
</P>
<H2>Correlation</H2>
<P>
Because correlation simply divides the covariance by the standard deviations of both variables, it normalizes the measure.
This means that a correlation of 1 is a perfect positive correlation, a correlation of -1 is a perfect inverse correlation, and zero means
no correlation at all.</P>
Remember that correlation does not imply causation!
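For reference, the normalization described above can be written as:
$\operatorname{corr}(X,Y)=\frac{\operatorname{cov}(X,Y)}{\sigma_X \, \sigma_Y}$
which is exactly what dividing the sample covariance by the two standard deviations (covar/x.std()/y.std()) computes in the cells below.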
End of explanation
# to calculate the correlation we need to compute the covariance
x_var = [i-x.mean() for i in x]
y_var = [i-y.mean() for i in y]
covar = (np.dot(x_var,y_var))/(len(x)-1)
covar/x.std()/y.std()
# to calculate with NumPy
np.corrcoef(x,y)
np.corrcoef(y,x)
Explanation: Covariance is sensitive to the units used in the variables, which makes it difficult to interpret. Correlation normalizes everything by their standard deviations, giving you an easier to understand value that ranges from -1 (for a perfect inverse correlation) to 1 (for a perfect positive correlation):
End of explanation |
11,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
jQAssistant Demos
Start the Neo4j server
mvn jqassistant:server
Step1: List the class with the most methods
Step2: Static variables that are written to
Step3: Aggregating measurement results across functional domains
Step4: Umfassende Aggregation von Messergebnissen über fachliche Bereiche | Python Code:
%load_ext cypher
Explanation: jQAssistant Demos
Start the Neo4j server
mvn jqassistant:server
Open the browser
http://localhost:7474/browser/
Open the drawer
Click through the labels
Commit
Class
:DECLARES
jQAssistant documentation: http://buschmais.github.io/jqassistant/doc/1.3.0/#_java_plugin
Example queries
Setup
Load the Cypher extension for Jupyter
End of explanation
%%cypher
MATCH
(t:Type)-[:DECLARES]->(m:Method)
RETURN t.fqn as Typ, COUNT(m) as Methoden
ORDER BY Methoden DESC
Explanation: List the class with the most methods
End of explanation
%%cypher
MATCH (c:Class)-[:DECLARES]->(f:Field)<-[w:WRITES]-(m:Method)
WHERE
EXISTS(f.static) AND NOT EXISTS(f.final)
RETURN
c.name as InClass,
m.name as theMethod,
w.lineNumber as writesInLine,
f.name as toStaticField
Explanation: Static variables that are written to
End of explanation
%%cypher
MATCH
(t:Type)-[:BELONGS_TO]->(s:Subdomain),
(t)-[:HAS_CHANGE]->(ch:Change)
RETURN
s.name as ASubdomain,
COUNT(DISTINCT t) as Types,
COUNT(DISTINCT ch) as Changes
ORDER BY Types DESC
Explanation: Aggregating measurement results across functional domains
End of explanation
%%cypher
MATCH
(t:Type)-[:BELONGS_TO]->(s:Subdomain),
(t)-[:HAS_CHANGE]->(ch:Change),
(t)-[:HAS_MEASURE]->(co:Coverage)
OPTIONAL MATCH
(t)-[:HAS_BUG]->(b:BugInstance)
RETURN
s.name as ASubdomain,
COUNT(DISTINCT t) as Types,
COUNT(DISTINCT ch) as Changes,
AVG(co.ratio) as Coverage,
COUNT(DISTINCT b) as Bugs,
SUM(DISTINCT t.lastMethodLineNumber) as Lines
ORDER BY Coverage ASC, Bugs DESC
Explanation: Comprehensive aggregation of measurement results across functional domains
End of explanation |
11,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FD_1D_DX4_DT2_fast 1-D acoustic Finite-Difference modelling
GNU General Public License v3.0
Author
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms | Python Code:
%matplotlib inline
import numpy as np
import time as tm
import matplotlib.pyplot as plt
Explanation: FD_1D_DX4_DT2_fast 1-D acoustic Finite-Difference modelling
GNU General Public License v3.0
Author: Florian Wittkamp
Finite-Difference acoustic seismic wave simulation
Discretization of the first-order acoustic wave equation
Temporal second-order accuracy $O(\Delta T^2)$
Spatial fourth-order accuracy $O(\Delta X^4)$
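For reference, the coupled first-order system that is discretized below (pressure $p$, particle velocity $v_x$, density $\rho$, and first Lamé parameter $\lambda=\rho c^2$, as implemented in the update loop) is:
$$\rho\,\frac{\partial v_x}{\partial t} = -\frac{\partial p}{\partial x}, \qquad \frac{\partial p}{\partial t} = -\lambda\,\frac{\partial v_x}{\partial x}$$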
Initialisation
End of explanation
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position receiver 1 (in grid points)
xrec2=800 # Position receiver 2 (in grid points)
xrec3=1800 # Position receiver 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
Explanation: Input Parameter
End of explanation
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
# Calculate the first Lamé parameter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
Explanation: Preparation
End of explanation
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
Explanation: Create space and time vector
End of explanation
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
Explanation: Source signal - Ricker-wavelet
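The wavelet computed in the cell above follows the usual Ricker definition:
$$\tau=\pi f_0\left(t-\frac{1.5}{f_0}\right), \qquad q(t)=q_0\left(1-2\tau^2\right)e^{-\tau^2}$$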
End of explanation
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
kx=np.arange(5,nx-4)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Calculating spatial derivative
p_x=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*p_x
# Calculating spatial derivative
vx_x= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(vx_x);
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
Explanation: Time stepping
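The spatial derivatives inside the loop use the fourth-order staggered-grid stencil, e.g. for the pressure:
$$\left.\frac{\partial p}{\partial x}\right|_{i+1/2}\approx\frac{1}{\Delta x}\left[\frac{9}{8}\left(p_{i+1}-p_{i}\right)-\frac{1}{24}\left(p_{i+2}-p_{i-1}\right)\right]$$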
End of explanation
## Save seismograms
np.save("Seismograms/FD_1D_DX4_DT2_fast",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
Explanation: Save seismograms
End of explanation |
11,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big-graph generation
In this demo, we will verify that our big-graph generation code is functioning properly on a small portion of a real DWI dataset that we can manually verify very easily.
Logic
The logic of this function is essentially the same as that of the downsampling code written originally by Disa and Greg. The primary difference here is that, instead of looking for the ROI indices a particular voxel is part of, we instead this time define each voxel as its own index, and independently count streamlines for that particular voxel based on the Morton index of a given position.
Advantages
This approach has the advantage that it is purely data-derived, and the graph we end up with will be totally invertible since the only way a voxel ends up as part of the graph is by a streamline existing. By definition of a streamline, a streamline must be between two or more voxels, so then each point will be connected to some other point.
Disadvantages
This approach has the disadvantage that our ultimate graph will only have vertices if there exists a streamline linking the corresponding voxel of a given vertex to some other vertex. This means that small registration differences between subjects may lead to different vertex counts and different Morton indices corresponding to the same anatomical region due to that anatomical region being shifted by a voxel or two.
We begin by running Greg's small demo
Step1: The approach we will take is to take 2 fibers from our graph and verify that we end up with the appropriate voxels in our streamlines being connected
Step2: First, we should check to see that our graph ends up with the right number of vertices. We begin by looking at the floored values of the above voxel positions, since our image resolution is at 1mm scale
Step3: and we see that there are 8 unique possible vertices, defining a vertex as a unique point in 3-dimensional space at 1mm resolution. We then can check out the number of unique vertices in our corresponding graph
Step4: We check that the voxel ids are the same
Step5: Indicating that our vertex indices appear to be correct. Let's check our streamlines to verify that the vertices each streamline is incident to are fully connected (and consequently have nonzero edge weight) in our resulting graph
Step6: Since we don't get any errors here, it is clear that every element that is in our graph should, in fact, be there. Using set notation, what we have shown is that | Python Code:
import ndmg
import ndmg.utils as mgu
# run small demo for experiments
print(mgu.execute_cmd('ndmg_demo-dwi', verb=True)[0])
Explanation: Big-graph generation
In this demo, we will verify that our big-graph generation code is functioning properly on a small portion of a real DWI dataset that we can manually verify very easily.
Logic
The logic of this function is essentially the same as that of the downsampling code written originally by Disa and Greg. The primary difference here is that, instead of looking for the ROI indices a particular voxel is part of, we instead this time define each voxel as its own index, and independently count streamlines for that particular voxel based on the Morton index of a given position.
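As a rough illustration of what a Morton (Z-order) index is (this shows the general idea only, not the ndmg XYZMorton implementation), the bits of the three voxel coordinates are interleaved into a single integer:
```python
def morton3d(x, y, z, bits=10):
    # Interleave the bits of integer voxel coordinates (x, y, z)
    # into a single Z-order / Morton index.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

print(morton3d(1, 0, 0), morton3d(0, 1, 0), morton3d(0, 0, 1))  # 1 2 4
```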
Advantages
This approach has the advantage that it is purely data-derived, and the graph we end up with will be totally invertible since the only way a voxel ends up as part of the graph is by a streamline existing. By definition of a streamline, a streamline must be between two or more voxels, so then each point will be connected to some other point.
Disadvantages
This approach has the disadvantage that our ultimate graph will only have vertices if there exists a streamline linking the corresponding voxel of a given vertex to some other vertex. This means that small registration differences between subjects may lead to different vertex counts and different Morton indices corresponding to the same anatomical region due to that anatomical region being shifted by a voxel or two.
We begin by running Greg's small demo:
End of explanation
import numpy as np
fibs = np.load('/tmp/small_demo/outputs/fibers/KKI2009_113_1_DTI_s4_fibers.npz')['arr_0']
small_fibs = fibs[1:3]
from ndmg.graph import biggraph as mgg
from ndmg.graph.zindex import XYZMorton
g1 = mgg()
g1.make_graph(small_fibs)
import networkx as nx
gra = nx.Graph()
gra.add_weighted_edges_from(g1.edge_list)
Explanation: The approach we will take is to take 2 fibers from our graph and verify that we end up with the appropriate voxels in our streamlines being connected:
End of explanation
poss_vertices = set() # use a set since we want unique elements
streamlines = []
for stream in small_fibs:
vertices = set()
for vertex in stream:
mid = str(XYZMorton(tuple(np.round(vertex)))) # morton index for vertex
vertices.add(mid)
poss_vertices.add(mid)
streamlines.append(vertices)
print(len(poss_vertices))
Explanation: First, we should check to see that our graph ends up with the right number of vertices. We begin by looking at the floored values of the above voxel positions, since our image resolution is at 1mm scale:
End of explanation
print(len(gra.nodes()))
Explanation: and we see that there are 8 unique possible vertices, defining a vertex as a unique point in 3-dimensional space at 1mm resolution. We then can check out the number of unique vertices in our corresponding graph:
End of explanation
print(poss_vertices == set(gra.nodes()))
Explanation: We check that the voxel ids are the same:
End of explanation
from itertools import combinations
edgect = 0 # count the number of edges we should have
for stream in streamlines:
combns = combinations(stream, 2) # stream is a list of vertices
for comb in combns:
edgect += 1
if gra.get_edge_data(*comb) is None: # get_edge_data returns None when the edge is missing
raise ValueError('Edge should exist that isnt in the graph!')
Explanation: Indicating that our vertex indices appear to be correct. Let's check our streamlines to verify that the vertices each streamline is incident to are fully connected (and consequently have nonzero edge weight) in our resulting graph:
End of explanation
print(edgect == .5*nx.to_numpy_matrix(gra).sum()) # halve the adjacency-matrix sum because an
# undirected graph's adjacency matrix counts each edge weight twice (once per direction)
Explanation: Since we don't get any errors here, it is clear that every element that is in our graph should, in fact, be there. Using set notation, what we have shown is that:
\begin{align}
A \subseteq B
\end{align}
where $A$ is the set of edges that we expect to have, and $B$ is the set of edges that actually exist in our resulting graph. However, we also want to show that:
\begin{align}
B \subseteq A
\end{align}
so that we can conclude that $B = A$, or that our graph exactly matches the result we expect to end up with. To do this, we can simply check that the edges of $A$ are the only edges in $B$:
End of explanation |
11,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Case Study - Text classification for SMS spam detection
We first load the text data from the dataset directory that should be located in your notebooks directory, which we created by running the fetch_data.py script from the top level of the GitHub repository.
Furthermore, we perform some simple preprocessing and split the data array into two parts
Step1: Next, we split our dataset into 2 parts, the test and training dataset
Step2: Now, we use the CountVectorizer to parse the text data into a bag-of-words model.
Step3: Training a Classifier on Text Features
We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks
Step4: We can now evaluate the classifier on the testing set. Let's first use the built-in score function, which is the rate of correct classification in the test set
Step5: We can also compute the score on the training set to see how well we do there
Step6: Visualizing important features
Step7: <img src="figures/supervised_scikit_learn.png" width="100%">
<div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [int(x[0] == "spam") for x in lines]
text[:10]
y[:10]
print('Number of ham and spam messages:', np.bincount(y))
type(text)
type(y)
Explanation: Case Study - Text classification for SMS spam detection
We first load the text data from the dataset directory that should be located in your notebooks directory, which we created by running the fetch_data.py script from the top level of the GitHub repository.
Furthermore, we perform some simple preprocessing and split the data array into two parts:
text: A list of strings, each containing the raw text of one SMS message
y: our SPAM vs HAM labels stored in binary; a 1 represents a spam message, and a 0 represents a ham (non-spam) message.
End of explanation
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y,
random_state=42,
test_size=0.25,
stratify=y)
Explanation: Next, we split our dataset into 2 parts, the test and training dataset:
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
print('CountVectorizer defaults')
CountVectorizer()
vectorizer = CountVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
print(len(vectorizer.vocabulary_))
X_train.shape
print(vectorizer.get_feature_names()[:20])
print(vectorizer.get_feature_names()[2000:2020])
print(X_train.shape)
print(X_test.shape)
Explanation: Now, we use the CountVectorizer to parse the text data into a bag-of-words model.
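To make the bag-of-words idea concrete, here is a tiny toy example (unrelated to the SMS data) showing how two short documents become a term-count matrix:
```python
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["spam spam ham", "ham and eggs"]
toy_vec = CountVectorizer()
toy_counts = toy_vec.fit_transform(toy_docs)

print(toy_vec.get_feature_names())  # ['and', 'eggs', 'ham', 'spam']
print(toy_counts.toarray())         # [[0 0 1 2]
                                    #  [1 1 1 0]]
```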
End of explanation
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf
clf.fit(X_train, y_train)
Explanation: Training a Classifier on Text Features
We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks:
End of explanation
clf.score(X_test, y_test)
Explanation: We can now evaluate the classifier on the testing set. Let's first use the built-in score function, which is the rate of correct classification in the test set:
End of explanation
clf.score(X_train, y_train)
Explanation: We can also compute the score on the training set to see how well we do there:
End of explanation
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 2 * n_top_features + 1), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(clf, vectorizer.get_feature_names())
vectorizer = CountVectorizer(min_df=2)
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
len(vectorizer.get_feature_names())
print(vectorizer.get_feature_names()[:20])
visualize_coefficients(clf, vectorizer.get_feature_names())
Explanation: Visualizing important features
End of explanation
# %load solutions/12A_tfidf.py
# %load solutions/12B_vectorizer_params.py
Explanation: <img src="figures/supervised_scikit_learn.png" width="100%">
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Use TfidfVectorizer instead of CountVectorizer. Are the results better? How are the coefficients different?
</li>
<li>
Change the parameters min_df and ngram_range of the TfidfVectorizer and CountVectorizer. How does that change the important features?
</li>
</ul>
</div>
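A minimal starting point for the first exercise (a sketch only — not the contents of the solutions files loaded above; it reuses text_train, y_train and the visualize_coefficients helper defined earlier) might be:
```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer()
X_train_tfidf = tfidf.fit_transform(text_train)
X_test_tfidf = tfidf.transform(text_test)

clf_tfidf = LogisticRegression().fit(X_train_tfidf, y_train)
print(clf_tfidf.score(X_test_tfidf, y_test))
visualize_coefficients(clf_tfidf, tfidf.get_feature_names())
```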
End of explanation |
11,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating Models with TensorFlow and PyTorch
In the tutorials so far, we have used standard models provided by DeepChem. This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself. DeepChem provides integration with both TensorFlow (Keras) and PyTorch, so you can use it with models from either of these frameworks.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Step1: There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem. It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model. For the former case, DeepChem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow.data.Dataset object that iterates over the data. make_pytorch_dataset() returns a torch.utils.data.IterableDataset that iterates over the data. This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
But DeepChem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a DeepChem Model object. Let's look at how to do that.
KerasModel
KerasModel is a subclass of DeepChem's Model class. It acts as a wrapper around a tensorflow.keras.Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers.
Step2: For this example, we used the Keras Sequential class. Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss. We can now train and evaluate the model exactly as we would with any other DeepChem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?
Step3: TorchModel
TorchModel works just like KerasModel, except it wraps a torch.nn.Module. Let's use PyTorch to create another model just like the previous one and train it on the same data.
Step4: Computing Losses
Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.
To do this, we create a model that returns multiple outputs, both probabilities and logits. KerasModel and TorchModel let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs.
Sequential models do not allow multiple outputs, so instead we use a subclassing style model.
Step5: We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1.
Step6: Other Features
KerasModel and TorchModel have lots of other features. Here are some of the more important ones.
Automatically saving checkpoints during training.
Logging progress to the console, to TensorBoard, or to Weights & Biases.
Custom loss functions that you define with a function of the form f(outputs, labels, weights).
Early stopping using the ValidationCallback class.
Loading parameters from pre-trained models.
Estimating uncertainty in model outputs.
Identifying important features through saliency mapping.
By wrapping your own models in a KerasModel or TorchModel, you get immediate access to all these features. See the API documentation for full details on them.
Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways | Python Code:
!pip install --pre deepchem
Explanation: Creating Models with TensorFlow and PyTorch
In the tutorials so far, we have used standard models provided by DeepChem. This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself. DeepChem provides integration with both TensorFlow (Keras) and PyTorch, so you can use it with models from either of these frameworks.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
import deepchem as dc
import tensorflow as tf
keras_model = tf.keras.Sequential([
tf.keras.layers.Dense(1000, activation='relu'),
tf.keras.layers.Dropout(rate=0.5),
tf.keras.layers.Dense(1)
])
model = dc.models.KerasModel(keras_model, dc.models.losses.L2Loss())
Explanation: There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem. It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model. For the former case, DeepChem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow.data.Dataset object that iterates over the data. make_pytorch_dataset() returns a torch.utils.data.IterableDataset that iterates over the data. This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
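A rough sketch of the first approach (assuming a DeepChem dataset such as the train_dataset created below; the exact batch structure returned may vary between DeepChem versions):
```python
# Iterate over a DeepChem Dataset with the regular tf.data API.
tf_dataset = train_dataset.make_tf_dataset(batch_size=100)
for X_batch, y_batch, w_batch in tf_dataset:
    # Feed the batches to your own tf.keras / tf.GradientTape training loop.
    pass
```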
But DeepChem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a DeepChem Model object. Let's look at how to do that.
KerasModel
KerasModel is a subclass of DeepChem's Model class. It acts as a wrapper around a tensorflow.keras.Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers.
End of explanation
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=50)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: For this example, we used the Keras Sequential class. Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss. We can now train and evaluate the model exactly as we would with any other DeepChem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?
End of explanation
import torch
pytorch_model = torch.nn.Sequential(
torch.nn.Linear(1024, 1000),
torch.nn.ReLU(),
torch.nn.Dropout(0.5),
torch.nn.Linear(1000, 1)
)
model = dc.models.TorchModel(pytorch_model, dc.models.losses.L2Loss())
model.fit(train_dataset, nb_epoch=50)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: TorchModel
TorchModel works just like KerasModel, except it wraps a torch.nn.Module. Let's use PyTorch to create another model just like the previous one and train it on the same data.
End of explanation
class ClassificationModel(tf.keras.Model):
def __init__(self):
super(ClassificationModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(1000, activation='relu')
self.dense2 = tf.keras.layers.Dense(1)
def call(self, inputs, training=False):
y = self.dense1(inputs)
if training:
y = tf.nn.dropout(y, 0.5)
logits = self.dense2(y)
output = tf.nn.sigmoid(logits)
return output, logits
keras_model = ClassificationModel()
output_types = ['prediction', 'loss']
model = dc.models.KerasModel(keras_model, dc.models.losses.SigmoidCrossEntropy(), output_types=output_types)
Explanation: Computing Losses
Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.
To do this, we create a model that returns multiple outputs, both probabilities and logits. KerasModel and TorchModel let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs.
Sequential models do not allow multiple outputs, so instead we use a subclassing style model.
End of explanation
tasks, datasets, transformers = dc.molnet.load_bace_classification(featurizer='ECFP', splitter='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
Explanation: We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1.
End of explanation
@manual{Intro1,
title={5},
organization={DeepChem},
author={Ramsundar, Bharath},
howpublished = {\url{https://github.com/deepchem/deepchem/blob/master/examples/tutorials/Creating_Models_with_TensorFlow_and_PyTorch.ipynb}},
year={2021},
}
Explanation: Other Features
KerasModel and TorchModel have lots of other features. Here are some of the more important ones.
Automatically saving checkpoints during training.
Logging progress to the console, to TensorBoard, or to Weights & Biases.
Custom loss functions that you define with a function of the form f(outputs, labels, weights).
Early stopping using the ValidationCallback class.
Loading parameters from pre-trained models.
Estimating uncertainty in model outputs.
Identifying important features through saliency mapping.
By wrapping your own models in a KerasModel or TorchModel, you get immediate access to all these features. See the API documentation for full details on them.
Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
Star DeepChem on GitHub
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
Join the DeepChem Gitter
The DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
Citing This Tutorial
If you found this tutorial useful please consider citing it using the provided BibTeX.
End of explanation |
11,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython
IPython (Interactive Python) is an enhanced Python shell which provides a more robust and productive development environment for users. There are several key features that set it apart from the standard Python shell.
Interactive data analysis and visualization
Python kernel for Jupyter notebooks
Easy parallel computation
History
In IPython, all your inputs and outputs are saved. There are two variables named In and Out which are assigned as you work with your results. All outputs are saved automatically to variables of the form _N, where N is the prompt number, and inputs to _iN. This allows you to recover quickly the result of a prior computation by referring to its number even if you forgot to store it as a variable.
Step1: Introspection
If you want details regarding the properties and functionality of any Python objects currently loaded into IPython, you can use the ? to reveal any details that are available
Step2: If available, additional detail is provided with two question marks, including the source code of the object itself.
Step3: This syntax can also be used to search namespaces with wildcards (*).
Step4: Tab completion
Because IPython allows for introspection, it is able to afford the user the ability to tab-complete commands that have been partially typed. This is done by pressing the <tab> key at any point during the process of typing a command
Step5: This can even be used to help with specifying arguments to functions, which can sometimes be difficult to remember
Step6: System commands
In IPython, you can type ls to see your files or cd to change directories, just like you would at a regular system prompt
Step7: Virtually any system command can be accessed by prepending !, which passes any subsequent command directly to the OS.
Step8: You can even use Python variables in commands sent to the OS
Step9: The output of a system command using the exclamation point syntax can be assigned to a Python variable.
Step10: Qt Console
If you type at the system prompt
Step11: The notebook lets you document your workflow using either HTML or Markdown.
The Jupyter Notebook consists of two related components
Step13: Markdown cells
Markdown is a simple markup language that allows plain text to be converted into HTML.
The advantages of using Markdown over HTML (and LaTeX)
Step14: Mathjax Support
MathJax is a JavaScript implementation of LaTeX that allows equations to be embedded into HTML. For example, this markup
Step16: Magic functions
IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. These include
Step17: Timing the execution of code; the timeit magic exists both in line and cell form
Step18: IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc.
These are all equivalent to %%script <name>
Step19: IPython has an rmagic extension that contains a some magic functions for working with R via rpy2. This extension can be loaded using the %load_ext magic as follows
Step20: If the above generates an error, it is likely that you do not have the rpy2 module installed. You can install this now via
Step21: or, if you are running Anaconda, via conda
Step22: Debugging
The %debug magic can be used to trigger the IPython debugger (ipdb) for a cell that raises an exception. The debugger allows you to step through code line-by-line and inspect variables and execute code.
Step23: Exporting and Converting Notebooks
In Jupyter, one can convert an .ipynb notebook document file into various static formats via the nbconvert tool. Currently, nbconvert is a command line tool, run as a script using Jupyter.
Step24: Currently, nbconvert supports HTML (default), LaTeX, Markdown, reStructuredText, Python and HTML5 slides for presentations. Some types can be post-processed, such as LaTeX to PDF (this requires Pandoc to be installed, however).
Step25: A very useful online service is the IPython Notebook Viewer which allows you to display your notebook as a static HTML page, which is useful for sharing with others
Step26: As of this year, GitHub supports the rendering of Jupyter Notebooks stored on its repositories.
Reproducible Research
reproducing conclusions from a single experiment based on the measurements from that experiment
The most basic form of reproducibility is a complete description of the data and associated analyses (including code!) so the results can be exactly reproduced by others.
Reproducing calculations can be onerous, even with one's own work!
Scientific data are becoming larger and more complex, making simple descriptions inadequate for reproducibility. As a result, most modern research is irreproducible without tremendous effort.
Reproducible research is not yet part of the culture of science in general, or scientific computing in particular.
Scientific Computing Workflow
There are a number of steps to scientific endeavors that involve computing
Step27: Let's now consider a useful function that we might want to run in parallel. Here is a version of the approximate Bayesian computing (ABC) algorithm.
Step28: Let's try running this on one of the cluster engines
Step29: This fails with a NameError because NumPy has not been imported on the engine to which we sent the task. Each engine has its own namespace, so we need to import whatever modules we will need prior to running our code
Step30: An easier approach is to use the parallel cell magic to import everywhere
Step31: This magic can be used to execute the same code on all nodes. | Python Code:
import numpy as np
np.sin(4)**2
_1
_i1
_1 / 4.
Explanation: IPython
IPython (Interactive Python) is an enhanced Python shell which provides a more robust and productive development environment for users. There are several key features that set it apart from the standard Python shell.
Interactive data analysis and visualization
Python kernel for Jupyter notebooks
Easy parallel computation
History
In IPython, all your inputs and outputs are saved. There are two variables named In and Out which are assigned as you work with your results. All outputs are saved automatically to variables of the form _N, where N is the prompt number, and inputs to _iN. This allows you to recover quickly the result of a prior computation by referring to its number even if you forgot to store it as a variable.
End of explanation
some_dict = {}
some_dict?
Explanation: Introspection
If you want details regarding the properties and functionality of any Python objects currently loaded into IPython, you can use the ? to reveal any details that are available:
End of explanation
from numpy.linalg import cholesky
cholesky??
Explanation: If available, additional detail is provided with two question marks, including the source code of the object itself.
End of explanation
import numpy as np
np.random.rand*?
Explanation: This syntax can also be used to search namespaces with wildcards (*).
End of explanation
np.ar
Explanation: Tab completion
Because IPython allows for introspection, it is able to afford the user the ability to tab-complete commands that have been partially typed. This is done by pressing the <tab> key at any point during the process of typing a command:
End of explanation
plt.hist
Explanation: This can even be used to help with specifying arguments to functions, which can sometimes be difficult to remember:
End of explanation
ls /Users/fonnescj/repositories/scientific-python-workshop/
Explanation: System commands
In IPython, you can type ls to see your files or cd to change directories, just like you would at a regular system prompt:
End of explanation
!locate python | grep pdf
Explanation: Virtually any system command can be accessed by prepending !, which passes any subsequent command directly to the OS.
End of explanation
file_type = 'csv'
!ls ../data/*$file_type
Explanation: You can even use Python variables in commands sent to the OS:
End of explanation
data_files = !ls ../data/microbiome/
data_files
Explanation: The output of a system command using the exclamation point syntax can be assigned to a Python variable.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def f(x):
return (x-3)*(x-5)*(x-7)+85
import numpy as np
x = np.linspace(0, 10, 200)
y = f(x)
plt.plot(x,y)
Explanation: Qt Console
If you type at the system prompt:
$ ipython qtconsole
instead of opening in a terminal, IPython will start a graphical console that at first sight appears just like a terminal, but which is in fact much more capable than a text-only terminal. This is a specialized terminal designed for interactive scientific work, and it supports full multi-line editing with color highlighting and graphical calltips for functions, it can keep multiple IPython sessions open simultaneously in tabs, and when scripts run it can display the figures inline directly in the work area.
Jupyter Notebook
Over time, the IPython project grew to include several components, including:
an interactive shell
a REPL protocol
a notebook document fromat
a notebook document conversion tool
a web-based notebook authoring tool
tools for building interactive UI (widgets)
interactive parallel Python
As each component has evolved, several had grown to the point that they warrented projects of their own. For example, pieces like the notebook and protocol are not even specific to Python. As the result, the IPython team created Project Jupyter, which is the new home of language-agnostic projects that began as part of IPython, such as the notebook in which you are reading this text.
The HTML notebook that is part of the Jupyter project supports interactive data visualization and easy high-performance parallel computing.
End of explanation
from IPython.display import IFrame
IFrame('https://jupyter.org', width='100%', height=350)
from IPython.display import YouTubeVideo
YouTubeVideo("rl5DaFbLc60")
Explanation: The notebook lets you document your workflow using either HTML or Markdown.
The Jupyter Notebook consists of two related components:
A JSON based Notebook document format for recording and distributing Python code and rich text.
A web-based user interface for authoring and running notebook documents.
The Notebook can be used by starting the Notebook server with the command:
$ ipython notebook
This initiates an IPython engine, which is a Python instance that takes Python commands over a network connection.
The IPython controller provides an interface for working with a set of engines, to which one or more IPython clients can connect.
The Notebook gives you everything that a browser gives you. For example, you can embed images, videos, or entire websites.
End of explanation
# %load http://matplotlib.org/mpl_examples/shapes_and_collections/scatter_demo.py
Simple demo of a scatter plot.
import numpy as np
import matplotlib.pyplot as plt
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()
Explanation: Markdown cells
Markdown is a simple markup language that allows plain text to be converted into HTML.
The advantages of using Markdown over HTML (and LaTeX):
it's a human-readable format
allows writers to focus on content rather than formatting and layout
easier to learn and use
For example, instead of writing:
```html
<p>In order to create valid
<a href="http://en.wikipedia.org/wiki/HTML">HTML</a>, you
need properly coded syntax that can be cumbersome for
“non-programmers” to write. Sometimes, you
just want to easily make certain words <strong>bold
</strong>, and certain words <em>italicized</em> without
having to remember the syntax. Additionally, for example,
creating lists:</p>
<ul>
<li>should be easy</li>
<li>should not involve programming</li>
</ul>
```
we can write the following in Markdown:
```markdown
In order to create valid [HTML], you need properly
coded syntax that can be cumbersome for
"non-programmers" to write. Sometimes, you just want
to easily make certain words bold, and certain
words italicized without having to remember the
syntax. Additionally, for example, creating lists:
should be easy
should not involve programming
```
Emphasis
Markdown uses * (asterisk) and _ (underscore) characters as
indicators of emphasis.
*italic*, _italic_
**bold**, __bold__
***bold-italic***, ___bold-italic___
italic, italic
bold, bold
bold-italic, bold-italic
Lists
Markdown supports both unordered and ordered lists. Unordered lists can use *, -, or
+ to define a list. This is an unordered list:
* Apples
* Bananas
* Oranges
Apples
Bananas
Oranges
Ordered lists are numbered lists in plain text:
1. Bryan Ferry
2. Brian Eno
3. Andy Mackay
4. Paul Thompson
5. Phil Manzanera
Bryan Ferry
Brian Eno
Andy Mackay
Paul Thompson
Phil Manzanera
Links
Markdown inline links are equivalent to HTML <a href='foo.com'>
links, they just have a different syntax.
[Biostatistics home page](http://biostat.mc.vanderbilt.edu "Visit Biostat!")
Biostatistics home page
Block quotes
Block quotes are denoted by a > (greater than) character
before each line of the block quote.
> Sometimes a simple model will outperform a more complex model . . .
> Nevertheless, I believe that deliberately limiting the complexity
> of the model is not fruitful when the problem is evidently complex.
Sometimes a simple model will outperform a more complex model . . .
Nevertheless, I believe that deliberately limiting the complexity
of the model is not fruitful when the problem is evidently complex.
Images
Images look an awful lot like Markdown links, they just have an extra
! (exclamation mark) in front of them.

Remote Code
Use %load to add remote code
End of explanation
from sympy import *
init_printing()
x, y = symbols("x y")
eq = ((x+y)**2 * (x+1))
eq
expand(eq)
(1/cos(x)).series(x, 0, 6)
limit((sin(x)-x)/x**3, x, 0)
diff(cos(x**2)**2 / (1+x), x)
Explanation: Mathjax Support
MathJax is a JavaScript implementation of LaTeX that allows equations to be embedded into HTML. For example, this markup:
$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$
becomes this:
$$
\int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right).
$$
SymPy Support
SymPy is a Python library for symbolic mathematics. It supports:
polynomials
calculus
solving equations
discrete math
matrices
End of explanation
%lsmagic
Explanation: Magic functions
IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. These include:
%run
%edit
%debug
%timeit
%paste
%load_ext
End of explanation
%timeit np.linalg.eigvals(np.random.rand(100,100))
%%timeit a = np.random.rand(100, 100)
np.linalg.eigvals(a)
Explanation: Timing the execution of code; the timeit magic exists both in line and cell form:
End of explanation
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
%%bash
echo "hello from $BASH"
Explanation: IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc.
These are all equivalent to %%script <name>
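For instance, the same pattern should work with any interpreter available on your PATH (a minimal sketch):
```python
%%script python3
import sys
print("hello from Python", sys.version_info[:2])
```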
End of explanation
%load_ext rpy2.ipython
Explanation: IPython has an rmagic extension that contains some magic functions for working with R via rpy2. This extension can be loaded using the %load_ext magic as follows:
End of explanation
!pip install rpy2
Explanation: If the above generates an error, it is likely that you do not have the rpy2 module installed. You can install this now via:
End of explanation
!conda install rpy2
x,y = np.arange(10), np.random.normal(size=10)
%R print(lm(rnorm(10)~rnorm(10)))
%%R -i x,y -o XYcoef
lm.fit <- lm(y~x)
par(mfrow=c(2,2))
print(summary(lm.fit))
plot(lm.fit)
XYcoef <- coef(lm.fit)
XYcoef
Explanation: or, if you are running Anaconda, via conda:
End of explanation
def div(x, y):
return x/y
div(1,0)
%debug
Explanation: Debugging
The %debug magic can be used to trigger the IPython debugger (ipdb) for a cell that raises an exception. The debugger allows you to step through code line-by-line, inspect variables, and execute code.
End of explanation
!jupyter nbconvert --to html "IPython and Jupyter.ipynb"
Explanation: Exporting and Converting Notebooks
In Jupyter, one can convert an .ipynb notebook document file into various static formats via the nbconvert tool. Currently, nbconvert is a command line tool, run as a script using Jupyter.
End of explanation
!jupyter nbconvert --to pdf "Introduction to pandas.ipynb"
Explanation: Currently, nbconvert supports HTML (default), LaTeX, Markdown, reStructuredText, Python and HTML5 slides for presentations. Some types can be post-processed, such as LaTeX to PDF (this requires Pandoc and a LaTeX installation, however).
End of explanation
from IPython.display import IFrame
IFrame("http://nbviewer.ipython.org/2352771", width='100%', height=350)
Explanation: A very useful online service is the IPython Notebook Viewer which allows you to display your notebook as a static HTML page, which is useful for sharing with others:
End of explanation
from ipyparallel import Client
client = Client()
dv = client.direct_view()
len(dv)
def where_am_i():
import os
import socket
return "In process with pid {0} on host: '{1}'".format(
os.getpid(), socket.gethostname())
where_am_i_direct_results = dv.apply(where_am_i)
where_am_i_direct_results.get()
Explanation: As of this year, GitHub supports the rendering of Jupyter Notebooks stored on its repositories.
Reproducible Research
reproducing conclusions from a single experiment based on the measurements from that experiment
The most basic form of reproducibility is a complete description of the data and associated analyses (including code!) so the results can be exactly reproduced by others.
Reproducing calculations can be onerous, even with one's own work!
Scientific data are becoming larger and more complex, making simple descriptions inadequate for reproducibility. As a result, most modern research is irreproducible without tremendous effort.
Reproducible research is not yet part of the culture of science in general, or scientific computing in particular.
Scientific Computing Workflow
There are a number of steps to scientific endeavors that involve computing:
Many of the standard tools impose barriers between one or more of these steps. This can make it difficult to iterate, reproduce work.
The Jupyter notebook eliminates or reduces these barriers to reproducibility.
Parallel IPython
The IPython architecture consists of four components, which reside in the ipyparallel package:
Engine The IPython engine is a Python instance that accepts Python commands over a network connection. When multiple engines are started, parallel and distributed computing becomes possible. An important property of an IPython engine is that it blocks while user code is being executed.
Hub The hub keeps track of engine connections, schedulers, clients, as well as persist all task requests and results in a database for later use.
Schedulers All actions that can be performed on the engine go through a Scheduler. While the engines themselves block when user code is run, the schedulers hide that from the user to provide a fully asynchronous interface to a set of engines.
Client The primary object for connecting to a cluster.
(courtesy Min Ragan-Kelley)
This architecture is implemented using the ØMQ messaging library and the associated Python bindings in pyzmq.
Running parallel IPython
To enable the IPython Clusters tab in Jupyter Notebook:
ipcluster nbextension enable
When you then start a Jupyter session, you should see the following in your IPython Clusters tab:
Before running the next cell, make sure you have first started your cluster, you can use the clusters tab in the dashboard to do so.
Select the number of IPython engines (nodes) that you want to use, then click Start.
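If you prefer the command line, a cluster can also be started outside the notebook (a sketch; the engine count is arbitrary):
```python
# run in a terminal (or prefix with ! in a notebook cell)
# ipcluster start -n 4
```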
End of explanation
import numpy
def abc(y, N, epsilon=[0.2, 0.8]):
trace = []
while len(trace) < N:
# Simulate from priors
mu = numpy.random.normal(0, 10)
sigma = numpy.random.uniform(0, 20)
x = numpy.random.normal(mu, sigma, 50)
#if (np.linalg.norm(y - x) < epsilon):
if ((abs(x.mean() - y.mean()) < epsilon[0]) &
(abs(x.std() - y.std()) < epsilon[1])):
trace.append([mu, sigma])
return trace
y = numpy.random.normal(4, 2, 50)
Explanation: Let's now consider a useful function that we might want to run in parallel. Here is a version of the approximate Bayesian computing (ABC) algorithm.
End of explanation
dv0 = client[0]
dv0.block = True
dv0.apply(abc, y, 10)
Explanation: Let's try running this on one of the cluster engines:
End of explanation
dv0.execute("import numpy")
dv0.apply(abc, y, 10)
Explanation: This fails with a NameError because NumPy has not been imported on the engine to which we sent the task. Each engine has its own namespace, so we need to import whatever modules we will need prior to running our code:
End of explanation
%%px
import numpy
Explanation: An easier approach is to use the parallel cell magic to import everywhere:
End of explanation
%%px
import os
print(os.getpid())
%%px
%matplotlib inline
import matplotlib.pyplot as plt
import os
tsamples = numpy.random.randn(100)
plt.hist(tsamples)
_ = plt.title('PID %i' % os.getpid())
Explanation: This magic can be used to execute the same code on all nodes.
End of explanation |
11,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NGC ... (UGC ...)
Step1: <h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
Articles
Miscellaneous
Step2: Stellar kinematic data
Rotation curve
Step3: Velocity dispersions
For the major axis
Step4: Gas surface density
Photometry data
Star formation regions
Step5: Instability
One-fluid
Stable when > 1 | Python Code:
from IPython.display import HTML
from IPython.display import Image
import os
%pylab
%matplotlib inline
%run ../../utils/load_notebook.py
from photometry import *
from instabilities import *
name = '...'
gtype = '...'
incl = None
scale = None #kpc/arcsec
data_path = None
# sin_i, cos_i = np.sin(incl*np.pi/180.), np.cos(incl*np.pi/180.)
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
Explanation: NGC ... (UGC ...)
End of explanation
# Data from NED
# HTML('<iframe src=http://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=ngc+4725&extend=no&hconst=\
# 73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=\
# 30000.0&list_limit=5&img_stamp=YES width=1000 height=350></iframe>')
# Data from HYPERLEDA
# HTML('<iframe src=http://leda.univ-lyon1.fr/ledacat.cgi?o=ngc4725 width=1000 height=350></iframe>')
Explanation: <h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
Articles
Miscellaneous
End of explanation
# os.chdir(data_path)
Explanation: Stellar kinematic data
Rotation curve
End of explanation
# TODO: move to utils
def epicyclicFreq_real(poly_gas, R, resolution):
'''Direct computation of the epicyclic frequency at radius R for a spline or a polynomial'''
try:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.deriv()(R) / poly_gas(R)) / (R * resolution )
except AttributeError:  # splines expose .derivative() instead of poly1d's .deriv()
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.derivative()(R) / poly_gas(R)) / (R * resolution )
Explanation: Velocity dispersions
For the major axis: $\sigma^2_{maj} = \sigma^2_{\varphi}\sin^2 i + \sigma^2_{z}\cos^2 i$, which gives the approximate bounds
$$\sigma_{maj} < \sigma_R = \frac{\sigma_{maj}}{\sqrt{f\sin^2 i + \alpha^2\cos^2 i}} \lesssim \frac{\sqrt{2}\sigma_{maj}}{\sin i} \ \left(\text{or}\ \frac{\sigma_{maj}}{\sqrt{f}\sin i}\right),$$
or a more accurate estimate can be obtained by constructing $f$ (currently $0.5 < f < 1$).
For the minor axis: $\sigma^2_{min} = \sigma^2_{R}\sin^2 i + \sigma^2_{z}\cos^2 i$ with the bounds
$$\sigma_{min} < \sigma_R = \frac{\sigma_{min}}{\sqrt{\sin^2 i + \alpha^2\cos^2 i}} \lesssim \frac{\sigma_{min}}{\sin i}$$
Gas data
Rotation curve
Epicyclic frequency
For an infinitely thin disk: $$\kappa=\frac{3}{R}\frac{d\Phi}{dR}+\frac{d^2\Phi}{dR^2}$$
where $\Phi$ is the gravitational potential; it is not actually needed, though, because there is a simpler formula: $$\kappa=\sqrt{2}\frac{\vartheta_c}{R}\sqrt{1+\frac{R}{\vartheta_c}\frac{d\vartheta_c}{dR}}$$
TODO: use $\varkappa$ instead? Is this really the gas rotation curve?
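A possible usage sketch, with hypothetical rotation-curve arrays (r_arcsec and v_gas_kms are not defined in this notebook; scale is the kpc/arcsec factor above):
```python
# poly_gas = np.poly1d(np.polyfit(r_arcsec, v_gas_kms, deg=7))          # smooth fit to the gas curve
# kappa = [epicyclicFreq_real(poly_gas, R, scale) for R in r_arcsec]    # epicyclic frequency per radius
```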
End of explanation
def plot_SF(ax):
pass
plot_SF(plt.gca())
plt.xlim(0, 350)
plt.ylim(0, 200)
Explanation: Gas surface density
Photometry data
Star formation regions
End of explanation
# %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%reload_ext version_information
%version_information numpy, scipy, matplotlib
Explanation: Instability
One-fluid
Stable when > 1:
$$Q_g = \frac{\Sigma_g^{cr}}{\Sigma_g}=\frac{\kappa c_g}{\pi G \Sigma_g}$$
$$Q_s = \frac{\Sigma_s^{cr}}{\Sigma_s}=\frac{\sigma_R}{\sigma_R^{min}}=\frac{\kappa \sigma_R}{3.36 G \Sigma_s}$$
Two-fluid
Kinetic approximation:
$$\frac{1}{Q_{\mathrm{eff}}}=\frac{2}{Q_{\mathrm{s}}}\frac{1}{\bar{k}}\left[1-e^{-\bar{k}^{2}}I_{0}(\bar{k}^{2})\right]+\frac{2}{Q_{\mathrm{g}}}s\frac{\bar{k}}{1+\bar{k}^{2}s^{2}}>1\,$$
Hydrodynamic approximation:
$$\frac{2\,\pi\, G\, k\,\Sigma_{\mathrm{s}}}{\kappa+k^{2}\sigma_{\mathrm{s}}}+\frac{2\,\pi\, G\, k\,\Sigma_{\mathrm{g}}}{\kappa+k^{2}c_{\mathrm{g}}}>1$$ or $$\frac{1}{Q_{\mathrm{eff}}}=\frac{2}{Q_{\mathrm{s}}}\frac{\bar{k}}{1+\bar{k}^{2}}+\frac{2}{Q_{\mathrm{g}}}s\frac{\bar{k}}{1+\bar{k}^{2}s^{2}}>1$$ for the dimensionless wavenumber ${\displaystyle \bar{k}\equiv\frac{k\,\sigma_{\mathrm{s}}}{\kappa}},\, s=c/\sigma$
Accounting for disk thickness
$$\frac{2\,\pi\, G\, k\,\Sigma_{\mathrm{s}}}{\kappa+k^{2}\sigma_{\mathrm{s}}}\,\left\{ \frac{1-\exp(-k\, h_{z}^{\mathrm{s}})}{k\, h_{z}^{\mathrm{s}}}\right\} +\frac{2\,\pi\, G\, k\,\Sigma_{\mathrm{g}}}{\kappa+k^{2}c_{\mathrm{g}}}\,\left\{ \frac{1-\exp(-k\, h_{z}^{\mathrm{g}})}{k\, h_{z}^{\mathrm{g}}}\right\} >1$$
$$\begin{array}{rcl}
\sigma_{z}^{2}=\pi Gz_{0}^{\mathrm{s}}(\Sigma_{\mathrm{s}}+\Sigma_{\mathrm{g}})\,,\\
\\
c_{\mathrm{g}}^{2}=\pi Gz_{0}^{\mathrm{g}}(\Sigma_{\mathrm{g}}+\Sigma_{\mathrm{s}})\,.
\end{array}$$
From these the disk thicknesses can be found
Experiments
End of explanation |
11,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Numpy Techniques
<img src="assets/numpylogo.png" alt="http
Step1: Bandwidth-limited ops
Have to pull in more cache lines for the pointers
Poor locality causes pipeline stalls
Step2: Flop-limited ops
Can't engage VPU on non-contiguous memory
Step3: Memory consumption
List representation uses 8 extra bytes for every value (assuming 64-bit here and henceforth)!
Step4: Disclaimer
Regular python lists are still useful! They do a lot of things arrays can't
Step5: Axes are read LEFT to RIGHT
Step6: Dtypes
$F$ (our dtype) can be (doc)
Step7: Indexing doc
Probably the most creative, unique part of the entire library. This is what makes NumPy ndarray better than any other array.
And index returns an ndarray view based on the other ndarray.
Basic Indexing
Step8: Basic indices let us access hyper-rectangles with strides
Step9: Why on earth would you do the above? Selection, sampling, algorithms that are based on offsets of arrays (i.e., basically all of them).
What's going on?
Advanced indexing is best thought of in the following way
Step10: The above covers the case of one advanced index and the rest being basic. One other common situation that comes up in practice is every index is advanced.
Recall array x with shape (n0, ..., nN-1). Let indj be integer ndarrays all of the same shape (say, (m0, ..., mM-1)).
Then x[ind0, ... indN-1] has shape (m0, ..., mM-1) and its t=(j0, ..., jM-1)-th element is the (ind0[t], ..., indN-1(t))-th element of x.
Step11: Indexing Applications
Step12: Array Creation and Initialization
doc
If unspecified, default dtype is usually float, with an exception for arange.
Step13: Extremely extensive random generation. Remember to seed!
Transposition
Under the hood. So far, we've just been looking at the abstraction that NumPy offers. How does it actually keep things contiguous in memory?
We have a base array, which is one long contiguous array from 0 to size - 1.
Step14: Transposition Example
Step15: Ufuncs and Broadcasting
doc
Step16: Aliasing
You can save on allocations and copies by providing the output array to copy into.
Aliasing occurs when all or part of the input is repeated in the output
Ufuncs allow aliasing
Step17: [GOTCHA]
Step18: Configuration and Hardware Acceleration
NumPy works quickly because it can perform vectorization by linking to C functions that were built for your particular system.
[GOTCHA] There are two different high-level ways in which NumPy uses hardware to accelerate your computations.
Ufunc
When you perform a built-in ufunc
Step19: General Einsum Approach
Again, lots of visuals in this blog post.
[GOTCHA]. You can't use more than 52 different letters.. But if you find yourself writing np.einsum with more than 52 active dimensions, you should probably make two np.einsum calls. If you have dimensions for which nothing happens, then ... can be used to represent an arbitrary amount of missed dimensions.
Here's the way I think about an np.einsum (the actual implementation is more efficient).
Step20: Neural Nets with Einsum
Original post
<table>
<tr>
<th>
<img src="assets/mlp1.png" alt="https | Python Code:
import numpy as np
import time
import gc
import sys
assert sys.maxsize > 2 ** 32, "get a new computer!"
# Allocation-sensitive timing needs to be done more carefully
# Compares runtimes of f1, f2
def compare_times(f1, f2, setup1=None, setup2=None, runs=5):
print(' format: mean seconds (standard error)', runs, 'runs')
maxpad = max(len(f.__name__) for f in (f1, f2))
means = []
for setup, f in [[setup1, f1], [setup2, f2]]:
setup = (lambda: tuple()) if setup is None else setup
total_times = []
for _ in range(runs):
try:
gc.disable()
args = setup()
start = time.time()
if isinstance(args, tuple):
f(*args)
else:
f(args)
end = time.time()
total_times.append(end - start)
finally:
gc.enable()
mean = np.mean(total_times)
se = np.std(total_times) / np.sqrt(len(total_times))
print(' {} {:.2e} ({:.2e})'.format(f.__name__.ljust(maxpad), mean, se))
means.append(mean)
print(' improvement ratio {:.1f}'.format(means[0] / means[1]))
Explanation: Advanced Numpy Techniques
<img src="assets/numpylogo.png" alt="http://www.numpy.org/#">
General, user-friendly documentation with lots of examples.
Technical, "hard" reference.
Basic Python knowledge assumed.
CPython ~3.6, NumPy ~1.12
If you like content like this, you might be interested in my blog
What is it?
NumPy is an open-source package that's part of the SciPy ecosystem. Its main feature is an array object of arbitrary dimension, but this fundamental collection is integral to any data-focused Python application.
<table>
<tr>
<th>
<img src="assets/nonsteeplearn.png" width="200" alt="http://gabriellhanna.blogspot.com/2015/03/negatively-accelerated-learning-curve-i.html">
</th><th>
<img src="assets/steeplearn.jpg" width="200" alt="http://malaher.org/2007/03/pet-peeve-learning-curve-misuse/">
</th></tr></table>
Most people learn numpy through assimilation or necessity. I believe NumPy has the latter learning curve (steep/easy to learn), so you can actually invest just a little bit of time now (by going through this notebook, for instance), and reap a lot of reward!
Motivation
Provide a uniform interface for handling numerical structured data
Collect, store, and manipulate numerical data efficiently
Low-cost abstractions
Universal glue for numerical information, used in lots of external libraries! The API establishes common functions and re-appears in many other settings with the same abstractions.
<table>
<tr>
<th>
<img src="assets/numba.png" alt="http://numba.pydata.org/" width="150"></th><th><img src="assets/pandas.png" alt="http://pandas.pydata.org/" width="150"> </th><th><img src="assets/tf.png" alt="https://github.com/tensorflow/tensorflow" width="150"></th><th> <img src="assets/sklearn.png" alt="https://github.com/scikit-learn/scikit-learn" width="150"> </th><th><img src="assets/stan.png" alt="http://mc-stan.org/" width="150"></th>
</tr>
</table>
Goals and Non-goals
Goals
What I'll do:
Give a bit of basics first.
Describe NumPy, with under-the-hood details to the extent that they are useful to you, the user
Highlight some [GOTCHA]s, avoid some common bugs
Point out a couple useful NumPy functions
This is not an attempt to exhaustively cover the reference manual (there are too many individual functions to keep in your head, anyway).
Instead, I'll try to...
provide you with an overview of the API structure so next time you're doing numeric data work you'll know where to look
convince you that NumPy arrays offer the perfect data structure for the following (wide-ranging) use case:
RAM-sized general-purpose structured numerical data applications: manipulation, collection, and analysis.
Non-goals
No emphasis on multicore processing, but will be briefly mentioned
Some NumPy functionality not covered -- mentioned briefly at end
HPC concerns
GPU programming
Why not a Python list?
A list is a resizing contiguous array of pointers.
<img src="assets/pylist.png" alt="http://www.laurentluce.com/posts/python-list-implementation/">
Nested lists are even worse - there are two levels of indirection.
<img src="assets/nestlist.png" alt="http://www.cs.toronto.edu/~gpenn/csc401/401_python_web/pyseq.html">
Compare to NumPy arrays, happy contiguous chunks of memory, even across axes. This image is only illustrative, a NumPy array may not necessarily be in C-order (more on that later):
<img src="assets/nparr.png" alt="https://www.safaribooksonline.com/library/view/python-for-data/9781491957653/ch04.html" width=300>
Recurring theme: NumPy lets us have the best of both worlds (high-level Python for development, optimized representation and speed via low-level C routines for execution)
End of explanation
size = 10 ** 7 # ints will be un-interned past 256
print('create a list 1, 2, ...', size)
def create_list(): return list(range(size))
def create_array(): return np.arange(size, dtype=int)
compare_times(create_list, create_array)
print('deep copies (no pre-allocation)') # Shallow copy is cheap for both!
size = 10 ** 7
ls = list(range(size))
def copy_list(): return ls[:]
ar = np.arange(size, dtype=int)
def copy_array(): return np.copy(ar)
compare_times(copy_list, copy_array)
print('Deep copy (pre-allocated)')
size = 10 ** 7
def create_lists(): return list(range(size)), [0] * size
def deep_copy_lists(src, dst): dst[:] = src
def create_arrays(): return np.arange(size, dtype=int), np.empty(size, dtype=int)
def deep_copy_arrays(src, dst): dst[:] = src
compare_times(deep_copy_lists, deep_copy_arrays, create_lists, create_arrays)
Explanation: Bandwidth-limited ops
Have to pull in more cache lines for the pointers
Poor locality causes pipeline stalls
End of explanation
print('square out-of-place')
def square_lists(src, dst):
for i, v in enumerate(src):
dst[i] = v * v
def square_arrays(src, dst):
np.square(src, out=dst)
compare_times(square_lists, square_arrays, create_lists, create_arrays)
# Caching and SSE can have huge cumulative effects
print('square in-place')
size = 10 ** 7
def create_list(): return list(range(size))
def square_list(ls):
for i, v in enumerate(ls):
ls[i] = v * v
def create_array(): return np.arange(size, dtype=int)
def square_array(ar):
np.square(ar, out=ar)
compare_times(square_list, square_array, create_list, create_array)
Explanation: Flop-limited ops
Can't engage VPU on non-contiguous memory: won't saturate CPU computational capabilities of your hardware (note that your numpy may not be vectorized anyway, but the "saturate CPU" part still holds)
End of explanation
from pympler import asizeof
size = 10 ** 4
print('list kb', asizeof.asizeof(list(range(size))) // 1024)
print('array kb', asizeof.asizeof(np.arange(size, dtype=int)) // 1024)
Explanation: Memory consumption
List representation uses 8 extra bytes for every value (assuming 64-bit here and henceforth)!
End of explanation
n0 = np.array(3, dtype=float)
n1 = np.stack([n0, n0, n0, n0])
n2 = np.stack([n1, n1])
n3 = np.stack([n2, n2])
for x in [n0, n1, n2, n3]:
print('ndim', x.ndim, 'shape', x.shape)
print(x)
Explanation: Disclaimer
Regular python lists are still useful! They do a lot of things arrays can't:
List comprehensions [x * x for x in range(10) if x % 2 == 0]
Ragged nested lists [[1, 2, 3], [1, [2]]]
The NumPy Array
doc
Abstraction
We know what an array is -- a contiguous chunk of memory holding an indexed list of things from 0 to its size minus 1. If the things have a particular type, using, say, dtype as a placeholder, then we can refer to this as a classical_array of dtypes.
The NumPy array, an ndarray with a datatype, or dtype, dtype is an N-dimensional array for arbitrary N. This is defined recursively:
* For N > 0, an N-dimensional ndarray of dtype dtype is a classical_array of N - 1 dimensional ndarrays of dtype dtype, all with the same size.
* For N = 0, the ndarray is a dtype
We note some familiar special cases:
* N = 0, we have a scalar, or the datatype itself
* N = 1, we have a classical_array
* N = 2, we have a matrix
Each axis has its own classical_array length: this yields the shape.
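A small example of the recursive definition in action (shapes only, nothing fancy):
```python
a = np.zeros((2, 3, 4))   # N = 3: a classical_array of two 2-dimensional ndarrays
a.ndim, a.shape           # (3, (2, 3, 4))
a[0].shape                # (3, 4): each element one level down is an ndarray itself
a[0, 0, 0]                # 0.0: at N = 0 we bottom out at the dtype (a float here)
```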
End of explanation
original = np.arange(10)
# shallow copies
s1 = original[:]
s2 = s1.view()
s3 = original[:5]
print(original)
original[2] = -1
print('s1', s1)
print('s2', s2)
print('s3', s3)
id(original), id(s1.base), id(s2.base), id(s3.base), original.base
Explanation: Axes are read LEFT to RIGHT: an array of shape (n0, n1, ..., nN-1) has axis 0 with length n0, etc.
Detour: Formal Representation
Warning, these are pretty useless definitions unless you want to understand np.einsum, which is only at the end anyway.
Formally, a NumPy array can be viewed as a mathematical object. If:
The dtype belongs to some (usually field) $F$
The array has dimension $N$, with the $i$-th axis having length $n_i$
$N>1$
Then this array is an object in:
$$
F^{n_0}\otimes F^{n_{1}}\otimes\cdots \otimes F^{n_{N-1}}
$$
$F^n$ is an $n$-dimensional vector space over $F$. An element in here can be represented by its canonical basis $\textbf{e}_i^{(n)}$ as a sum for elements $f_i\in F$:
$$
f_1\textbf{e}_1^{(n)}+f_{2}\textbf{e}_{2}^{(n)}+\cdots +f_{n}\textbf{e}_{n}^{(n)}
$$
$F^n\otimes F^m$ is a tensor product, which takes two vector spaces and gives you another. Then the tensor product is a special kind of vector space with dimension $nm$. Elements in here have a special structure which we can tie to the original vector spaces $F^n,F^m$:
$$
\sum_{i=1}^n\sum_{j=1}^mf_{ij}(\textbf{e}_{i}^{(n)}\otimes \textbf{e}_{j}^{(m)})
$$
Above, $(\textbf{e}_{i}^{(n)}\otimes \textbf{e}_{j}^{(m)})$ is a basis vector of $F^n\otimes F^m$ for each pair $i,j$.
We will discuss what $F$ can be later; but most of this intuition (and a lot of NumPy functionality) is based on $F$ being a type corresponding to a field.
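As a quick sanity check of the shapes involved (a sketch; `np.tensordot` with `axes=0` is the outer/tensor product):
```python
v = np.array([1., 2., 3.])          # an element of F^3
w = np.array([4., 5.])              # an element of F^2
t = np.tensordot(v, w, axes=0)      # an element of F^3 tensor F^2, i.e. shape (3, 2)
t[1, 0]                             # v[1] * w[0] == 8.0: the coefficient on the (1, 0) basis vector
```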
Back to CS / Mutability / Losing the Abstraction
The above is a (simplified) view of ndarray as a tensor, but gives useful intuition for arrays that are not mutated.
An ndarray Python object is actually a view into a shared ndarray. The base is a representative of the equivalence class of views of the same array
<img src="assets/ndarrayrep.png" alt="https://docs.scipy.org/doc/numpy/reference/arrays.html">
This diagram is a lie (the array isn't in your own bubble, it's shared)!
End of explanation
# Names are pretty intuitive for basic types
i16 = np.arange(100, dtype=np.uint16)
i64 = np.arange(100, dtype=np.uint64)
print('i16', asizeof.asizeof(i16), 'i64', asizeof.asizeof(i64))
# We can use arbitrary structures for our own types
# For example, exact Gaussian (complex) integers
gauss = np.dtype([('re', np.int32), ('im', np.int32)])
c2 = np.zeros(2, dtype=gauss)
c2[0] = (1, 1)
c2[1] = (2, -1)
def print_gauss(g):
print('{}{:+d}i'.format(g['re'], g['im']))
print(c2)
for x in c2:
print_gauss(x)
l16 = np.array(5, dtype='<u2') # little-endian unsigned 16-bit int
b16 = l16.astype('>u2') # big-endian unsigned 16-bit int
print(l16.tobytes(), np.binary_repr(l16, width=16))
print(b16.tobytes(), np.binary_repr(b16, width=16))
Explanation: Dtypes
$F$ (our dtype) can be (doc):
boolean
integral
floating-point
complex floating-point
any structure (record array) of the above, e.g. complex integral values
The dtype can also be unicode, a date, or an arbitrary object, but those don't form fields. This means that most NumPy functions aren't useful for this data, since it's not numeric. Why have them at all?
for all: NumPy ndarrays offer the tensor abstraction described above.
unicode: consistent format in memory for bit operations and for I/O
date: compact representation, addition/subtraction, basic parsing
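For instance, dates get compact storage and subtraction, but that's about it:
```python
d = np.array(['2017-01-01', '2017-03-15'], dtype='datetime64[D]')
d[1] - d[0]     # numpy.timedelta64(73,'D'): differences work, but dates don't form a field
```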
End of explanation
x = np.arange(10)
# start:stop:step
# inclusive start, exclusive stop
print(x)
print(x[2:6:2])
print(id(x), id(x[2:6:2].base))
# Default start is 0, default end is length, default step is 1
print(x[:3])
print(x[7:])
# Don't worry about overshooting
print(x[:100])
print(x[7:2:1])
# Negatives wrap around (taken mod length of axis)
print(x[-4:-1])
# An array whose index goes up in reverse
print(x[::-1]) # default start = n-1 and stop = -1 for negative step [GOTCHA]
print(x[::-1][:3])
# What happens if we do an ascending sort on an array with the reverse index?
x = np.arange(10)
print('x[:5] ', x[:5])
print('x[:5][::-1] ', x[:5][::-1])
x[:5][::-1].sort()
print('calling x[:5][::-1].sort()')
print('x[:5][::-1] (sorted)', x[:5][::-1])
print('x[:5] (rev-sorted) ', x[:5])
print('x ', x)
# Multi-dimensional
def display(exp):
print(exp, eval(exp).shape)
print(eval(exp))
print()
x = np.arange(4 * 4 * 2).reshape(2, 4, 4)
display('x')
display('x[1, :, :1]')
display('x[1, :, 0]')
# Add as many length-1 axes as you want [we'll see why later]
y = np.arange(2 * 2).reshape(2, 2)
display('y')
display('y[:, :, np.newaxis]')
display('y[np.newaxis, :, :, np.newaxis]')
# Programmatically create indices
def f(): return slice(0, 2, 1)
s = f()
print('slice', s.start, s.stop, s.step)
display('x[0, 0, s]')
# equivalent notation
display('x[tuple([0, 0, s])]')
display('x[(0, 0, s)]')
Explanation: Indexing doc
Probably the most creative, unique part of the entire library. This is what makes NumPy ndarray better than any other array.
And index returns an ndarray view based on the other ndarray.
Basic Indexing
End of explanation
m = np.arange(4 * 5).reshape(4, 5)
# 1D advanced index
display('m')
display('m[[1,2,1],:]')
print('original indices')
print(' rows', np.arange(m.shape[0]))
print(' cols', np.arange(m.shape[1]))
print('new indices')
print(' rows', ([1, 2, 1]))
print(' cols', np.arange(m.shape[1]))
# 2D advanced index
display('m')
display('m[0:1, [[1, 1, 2],[0, 1, 2]]]')
Explanation: Basic indices let us access hyper-rectangles with strides:
<img src="assets/slices.png" alt="http://www.scipy-lectures.org/intro/numpy/numpy.html" width="300">
Advanced Indexing
Arbitrary combinations of basic indexing. GOTCHA: All advanced index results are copies, not views.
End of explanation
# GOTCHA: accidentally invoking advanced indexing
display('x')
display('x[(0, 0, 1),]') # advanced
display('x[(0, 0, 1)]') # basic
# best policy: don't parenthesize when you want basic
Explanation: Why on earth would you do the above? Selection, sampling, algorithms that are based on offsets of arrays (i.e., basically all of them).
What's going on?
Advanced indexing is best thought of in the following way:
A typical ndarray, x, with shape (n0, ..., nN-1) has N corresponding indices.
(range(n0), ..., range(nN-1))
Indices work like this: the (i0, ..., iN-1)-th element in an array with the above indices over x is:
(range(n0)[i0], ..., range(n2)[iN-1]) == (i0, ..., iN-1)
So the (i0, ..., iN-1)-th element of x is the (i0, ..., iN-1)-th element of "x with indices (range(n0), ..., range(nN-1))".
An advanced index x[:, ..., ind, ..., :], where ind is some 1D list of integers for axis j between 0 and nj, possibly with repetition, replaces the straightforward increasing indices with:
(range(n0), ..., ind, ..., range(nN-1))
The (i0, ..., iN-1)-th element is (i0, ..., ind[ij], ..., iN-1) from x.
So the shape will now be (n0, ..., len(ind), ..., nN-1).
It can get even more complicated -- ind can be higher dimensional.
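A tiny example of the shape rule for a single advanced index (the values are arbitrary):
```python
x = np.arange(3 * 4).reshape(3, 4)
ind = [2, 0, 0, 1]
x[:, ind].shape    # (3, 4): axis 1's range(4) is replaced by ind (which happens to have length 4)
x[0, ind]          # array([2, 0, 0, 1]): row 0 sampled at columns 2, 0, 0, 1
```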
End of explanation
display('m')
display('m[[1,2],[3,4]]')
# ix_: only applies to 1D indices. computes the cross product
display('m[np.ix_([1,2],[3,4])]')
# r_: concatenates slices and all forms of indices
display('m[0, np.r_[:2, slice(3, 1, -1), 2]]')
# Boolean arrays are converted to integers where they're true
# Then they're treated like the corresponding integer arrays
np.random.seed(1234)
digits = np.random.permutation(np.arange(10))
is_odd = digits % 2
print(digits)
print(is_odd)
print(is_odd.astype(bool))
print(digits[is_odd]) # GOTCHA
print(digits[is_odd.astype(bool)])
print(digits)
print(is_odd.nonzero()[0])
print(digits[is_odd.nonzero()])
# Boolean selection in higher dimensions:
x = np.arange(2 *2).reshape(2, -1)
y = (x % 2).astype(bool)
print(x)
print(y)
print(y.nonzero())
print(x[y]) # becomes double advanced index
Explanation: The above covers the case of one advanced index and the rest being basic. One other common situation that comes up in practice is every index is advanced.
Recall array x with shape (n0, ..., nN-1). Let indj be integer ndarrays all of the same shape (say, (m0, ..., mM-1)).
Then x[ind0, ... indN-1] has shape (m0, ..., mM-1) and its t=(j0, ..., jM-1)-th element is the (ind0[t], ..., indN-1(t))-th element of x.
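For example, with two advanced indices of the same shape (arbitrary values):
```python
x = np.arange(3 * 4).reshape(3, 4)
rows = np.array([[0, 2], [1, 1]])
cols = np.array([[3, 0], [2, 2]])
x[rows, cols]   # shape (2, 2): element (j0, j1) is x[rows[j0, j1], cols[j0, j1]] -> [[3, 8], [6, 6]]
```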
End of explanation
# Data cleanup / filtering
x = np.array([1, 2, 3, np.nan, 2, 1, np.nan])
b = ~np.isnan(x)
print(x)
print(b)
print(x[b])
# Selecting labelled data (e.g. for plotting)
%matplotlib inline
import matplotlib.pyplot as plt
# From DBSCAN sklearn ex
from sklearn.datasets.samples_generator import make_blobs
X, labels = make_blobs(n_samples=100, centers=[[0, 0], [1, 1]], cluster_std=0.4, random_state=0)
print(X.shape)
print(labels.shape)
print(np.unique(labels))
for label, color in [(0, 'b'), (1, 'r')]:
xy = X[labels == label]
plt.scatter(xy[:, 0], xy[:, 1], color=color, marker='.')
plt.axis([-1, 2, -1, 2])
plt.show()
# Contour plots
# How to plot sin(x)*sin(y) heatmap?
xs, ys = np.mgrid[0:5:100j, 0:5:100j] # generate mesh
Z = np.sin(xs) * np.sin(ys)
plt.imshow(Z, extent=(0, 5, 0, 5))
plt.show()
# Actual problem from my research:
# Suppose you have 2 sensors, each of which should take measurements
# at even intervals over the day. We want to make a method which can let us
# recover from device failure: if a sensor goes down for an extended period,
# can we impute the missing values from the other?
# Take for example two strongly correlated measured signals:
np.random.seed(1234)
s1 = np.sin(np.linspace(0, 10, 100)) + np.random.randn(100) * 0.05
s2 = 2 * np.sin(np.linspace(0, 10, 100)) + np.random.randn(100) * 0.05
plt.plot(s1, color='blue')
plt.plot(s2, color='red')
plt.show()
# Simulate a failure in sensor 2 for a random 40-index period
def holdout(): # gives arbitrary slice from 0 to 100 width 40
width = 40
start = np.random.randint(0, len(s2) - width)
missing = slice(start, start + width)
return missing, np.r_[:start, missing.stop:len(s2)]
# Find the most likely scaling for reconstructing s2 from s1
def factor_finder(train_ix):
return np.mean((s2[train_ix] + 0.0001) / (s1[train_ix] + 0.0001))
test, train = holdout()
f = factor_finder(train)
def plot_factor(factor):
times = np.arange(len(s1))
test, train = holdout()
plt.plot(times, s1, color='blue', ls='--', label='s1')
plt.scatter(times[train], s2[train], color='red', marker='.', label='train')
plt.plot(times[test], s1[test] * factor, color='green', alpha=0.6, label='prediction')
plt.scatter(times[test], s2[test], color='magenta', marker='.', label='test')
plt.legend(bbox_to_anchor=(1.05, 0.6), loc=2)
plt.title('prediction factor {}'.format(factor))
plt.show()
plot_factor(f)
# Cubic kernel convolution and interpolation
# Complicated example; take a look on your own time!
import scipy
import scipy.sparse
# From Cubic Convolution Interpolation (Keys 1981)
# Computes a piecewise cubic kernel evaluated at each data point in x
def cubic_kernel(x):
y = np.zeros_like(x)
x = np.fabs(x)
if np.any(x > 2):
raise ValueError('only absolute values <= 2 allowed')
q = x <= 1
y[q] = ((1.5 * x[q] - 2.5) * x[q]) * x[q] + 1
q = ~q
y[q] = ((-0.5 * x[q] + 2.5) * x[q] - 4) * x[q] + 2
return y
# Everything is 1D
# Given a uniform grid of size grid_size
# and requested samples of size n_samples,
# generates an n_samples x grid_size interpolation matrix W
# such that W.f(grid) ~ f(samples) for differentiable f and samples
# inside of the grid.
def interp_cubic(grid, samples):
delta = grid[1] - grid[0]
factors = (samples - grid[0]) / delta
# closest refers to the closest grid point that is smaller
idx_of_closest = np.floor(factors)
dist_to_closest = factors - idx_of_closest # in units of delta
grid_size = len(grid)
n_samples = len(samples)
csr = scipy.sparse.csr_matrix((n_samples, grid_size), dtype=float)
for conv_idx in range(-2, 2): # sliding convolution window
coeff_idx = idx_of_closest - conv_idx
coeff_idx[coeff_idx < 0] = 0 # threshold (no wraparound below)
coeff_idx[coeff_idx >= grid_size] = grid_size - 1 # threshold (no wraparound above)
relative_dist = dist_to_closest + conv_idx
data = cubic_kernel(relative_dist)
col_idx = coeff_idx
ind_ptr = np.arange(0, n_samples + 1)
csr += scipy.sparse.csr_matrix((data, col_idx, ind_ptr),
shape=(n_samples, grid_size))
return csr
lo, hi = 0, 1
fine = np.linspace(lo, hi, 100)
coarse = np.linspace(lo, hi, 15)
W = interp_cubic(coarse, fine)
W.shape
def f(x):
a = np.sin(2 / (x + 0.2)) * (x + 0.1)
#a = a * np.cos(5 * x)
a = a * np.cos(2 * x)
return a
known = f(coarse) # only use coarse
interp = W.dot(known)
plt.scatter(coarse, known, color='blue', label='grid')
plt.plot(fine, interp, color='red', label='interp')
plt.plot(fine, f(fine), color='black', label='exact', ls=':')
plt.legend(bbox_to_anchor=(1.05, 0.6), loc=2)
plt.show()
Explanation: Indexing Applications
End of explanation
display('np.linspace(4, 8, 2)')
display('np.arange(4, 8, 2)') # GOTCHA
plt.plot(np.linspace(1, 4, 10), np.logspace(1, 4, 10))
plt.show()
shape = (4, 2)
print(np.zeros(shape)) # init to zero. Use np.ones or np.full accordingly
# [GOTCHA] np.empty won't initialize anything; it will just grab the first available chunk of memory
x = np.zeros(shape)
x[0] = [1, 2]
del x
print(np.empty(shape))
# From iterator/list/array - can just use constructor
np.array([[1, 2], range(3, 5), np.array([5, 6])]) # auto-flatten (if possible)
# Deep copies & shape/dtype preserving creations
x = np.arange(4).reshape(2, 2)
y = np.copy(x)
z = np.zeros_like(x)
x[1, 1] = 5
print(x)
print(y)
print(z)
Explanation: Array Creation and Initialization
doc
If unspecified, default dtype is usually float, with an exception for arange.
End of explanation
x = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(x.shape)
print(x.size)
# Use ravel() to get the underlying flat array; ndarray.flatten() will give you a copy
print(x)
print(x.ravel())
# np.transpose or *.T will reverse axes
print('transpose', x.shape, '->', x.T.shape)
# rollaxis pulls the argument axis to axis 0, keeping all else the same.
print('rollaxis', x.shape, '->', np.rollaxis(x, 1, 0).shape)
print()
# all the above are instances of np.moveaxis
# it's clear how these behave:
perm = np.array([0, 2, 1])
moved = np.moveaxis(x, range(3), perm)
print('arbitrary permutation', list(range(3)), perm)
print(x.shape, '->', moved.shape)
print('moved[1, 2, 0]', moved[1, 2, 0], 'x[1, 0, 2]', x[1, 0, 2])
# When is transposition useful?
# Matrix stuff, mostly:
np.random.seed(1234)
X = np.random.randn(3, 4)
print('sigma {:.2f}, eig {:.2f}'.format(
np.linalg.svd(X)[1].max(),
np.sqrt(np.linalg.eigvalsh(X.dot(X.T)).max())))
# Create a random symmetric matrix
X = np.random.randn(3, 3)
plt.imshow(X)
plt.show()
X += X.T
plt.imshow(X)
plt.show()
print('Check frob norm upper vs lower tri', np.linalg.norm(np.triu(X) - np.tril(X).T))
# Row-major, C-order
# largest axis changes fastest
A = np.arange(2 * 3).reshape(2, 3).copy(order='C')
# Column-major, Fortran-order
# smallest axis changes fastest
# GOTCHA: many numpy functions assume C ordering
B = np.arange(2 * 3).reshape(2, 3).copy(order='F')
# Differences in representation don't manifest in abstraction
print(A)
print(B)
# Array manipulation functions with order option
# will use C/F ordering, but this is independent of the underlying layout
print(A.ravel())
print(A.ravel(order='F'))
# Reshape ravels an array, then folds back into shape, according to the given order
# Note reshape can infer one dimension; we leave it as -1.
print(A.ravel(order='F').reshape(-1, 3))
print(A.ravel(order='F').reshape(-1, 3, order='F'))
# GOTCHA: ravel will copy the array so that everything is contiguous
# if the order differs
print(id(A), id(A.ravel().base), id(A.ravel(order='F')))
Explanation: Extremely extensive random generation. Remember to seed!
Transposition
Under the hood. So far, we've just been looking at the abstraction that NumPy offers. How does it actually keep things contiguous in memory?
We have a base array, which is one long contiguous array from 0 to size - 1.
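You can see the bookkeeping directly through strides (a sketch; the byte counts assume the default 8-byte integer dtype):
```python
x = np.arange(6).reshape(2, 3)
x.strides                  # (24, 8): bytes to jump in the base array per step along each axis
x.T.strides                # (8, 24): transposing just swaps the strides, no data is moved
np.shares_memory(x, x.T)   # True: both are views over the same base
```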
End of explanation
# Kronecker demo
A = np.array([[1, 1/2], [-1/2, -1]])
B = np.identity(2)
f, axs = plt.subplots(2, 2)
# Guess what a 2x2 axes subplot type is?
print(type(axs))
# Use of numpy for convenience: arbitrary object flattening
for ax in axs.ravel():
ax.axis('off')
ax1, ax2, ax3, ax4 = axs.ravel()
ax1.imshow(A, vmin=-1, vmax=1)
ax1.set_title('A')
ax2.imshow(B, vmin=-1, vmax=1)
ax2.set_title('B')
ax3.imshow(np.kron(A, B), vmin=-1, vmax=1)
ax3.set_title(r'$A\otimes B$')
im = ax4.imshow(np.kron(B, A), vmin=-1, vmax=1)
ax4.set_title(r'$B\otimes A$')
f.colorbar(im, ax=axs.ravel().tolist())
plt.axis('off')
plt.show()
# Transposition demo: using transpose, you can compute
A = np.random.randn(40, 40)
B = np.random.randn(40, 40)
AB = np.kron(A, B)
z = np.random.randn(40 * 40)
def kron_mvm():
return AB.dot(z)
def saatci_mvm():
# This differs from the paper's MVM, but is the equivalent for
# a C-style ordering of arrays.
x = z.copy()
for M in [B, A]:
n = M.shape[1]
x = x.reshape(-1, n).T
x = M.dot(x)
return x.ravel()
print('diff', np.linalg.norm(kron_mvm() - saatci_mvm()))
print('Kronecker matrix vector multiplication')
compare_times(kron_mvm, saatci_mvm)
Explanation: Transposition Example: Kronecker multiplication
Based on Saatci 2011 (PhD thesis).
Recall the tensor product over vector spaces $V \otimes W$ from before. If $V$ has basis $\textbf{v}_i$ and $W$ has $\textbf{w}_j$, we can define the tensor product over elements $\nu\in V,\omega\in W$ as follows.
Let $\nu= \sum_{i=1}^n\nu_i\textbf{v}_i$ and $\omega= \sum_{j=1}^m\omega_j\textbf{w}_j$. Then:
$$
V \otimes W\ni \nu\otimes \omega=\sum_{i=1}^n\sum_{j=1}^m\nu_i\omega_j(\textbf{v}_i\otimes \textbf{w}_j)
$$
If $V$ is the vector space of $a\times b$ matrices, then its basis vectors correspond to each of the $ab$ entries. If $W$ is the vector space of $c\times d$ matrices, then its basis vectors correspond similarly to the $cd$ entries. In the tensor product, $(\textbf{v}_i\otimes \textbf{w}_j)$ is the basis vector for an entry in the $ac\times bd$ matrices that make up $V\otimes W$.
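A quick shape check of that claim (nothing more than bookkeeping):
```python
A = np.ones((2, 3))
B = np.ones((4, 5))
np.kron(A, B).shape   # (8, 15) == (2*4, 3*5)
```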
End of explanation
# A ufunc is the most common way to modify arrays
# In its simplest form, an n-ary ufunc takes in n numpy arrays
# of the same shape, and applies some standard operation to "parallel elements"
a = np.arange(6)
b = np.repeat([1, 2], 3)
print(a)
print(b)
print(a + b)
print(np.add(a, b))
# If any of the arguments are of lower dimension, they're prepended with 1
# Any arguments that have dimension 1 are repeated along that axis
A = np.arange(2 * 3).reshape(2, 3)
b = np.arange(2)
c = np.arange(3)
for i in ['A', 'b', 'c']:
display(i)
# On the right, broadcasting rules will automatically make the conversion
# of c, which has shape (3,) to shape (1, 3)
display('A * c')
display('c.reshape(1, 3)')
display('np.repeat(c.reshape(1, 3), 2, axis=0)')
display('np.diag(c)')
display('A.dot(np.diag(c))')
display('A * c')
# GOTCHA: this won't compile your code to C: it will just make a slow convenience wrapper
demo = np.frompyfunc('f({}, {})'.format, 2, 1)
# GOTCHA: common broadcasting mistake -- append instead of prepend
display('A')
display('b')
try:
demo(A, b) # can't prepend to (2,) with 1 to get something compatible with (2, 3)
except ValueError as e:
print('ValueError!')
print(e)
# np.newaxis adds a 1 in the corresponding axis
display('b[:, np.newaxis]')
display('np.repeat(b[:, np.newaxis], 3, axis=1)')
display('demo(A, b[:, np.newaxis])')
# note broadcasting rules are invariant to order
# even if the ufunc isn't
display('demo(b[:, np.newaxis], A)')
# Using broadcasting, we can do cheap diagonal matrix multiplication
display('b')
display('np.diag(b)')
# without representing the full diagonal matrix.
display('b[:, np.newaxis] * A')
display('np.diag(b).dot(A)')
# (Binary) ufuncs get lots of efficient implementation stuff for free
a = np.arange(4)
b = np.arange(4, 8)
display('demo.outer(a, b)')
display('np.bitwise_or.accumulate(b)')
display('np.bitwise_or.reduce(b)') # last result of accumulate
def setup(): return np.arange(10 ** 6)
def manual_accum(x):
res = np.zeros_like(x)
for i, v in enumerate(x):
res[i] = res[i-1] | v
def np_accum(x):
np.bitwise_or.accumulate(x)
print('accumulation speed comparison')
compare_times(manual_accum, np_accum, setup, setup)
Explanation: Ufuncs and Broadcasting
doc
End of explanation
# Example: generating random symmetric matrices
A = np.random.randint(0, 10, size=(3,3))
print(A)
A += A.T # this operation is WELL-DEFINED, even though A is changing
print(A)
# Above is sugar for
np.add(A, A, out=A)
x = np.arange(10)
print(x)
np.subtract(x[:5], x[5:], x[:5])
print(x)
Explanation: Aliasing
You can save on allocations and copies by providing the output array to copy into.
Aliasing occurs when all or part of the input is repeated in the output
Ufuncs allow aliasing
End of explanation
x = np.arange(2 * 2).reshape(2, 2)
try:
x.dot(np.arange(2), out=x)
# GOTCHA: some other functions won't warn you!
except ValueError as e:
print(e)
Explanation: [GOTCHA]: If it's not a ufunc, aliasing is VERY BAD: Search for "In general the rule" in this discussion. Ufunc aliasing is safe since this pr
End of explanation
# Great resources to learn einsum:
# https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/
# http://ajcr.net/Basic-guide-to-einsum/
# Examples of how it's general:
np.random.seed(1234)
x = np.random.randint(-10, 11, size=(2, 2, 2))
print(x)
# Swap axes
print(np.einsum('ijk->kji', x))
# Sum [contraction is along every axis]
print(x.sum(), np.einsum('ijk->', x))
# Multiply (pointwise) [take the diagonal of the outer product; don't sum]
y = np.random.randint(-10, 11, size=(2, 2, 2))
np.array_equal(x * y, np.einsum('ijk,ijk->ijk', x, y))
# Already, an example where einsum is more clear: multiply pointwise along different axes:
print(np.array_equal(x * y.transpose(), np.einsum('ijk,kji->ijk', x, y)))
print(np.array_equal(x * np.rollaxis(y, 2), np.einsum('ijk,jki->ijk', x, y)))
# Outer (tensor) product
x = np.arange(4)
y = np.arange(4, 8)
np.array_equal(np.outer(x, y), np.einsum('i,j->ij', x, y))
# Arbitrary inner product
a = np.arange(2 * 2).reshape(2, 2)
print(np.linalg.norm(a, 'fro') ** 2, np.einsum('ij,ij->', a, a))
np.random.seed(1234)
x = np.random.randn(2, 2)
y = np.random.randn(2, 2)
# Matrix multiply
print(np.array_equal(x.dot(y), np.einsum('ij,jk->ik', x, y)))
# Batched matrix multiply
x = np.random.randn(3, 2, 2)
y = np.random.randn(3, 2, 2)
print(np.array_equal(
np.array([i.dot(j) for i, j in zip(x, y)]),
np.einsum('bij,bjk->bik', x, y)))
# all of {np.matmul, np.tensordot, np.dot} are einsum instances
# The specializations may have marginal speedups, but einsum is
# more expressive and clear code.
Explanation: Configuration and Hardware Acceleration
NumPy works quickly because it can perform vectorization by linking to C functions that were built for your particular system.
[GOTCHA] There are two different high-level ways in which NumPy uses hardware to accelerate your computations.
Ufunc
When you perform a built-in ufunc:
* The corresponding C function is called directly from the Python interpreter
* It is not parallelized
* It may be vectorized
In general, it is tough to check whether your code is using vectorized instructions (or, in particular, which instruction set is being used, like SSE or AVX-512).
If you installed from pip or Anaconda, you're probably not vectorized.
If you compiled NumPy yourself (and select the correct flags), you're probably fine.
If you're using the Numba JIT, then you'll be vectorized too.
If have access to icc and MKL, then you can use the Intel guide or Anaconda
BLAS
These are optimized linear algebra routines, and are only called when you invoke operations that rely on these routines.
This won't make your vectors add faster (first, NumPy doesn't ask BLAS to nor could it, since bandwidth-limited ops are not the focus of BLAS). It will help with:
* Matrix multiplication (np.dot)
* Linear algebra (SVD, eigenvalues, etc) (np.linalg)
* Similar stuff from other libraries that accept NumPy arrays may use BLAS too.
There are different implementations for BLAS. Some are free, and some are proprietary and built for specific chips (MKL). You can check which version you're using this way, though you can only be sure by inspecting the binaries manually.
Any NumPy routine that uses BLAS will use, by default ALL AVAILABLE CORES. This is a departure from the standard parallelism of ufunc or other numpy transformations. You can change BLAS parallelism with the OMP_NUM_THREADS environment variable.
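To see what your build links against, and a sketch of capping BLAS threads (the exact environment variable depends on the BLAS in use):
```python
np.__config__.show()   # prints the BLAS/LAPACK libraries NumPy was built against
# import os; os.environ['OMP_NUM_THREADS'] = '1'   # set before the process/BLAS calls start
```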
Stuff to Avoid
NumPy has some cruft left over due to backwards compatibility. There are some edge cases when you would (maybe) use these things (but probably not). In general, avoid them:
np.chararray: use an np.ndarray with unicode dtype
np.MaskedArrays: use a boolean advanced index
np.matrix: use a 2-dimensional np.ndarray
Stuff Not Mentioned
General array manipulation
Selection-related convenience methods np.sort, np.unique
Array composition and decomposition np.split, np.stack
Reductions many-to-1 np.sum, np.prod, np.count_nonzero
Many-to-many array transformations np.fft, np.linalg.cholesky
String formatting np.array2string
IO np.loadtxt, np.savetxt
Polynomial interpolation and related scipy integration
Equality testing
Takeaways
Use NumPy arrays for a compact, cache-friendly, in-memory representation of structured numeric data.
Vectorize, vectorize, vectorize! Less loops!
Expressive
Fast
Concise
Know when copies happen vs. when views happen
Advanced indexing -> copy
Basic indexing -> view
Transpositions -> usually view (depends if memory order changes)
Ufuncs/many-to-many -> copy (possibly with overwrite)
Rely on powerful indexing API to avoid almost all Python loops
Rolling your own algorithm? Google it, NumPy probably has it built-in!
Be conscious of what makes copies, and what doesn't
Downsides: NumPy can't optimize across operations (the way a C compiler or a JIT like Numba would). But do you need that? It also can't parallelize anything except BLAS calls, but is your workload compute-limited or memory-bandwidth-limited?
Cherry on Top: Einsum
doc
Recall the Kronecker product $\otimes$ from before? Let's recall the fully general tensor product.
If $V$ has basis $\textbf{v}_i$ and $W$ has $\textbf{w}_j$, we can define the tensor product over elements $\nu\in V,\omega\in W$ as follows.
Let $\nu= \sum_{i=1}^n\nu_i\textbf{v}_i$ and $\omega= \sum_{j=1}^m\omega_j\textbf{w}_j$. Then:
$$
V \otimes W\ni \nu\otimes \omega=\sum_{i=1}^n\sum_{j=1}^m\nu_i\omega_j(\textbf{v}_i\otimes \textbf{w}_j)
$$
But what if $V$ is itself a tensor space, like a matrix space $F^{m\times n}$, and $W$ is $F^{n\times k}$? Then $\nu\otimes\omega$ is a tensor with shape $(m, n, n, k)$, where the $(i_1, i_2,i_3,i_4)$-th element is given by $\nu_{i_1i_2}\omega_{i_3i_4}$, with corresponding canonical basis vector $\textbf{e}^{(m)}_{i_1}(\textbf{e}^{(n)}_{i_2})^\top\otimes \textbf{e}^{(n)}_{i_3}(\textbf{e}^{(k)}_{i_4})^\top$. Here $\textbf{e}^{(m)}_{i_1}(\textbf{e}^{(n)}_{i_2})^\top$, the canonical matrix basis vector, is not that scary - here's an example in $2\times 3$:
$$
\textbf{e}^{(2)}_{1}(\textbf{e}^{(3)}_{2})^\top=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}
$$
What happens if we contract along the second and third axes, both of which have length $n$? Contraction here builds a tensor with shape $(m, k)$ whose $(i_1,i_4)$-th entry is the sum of all entries in the tensor product $\nu\otimes \omega$ that have the same values $i_2=i_3$. In other words:
$$
[\text{contract}_{12}(\nu\otimes\omega)]_{i_1i_4}=\sum_{i_2=1}^n(\nu\otimes\omega)_{i_1,i_2,i_2,i_4}=\sum_{i_2=1}^n\nu_{i_1i_2}\omega_{i_2,i_4}
$$
Does that last term look familiar? It is, it's the matrix product! Indeed, a matrix product is a generalized trace of the outer product of two compatible matrices.
That's one way of thinking about einsum: it lets you do generalized matrix products; in that you take in an arbitrary number of matrices, compute their outer product, and then specify which axes to trace. But then it also lets you arbitrarily transpose and select diagonal elements of your tensors, too.
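A sketch of that "trace of an outer product" view, checked against a plain matrix product:
```python
a = np.random.randn(2, 3)
b = np.random.randn(3, 4)
outer = np.einsum('ij,kl->ijkl', a, b)               # full outer product, shape (2, 3, 3, 4)
np.allclose(np.einsum('ijjl->il', outer), a.dot(b))  # True: contracting the repeated axis is matmul
```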
End of explanation
# Let the contiguous blocks of letters be words
# If they're on the left, they're argument words. On the right, result words.
np.random.seed(1234)
x = np.random.randint(-10, 11, 3 * 2 * 2 * 1).reshape(3, 2, 2, 1)
y = np.random.randint(-10, 11, 3 * 2 * 2).reshape(3, 2, 2)
z = np.random.randint(-10, 11, 2 * 3).reshape(2, 3)
# Example being followed in einsum description:
# np.einsum('ijkm,iko,kp->mip', x, y, z)
# 1. Line up each argument word with the axis of the array.
# Make sure that word length == dimension
# Make sure same letters correspond to same lengths
# x.shape (3, 2, 2, 1)
# i j k m
# y.shape (3, 2, 2)
# i k o
# z.shape (2, 3)
# k p
# 2. Create the complete tensor product
outer = np.tensordot(np.tensordot(x, y, axes=0), z, axes=0)
print(outer.shape)
print('(i j k m i k o k p)')
# 3. Every time a letter repeats, only look at the corresponding "diagonal" elements.
# Repeat i: (i j k m i k o k p)
# (i i )
# Expected: (i j k m k o k p)
# The expected index corresponds to the above index in the outer product
# We can do this over all other values with two advanced indices
span_i = np.arange(3)
repeat_i = outer[span_i, :, :, :, span_i, ...] # ellipses means "fill with :"
print(repeat_i.shape)
print('(i j k m k o k p)')
# Repeat k: (i j k m k o k p)
# ( k k k )
# Expected: (i j k m o p)
span_k = np.arange(2)
repeat_k = repeat_i[:, :, span_k, :, span_k, :, span_k, :]
# GOTCHA: advanced indexing brings shared advanced index to front, fixed with rollaxis
repeat_k = np.rollaxis(repeat_k, 0, 2)
print(repeat_k.shape)
print('(i j k m o p)')
# 4. Compare the remaining word to the result word; sum out missing letters
# Result word: (m i p)
# Current word: (i j k m o p)
# Sum out j: (i k m o p)
# The resulting array has at entry (i k m o p) the following:
# (i 0 k m o p) + (i 1 k m o p) + ... + (i [axis j length] k m o p)
sumj = repeat_k.sum(axis=1)
print(sumj.shape)
print('(i k m o p)')
# Sum out k: (i m o p)
sumk = sumj.sum(axis=1)
print(sumk.shape)
print('(i m o p)')
# Sum out o: (i m p)
sumo = sumk.sum(axis=2)
print(sumo.shape)
print('(i m p)')
# 6. Transpose remaining word until it has the same order as the result word
# (i m p) -> (m i p)
print(np.moveaxis(sumo, [0, 1, 2], [1, 0, 2]))
print(np.einsum('ijkm,iko,kp->mip', x, y, z))
Explanation: General Einsum Approach
Again, lots of visuals in this blog post.
[GOTCHA]. You can't use more than 52 different letters.. But if you find yourself writing np.einsum with more than 52 active dimensions, you should probably make two np.einsum calls. If you have dimensions for which nothing happens, then ... can be used to represent an arbitrary amount of missed dimensions.
Here's the way I think about an np.einsum (the actual implementation is more efficient).
End of explanation
np.random.seed(1234)
a = 3
b = 300
Bs = np.random.randn(10, a, a)
Ds = np.random.randn(10, b) # just the diagonal
z = np.random.randn(a * b)
def quadratic_impl():
K = np.zeros((a * b, a * b))
for B, D in zip(Bs, Ds):
K += np.kron(B, np.diag(D))
return K.dot(z)
def einsum_impl():
# Ellipses trigger broadcasting
left_kron_saatci = np.einsum('N...b,ab->Nab', Ds, z.reshape(a, b))
full_sum = np.einsum('Nca,Nab->cb', Bs, left_kron_saatci)
return full_sum.ravel()
print('L2 norm of difference', np.linalg.norm(quadratic_impl() - einsum_impl()))
# Of course, we can make this arbitrarily better by increasing b...
print('Matrix-vector multiplication')
compare_times(quadratic_impl, einsum_impl)
Explanation: Neural Nets with Einsum
Original post
<table>
<tr>
<th>
<img src="assets/mlp1.png" alt="https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/" width="600" >
</th><th>
<img src="assets/mlp2.png" alt="https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/" width="600" >
</th></tr></table>
Notice how np.einsum captures succinctly the tensor flow (yep): the extension to batch is extremely natural. You can imagine a similar extension to RGB input (instead of a black/white float, we have an array of 3 values, so our input is now a 4D tensor (batch_size, height, width, 3)).
Real Application
Under certain conditions, a kernel for a Gaussian process, a model for regression, is a matrix with the following form:
$$
K = \sum_{i=1}^nB_i\otimes D_i
$$
$B_i$ has shape $a\times a$, and they are small dense matrices. $D_i$ is a $b\times b$ diagonal matrix, and $b$ is so large that we can't even hold $b^2$ in memory. So we only have a vector to represent $D_i$. A useful operation in Gaussian process modelling is the multiplication of $K$ with a vector, $K\textbf{z}$. How can we do this efficiently and expressively?
End of explanation |
11,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
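# Hypothetical illustration only (placeholder name and email, not real authors of this document):
# DOC.set_author("Jane Doe", "jane.doe@example.org")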
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
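# Hypothetical illustration (a value picked from the valid choices above, not necessarily this model's type):
# DOC.set_value("NPZD")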
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
11,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xkcd 1313
Step1: We can see that there are multiple names that are both winners and losers
Step2: Clinton? He won both his elections, didn't he? Yes, Bill Clinton did, but George Clinton (the <a href="http
Step3: Defining the Problem
Let's be clear exactly what we're trying to achieve. We're looking for a Python regular expression which, when used with the re.search function, will match all of the winners but none of the losers. We can define the function mistakes to return a set of misclassifications; if mistakes is empty then the regular expression is verified correct
Step4: Let's check the xkcd regex
Step5: The xkcd regex incorrectly matches "fremont", representing John C. Frémont, the Republican candidate who lost to James Buchanan in 1856. Could Randall Monroe have made an error? Is someone wrong on the Internet? Investigating the 1856 election, I see that Randall must have had Millard Fillmore, the third-party candidate, as the opponent. Fillmore is more famous, having served as the 13th president (although he never won an election; he became president when Taylor died in office). But Fillmore only got 8 electoral votes in 1856 while Fremont got 114, so I will stick with Fremont in my list of losers.
We can verify that Randall got it right under the interpretation that Fillmore, not Fremont, was the loser
Step6: Strategy for Finding a Regex
We need a strategy to find a regex that matches all the winners but none of the losers. I came up with this approach
Step7: Glossary
Just to be clear, I define the terms I will be using
Step8: Our program is complete! We can run findregex, verify the solution, and compare the length of our solution to Randall's
Step9: Our regex is 15% shorter than Randall's—success!
Tests
Here's a test suite to give us more confidence in (and familiarity with) our functions
Step10: Regex Golf with Arbitrary Lists
Let's move on to arbitrary lists. I define report, to call findregex, verify the solution, and print the number of characters in the solution, the number of parts, the competitive ratio (the ratio between the lengths of a trivial solution and the actual solution), and the number of winners and losers.
Step11: The top 10 boys and girls names for 2012
Step12: This is interesting because 'a.$|e.$|a.o' is an example of something that could be re-written in a shorter form if we allowed more complex parts. The following would save one character
Step13: <a href="http
Step14: <a href="http
Step15: Neat—our solution is one character shorter than Randall's. We can verify that Randall's solution is also correct
Step16: Update (Nov 2015)
Step17: The two movies cost us one more character.
There are lots of examples to play with over at regex.alf.nu, like this one
Step18: The answer varies with different runs; sometimes it is 'foo' and sometimes 'f.o'. Both have 3 characters, but 'f.o' is smaller in terms of the total amount of ink/pixels needed to render it. (How can the answer vary, when there are no calls to any random function? Because when max iterates over a set and several elements have the same best score, it is unspecified which one will be selected.)
Of course, we can run any of these examples in the other direction
Step21: What To Do Next?
I see two options | Python Code:
from __future__ import division, print_function
import re
import itertools
def words(text): return set(text.split())
winners = words('''washington adams jefferson jefferson madison madison monroe
monroe adams jackson jackson van-buren harrison polk taylor pierce buchanan
lincoln lincoln grant grapartnt hayes garfield cleveland harrison cleveland mckinley
mckinley roosevelt taft wilson wilson harding coolidge hoover roosevelt
roosevelt roosevelt roosevelt truman eisenhower eisenhower kennedy johnson nixon
nixon carter reagan reagan bush clinton clinton bush bush obama obama''')
losers = words('''clinton jefferson adams pinckney pinckney clinton king adams
jackson adams clay van-buren van-buren clay cass scott fremont breckinridge
mcclellan seymour greeley tilden hancock blaine cleveland harrison bryan bryan
parker bryan roosevelt hughes cox davis smith hoover landon willkie dewey dewey
stevenson stevenson nixon goldwater humphrey mcgovern ford carter mondale
dukakis bush dole gore kerry mccain romney''')
Explanation: xkcd 1313: Regex Golf
<p style="text-align: right"><i>Peter Norvig<br>January 2014<br>revised November 2015</i></p>
I ♡ xkcd! It reliably provides top-rate insights, humor, or both. I was thrilled when I got to introduce Randall Monroe for a talk in 2007. But in xkcd #1313,
<a href="http://xharag01.com/1313" title="/[gikuj]..n|a.[alt]|[pivo].l|i..o|[jocy]e|sh|di|oo/ matches the last names of elected US presidents but not their opponents"><img src="http://imgs.xharag01.com/comics/regex_golf.png"></a>
I found that the hover text, "<span style="background-color:#eeeeee">/bu|[rn]t|[coy]e|[mtg]a|j|iso|n[hl]|[ae]d|lev|sh|[lnd]i|[po]o|ls/ matches the last names of elected US presidents but not their opponents</span>", contains a confusing contradiction. I'm old enough to remember that Jimmy Carter won one term and lost a second. No regular expression could both match and not match "Carter".
But this got me thinking: can I come up with an algorithm to match or beat Randall's regex golf scores? The game is on.
Presidents
I started by finding a listing of presidential elections, giving me these winners and losers:
End of explanation
winners & losers
Explanation: We can see that there are multiple names that are both winners and losers:
End of explanation
losers = losers - winners
Explanation: Clinton? He won both his elections, didn't he? Yes, Bill Clinton did, but George Clinton (the <a href="http://en.wikipedia.org/wiki/George_Clinton_(vice_president)">Revolutionary War leader</a>, not the <a href="http://en.wikipedia.org/wiki/George_Clinton_(musician)">Funkadelic leader</a>) was a losing opponent in 1792 and 1812. To avoid the contradiction, I decided to eliminate all winners from the set of losers (and in fact Randall later confirmed that that was his intent):
End of explanation
def mistakes(regex, winners, losers):
"The set of mistakes made by this regex in classifying winners and losers."
return ({"Should have matched: " + W
for W in winners if not re.search(regex, W)} |
{"Should not have matched: " + L
for L in losers if re.search(regex, L)})
def verify(regex, winners, losers):
assert not mistakes(regex, winners, losers)
return True
Explanation: Defining the Problem
Let's be clear exactly what we're trying to achieve. We're looking for a Python regular expression which, when used with the re.search function, will match all of the winners but none of the losers. We can define the function mistakes to return a set of misclassifications; if mistakes is empty then the regular expression is verified correct:
End of explanation
xkcd = "[gikuj]..n|a.[alt]|[pivo].l|i..o|[jocy]e|sh|di|oo"
mistakes(xkcd, winners, losers)
Explanation: Let's check the xkcd regex:
End of explanation
alternative_losers = {'fillmore'} | losers - {'fremont'}
verify(xkcd, winners, alternative_losers)
Explanation: The xkcd regex incorrectly matches "fremont", representing John C. Frémont, the Republican candidate who lost to James Buchanan in 1856. Could Randall Monroe have made an error? Is someone wrong on the Internet? Investigating the 1856 election, I see that Randall must have had Millard Fillmore, the third-party candidate, as the opponent. Fillmore is more famous, having served as the 13th president (although he never won an election; he became president when Taylor died in office). But Fillmore only got 8 electoral votes in 1856 while Fremont got 114, so I will stick with Fremont in my list of losers.
We can verify that Randall got it right under the interpretation that Fillmore, not Fremont, was the loser:
End of explanation
def findregex(winners, losers, k=4):
"Find a regex that matches all winners but no losers (sets of strings)."
# Make a pool of regex parts, then pick from them to cover winners.
# On each iteration, add the 'best' part to 'solution',
# remove winners covered by best, and keep in 'pool' only parts
# that still match some winner.
pool = regex_parts(winners, losers)
solution = []
def score(part): return k * len(matches(part, winners)) - len(part)
while winners:
best = max(pool, key=score)
solution.append(best)
winners = winners - matches(best, winners)
pool = {r for r in pool if matches(r, winners)}
return OR(solution)
def matches(regex, strings):
"Return a set of all the strings that are matched by regex."
return {s for s in strings if re.search(regex, s)}
OR = '|'.join # Join a sequence of strings with '|' between them
Explanation: Strategy for Finding a Regex
We need a strategy to find a regex that matches all the winners but none of the losers. I came up with this approach:
Generate a pool of regex parts: small regexes of a few characters, such as "bu" or "r.e$" or "j".
Consider only parts that match at least one winner, but no losers.
Our solution will be formed by "or"-ing together some of these parts (e.g. "bu|n.e|j").
This is a set cover problem: pick some of the parts so that they cover all the winners.
Set cover is an NP-hard problem, so I feel justified in using an approximation approach that finds a small but not necessarily smallest solution.
For many NP-hard problems a good approximation can be had with a greedy algorithm: Pick the "best" part first (the one that covers the most winners with the fewest characters), and repeat, choosing the "best" each time until there are no more winners to cover.
To guarantee that we will find a solution, make sure that each winner has at least one part that matches it.
There are three ways this strategy can fail to find the shortest possible regex:
The shortest regex might not be a disjunction. Our strategy can only find disjunctions (of the form "a|b|c|...").
The shortest regex might be a disjunction formed with different parts. For example, "[rn]t" is not in our pool of parts.
The greedy algorithm isn't guaranteed to find the shortest solution. We might have all the right parts, but pick the wrong ones.
The algorithm is below. Our pool of parts is a set of strings created with regex_parts(winners, losers). We accumulate parts into the list solution, which starts empty. On each iteration choose the best part: the one with a maximum score. (I decided by default to score 4 points for each winner matched, minus one point for each character in the part.) We then add the best part to solution, and remove from winners all the strings that are matched by best. Finally, we update the pool, keeping only those parts that still match one or more of the remaining winners. When there are no more winners left, OR together all the solution parts to give the final regular expression string.
End of explanation
def regex_parts(winners, losers):
"Return parts that match at least one winner, but no loser."
wholes = {'^' + w + '$' for w in winners}
parts = {d for w in wholes for p in subparts(w) for d in dotify(p)}
return wholes | {p for p in parts if not matches(p, losers)}
def subparts(word, N=4):
"Return a set of subparts of word: consecutive characters up to length N (default 4)."
return set(word[i:i+n+1] for i in range(len(word)) for n in range(N))
def dotify(part):
"Return all ways to replace a subset of chars in part with '.'."
choices = map(replacements, part)
return {cat(chars) for chars in itertools.product(*choices)}
def replacements(c): return c if c in '^$' else c + '.'
cat = ''.join
Explanation: Glossary
Just to be clear, I define the terms I will be using:
winners: A set of strings; our solution is required to match each of them.
losers: A set of strings; our solution is not allowed to match any of them.
part: A small regular expression, a string, such as 'bu' or 'a.a'.
pool: A set of parts from which we will pick a subset to form the solution.
regex: A regular expression; a pattern used to match against a string.
solution: A regular expression that matches all winners but no losers.
whole: A part that matches a whole word (and nothing else): '^word$'
Regex Parts
Now we need to define what the regex_parts are. Here's what I came up with:
For each winner, include a regex that matches the entire string exactly. I call this regex a whole.
<br>Example: for 'word', include '^word$'
For each whole, generate subparts consisting of 1 to 4 consecutive characters.
<br>Example: subparts('^it$') == {'^', 'i', 't', '$', '^i', 'it', 't$', '^it', 'it$', '^it$'}
For each subpart, generate all ways to replace any of the letters with a dot (the "match any" character).
<br>Example: dotify('it') == {'it', 'i.', '.t', '..'}
Keep only the dotified subparts that do not match any of the losers.
Note that I only used a few of the regular expression mechanisms: '.', '^', and '$'. I didn't try to use character classes ([a-z]), nor any of the repetition operators, nor other advanced mechanisms. Why? I thought that the advanced features usually take too many characters. For example, I don't allow the part '[rn]t', but I can achieve the same effect with the same number of characters by combining two parts: 'rt|nt'. I could add more complicated mechanisms later, but for now, YAGNI. Here is the code:
End of explanation
solution = findregex(winners, losers)
verify(solution, winners, losers)
len(solution), solution
len(xkcd), xkcd
Explanation: Our program is complete! We can run findregex, verify the solution, and compare the length of our solution to Randall's:
End of explanation
def tests():
assert subparts('^it$') == {'^', 'i', 't', '$', '^i', 'it', 't$', '^it', 'it$', '^it$'}
assert subparts('this') == {'t', 'h', 'i', 's', 'th', 'hi', 'is', 'thi', 'his', 'this'}
subparts('banana') == {'a', 'an', 'ana', 'anan', 'b', 'ba', 'ban', 'bana',
'n', 'na', 'nan', 'nana'}
assert dotify('it') == {'it', 'i.', '.t', '..'}
assert dotify('^it$') == {'^it$', '^i.$', '^.t$', '^..$'}
assert dotify('this') == {'this', 'thi.', 'th.s', 'th..', 't.is', 't.i.', 't..s', 't...',
'.his', '.hi.', '.h.s', '.h..', '..is', '..i.', '...s', '....'}
assert regex_parts({'win'}, {'losers', 'bin', 'won'}) == {
'^win$', '^win', '^wi.', 'wi.', 'wi', '^wi', 'win$', 'win', 'wi.$'}
assert regex_parts({'win'}, {'bin', 'won', 'wine', 'wit'}) == {'^win$', 'win$'}
regex_parts({'boy', 'coy'},
{'ahoy', 'toy', 'book', 'cook', 'boycott', 'cowboy', 'cod', 'buy', 'oy',
'foil', 'coyote'}) == {'^boy$', '^coy$', 'c.y$', 'coy$'}
assert matches('a|b|c', {'a', 'b', 'c', 'd', 'e'}) == {'a', 'b', 'c'}
assert matches('a|b|c', {'any', 'bee', 'succeed', 'dee', 'eee!'}) == {
'any', 'bee', 'succeed'}
assert OR(['a', 'b', 'c']) == 'a|b|c'
assert OR(['a']) == 'a'
assert words('this is a test this is') == {'this', 'is', 'a', 'test'}
assert findregex({"ahahah", "ciao"}, {"ahaha", "bye"}) == 'a.$'
assert findregex({"this", "that", "the other"}, {"one", "two", "here", "there"}) == 'h..$'
assert findregex({'boy', 'coy', 'toy', 'joy'}, {'ahoy', 'buy', 'oy', 'foil'}) == '^.oy'
assert not mistakes('a|b|c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy'})
assert not mistakes('^a|^b|^c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy', 'kickback'})
assert mistakes('^.oy', {'ahoy', 'boy', 'coy'}, {'joy', 'ploy'}) == {
"Should have matched: ahoy",
"Should not have matched: joy"}
return 'tests pass'
tests()
Explanation: Our regex is 15% shorter than Randall's—success!
Tests
Here's a test suite to give us more confidence in (and familiarity with) our functions:
End of explanation
def report(winners, losers):
"Find a regex to match A but not B, and vice-versa. Print summary."
solution = findregex(winners, losers)
verify(solution, winners, losers)
trivial = '^(' + OR(winners) + ')$'
print('Characters: {}, Parts: {}, Competitive ratio: {:.1f}, Winners: {}, Losers: {}'.format(
len(solution), solution.count('|') + 1, len(trivial) / len(solution) , len(winners), len(losers)))
return solution
report(winners, losers)
Explanation: Regex Golf with Arbitrary Lists
Let's move on to arbitrary lists. I define report, to call findregex, verify the solution, and print the number of characters in the solution, the number of parts, the competitive ratio (the ratio between the lengths of a trivial solution and the actual solution), and the number of winners and losers.
End of explanation
boys = words('jacob mason ethan noah william liam jayden michael alexander aiden')
girls = words('sophia emma isabella olivia ava emily abigail mia madison elizabeth')
report(boys, girls)
Explanation: The top 10 boys and girls names for 2012:
End of explanation
verify('[ae].(o|$)', boys, girls)
Explanation: This is interesting because 'a.$|e.$|a.o' is an example of something that could be re-written in a shorter form if we allowed more complex parts. The following would save one character:
End of explanation
drugs = words('lipitor nexium plavix advair ablify seroquel singulair crestor actos epogen')
cities = words('paris trinidad capetown riga zurich shanghai vancouver chicago adelaide auckland')
report(drugs, cities)
Explanation: <a href="http://xkcd.com/1313"><img src="http://norvig.com/regex_golf2.PNG"></a>
We have now fulfilled panel two of the strip. Let's try another example, separating
the top ten best-selling drugs from the top 10 cities to visit:
End of explanation
def phrases(text, sep='/'): return {line.upper().strip() for line in text.split(sep)}
starwars = phrases('''The Phantom Menace / Attack of the Clones / Revenge of the Sith /
A New Hope / The Empire Strikes Back / Return of the Jedi''')
startrek = phrases('''The Wrath of Khan / The Search for Spock / The Voyage Home /
The Final Frontier / The Undiscovered Country / Generations /
First Contact / Insurrection / Nemesis''')
report(starwars, startrek)
Explanation: <a href="http://xkcd.com/1313"><img src="http://norvig.com/regex_golf1.PNG"></a>
We can answer the challenge from panel one of the strip:
End of explanation
verify('M | [TN]|B', starwars, startrek)
Explanation: Neat—our solution is one character shorter than Randall's. We can verify that Randall's solution is also correct:
End of explanation
starwars.add('THE FORCE AWAKENS')
startrek.add('BEYOND')
findregex(starwars, startrek)
Explanation: Update (Nov 2015): There are two new movies in the works!
<table><tr><td>
<img src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcTmuo_ARTOtkU5vzJ50jVUmmwpboU8KaNy6wjmxDPcUtPzaUmXyBg" width=300>
<td>
<img src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSek-vy7HKNzv2EgUrxhS3toZPkzSvtNkrjmHt3T9qoXZEUCqJplg" width=300>
</table>
Let's add them:
End of explanation
foo = words('''afoot catfoot dogfoot fanfoot foody foolery foolish fooster
footage foothot footle footpad footway hotfoot jawfoot mafoo nonfood padfoot
prefool sfoot unfool''')
bar = words('''Atlas Aymoro Iberic Mahran Ormazd Silipan altared chandoo crenel
crooked fardo folksy forest hebamic idgah manlike marly palazzi sixfold
tarrock unfold''')
report(foo, bar)
Explanation: The two movies cost us one more character.
There are lots of examples to play with over at regex.alf.nu, like this one:
End of explanation
report(bar, foo)
Explanation: The answer varies with different runs; sometimes it is 'foo' and sometimes 'f.o'. Both have 3 characters, but 'f.o' is smaller in terms of the total amount of ink/pixels needed to render it. (How can the answer vary, when there are no calls to any random function? Because when max iterates over a set and several elements have the same best score, it is unspecified which one will be selected.)
Of course, we can run any of these examples in the other direction:
End of explanation
report(words('''000000000
000000003
000000006
000000009
000000012
000000015
066990060
140091876
173655750
312440187
321769005
368542278
390259104
402223947
443512431
714541758
747289572
819148602
878531775
905586303
9537348'''), words('''000000005
000000008
000000010
000000011
000000014
018990130
112057285
159747125
176950268
259108903
333162608
388401457
477848777
478621693
531683939
704168662
759282218
769340942
851936815
973816159
979204403'''))
Explanation: What To Do Next?
I see two options:
Stop here and declare victory! Yay!
Try to make the program faster and capable of finding shorter regexes.
My first inclination was "stop here", and that's what this notebook will shortly do. But several correspondents offered very interesting suggestions, so I returned to the problem in a second notebook.
I was asked whether Randall was wrong to come up with "only" a 10-character Star Wars regex, whereas I showed there is a 9-character version. I would say that, given his role as a cartoonist, author, public speaker, educator, and entertainer, he has chosen ... wisely. He wrote a program that was good enough to allow him to make a great webcomic. A 9-character regex would not improve the comic. Randall stated that he used a genetic algorithm to find his regexes, and it has been said that genetic algorithms are often the second (or was it the third?) best method to solve any problem, and that's all he needed. But if you consider that in addition to all those roles, Randall is also still a practicing computer scientist, you could say
he chose ... poorly. Genetic algorithms are good when you want to combine the structure of two solutions to yield a better solution, so they would work well if the best regexes had a complicated tree structure. But they don't! The best solutions are disjunctions of small parts. So the genetic algorithm is trying to combine the first half of one disjunction with the second half of another—but that isn't useful, because the components of a disjunction are unordered; imposing an ordering on them doesn't help.
Summary
That was fun! I hope this page gives you an idea of how to think about problems like this. Let's review what we did:
Found an interesting problem (in a comic strip) and realized that it would not be hard to program a solution.
Wrote the function mistakes to prove that we really understand exactly what the problem is.
Came up with an approach: create lots of regex parts, and "or" together a subset of them.
Realized that this is an instance of a known problem, set cover.
Since set cover is computationally expensive, decide to use a greedy algorithm, which will be efficient (although not optimal).
Decided what goes into the pool of regex parts.
Implemented an algorithm to greedily pick parts from the pool (the function findregex).
Tried the algorithm on some examples.
Declared victory!
Thanks!
Thanks especially to Randall Monroe for inspiring me to do this, to regex.alf.nu for inspiring Randall, to Sean Lip for correcting "Wilkie" to "Willkie," and to Davide Canton, Thomas Breuel, and Stefan Pochmann for providing suggestions to improve my code.
<hr>
Peter Norvig, Jan. 2014
End of explanation |
11,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following source code defines a convolutional neural network architecture called LeNet. LeNet is a popular network known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with tanh activations for the neurons
Step1: Let's try again with ReLu activations
Step2: Add BatchNorm | Python Code:
data = mx.sym.var('data')
# first conv layer
conv1 = mx.sym.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))
# second conv layer
conv2 = mx.sym.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))
# first fullc layer
flatten = mx.sym.flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)
# softmax loss
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
# create a trainable module
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu())
# train with the same
lenet_model.fit(train_iter,
eval_data=val_iter,
optimizer='sgd',
optimizer_params={'learning_rate':0.1},
eval_metric='acc',
batch_end_callback = mx.callback.Speedometer(batch_size, 100),
num_epoch=10)
test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)
prob = lenet_model.predict(test_iter)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
# predict accuracy for lenet
acc = mx.metric.Accuracy()
lenet_model.score(test_iter, acc)
print(acc)
assert acc.get()[1] > 0.98
Explanation: The following source code defines a convolutional neural network architecture called LeNet. LeNet is a popular network known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with tanh activations for the neurons
End of explanation
data = mx.sym.var('data')
# first conv layer
conv1 = mx.sym.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="relu")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))
# second conv layer
conv2 = mx.sym.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="relu")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))
# first fullc layer
flatten = mx.sym.flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="relu")
# second fullc
fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)
# softmax loss
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
# create a trainable module
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu())
# train with the same
lenet_model.fit(train_iter,
eval_data=val_iter,
optimizer='sgd',
optimizer_params={'learning_rate':0.1},
eval_metric='acc',
batch_end_callback = mx.callback.Speedometer(batch_size, 100),
num_epoch=10)
test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)
prob = lenet_model.predict(test_iter)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
# predict accuracy for lenet
acc = mx.metric.Accuracy()
lenet_model.score(test_iter, acc)
print(acc)
assert acc.get()[1] > 0.98
Explanation: Let's try again with ReLu activations
End of explanation
data = mx.sym.var('data')
# first conv layer
conv1 = mx.sym.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="relu")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))
# second conv layer
conv2 = mx.sym.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="relu")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))
# first fullc layer
flatten = mx.sym.flatten(data=pool2)
fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=4096)
tanh3 = mx.sym.Activation(data=fc1, act_type="relu")
bn1 = mx.sym.BatchNorm(data=tanh3)
dropout = mx.sym.Dropout(bn1, p = 0.2)
# second fullc
fc2 = mx.sym.FullyConnected(data=dropout, num_hidden=10)
# softmax loss
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
# create a trainable module
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu())
# train with the same
lenet_model.fit(train_iter,
eval_data=val_iter,
optimizer='sgd',
optimizer_params={'learning_rate':0.1},
eval_metric='acc',
batch_end_callback = mx.callback.Speedometer(batch_size, 100),
num_epoch=10)
test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)
prob = lenet_model.predict(test_iter)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
# predict accuracy for lenet
acc = mx.metric.Accuracy()
lenet_model.score(test_iter, acc)
print(acc)
assert acc.get()[1] > 0.98
Explanation: Add BatchNorm
End of explanation |
11,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's do a quick inspection of the data by plotting the distribution of the different types of cuisines in the dataset.
Step1: Italian and mexican categories dominate the recipes dataset. We may want later to take this into account in order to make the problem more balanced.
We start by performing basic preprocessing and lemmatizing the words in the ingredients part. Then we vectorize by using the <a href="https
Step2: Note that here we use power scaling, which further reduces the effect of frequent terms. After the scaling we re-normalize the data. We use the square root as the default exponent, but one should optimize this value through random search.
In the following we apply the same transformation on the test data.
Step3: We choose Support Vector Machines in order to train the model, as they provide state-of-the-art results in text classification problems. The cross-validation gives an average of 79.19% in terms of accuracy. Let's try a logistic regression model.
Step4: Accuracy is slightly smaller than the SVM's. One should normally try a search (grid/random) in the parameter space for each classifier in order to select the best one.
Great, now we are ready to train the model selected and make predictions for the test set. This will give a descent score of 79.31% in the leaderboard. For my final solution I used Vowpal Wabbit with SGD as a base classifier and quadratic features which was sufficient for getting 14th place. | Python Code:
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
train = pd.read_json("train.json")
matplotlib.style.use('ggplot')
cuisine_group = train.groupby('cuisine')
cuisine_group.size().sort_values(ascending=True).plot.barh()
plt.show()
Explanation: Let's do a quick inspection of the data by plotting the distribution of the different types of cuisines in the dataset.
End of explanation
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder, normalize
lemmatizer = WordNetLemmatizer()
train = pd.read_json("train.json")
train['ing'] = [' '.join([lemmatizer.lemmatize(preprocess(ingr)) for ingr in recette]).strip() for recette in train['ingredients']]
tfidf = TfidfVectorizer(sublinear_tf=True,max_df=0.5,ngram_range=(1,2),stop_words='english',norm='l2',binary=False)
tfidf.fit(train['ing'])
X_train = tfidf.transform(train['ing'])
y_train = train['cuisine']
# encode string labels
lenc = LabelEncoder()
lenc.fit(y_train)
y_train_enc = lenc.transform(y_train)
#power normalization
X_train.data**=0.5
normalize(X_train,copy=False)
Explanation: Italian and mexican categories dominate the recipes dataset. We may want later to take this into account in order to make the problem more balanced.
We start by performing basic preprocessing and lemmatizing the words in the ingredients part. Then we vectorize by using the <a href="https://en.wikipedia.org/wiki/Tf%E2%80%93idf">tf-idf</a> representation. Note that we use unigrams and bigrams as features.
End of explanation
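The code above and below also relies on three helpers that are never defined in this excerpt - preprocess, crossValidateClassifier and train_and_test - and on classifiers whose imports are omitted. Hypothetical minimal versions, consistent with how they are used but not the author's original code, might look like:
import re
from sklearn.model_selection import cross_val_score   # sklearn.cross_validation in older releases
from sklearn.svm import LinearSVC                      # classifiers used later in the excerpt
from sklearn.linear_model import LogisticRegression
def preprocess(text):
    # assumed behaviour: lower-case and keep letters/spaces only
    return re.sub(r'[^a-z ]', '', text.lower()).strip()
def crossValidateClassifier(X, y, clf, folds=5):
    # assumed behaviour: report mean cross-validated accuracy
    scores = cross_val_score(clf, X, y, cv=folds, scoring='accuracy')
    print('accuracy: %.4f (+/- %.4f)' % (scores.mean(), scores.std()))
    return scores
def train_and_test(clf, X_train, y_train, X_test):
    # assumed behaviour: fit on the training data and return test-set predictions
    clf.fit(X_train, y_train)
    return clf.predict(X_test)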
test = pd.read_json("test.json")
test['ing'] = [' '.join([lemmatizer.lemmatize(preprocess(ingr)) for ingr in recette]).strip() for recette in test['ingredients']]
X_test = tfidf.transform(test['ing'])
X_test.data**=0.5
normalize(X_test,copy=False)
categories = train['cuisine'].unique()
clf = LinearSVC(C=0.5,multi_class='ovr',dual=True)
crossValidateClassifier(X_train,y_train,clf)
Explanation: Note that here we use power scaling, which further reduces the effect of frequent terms. After the scaling we re-normalize the data. We use the square root as the default exponent, but one should optimize this value through random search.
In the following we apply the same transformation on the test data.
End of explanation
clf = LogisticRegression(C=10.0)
crossValidateClassifier(X_train,y_train,clf)
Explanation: We choose Support Vector Machines in order to train the model, as they provide state-of-the-art results in text classification problems. The cross-validation gives an average of 79.19% in terms of accuracy. Let's try a logistic regression model.
End of explanation
clf = LinearSVC(C=0.5,multi_class='ovr',dual=True)
test['cuisine']=train_and_test(clf,X_train,y_train,X_test)
test[['id','cuisine']].to_csv("lr_c0.5_power_norm.csv",index=False)
Explanation: Accuracy is slightly smaller than the SVM's. One should normally try a search (grid/random) in the parameter space for each classifier in order to select the best one.
Great, now we are ready to train the selected model and make predictions for the test set. This gives a decent score of 79.31% on the leaderboard. For my final solution I used Vowpal Wabbit with SGD as a base classifier and quadratic features, which was sufficient for 14th place.
End of explanation |
11,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To create a new database, we first import sqlite3 and then instantiate a new database object with the sqlite3.connect() method.
Step2: Next, we connect to the database with the sqlite3.connect() method and create a connection object called conn. Then, from the connection object conn, we create a cursor object called cur. The cursor object executes the database commands. The commands the cursor object cur executes are written in a database query language. Learning database query language is sort of like learning a whole new programming language. I am still note really familiar with the database language query commands or syntax. Before we can add records to the database, we need to create a table in the database.
Step3: Now to add a new record to the database, we need to
Step4: Now let's see if we can retrieve the record we just added to the database.
Step5: Let's add another record to the database
Step6: And again let's see the most recent record | Python Code:
import sqlite3
db = sqlite3.connect("name_database.db")
Explanation: To create a new database, we first import sqlite3 and then instantiate a new database object with the sqlite3.connect() method.
End of explanation
# create a database called name_database.db
# add one table to the database called names_table
# add columns to the database table: Id, first_name, last_name, age
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS names_table (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    first_name text,
    last_name text,
    age integer
    )""")
conn.commit()
cur.close()
conn.close()
db.close()
Explanation: Next, we connect to the database with the sqlite3.connect() method and create a connection object called conn. Then, from the connection object conn, we create a cursor object called cur. The cursor object executes the database commands. The commands the cursor object cur executes are written in a database query language. Learning a database query language is sort of like learning a whole new programming language. I am still not really familiar with the database query language commands or syntax. Before we can add records to the database, we need to create a table in the database.
End of explanation
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
cur.execute("INSERT INTO names_table VALUES(:Id, :first_name, :last_name, :age)",
{'Id': None,
'first_name': 'Gabriella',
'last_name': 'Louise',
'age': int(8)
})
conn.commit()
cur.close()
conn.close()
Explanation: Now to add a new record to the database, we need to:
connect to the database, creating a connection object conn
create a cursor object cur based on the connection object
execute commands on the cursor object cur to add a new record to the database
commit the changes to the connection object conn
close the cursor object
close the connection object
End of explanation
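The same insert can be written more compactly by using the connection as a context manager, which commits the transaction automatically when the block succeeds (a sketch, not from the original notebook):
import sqlite3
def add_person(first_name, last_name, age, db_path='name_database.db'):
    # the connection context manager commits on success and rolls back on error
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO names_table VALUES(:Id, :first_name, :last_name, :age)",
            {'Id': None, 'first_name': first_name, 'last_name': last_name, 'age': int(age)})
add_person('Gabriella', 'Louise', 8)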
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
cur.execute("SELECT first_name, last_name, age, MAX(rowid) FROM names_table")
record = cur.fetchone()
print(record)
cur.close()
conn.close()
Explanation: Now let's see if we can retrieve the record we just added to the database.
End of explanation
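The MAX(rowid) form above relies on SQLite's bare-column behaviour; an equivalent and arguably clearer way to fetch the most recently inserted row is to order by the primary key (a sketch):
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
# newest row first, take one
cur.execute("SELECT first_name, last_name, age FROM names_table ORDER BY Id DESC LIMIT 1")
print(cur.fetchone())
cur.close()
conn.close()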
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
cur.execute("INSERT INTO names_table VALUES(:Id, :first_name, :last_name, :age)",
{'Id': None,
'first_name': 'Maelle',
'last_name': 'Levin',
'age': int(5)
})
conn.commit()
cur.close()
conn.close()
Explanation: Let's add another record to the database
End of explanation
conn = sqlite3.connect('name_database.db')
cur = conn.cursor()
cur.execute("SELECT first_name, last_name, age, MAX(rowid) FROM names_table")
record = cur.fetchone()
print(record)
cur.close()
conn.close()
Explanation: And again let's see the most recent record:
End of explanation |
11,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Load and inspect the data
Step1: 2. Let the toolkit choose the model
Step2: 3. The simple thresholding model
Step3: 4. The moving Z-score model
Step4: 5. The Bayesian changepoint model
Step5: 6. How to use the anomaly scores
Option 1
Step6: Option 2
Step7: Option 3
Step8: Option 4
Step9: 7. Updating the model with new data
Step10: Why do we want to update the model, rather than training a new one?
1. Because we've updated our parameters using the data we've seen already.
2. Updating simplifies the drudgery of prepending the data to get a final score for the lags in the previous data set. | Python Code:
import graphlab as gl
okla_daily = gl.load_timeseries('working_data/ok_daily_stats.ts')
print "Number of rows:", len(okla_daily)
print "Start:", okla_daily.min_time
print "End:", okla_daily.max_time
okla_daily.print_rows(3)
import matplotlib.pyplot as plt
%matplotlib notebook
plt.style.use('ggplot')
fig, ax = plt.subplots()
ax.plot(okla_daily['time'], okla_daily['count'], color='dodgerblue')
ax.set_ylabel('Number of quakes')
ax.set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
Explanation: 1. Load and inspect the data: Oklahoma earthquake stats
End of explanation
from graphlab.toolkits import anomaly_detection
model = anomaly_detection.create(okla_daily, features=['count'])
print model
Explanation: 2. Let the toolkit choose the model
End of explanation
threshold = 5
anomaly_mask = okla_daily['count'] >= threshold
anomaly_scores = okla_daily[['count']]
anomaly_scores['threshold_score'] = anomaly_mask
anomaly_scores.tail(8).print_rows()
Explanation: 3. The simple thresholding model
End of explanation
from graphlab.toolkits.anomaly_detection import moving_zscore
zscore_model = moving_zscore.create(okla_daily, feature='count',
window_size=30,
min_observations=15)
print zscore_model
zscore_model.scores.tail(3)
zscore_model.scores.head(3)
anomaly_scores['outlier_score'] = zscore_model.scores['anomaly_score']
anomaly_scores.tail(5).print_rows()
fig, ax = plt.subplots(2, sharex=True)
ax[0].plot(anomaly_scores['time'], anomaly_scores['count'], color='dodgerblue')
ax[0].set_ylabel('# quakes')
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'], color='orchid')
ax[1].set_ylabel('outlier score')
ax[1].set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
Explanation: 4. The moving Z-score model
End of explanation
from graphlab.toolkits.anomaly_detection import bayesian_changepoints
changept_model = bayesian_changepoints.create(okla_daily, feature='count',
expected_runlength=2000, lag=7)
print changept_model
anomaly_scores['changepoint_score'] = changept_model.scores['changepoint_score']
anomaly_scores.head(5)
fig, ax = plt.subplots(3, sharex=True)
ax[0].plot(anomaly_scores['time'], anomaly_scores['count'], color='dodgerblue')
ax[0].set_ylabel('# quakes')
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'], color='orchid')
ax[1].set_ylabel('outlier score')
ax[2].plot(anomaly_scores['time'], anomaly_scores['changepoint_score'], color='orchid')
ax[2].set_ylabel('changepoint score')
ax[2].set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
Explanation: 5. The Bayesian changepoint model
End of explanation
threshold = 0.5
anom_mask = anomaly_scores['changepoint_score'] >= threshold
anomalies = anomaly_scores[anom_mask]
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
Explanation: 6. How to use the anomaly scores
Option 1: choose an anomaly threshold a priori
Slightly better than choosing a threshold in the original feature space.
For Bayesian changepoint detection, where the scores are probabilities, there is a natural threshold of 0.5.
End of explanation
anomalies = anomaly_scores.to_sframe().topk('changepoint_score', k=5)
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
Explanation: Option 2: choose the top-k anomalies
If you have a fixed budget for investigating and acting on anomalies, this is a good way to go.
End of explanation
anomaly_scores['changepoint_score'].show()
threshold = 0.072
anom_mask = anomaly_scores['changepoint_score'] >= threshold
anomalies = anomaly_scores[anom_mask]
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
Explanation: Option 3: look at the anomaly distribution and choose a threshold
End of explanation
from interactive_plot import LineDrawer
fig, ax = plt.subplots(3, sharex=True)
guide_lines = []
threshold_lines = []
p = ax[0].plot(anomaly_scores['time'], anomaly_scores['count'],
color='dodgerblue')
ax[0].set_ylabel('# quakes')
line, = ax[0].plot((anomaly_scores.min_time, anomaly_scores.min_time),
ax[0].get_ylim(), lw=1, ls='--', color='black')
guide_lines.append(line)
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'],
color='orchid')
ax[1].set_ylabel('outlier score')
line, = ax[1].plot((anomaly_scores.min_time, anomaly_scores.min_time),
ax[1].get_ylim(), lw=1, ls='--', color='black')
guide_lines.append(line)
ax[2].plot(anomaly_scores['time'], anomaly_scores['changepoint_score'],
color='orchid')
ax[2].set_ylabel('changepoint score')
ax[2].set_xlabel('Date')
line, = ax[2].plot((anomaly_scores.min_time, anomaly_scores.min_time), (0., 1.),
lw=1, ls='--', color='black')
guide_lines.append(line)
for a in ax:
line, = a.plot(anomaly_scores.range, (0., 0.), lw=1, ls='--',
color='black')
threshold_lines.append(line)
plot_scores = anomaly_scores[['count', 'outlier_score', 'changepoint_score']]
interactive_thresholder = LineDrawer(plot_scores, guide_lines, threshold_lines)
interactive_thresholder.connect()
fig.autofmt_xdate()
fig.show()
interactive_thresholder.anoms.print_rows(10)
Explanation: Option 4: get fancy with plotting
End of explanation
okla_new = gl.load_timeseries('working_data/ok_daily_update.ts')
okla_new.print_rows(20)
Explanation: 7. Updating the model with new data
End of explanation
changept_model2 = changept_model.update(okla_new)
print changept_model2
changept_model2.scores.print_rows(20)
Explanation: Why do we want to update the model, rather than training a new one?
1. Because we've updated our parameters using the data we've seen already.
2. Updating simplifies the drudgery of prepending the data to get a final score for the lags in the previous data set.
End of explanation |
11,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KinMS galaxy fitting tutorial
This tutorial aims at getting you up and running with galaxy kinematic modelling using KinMS! To start you will need to download the KinMSpy code and have it in your python path.
To do this you can simply call pip install kinms
To get started with kinematic modelling we will complete the following steps
Step1: Generate a model
First we will generate a simple galaxy model using KinMS itself, that we can attempt to determine the parameters of later. If you have your own observed galaxy to fit then of course this step can be skipped!
The make_model function below creates a simple exponential disc
Step2: Note that we have set fixSeed=True in the KinMS call - this is crucial if you are fitting with KinMS. It ensures if you generate two models with the same input parameters you will get an identical output model!
Now we have our model function, lets use it to generate a model which we will later fit. The first thing we need is to define the setup of our desired datacube (typically if you are fitting real data this will all be determined from the header keywords- see below).
Step3: We also need to create a radius vector- you ideally want this to oversample your pixel grid somewhat to avoid interpolation errors!
Step4: Now we have all the ingredients we can create our data to fit. Here we will also output the model to disc, so we can demonstrate how to read in the header keywords from real ALMA/VLA etc data.
Step5: Read in the data
In this example we already have our data in memory. But if you are fitting a real datacube this wont be the case! Here we read in the model we just created from a FITS file to make it clear how to do this.
Step6: Fit the model
Now that we have our 'observational' data read into memory, and a model function defined, we can fit one to the other! As our fake model is currently noiseless, let's add some Gaussian noise (obviously don't do this if your data is from a real telescope!)
Step7: Below we will proceed using the MCMC code GAStimator which was specifically designed to work with KinMS, however any minimiser should work in principle. For full details of how this code works, and a tutorial, see https
Step8: Setting good priors on the flux of your source is crucial to ensure the model outputs are physical. Luckily the integrated flux of your source should be easy to measure from your datacube! If you have a good measurement of this, then I would recommend forcing the total flux to that value by fixing it in the model (set mcmc.fixed=True for that parameter). If you can only get a guess then set as tight a prior as you can. This stops the model hiding bad fitting components below the noise level.
It's always a good idea to plot your model over your data before you start a fitting process. That allows you to check that the model is reasonable, and tweak the parameters by hand to get good starting guesses. First you should generate a cube from your model function, then you can overplot it on your data using the simple plotting tool included with KinMS
Step9: As you can see, the black contours of the model arent a perfect match to the moment zero, spectrum and position-velocity diagram extracted from our "observed" datacube. One could tweak by hand, but as these are already close we can go on to do a fit!
If you are experimenting then running until convergence should be good enough to get an idea if the model is physical (setting a low number of iterations, ~3000 works for me).
Step10: As you can see, the final parameters (listed in the output with their 1-sigma errors) are pretty close to those we input! One could use the corner_plot routine shipped with GAStimator to visualize our results, but with only 3000 steps (and a $\approx$30% acceptance rate) these won't be very pretty. If you need good error estimates/nice-looking corner plots for publication then I recommend at least 30,000 iterations, which may take several hours/days depending on your system and the size of your datacube.
One can visualize the best-fit model again to check how we did - turns out pretty well! (Note the flux in the integrated spectrum isn't perfect; this is because of the masking of the noisy data).
Step11: Tiny error problem
I have found that fitting whole datacubes with kinematic modelling tools such as KinMS can yield unphysically small uncertainties, for instance constraining inclination to $\pm\approx0.1^{\circ}$ in the fit example performed above. This is essentially a form of model mismatch - you are finding the very best model of a given type that fits the data - and as you have a large number of free parameters in a data cube you can find the best model (no matter how bad it is at actually fitting the data!) really well.
In works such as Smith et al. (2019) we have attempted to get around this by taking into account the variance of the $\chi^2$ statistic.
As observed data are noisy, the $\chi^2$ statistic has an additional uncertainty associated with it, following the chi-squared distribution (Andrae 2010). This distribution has a variance of $2(N - P)$, where $N$ is the number of constraints and $P$ the number of inferred parameters. For fitting datacubes $N$ is very large, so the variance becomes $\approx2N$.
Systematic effects can produce variations of $\chi^2$ of the order of this variance, and ignoring this effect yields unrealistically small uncertainty estimates. In order to mitigate this effect van
den Bosch & van de Ven (2009) proposed to increase the $1\sigma$ confidence interval to $\Delta\chi^2=\sqrt{2N}$. To achieve the same effect within the Bayesian MCMC approach discussed above we need to scale the log-likelihood, by increasing the RMS estimate provided to GAStimator by $(2N)^{1/4}$. This approach appears to yield physically credible formal uncertainties in the inferred parameters, whereas otherwise these uncertainties are unphysically small.
Lets try that with the example above | Python Code:
from kinms import KinMS
import numpy as np
from astropy.io import fits
from kinms.utils.KinMS_figures import KinMS_plotter
Explanation: KinMS galaxy fitting tutorial
This tutorial aims at getting you up and running with galaxy kinematic modelling using KinMS! To start you will need to download the KinMSpy code and have it in your python path.
To do this you can simply call pip install kinms
To get started with kinematic modelling we will complete the following steps:
1. Generate a model to fit (can be skipped if you have your own observed data cube)
2. Read in that cube, and extract the important information from the header
3. Fit the data using an MCMC code
We will start by importing a variety of modules we will need to work with KinMS, and plot its output.
End of explanation
def make_model(param,obspars,rad,filename=None,plot=False):
'''
This function takes in the `param` array (along with obspars; the observational setup,
and a radius vector `rad`) and uses it to create a KinMS model.
'''
total_flux=param[0]
posAng=param[1]
inc=param[2]
v_flat=param[3]
r_turn=param[4]
scalerad=param[5]
### Here we use an exponential disk model for the surface brightness of the gas ###
sbprof = np.exp((-1)*rad/scalerad)
### We use a very simple arctan rotation curve model with two free parameters. ###
vel=(v_flat*2/np.pi)*np.arctan(rad/r_turn)
### This returns the model
return KinMS(obspars['xsize'],obspars['ysize'],obspars['vsize'],obspars['cellsize'],obspars['dv'],\
obspars['beamsize'],inc,sbProf=sbprof,sbRad=rad,velRad=rad,velProf=vel,\
intFlux=total_flux,posAng=posAng,fixSeed=True,fileName=filename).model_cube(toplot=plot)
Explanation: Generate a model
First we will generate a simple galaxy model using KinMS itself, that we can attempt to determine the parameters of later. If you have your own observed galaxy to fit then of course this step can be skipped!
The make_model function below creates a simple exponential disc:
$
\begin{align}
\large \Sigma_{H2}(r) \propto e^{\frac{-r}{d_{scale}}}
\end{align}
$
with a circular velocity profile which is parameterized using an arctan function:
$
\begin{align}
\large V(r) = \frac{2V_{flat}}{\pi} \arctan\left(\frac{r}{r_{turn}}\right)
\end{align}
$
End of explanation
### Setup cube parameters ###
obspars={}
obspars['xsize']=64.0 # arcseconds
obspars['ysize']=64.0 # arcseconds
obspars['vsize']=500.0 # km/s
obspars['cellsize']=1.0 # arcseconds/pixel
obspars['dv']=20.0 # km/s/channel
obspars['beamsize']=np.array([4.0,4.0,0]) # [bmaj,bmin,bpa] in (arcsec, arcsec, degrees)
Explanation: Note that we have set fixSeed=True in the KinMS call - this is crucial if you are fitting with KinMS. It ensures if you generate two models with the same input parameters you will get an identical output model!
Now we have our model function, lets use it to generate a model which we will later fit. The first thing we need is to define the setup of our desired datacube (typically if you are fitting real data this will all be determined from the header keywords- see below).
End of explanation
rad=np.arange(0,100,0.3)
Explanation: We also need to create a radius vector- you ideally want this to oversample your pixel grid somewhat to avoid interpolation errors!
End of explanation
'''
True values for the flux, posang, inc etc, as defined in the model function
'''
guesses=np.array([30.,270.,45.,200.,2.,5.])
'''
RMS of data. Here we are making our own model so this is arbitrary.
When fitting real data this should be the observational RMS
'''
error=np.array(1e-3)
fdata=make_model(guesses,obspars,rad, filename="Test",plot=True)
Explanation: Now we have all the ingredients we can create our data to fit. Here we will also output the model to disc, so we can demonstrate how to read in the header keywords from real ALMA/VLA etc data.
End of explanation
### Load in your observational data ###
hdulist = fits.open('Test_simcube.fits',ignore_blank=True)
fdata = hdulist[0].data.T
### Setup cube parameters ###
obspars={}
obspars['cellsize']=np.abs(hdulist[0].header['cdelt1']*3600.) # arcseconds/pixel
obspars['dv']=np.abs(hdulist[0].header['cdelt3']/1e3) # km/s/channel
obspars['xsize']=hdulist[0].header['naxis1']*obspars['cellsize'] # arcseconds
obspars['ysize']=hdulist[0].header['naxis2']*obspars['cellsize'] # arcseconds
obspars['vsize']=hdulist[0].header['naxis3']*obspars['dv'] # km/s
obspars['beamsize']=np.array([hdulist[0].header['bmaj']*3600.,hdulist[0].header['bmin']*3600.,hdulist[0].header['bpa']])# [bmaj,bmin,bpa] in (arcsec, arcsec, degrees)
Explanation: Read in the data
In this example we already have our data in memory. But if you are fitting a real datacube this wont be the case! Here we read in the model we just created from a FITS file to make it clear how to do this.
End of explanation
fdata+=(np.random.normal(size=fdata.shape)*error)
Explanation: Fit the model
Now that we have our 'observational' data read into memory, and a model function defined, we can fit one to the other! As our fake model is currently noiseless, let's add some Gaussian noise (obviously don't do this if your data is from a real telescope!):
End of explanation
from gastimator import gastimator,corner_plot
mcmc = gastimator(make_model,obspars,rad)
mcmc.labels=np.array(['Flux','posAng',"Inc","VFlat","R_turn","scalerad"])
mcmc.min=np.array([30.,1.,10,50,0.1,0.1])
mcmc.max=np.array([30.,360.,80,400,20,10])
mcmc.fixed=np.array([True,False,False,False,False,False])
mcmc.precision=np.array([1.,1.,1.,10,0.1,0.1])
mcmc.guesses=np.array([30.,275.,55.,210.,2.5,4.5]) #starting guesses, purposefully off!
Explanation: Below we will proceed using the MCMC code GAStimator which was specifically designed to work with KinMS, however any minimiser should work in principle. For full details of how this code works, and a tutorial, see https://github.com/TimothyADavis/GAStimator .
End of explanation
model=make_model(mcmc.guesses,obspars,rad) # make a model from your guesses
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=model,rms=error).makeplots()
Explanation: Setting good priors on the flux of your source is crucial to ensure the model outputs are physical. Luckily the integrated flux of your source should be easy to measure from your datacube! If you have a good measurement of this, then I would recommend forcing the total flux to that value by fixing it in the model (set mcmc.fixed=True for that parameter). If you can only get a guess then set as tight a prior as you can. This stops the model hiding bad fitting components below the noise level.
It's always a good idea to plot your model over your data before you start a fitting process. That allows you to check that the model is reasonable, and tweak the parameters by hand to get good starting guesses. First you should generate a cube from your model function, then you can overplot it on your data using the simple plotting tool included with KinMS:
End of explanation
outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False)
Explanation: As you can see, the black contours of the model arent a perfect match to the moment zero, spectrum and position-velocity diagram extracted from our "observed" datacube. One could tweak by hand, but as these are already close we can go on to do a fit!
If you are experimenting then running until convergence should be good enough to get an idea if the model is physical (setting a low number of iterations, ~3000 works for me).
End of explanation
bestmodel=make_model(np.median(outputvalue,1),obspars,rad) # make a model from your guesses
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=bestmodel,rms=error).makeplots()
Explanation: As you can see, the final parameters (listed in the output with their 1-sigma errors) are pretty close to those we input! One could use the corner_plot routine shipped with GAStimator to visualize our results, but with only 3000 steps (and a $\approx$30% acceptance rate) these won't be very pretty. If you need good error estimates/nice-looking corner plots for publication then I recommend at least 30,000 iterations, which may take several hours/days depending on your system and the size of your datacube.
One can visualize the best-fit model again to check how we did - turns out pretty well! (Note the flux in the integrated spectrum isn't perfect; this is because of the masking of the noisy data).
End of explanation
error*=((2.0*fdata.size)**(0.25))
outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False)
Explanation: Tiny error problem
I have found that fitting whole datacubes with kinematic modelling tools such as KinMS can yield unphysically small uncertainties, for instance constraining inclination to $\pm\approx0.1^{\circ}$ in the fit example performed above. This is essentially a form of model mismatch - you are finding the very best model of a given type that fits the data - and as you have a large number of free parameters in a data cube you can find the best model (no matter how bad it is at actually fitting the data!) really well.
In works such as Smith et al. (2019) we have attempted to get around this by taking into account the variance of the $\chi^2$ statistic.
As observed data are noisy, the $\chi^2$ statistic has an additional uncertainty associated with it, following the chi-squared distribution (Andrae 2010). This distribution has a variance of $2(N - P)$, where $N$ is the number of constraints and $P$ the number of inferred parameters. For fitting datacubes $N$ is very large, so the variance becomes $\approx2N$.
Systematic effects can produce variations of $\chi^2$ of the order of this variance, and ignoring this effect yields unrealistically small uncertainty estimates. In order to mitigate this effect van
den Bosch & van de Ven (2009) proposed to increase the $1\sigma$ confidence interval to $\Delta\chi^2=\sqrt{2N}$. To achieve the same effect within the Bayesian MCMC approach discussed above we need to scale the log-likelihood, by increasing the RMS estimate provided to GAStimator by $(2N)^{1/4}$. This approach appears to yield physically credible formal uncertainties in the inferred parameters, whereas otherwise these uncertainties are unphysically small.
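As a quick check of why the $(2N)^{1/4}$ factor gives exactly this rescaling (the derivation is implied but not written out here): since $\chi^2 \propto 1/\mathrm{rms}^2$, inflating the RMS by $(2N)^{1/4}$ turns the statistic into
$$\chi^2_{\rm scaled} = \sum_i \frac{(d_i - m_i)^2}{\left[(2N)^{1/4}\,\sigma_i\right]^2} = \frac{\chi^2}{\sqrt{2N}},$$
so the usual $\Delta\chi^2 = 1$ confidence criterion applied to the scaled statistic corresponds to $\Delta\chi^2 = \sqrt{2N}$ on the original one.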
Lets try that with the example above:
End of explanation |
11,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the IOHMM model with the parameters learned in a supervised way. This corresponds to the counting frequency process as in the supervised HMM. See notes in http
Step1: Load speed data
Step2: Label some/all states
In our structure of the code, the states should be a dictionary, the key is the index in the sequence (e.g. 0, 5) and the value is a one-out-of-n code of array where the kth value is 1 if the hidden state is k. n is the number of states in total.
In the following example, we assume that the "corr" column gives the correct hidden states.
Step3: Set up a simple model manually
Step4: Start training
Step5: See the training results
Step6: Save the trained model
Step7: Load back the trained model
Step8: See if the coefficients are any different
Step9: Set up the model using a config file, instead of doing it manually
Step10: Set data and start training
Step11: See if the training results are any different? | Python Code:
from __future__ import division
import json
import warnings
import numpy as np
import pandas as pd
from IOHMM import SupervisedIOHMM
from IOHMM import OLS, CrossEntropyMNL
warnings.simplefilter("ignore")
Explanation: This is the IOHMM model with the parameters learned in a supervised way. This corresponds to the counting frequency process as in the supervised HMM. See notes in http://www.cs.columbia.edu/4761/notes07/chapter4.3-HMM.pdf.
SupervisedIOHMM
End of explanation
speed = pd.read_csv('../data/speed.csv')
speed.head()
Explanation: Load speed data
End of explanation
states = {}
corr = np.array(speed['corr'])
for i in range(len(corr)):
state = np.zeros((2,))
if corr[i] == 'cor':
states[i] = np.array([0,1])
else:
states[i] = np.array([1,0])
Explanation: Label some/all states
In our structure of the code, the states should be a dictionary, the key is the index in the sequence (e.g. 0, 5) and the value is a one-out-of-n code of array where the kth value is 1 if the hidden state is k. n is the number of states in total.
In the following example, we assume that the "corr" column gives the correct hidden states.
End of explanation
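To sanity-check the labelling, one can peek at the first few entries of states (each value is a one-hot array over the two hidden states):
for i in range(3):
    print(i, states[i])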
# we choose 2 hidden states in this model
SHMM = SupervisedIOHMM(num_states=2)
# we set only one output 'rt' modeled by a linear regression model
SHMM.set_models(model_emissions = [OLS()],
model_transition=CrossEntropyMNL(solver='lbfgs'),
model_initial=CrossEntropyMNL(solver='lbfgs'))
# we set no covariates associated with initial/transition/emission models
SHMM.set_inputs(covariates_initial = [], covariates_transition = [], covariates_emissions = [[]])
# set the response of the emission model
SHMM.set_outputs([['rt']])
# set the data and ground truth states
SHMM.set_data([[speed, states]])
Explanation: Set up a simple model manually
End of explanation
SHMM.train()
Explanation: Start training
End of explanation
# the coefficients of the output model for each states
print(SHMM.model_emissions[0][0].coef)
print(SHMM.model_emissions[1][0].coef)
# the scale/dispersion of the output model of each states
print(np.sqrt(SHMM.model_emissions[0][0].dispersion))
print(np.sqrt(SHMM.model_emissions[1][0].dispersion))
# the transition probability from each state
print(np.exp(SHMM.model_transition[0].predict_log_proba(np.array([[]]))))
print(np.exp(SHMM.model_transition[1].predict_log_proba(np.array([[]]))))
Explanation: See the training results
End of explanation
json_dict = SHMM.to_json('../models/SupervisedIOHMM/')
json_dict
with open('../models/SupervisedIOHMM/model.json', 'w') as outfile:
json.dump(json_dict, outfile, indent=4, sort_keys=True)
Explanation: Save the trained model
End of explanation
SHMM_from_json = SupervisedIOHMM.from_json(json_dict)
Explanation: Load back the trained model
End of explanation
# the coefficients of the output model for each states
print(SHMM.model_emissions[0][0].coef)
print(SHMM.model_emissions[1][0].coef)
Explanation: See if the coefficients are any different
End of explanation
with open('../models/SupervisedIOHMM/config.json') as json_data:
json_dict = json.load(json_data)
SHMM_from_config = SupervisedIOHMM.from_config(json_dict)
Explanation: Set up the model using a config file, instead of doing it manually
End of explanation
SHMM_from_config.set_data([[speed, states]])
SHMM_from_config.train()
Explanation: Set data and start training
End of explanation
# the coefficients of the output model for each states
print(SHMM_from_config.model_emissions[0][0].coef)
print(SHMM_from_config.model_emissions[1][0].coef)
Explanation: See if the training results are any different?
End of explanation |
11,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
Python Machine Learning Essentials
Compressing Data via Dimensionality Reduction
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
Step1: <br>
<br>
Sections
Unsupervised dimensionality reduction via principal component analysis
Total and explained variance
Feature transformation
Principal component analysis in scikit-learn
Supervised data compression via linear discriminant analysis
Computing the scatter matrices
Selecting linear discriminants for the new feature subspace
Projecting samples onto the new feature space
LDA via scikit-learn
Using kernel principal component analysis for nonlinear mappings
Implementing a kernel principal component analysis in Python
Example 1
Step2: Splitting the data into 70% training and 30% test subsets.
Step3: Standardizing the data.
Step4: Eigendecomposition of the covariance matrix.
Step5: <br>
<br>
Total and explained variance
[back to top]
Step6: <br>
<br>
Feature transformation
[back to top]
Step7: <br>
<br>
Principal component analysis in scikit-learn
[back to top]
Step8: Training logistic regression classifier using the first 2 principal components.
Step9: <br>
<br>
Supervised data compression via linear discriminant analysis
[back to top]
<br>
<br>
Computing the scatter matrices
[back to top]
Calculate the mean vectors for each class
Step10: Compute the within-class scatter matrix
Step11: Better
Step12: Compute the between-class scatter matrix
Step13: <br>
<br>
Selecting linear discriminants for the new feature subspace
[back to top]
Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$
Step14: Sort eigenvectors in decreasing order of the eigenvalues
Step15: <br>
<br>
Projecting samples onto the new feature space
[back to top]
Step16: <br>
<br>
LDA via scikit-learn
[back to top]
Step18: <br>
<br>
Using kernel principal component analysis for nonlinear mappings
[back to top]
<br>
<br>
Implementing a kernel principal component analysis in Python
[back to top]
Step19: Example 1
Step20: Example 2
Step22: <br>
<br>
Projecting new data points
[back to top]
Step23: <br>
<br>
Kernel principal component analysis in scikit-learn
[back to top] | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
Python Machine Learning Essentials
Compressing Data via Dimensionality Reduction
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
Explanation: <br>
<br>
Sections
Unsupervised dimensionality reduction via principal component analysis
Total and explained variance
Feature transformation
Principal component analysis in scikit-learn
Supervised data compression via linear discriminant analysis
Computing the scatter matrices
Selecting linear discriminants for the new feature subspace
Projecting samples onto the new feature space
LDA via scikit-learn
Using kernel principal component analysis for nonlinear mappings
Implementing a kernel principal component analysis in Python
Example 1: Separating half-moon shapes
Example 2: Separating concentric circles
Projecting new data points
Kernel principal component analysis in scikit-learn
<br>
<br>
Unsupervised dimensionality reduction via principal component analysis
[back to top]
Loading the Wine dataset from Chapter 4.
End of explanation
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: Splitting the data into 70% training and 30% test subsets.
End of explanation
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.fit_transform(X_test)
Explanation: Standardizing the data.
End of explanation
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
Explanation: Eigendecomposition of the covariance matrix.
End of explanation
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
%matplotlib inline
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Total and explained variance
[back to top]
End of explanation
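A common follow-up question - not asked in the original notebook - is how many components are needed to retain, say, 95% of the variance; with the arrays computed above:
# smallest number of components explaining at least 95% of the variance (illustrative threshold)
k = int(np.argmax(cum_var_exp >= 0.95)) + 1
print('Components needed for 95%% explained variance: %d' % k)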
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train==l, 0],
X_train_pca[y_train==l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
Explanation: <br>
<br>
Feature transformation
[back to top]
End of explanation
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:,0], X_train_pca[:,1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
Explanation: <br>
<br>
Principal component analysis in scikit-learn
[back to top]
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
Explanation: Training logistic regression classifier using the first 2 principal components.
End of explanation
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1,4):
mean_vecs.append(np.mean(X_train_std[y_train==label], axis=0))
print('MV %s: %s\n' %(label, mean_vecs[label-1]))
Explanation: <br>
<br>
Supervised data compression via linear discriminant analysis
[back to top]
<br>
<br>
Computing the scatter matrices
[back to top]
Calculate the mean vectors for each class:
End of explanation
d = 13 # number of features
S_W = np.zeros((d, d))
for label,mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X[y == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row-mv).dot((row-mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
Explanation: Compute the within-class scatter matrix:
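For reference, the loop above assembles the standard within-class scatter matrix
$$S_W = \sum_{i=1}^{c} S_i, \qquad S_i = \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T,$$
where $\mathbf{m}_i$ is the mean vector of class $i$.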
End of explanation
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label,mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train==label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
Explanation: Better: covariance matrix since classes are not equally distributed:
End of explanation
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i,mean_vec in enumerate(mean_vecs):
n = X[y==i+1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
Explanation: Compute the between-class scatter matrix:
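For reference, this builds the between-class scatter matrix
$$S_B = \sum_{i=1}^{c} N_i\,(\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^T,$$
where $\mathbf{m}$ is the overall mean vector and $N_i$ the number of samples in class $i$.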
End of explanation
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
Explanation: <br>
<br>
Selecting linear discriminants for the new feature subspace
[back to top]
Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
End of explanation
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
Explanation: Sort eigenvectors in decreasing order of the eigenvalues:
End of explanation
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train==l, 0],
X_train_lda[y_train==l, 1],
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Projecting samples onto the new feature space
[back to top]
End of explanation
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/lda4.png', dpi=300)
plt.show()
Explanation: <br>
<br>
LDA via scikit-learn
[back to top]
End of explanation
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
    Tuning parameter of the RBF kernel
n_components: int
    Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
    Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
Explanation: <br>
<br>
Using kernel principal component analysis for nonlinear mappings
[back to top]
<br>
<br>
Implementing a kernel principal component analysis in Python
[back to top]
End of explanation
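For reference, the kernel evaluated inside rbf_kernel_pca is the RBF (Gaussian) kernel, and the following step centres the kernel matrix:
$$\kappa(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left(-\gamma \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2\right), \qquad K' = K - \mathbf{1}_N K - K \mathbf{1}_N + \mathbf{1}_N K \mathbf{1}_N,$$
where $\mathbf{1}_N$ is an $N \times N$ matrix with every entry equal to $1/N$.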
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
Explanation: Example 1: Separating half-moon shapes
[back to top]
End of explanation
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y==0, 0], np.zeros((500,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((500,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((500,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((500,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
Explanation: Example 2: Separating concentric circles
[back to top]
End of explanation
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:,-i] for i in range(1,n_components+1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1,n_components+1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new-row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y==0, 0], np.zeros((50)),
color='red', marker='^',alpha=0.5)
plt.scatter(alphas[y==1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black', label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green', label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Projecting new data points
[back to top]
End of explanation
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y==0, 0], X_skernpca[y==0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y==1, 0], X_skernpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Kernel principal component analysis in scikit-learn
[back to top]
End of explanation |
11,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Energy Lancaster publication miner
This workbook parses all of the publications listed on Energy Lancaster's Lancaster University Research Portal page and extracts keywoprds an topical data from abstracts using natural language processing.
Step1: Get number of pages for publications
Step2: Extract links to publications, from all pages
Step3: Keyword extraction, for each publication
Step4: Mine titles and abstracts for topics
Step5: Save output for D3 word cloud
Step6: Having consturcted three project score vectors (without title, with title, both), we sort the projects based on high scores. These are best matching research projects. We display a link to them below. Repeat for each topic. | Python Code:
#python dom extension functions to get class and other attributes
def getAttr(dom,cl,attr='class',el='div'):
toreturn=[]
divs=dom.getElementsByTagName(el)
for div in divs:
clarray=div.getAttribute(attr).split(' ')
for cli in clarray:
if cli==cl: toreturn.append(div)
if toreturn!=[]: return toreturn
else: return None
Explanation: Energy Lancaster publication miner
This workbook parses all of the publications listed on Energy Lancaster's Lancaster University Research Portal page and extracts keywoprds an topical data from abstracts using natural language processing.
End of explanation
#open first page, parse html, get number of pages and their links
import html5lib
import urllib2
url="http://www.research.lancs.ac.uk/portal/en/organisations/energy-lancaster/publications.html"
aResp = urllib2.urlopen(url)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
links=getAttr(dom,'portal_navigator_paging',el='span')[0].childNodes
nr_of_pages=int([i for i in links if i.nodeType==1][::-1][0].childNodes[0].childNodes[0].nodeValue)-1
Explanation: Get number of pages for publications
End of explanation
#create publist array
publist=[]
#parse publications links on all pages
for pagenr in range(nr_of_pages):
aResp = urllib2.urlopen(url+'?page='+str(pagenr))
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get html list
htmlpublist=dom.getElementsByTagName('ol')
#extract pub links
for i in htmlpublist[0].childNodes:
if i.nodeType==1:
if i.childNodes[0].nodeType==1:
j=i.childNodes[1].childNodes[0].childNodes[0]
if j.nodeType==1:
publist.append(j.getAttribute('href'))
print 'finished page',pagenr
print len(publist),'publications associated with Energy Lancaster'
#create dictionary
pubdict={i:{"url":i} for i in publist}
Explanation: Extract links to publications, from all pages
End of explanation
for r in range(len(publist)):
pub=publist[r]
aResp = urllib2.urlopen(pub)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get keywords from pub page
keywords=getAttr(dom,'keywords',el='ul')
if keywords:
pubdict[pub]['keywords']=[i.childNodes[0].childNodes[0].nodeValue for i in keywords[0].getElementsByTagName('a')]
#get title from pub page
title=getAttr(dom,'title',el='h2')
if title:
pubdict[pub]['title']=title[0].childNodes[0].childNodes[0].nodeValue
abstract=getAttr(dom,'rendering_researchoutput_abstractportal',el='div')
if abstract:
pubdict[pub]['abstract']=abstract[0].childNodes[0].childNodes[0].nodeValue
if r%10==0: print 'processed',r,'publications...'
#save parsed data
import json
file('pubdict.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict.json','r').read())
Explanation: Keyword extraction, for each publication
End of explanation
#import dependencies
import pandas as pd
from textblob import TextBlob
#import spacy
#nlp = spacy.load('en')
#run once if you need to download nltk corpora, igonre otherwise
import nltk
nltk.download()
#get topical nouns for title and abstract using natural language processing
for i in range(len(pubdict.keys())):
if 'title' in pubdict[pubdict.keys()[i]]:
if text:
text=pubdict[pubdict.keys()[i]]['title']
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['title-nlp']=keywords
if 'abstract' in pubdict[pubdict.keys()[i]]:
text=pubdict[pubdict.keys()[i]]['abstract']
if text:
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['abstract-nlp']=keywords
print i,',',
#save parsed data
file('pubdict2.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict2.json','r').read())
Explanation: Mine titles and abstracts for topics
End of explanation
keywords=[j for i in pubdict if 'keywords' in pubdict[i] if pubdict[i]['keywords'] for j in pubdict[i]['keywords']]
titles=[pubdict[i]['title'] for i in pubdict if 'title' in pubdict[i] if pubdict[i]['title']]
abstracts=[pubdict[i]['abstract'] for i in pubdict if 'abstract' in pubdict[i] if pubdict[i]['abstract']]
title_nlp=[j for i in pubdict if 'title-nlp' in pubdict[i] if pubdict[i]['title-nlp'] for j in pubdict[i]['title-nlp']]
abstract_nlp=[j for i in pubdict if 'abstract-nlp' in pubdict[i] if pubdict[i]['abstract-nlp'] for j in pubdict[i]['abstract-nlp']]
kt=keywords+titles
kta=kt+abstracts
kt_nlp=keywords+title_nlp
kta_nlp=kt+abstract_nlp
file('keywords.json','w').write(json.dumps(keywords))
file('titles.json','w').write(json.dumps(titles))
file('abstracts.json','w').write(json.dumps(abstracts))
file('kt.json','w').write(json.dumps(kt))
file('kta.json','w').write(json.dumps(kta))
file('kt_nlp.json','w').write(json.dumps(kt_nlp))
file('kta_nlp.json','w').write(json.dumps(kta_nlp))
import re
def convert(name):
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1 \2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1 \2', s1).lower()
kc=[convert(i) for i in keywords]
file('kc.json','w').write(json.dumps(kc))
ks=[j for i in kc for j in i.split()]
file('ks.json','w').write(json.dumps(ks))
ktc_nlp=[convert(i) for i in kt_nlp]
file('ktc_nlp.json','w').write(json.dumps(ktc_nlp))
kts_nlp=[j for i in ktc_nlp for j in i.split()]
file('kts_nlp.json','w').write(json.dumps(kts_nlp))
ktac_nlp=[convert(i) for i in kta_nlp]
file('ktac_nlp.json','w').write(json.dumps(ktac_nlp))
ktas_nlp=[j for i in ktac_nlp for j in i.split()]
file('ktas_nlp.json','w').write(json.dumps(ktas_nlp))
Explanation: Save output for D3 word cloud
End of explanation
for topic_id in range(1,len(topics)):
#select topic
#topic_id=1
#use title
usetitle=True
verbose=False
#initiate global DFs
DF=pd.DataFrame()
projects1={}
projects2={}
projects12={}
#specify depth (n most relevant projects)
depth=100
#get topical nouns with textblob
blob1 = TextBlob(topics[topic_id].decode('utf8'))
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(topics[topic_id].decode('utf8'))
keywords2=[]
for i in blob2.noun_chunks:
keywords2.append(str(i).replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
print '----- started processing topic ', topic_id,'-----'
print 'topic keywords are:',
for keyword in keywords: print keyword+', ',
print ' '
#construct search query from title and keywords, the cycle through the keywords
for keyword in keywords:
if usetitle:
if verbose: print 'query for <'+title+keyword+'>'
query=repr(title+keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects1:projects1[i]=0
projects1[i]+=df.loc[i]['score']**2
if verbose: print 'query for <'+keyword+'>'
query=repr(keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects2:projects2[i]=0
projects2[i]+=df.loc[i]['score']**2
print '----- finished topic ', topic_id,'-----'
print ' '
###### SORTING #######
#select top projects
#sort project vectors
top=30
import operator
sorted_projects1=sorted(projects1.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects2=sorted(projects2.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects12=sorted(projects12.items(), key=operator.itemgetter(1))[::-1][:30]
#record scores in sorted vector in a master vector
projects={}
for i in range(len(sorted_projects1)):
if sorted_projects1[i][0] not in projects:projects[sorted_projects1[i][0]]=0
projects[sorted_projects1[i][0]]+=(top-i)**2
for i in range(len(sorted_projects2)):
if sorted_projects2[i][0] not in projects:projects[sorted_projects2[i][0]]=0
projects[sorted_projects2[i][0]]+=(top-i)**2
for i in range(len(sorted_projects12)):
if sorted_projects12[i][0] not in projects:projects[sorted_projects12[i][0]]=0
projects[sorted_projects12[i][0]]+=(top-i)**2
#save final vector of most relevant projects
sorted_projects=sorted(projects.items(), key=operator.itemgetter(1))[::-1][:30]
###### DISPLAY ########
#print resulting links to projects
for i in range(len(sorted_projects)):
print str(i+1)+'.',DF.loc[sorted_projects[i][0]][u'GTRProjectUrl'].values[0],\
DF.loc[sorted_projects[i][0]][u'PIFirstName'].values[0],\
DF.loc[sorted_projects[i][0]][u'PISurname'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'LeadROName'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'StartDate'].values[0][6:]+'-'+\
DF.loc[sorted_projects[i][0]][u'EndDate'].values[0][6:]+'|',\
str(int(DF.loc[sorted_projects[i][0]][u'AwardPounds'].values[0])/1000)+'k'
print DF.loc[sorted_projects[i][0]][u'Title'].values[0]+'\n'
#print '----------------------------------------------------'
Explanation: Having consturcted three project score vectors (without title, with title, both), we sort the projects based on high scores. These are best matching research projects. We display a link to them below. Repeat for each topic.
End of explanation |
11,933 | Given the following text description, write Python code to implement the functionality described.
Description:
Construct two N
Function to generate two arrays satisfying the given conditions ; Declare the two arrays A and B ; Iterate from range [ 1 , 2 * n ] ; Assign consecutive numbers to same indices of the two arrays ; Print the first array ; Print the second array , B ; Driver Code ; Function Call
| Python Code:
def printArrays(n ) :
A , B =[] ,[] ;
for i in range(1 , 2 * n + 1 ) :
if(i % 2 == 0 ) :
A . append(i ) ;
else :
B . append(i ) ;
print("{ ▁ ", end = "") ;
for i in range(n ) :
print(A[i ] , end = "") ;
if(i != n - 1 ) :
print(", ▁ ", end = "") ;
print("} ") ;
print("{ ▁ ", end = "") ;
for i in range(n ) :
print(B[i ] , end = "") ;
if(i != n - 1 ) :
print(", ", end = "▁ ") ;
print("▁ } ", end = "") ;
if __name__== "__main __":
N = 5 ;
printArrays(N ) ;
|
11,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
COSC Learning Lab
03_interface_startup.py
Related Scripts
Step1: Implementation
Step2: Execution
Step3: HTTP | Python Code:
help('learning_lab.03_interface_startup')
Explanation: COSC Learning Lab
03_interface_startup.py
Related Scripts:
* 03_interface_shutdown.py
* 03_interface_configuration.py
Table of Contents
Table of Contents
Documentation
Implementation
Execution
HTTP
Documentation
End of explanation
from importlib import import_module
script = import_module('learning_lab.03_interface_startup')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
Explanation: Implementation
End of explanation
run ../learning_lab/03_interface_startup.py
Explanation: Execution
End of explanation
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
Explanation: HTTP
End of explanation |
11,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GEM-PRO - SBML Model
This notebook gives an example of how to run the GEM-PRO pipeline with a SBML model, in this case iNJ661, the metabolic model of M. tuberculosis.
<div class="alert alert-info">
**Input
Step1: Logging
Set the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.
CRITICAL
Only really important messages shown
ERROR
Major errors
WARNING
Warnings that don't affect running of the pipeline
INFO (default)
Info such as the number of structures mapped per gene
DEBUG
Really detailed information that will print out a lot of stuff
<div class="alert alert-warning">
**Warning
Step2: Initialization of the project
Set these three things
Step3: Mapping gene ID --> sequence
First, we need to map these IDs to their protein sequences. There are 2 ID mapping services provided to do this - through KEGG or UniProt. The end goal is to map a UniProt ID to each ID, since there is a comprehensive mapping (and some useful APIs) between UniProt and the PDB.
<p><div class="alert alert-info">**Note
Step4: If you have mapped with both KEGG and UniProt mappers, then you can set a representative sequence for the gene using this function. If you used just one, this will just set that ID as representative.
If any sequences or IDs were provided manually, these will be set as representative first.
UniProt mappings override KEGG mappings except when KEGG mappings have PDBs associated with them and UniProt doesn't.
Step5: Mapping representative sequence --> structure
These are the ways to map sequence to structure
Step6: Downloading and ranking structures
Methods
<div class="alert alert-warning">
**Warning
Step7: Creating homology models
For those proteins with no representative structure, we can create homology models for them. ssbio contains some built in functions for easily running I-TASSER locally or on machines with SLURM (ie. on NERSC) or Torque job scheduling.
You can load in I-TASSER models once they complete using the get_itasser_models later.
<p><div class="alert alert-info">**Info
Step8: Saving your GEM-PRO
Finally, you can save your GEM-PRO as a JSON or pickle file, so you don't have to run the pipeline again.
For most functions, if you rerun them, they will check for existing results saved as files. The only function that would take a long time is setting the representative structure, as they are each rechecked and cleaned. This is where saving helps!
<p><div class="alert alert-warning">**Warning
Step9: Loading a saved GEM-PRO | Python Code:
import sys
import logging
# Import the GEM-PRO class
from ssbio.pipeline.gempro import GEMPRO
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: GEM-PRO - SBML Model
This notebook gives an example of how to run the GEM-PRO pipeline with a SBML model, in this case iNJ661, the metabolic model of M. tuberculosis.
<div class="alert alert-info">
**Input:**
GEM (in SBML, JSON, or MAT formats)
</div>
<div class="alert alert-info">
**Output:**
GEM-PRO model
</div>
Imports
End of explanation
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
Explanation: Logging
Set the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.
CRITICAL
Only really important messages shown
ERROR
Major errors
WARNING
Warnings that don't affect running of the pipeline
INFO (default)
Info such as the number of structures mapped per gene
DEBUG
Really detailed information that will print out a lot of stuff
<div class="alert alert-warning">
**Warning:**
`DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!
</div>
End of explanation
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROJECT = 'mtuberculosis_gp'
GEM_FILE = '../../ssbio/test/test_files/models/iNJ661.json'
GEM_FILE_TYPE = 'json'
PDB_FILE_TYPE = 'mmtf'
# Create the GEM-PRO project
my_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, gem_file_path=GEM_FILE, gem_file_type=GEM_FILE_TYPE, pdb_file_type=PDB_FILE_TYPE)
Explanation: Initialization of the project
Set these three things:
ROOT_DIR
The directory where a folder named after your PROJECT will be created
PROJECT
Your project name
LIST_OF_GENES
Your list of gene IDs
A directory will be created in ROOT_DIR with your PROJECT name. The folders are organized like so:
```
ROOT_DIR
└── PROJECT
├── data # General storage for pipeline outputs
├── model # SBML and GEM-PRO models are stored here
├── genes # Per gene information
│ ├── <gene_id1> # Specific gene directory
│ │ └── protein
│ │ ├── sequences # Protein sequence files, alignments, etc.
│ │ └── structures # Protein structure files, calculations, etc.
│ └── <gene_id2>
│ └── protein
│ ├── sequences
│ └── structures
├── reactions # Per reaction information
│ └── <reaction_id1> # Specific reaction directory
│ └── complex
│ └── structures # Protein complex files
└── metabolites # Per metabolite information
└── <metabolite_id1> # Specific metabolite directory
└── chemical
└── structures # Metabolite 2D and 3D structure files
```
<div class="alert alert-info">**Note:** Methods for protein complexes and metabolites are still in development.</div>
End of explanation
gene_to_seq_dict = {'Rv1295': 'MTVPPTATHQPWPGVIAAYRDRLPVGDDWTPVTLLEGGTPLIAATNLSKQTGCTIHLKVEGLNPTGSFKDRGMTMAVTDALAHGQRAVLCASTGNTSASAAAYAARAGITCAVLIPQGKIAMGKLAQAVMHGAKIIQIDGNFDDCLELARKMAADFPTISLVNSVNPVRIEGQKTAAFEIVDVLGTAPDVHALPVGNAGNITAYWKGYTEYHQLGLIDKLPRMLGTQAAGAAPLVLGEPVSHPETIATAIRIGSPASWTSAVEAQQQSKGRFLAASDEEILAAYHLVARVEGVFVEPASAASIAGLLKAIDDGWVARGSTVVCTVTGNGLKDPDTALKDMPSVSPVPVDPVAVVEKLGLA',
'Rv2233': 'VSSPRERRPASQAPRLSRRPPAHQTSRSSPDTTAPTGSGLSNRFVNDNGIVTDTTASGTNCPPPPRAAARRASSPGESPQLVIFDLDGTLTDSARGIVSSFRHALNHIGAPVPEGDLATHIVGPPMHETLRAMGLGESAEEAIVAYRADYSARGWAMNSLFDGIGPLLADLRTAGVRLAVATSKAEPTARRILRHFGIEQHFEVIAGASTDGSRGSKVDVLAHALAQLRPLPERLVMVGDRSHDVDGAAAHGIDTVVVGWGYGRADFIDKTSTTVVTHAATIDELREALGV'}
my_gempro.manual_seq_mapping(gene_to_seq_dict)
manual_uniprot_dict = {'Rv1755c': 'P9WIA9', 'Rv2321c': 'P71891', 'Rv0619': 'Q79FY3', 'Rv0618': 'Q79FY4', 'Rv2322c': 'P71890'}
my_gempro.manual_uniprot_mapping(manual_uniprot_dict)
my_gempro.df_uniprot_metadata.tail(4)
# KEGG mapping of gene ids
my_gempro.kegg_mapping_and_metadata(kegg_organism_code='mtu')
print('Missing KEGG mapping: ', my_gempro.missing_kegg_mapping)
my_gempro.df_kegg_metadata.head()
# UniProt mapping
my_gempro.uniprot_mapping_and_metadata(model_gene_source='TUBERCULIST_ID')
print('Missing UniProt mapping: ', my_gempro.missing_uniprot_mapping)
my_gempro.df_uniprot_metadata.head()
Explanation: Mapping gene ID --> sequence
First, we need to map these IDs to their protein sequences. There are 2 ID mapping services provided to do this - through KEGG or UniProt. The end goal is to map a UniProt ID to each ID, since there is a comprehensive mapping (and some useful APIs) between UniProt and the PDB.
<p><div class="alert alert-info">**Note:** You only need to map gene IDs using one service. However you can run both if some genes don't map in one service and do map in another!</div></p>
However, you don't need to map using these services if you already have the amino acid sequences for each protein. You can just manually load in the sequences as shown using the method manual_seq_mapping. Or, if you already have the UniProt IDs, you can load those in using the method manual_uniprot_mapping.
Methods
End of explanation
# Set representative sequences
my_gempro.set_representative_sequence()
print('Missing a representative sequence: ', my_gempro.missing_representative_sequence)
my_gempro.df_representative_sequences.head()
Explanation: If you have mapped with both KEGG and UniProt mappers, then you can set a representative sequence for the gene using this function. If you used just one, this will just set that ID as representative.
If any sequences or IDs were provided manually, these will be set as representative first.
UniProt mappings override KEGG mappings except when KEGG mappings have PDBs associated with them and UniProt doesn't.
End of explanation
# Mapping using the PDBe best_structures service
my_gempro.map_uniprot_to_pdb(seq_ident_cutoff=.3)
my_gempro.df_pdb_ranking.head()
# Mapping using BLAST
my_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.9, evalue=0.00001)
my_gempro.df_pdb_blast.head(2)
tb_homology_dir = '/home/nathan/projects_archive/homology_models/MTUBERCULOSIS/'
##### EXAMPLE SPECIFIC CODE #####
# Needed to map to older IDs used in this example
import pandas as pd
import os.path as op
old_gene_to_homology = pd.read_csv(op.join(tb_homology_dir, 'data/161031-old_gene_to_uniprot_mapping.csv'))
gene_to_uniprot = old_gene_to_homology.set_index('m_gene').to_dict()['u_uniprot_acc']
my_gempro.get_itasser_models(homology_raw_dir=op.join(tb_homology_dir, 'raw'), custom_itasser_name_mapping=gene_to_uniprot)
### END EXAMPLE SPECIFIC CODE ###
# Organizing I-TASSER homology models
my_gempro.get_itasser_models(homology_raw_dir=op.join(tb_homology_dir, 'raw'))
my_gempro.df_homology_models.head()
homology_model_dict = {}
my_gempro.get_manual_homology_models(homology_model_dict)
Explanation: Mapping representative sequence --> structure
These are the ways to map sequence to structure:
Use the UniProt ID and their automatic mappings to the PDB
BLAST the sequence to the PDB
Make homology models or
Map to existing homology models
You can only utilize option #1 to map to PDBs if there is a mapped UniProt ID set in the representative sequence. If not, you'll have to BLAST your sequence to the PDB or make a homology model. You can also run both for maximum coverage.
Methods
End of explanation
# Download all mapped PDBs and gather the metadata
my_gempro.download_all_pdbs()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
# Looking at the information saved within a gene
my_gempro.genes.get_by_id('Rv1295').protein.representative_structure
my_gempro.genes.get_by_id('Rv1295').protein.representative_structure.get_dict()
Explanation: Downloading and ranking structures
Methods
<div class="alert alert-warning">
**Warning:**
Downloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.
</div>
End of explanation
# Prep I-TASSER model folders
my_gempro.prep_itasser_modeling('~/software/I-TASSER4.4', '~/software/ITLIB/', runtype='local', all_genes=False)
Explanation: Creating homology models
For those proteins with no representative structure, we can create homology models for them. ssbio contains some built in functions for easily running I-TASSER locally or on machines with SLURM (ie. on NERSC) or Torque job scheduling.
You can load in I-TASSER models once they complete using the get_itasser_models later.
<p><div class="alert alert-info">**Info:** Homology modeling can take a long time - about 24-72 hours per protein (highly dependent on the sequence length, as well as if there are available templates).</div></p>
Methods
End of explanation
import os.path as op
my_gempro.save_pickle(op.join(my_gempro.model_dir, '{}.pckl'.format(my_gempro.id)))
import os.path as op
my_gempro.save_json(op.join(my_gempro.model_dir, '{}.json'.format(my_gempro.id)), compression=False)
Explanation: Saving your GEM-PRO
Finally, you can save your GEM-PRO as a JSON or pickle file, so you don't have to run the pipeline again.
For most functions, if you rerun them, they will check for existing results saved as files. The only function that would take a long time is setting the representative structure, as they are each rechecked and cleaned. This is where saving helps!
<p><div class="alert alert-warning">**Warning:** Saving in JSON format is still experimental. For a full GEM-PRO with sequences & structures, depending on the number of genes, saving can take >5 minutes.</div></p>
End of explanation
# Loading a pickle file
import pickle
with open('/tmp/mtuberculosis_gp_atlas/model/mtuberculosis_gp_atlas.pckl', 'rb') as f:
my_saved_gempro = pickle.load(f)
# Loading a JSON file
import ssbio.core.io
my_saved_gempro = ssbio.core.io.load_json('/tmp/mtuberculosis_gp_atlas/model/mtuberculosis_gp_atlas.json', decompression=False)
Explanation: Loading a saved GEM-PRO
End of explanation |
11,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Beaming and Boosting
Due to concerns about accuracy, support for Beaming & Boosting has been disabled as of the 2.2 release of PHOEBE (although we hope to bring it back in a future release).
It may come as surprise that support for Doppler boosting has been dropped in PHOEBE 2.2. This document details the underlying causes for that decision and explains the conditions that need to be met for boosting to be re-incorporated into PHOEBE.
Let's start by reviewing the theory behind Doppler boosting. The motion of the stars towards or away from the observer changes the amount of received flux due to three effects
Step1: Import all python modules that we'll need
Step2: Pull a set of Sun-like emergent intensities as a function of $\mu = \cos \theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here)
Step3: Grab only the normal component for testing purposes
Step4: Now let's load a Johnson V passband and the transmission function $P(\lambda)$ contained within
Step5: Tesselate the wavelength interval to the range covered by the passband
Step6: Calculate $S(\lambda) P(\lambda)$ and plot it, to make sure everything so far makes sense
Step7: Now let's compute the term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$. First we will compute $\mathrm{ln}\,\lambda$ and $\mathrm{ln}\,I_\lambda$ and plot them
Step8: Per equation above, $B(\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem
Step9: It is clear that there is a pretty strong systematics here that we sweep under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we claim that we support boosting.
For fun, this is what would happen if we tried to estimate $B(\lambda)$ at each $\lambda$ | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Beaming and Boosting
Due to concerns about accuracy, support for Beaming & Boosting has been disabled as of the 2.2 release of PHOEBE (although we hope to bring it back in a future release).
It may come as surprise that support for Doppler boosting has been dropped in PHOEBE 2.2. This document details the underlying causes for that decision and explains the conditions that need to be met for boosting to be re-incorporated into PHOEBE.
Let's start by reviewing the theory behind Doppler boosting. The motion of the stars towards or away from the observer changes the amount of received flux due to three effects:
the spectrum is Doppler-shifted, so the flux, being the passband-weighted integral of the spectrum, changes;
the photons' arrival rate changes due to time dilation; and
radiation is beamed in the direction of motion due to light aberration.
It turns out that the combined boosting signal can be written as:
$$ I_\lambda = I_{\lambda,0} \left( 1 - B(\lambda) \frac{v_r}c \right), $$
where $I_{\lambda,0}$ is the intrinsic (rest-frame) passband intensity, $I_\lambda$ is the boosted passband intensity, $v_r$ is radial velocity, $c$ is the speed of light and $B(\lambda)$ is the boosting index:
$$ B(\lambda) = 5 + \frac{\mathrm{d}\,\mathrm{ln}\, I_\lambda}{\mathrm{d}\,\mathrm{ln}\, \lambda}. $$
The term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$ is called spectral index. As $I_\lambda$ depends on $\lambda$, we average it across the passband:
$$ B_\mathrm{pb} = \frac{\int_\lambda \mathcal{P}(\lambda) \mathcal S(\lambda) B(\lambda) \mathrm d\lambda}{\int_\lambda \mathcal{P}(\lambda) \mathcal S(\lambda) \mathrm d\lambda}. $$
In what follows we will code up these steps and demonstrate the inherent difficulty of realizing a robust, reliable treatment of boosting.
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
Explanation: Import all python modules that we'll need:
End of explanation
wl = np.arange(900., 39999.501, 0.5)/1e10
with fits.open('T06000G40P00.fits') as hdu:
Imu = 1e7*hdu[0].data
Explanation: Pull a set of Sun-like emergent intensities as a function of $\mu = \cos \theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here):
End of explanation
Inorm = Imu[-1,:]
Explanation: Grab only the normal component for testing purposes:
End of explanation
pb = phoebe.get_passband('Johnson:V')
Explanation: Now let's load a Johnson V passband and the transmission function $P(\lambda)$ contained within:
End of explanation
keep = (wl >= pb.ptf_table['wl'][0]) & (wl <= pb.ptf_table['wl'][-1])
Inorm = Inorm[keep]
wl = wl[keep]
Explanation: Tesselate the wavelength interval to the range covered by the passband:
End of explanation
plt.plot(wl, Inorm*pb.ptf(wl), 'b-')
plt.show()
Explanation: Calculate $S(\lambda) P(\lambda)$ and plot it, to make sure everything so far makes sense:
End of explanation
lnwl = np.log(wl)
lnI = np.log(Inorm)
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.show()
Explanation: Now let's compute the term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$. First we will compute $\mathrm{ln}\,\lambda$ and $\mathrm{ln}\,I_\lambda$ and plot them:
End of explanation
envelope = np.polynomial.legendre.legfit(lnwl, lnI, 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
sigma = np.std(diff)
clipped = (diff > -sigma)
while True:
Npts = clipped.sum()
envelope = np.polynomial.legendre.legfit(lnwl[clipped], lnI[clipped], 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
clipped = clipped & (diff > -sigma)
if clipped.sum() == Npts:
break
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.plot(lnwl, continuum, 'r-')
plt.show()
Explanation: Per equation above, $B(\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem: what part of this graph do we fit a line to? In versions 2 and 2.1, PHOEBE used a 5th order Legendre polynomial to fit the spectrum and then sigma-clipping to get to the continuum. Finally, it computed an average derivative of that Legendrian and proclaimed that $B(\lambda)$. The order of the Legendre polynomial and the values of sigma for sigma-clipping have been set ad-hoc and kept fixed for every single spectrum.
End of explanation
dlnwl = lnwl[1:]-lnwl[:-1]
dlnI = lnI[1:]-lnI[:-1]
B = dlnI/dlnwl
plt.plot(0.5*(wl[1:]+wl[:-1]), B, 'b-')
plt.show()
Explanation: It is clear that there is a pretty strong systematics here that we sweep under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we claim that we support boosting.
For fun, this is what would happen if we tried to estimate $B(\lambda)$ at each $\lambda$:
End of explanation |
11,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Upvote data
Step1: In the Yelp Question in HW1, please normalize the data so that it has the same L2 norm. We will grade it either way, but please state clearly what you did to treat the yelp data, which is currently not normalized.
http
Step2: Train models for varying lambda values. Calculate training error for each model.
Step3: Star data | Python Code:
# Load a text file of integers:
y = np.loadtxt("yelp_data/upvote_labels.txt", dtype=np.int)
# Load a text file with strings identifying the 1000 features:
featureNames = open("yelp_data/upvote_features.txt").read().splitlines()
featureNames = np.array(featureNames)
# Load a csv of floats, which are the values of 1000 features (columns) for 6000 samples (rows):
A = np.genfromtxt("yelp_data/upvote_data.csv", delimiter=",")
norms = np.apply_along_axis(np.linalg.norm,0,A)
A = A / norms
# print(np.apply_along_axis(np.linalg.norm,0,A))
# Randomize input order
np.random.seed(12345)
shuffler = np.arange(len(y))
np.random.shuffle(shuffler)
A = A[shuffler,:]
y = y[shuffler]
Explanation: Upvote data
End of explanation
#data_splits = (4000, 5000) # HW setting
data_splits = (2000, 2500)# faster setting
A_train = A[:data_splits[0], :]; y_train = y[:data_splits[0]]
A_val = A[data_splits[0]:data_splits[1], :]; y_val = y[data_splits[0]:data_splits[1]]
A_test = A[data_splits[1]:, :]; y_test = y[data_splits[1]:]
A_train.shape
Explanation: In the Yelp Question in HW1, please normalize the data so that it has the same L2 norm. We will grade it either way, but please state clearly what you did to treat the yelp data, which is currently not normalized.
http://stackoverflow.com/questions/7140738/numpy-divide-along-axis
End of explanation
result = RegularizationPathTrainTest(X_train=A_train[0:100, 0:50], y_train=y_train[0:100], feature_names=featureNames, lam_max=1,
X_val=A_val[0:100, 0:50], y_val=y_val[0:100,], steps=2, frac_decrease=0.05,
delta = 0.001)
result.results_df
result.analyze_path()
result.results_df
result = RegularizationPathTrainTest(X_train=A_train, y_train=y_train, feature_names=featureNames, lam_max=100,
X_val=A_val, y_val=y_val, steps=5, frac_decrease=0.7, delta=0.01)
result.analyze_path()
print(A_train.shape)
print(A_val.shape)
print(A_test.shape)
#Assuming df is a pandas data frame with columns 'x', 'y', and 'label'
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
colors = {1: 'gray', 10: 'b'}
data = result.results_df.copy()
plt.semilogx(data['lam'], data['RMSE (validation)'], linestyle='--', marker='o', color='g')
plt.semilogx(data['lam'], data['RMSE (training)'], linestyle='--', marker='o', color='#D1D1D1')
#for key,grp in data.groupby('sigma'):
# print (key)
# plt.semilogx(grp.lam, grp.recall, linestyle='--', marker='o',
# color=colors[key], label='sigma = {}'.format(key)) #, t, t**2, 'bs', t, t**3, 'g^')
plt.legend(loc = 'best')
plt.xlabel('lambda')
plt.ylabel('RMSE')
#ax.set_ylim([0.55, 1.05])
#Assuming df is a pandas data frame with columns 'x', 'y', and 'label'
fig, ax = plt.subplots(1, 1, figsize=(4,3))
colors = {1: 'gray', 10: 'b'}
data = result.results_df.copy()
plt.semilogx(data['lam'], data['# nonzero coefficients'], linestyle='--', marker='o', color='b')
#for key,grp in data.groupby('sigma'):
# print (key)
# plt.semilogx(grp.lam, grp.recall, linestyle='--', marker='o',
# color=colors[key], label='sigma = {}'.format(key)) #, t, t**2, 'bs', t, t**3, 'g^')
plt.legend(loc = 'best')
plt.xlabel('lambda')
plt.ylabel('num nonzero coefficients')
#ax.set_ylim([0.55, 1.05])
assert False
Explanation: Train models for varying lambda values. Calculate training error for each model.
End of explanation
# Load a text file of integers:
y = np.loadtxt("yelp_data/star_labels.txt", dtype=np.int)
# Load a text file with strings identifying the 2500 features:
featureNames = open("yelp_data/star_features.txt").read().splitlines()
# Load a matrix market matrix with 45000 samples of 2500 features, convert it to csc format:
A = sp.csc_matrix(io.mmread("yelp_data/star_data.mtx"))
Explanation: Star data
End of explanation |
11,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
Xmin = np.min(image_data)
Xmax = np.max(image_data)
return 0.1 + (image_data - Xmin)*(0.9 - 0.1) / (Xmax - Xmin)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
11,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="img/nao.jpg" align="right" width=200>
Sound sensor (microphone)
The robot's microphone detects ambient noise. It cannot recognize words, but it can react to a clap or a shout. Other, more sophisticated robots, like the one in the picture, can actually speak and recognize language.
On this page we will analyse the sensor's values and use them to control the robot.
First of all, let's connect.
Step1: Sound analysis
As you may recall from an earlier example, the sound function returns a value between 0 and 100, depending on the sound intensity. Let's try it again, but this time plotting its value on a chart.
To do this, we will collect data
Step2: Now we will plot the values we have read.
Step3: You will normally see values going up and down, between 0 and 100, depending on how loudly you speak. You can also try clapping.
So, to control the robot, we can make it react when the sound value rises above a certain threshold.
Remote control by sound
Write a program so that the robot goes straight until you tell it to stop, that is, until it detects a loud sound.
Step4: Controlled navigation
Modify the navigation program so that, instead of turning when the robot detects a touch, it does so when it detects a sound, for example a clap.
Step5: Using both sensors at the same time
Instead of replacing the touch condition with the sound one, we can also add that second condition to the first. Programming languages can perform logical operations to combine conditions. It is not as complicated as it seems; for example, the two conditions would be written like this
Step6: Indicating the direction
Controlling the robot only for it to turn at random afterwards is not very satisfying. It would be better if we could control the direction of the turn with sound. Remember that the robot does not recognize sounds, only changes in volume.
How could you tell it which way to turn? Perhaps, instead of a random number, you could check the value of the sound sensor a second time. Then, with a single clap the robot would turn to one side, and with two claps, to the other.
Step7: And the big challenge
Can you make a complete version with everything?
the robot goes straight as long as there is no touch and no loud sound is detected
if it detects a touch, it goes backwards and turns randomly to the left or right
but if what it detected was a sound, it goes backwards, and if it detects a second sound it turns left, otherwise right
Complicated? Not that much, but you need to have your ideas clear!
Step8: Let's recap
If you have made it this far doing all the variants, congratulations! You are really starting to master programming. If not, don't worry, this is not something you learn in two hours!
On this page we have taken a look at variables and logical operations, and we have nested more and more loops and conditionals. That's what programming is like!
And we have only seen half of the sensors! So let's move on to the third one.
Before continuing, disconnect the robot | Python Code:
from functions import connect, sound, forward, stop
connect()
Explanation: <img src="img/nao.jpg" align="right" width=200>
Sound sensor (microphone)
The robot's microphone detects ambient noise. It cannot recognize words, but it can react to a clap or a shout. Other, more sophisticated robots, like the one in the picture, can actually speak and recognize language.
On this page we will analyse the sensor's values and use them to control the robot.
First of all, let's connect.
End of explanation
data = [] # run this code while you speak into the microphone
for i in range(100):
data.append(sound())
Explanation: Sound analysis
As you may recall from an earlier example, the sound function returns a value between 0 and 100, depending on the sound intensity. Let's try it again, but this time plotting its value on a chart.
To do this, we will collect data: we will read the function's values several times and store them in the computer's memory, so that we can plot them afterwards.
To store data in memory, programming languages use variables.
In the following example, we use a variable called data, which will hold the list of values read from the sensor. It starts out empty, and inside the loop we keep appending values to it.
End of explanation
from functions import plot
plot(data)
Explanation: Now we will plot the values we have read.
End of explanation
while ___:
___
___
Explanation: You will normally see values going up and down, between 0 and 100, depending on how loudly you speak. You can also try clapping.
So, to control the robot, we can make it react when the sound value rises above a certain threshold.
Remote control by sound
Write a program so that the robot goes straight until you tell it to stop, that is, until it detects a loud sound.
End of explanation
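One possible way to fill in the blanks above (the threshold of 50 is an assumption; pick a value that suits your microphone):
while sound() < 50:
    forward()
stop()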
from functions import backward, left, right
from time import sleep
from random import random
try:
while True:
while ___:
___
___
if ___:
___
else:
___
___
except KeyboardInterrupt:
stop()
Explanation: Controlled navigation
Modify the navigation program so that, instead of turning when the robot detects a touch, it does so when it detects a sound, for example a clap.
End of explanation
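A possible solution sketch (the sound threshold of 50 and the one-second pause are assumptions, and the exact turning behaviour depends on the left/right helpers):
try:
    while True:
        while sound() < 50:
            forward()
        backward()
        if random() < 0.5:
            left()
        else:
            right()
        sleep(1)
except KeyboardInterrupt:
    stop()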
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while ___ and ___:
___
___
if ___:
___
else:
___
___
except KeyboardInterrupt:
stop()
Explanation: Using both sensors at the same time
Instead of replacing the touch condition with the sound one, we can also add that second condition to the first. Programming languages can perform logical operations to combine conditions. It is not as complicated as it seems; for example, the two conditions would be written like this:
while the sound is less than 50 and there is no touch
To program it in Python, you just need to know that this "and" is written and ;-)
End of explanation
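The combined condition described above could look like this in Python (assuming touch() returns a truthy value while there is contact, and keeping 50 as the sound threshold):
while sound() < 50 and not touch():
    forward()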
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while ___:
___
___
if ___:
___
else:
___
___
except KeyboardInterrupt:
stop()
Explanation: Indicating the direction
Controlling the robot only for it to turn at random afterwards is not very satisfying. It would be better if we could control the direction of the turn with sound. Remember that the robot does not recognize sounds, only changes in volume.
How could you tell it which way to turn? Perhaps, instead of a random number, you could check the value of the sound sensor a second time. Then, with a single clap the robot would turn to one side, and with two claps, to the other.
End of explanation
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while ___:
___
if ___:
___
if ___:
___
else:
___
___
else:
___
if ___:
___
else:
___
___
except KeyboardInterrupt:
stop()
Explanation: And the big challenge
Can you make a complete version with everything?
the robot goes straight as long as there is no touch and no loud sound is detected
if it detects a touch, it goes backwards and turns randomly to the left or right
but if what it detected was a sound, it goes backwards, and if it detects a second sound it turns left, otherwise right
Complicated? Not that much, but you need to have your ideas clear!
End of explanation
from functions import disconnect, next_notebook
disconnect()
next_notebook('light')
Explanation: Let's recap
If you have made it this far doing all the variants, congratulations! You are really starting to master programming. If not, don't worry, this is not something you learn in two hours!
On this page we have taken a look at variables and logical operations, and we have nested more and more loops and conditionals. That's what programming is like!
And we have only seen half of the sensors! So let's move on to the third one.
Before continuing, disconnect the robot:
End of explanation |
11,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial Overview
First we want to have a look at the data.
Step1: Ok, so we're getting a pretty simple input format
Step2: From my experience with Quora, this is indeed the full text of the question. I am just wondering whether there is also a short title for this question. As far as I know, each question has a short title (and some additionally have a long description like this).
Step3: Yes, some of them are there multiple times, but not too often. Let's see if the IDs really match the texts.
Step4: We see that there seem to be some questions with different IDs but the same title. If we're lucky, some of those match each other and appear in the data set as duplicates.
Step5: Unfortunately, they are not. So let's at least verify the ID of the second question with this title to make sure that there is nothing wrong with our counting code.
Step6: The question IDs are fine, there are four questions with the same title, but only one of them occurs in a lot of matches in this duplicate list.
Let's finally check how many unique questions and how many samples we got.
Step7: There are two questions per sample, so there will be a lot of questions which only occur a single time in the whole data set.
Number of Duplicates
Let's see how often they decided that two questions are duplicates in this dataset and how often not. This is important to make sure that our model will not be biased by the data it has seen (e.g. get an accuracy score of 99% by just betting "no", just because 99% of the training data is "no"). | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
df = pd.read_csv('../data/raw/train.csv')
df.head()
Explanation: Initial Overview
First we want to have a look at the data.
End of explanation
questions = pd.concat([df['question1'], df['question2']])
df_combined = pd.DataFrame({'question': questions})
# There seems to be some error in the loaded data, we should investigate later (some value seems to be float)
df_combined['question'] = df_combined['question'].apply(str)
df_combined['text_length'] = df_combined['question'].apply(len)
df_combined.sort_values(by='text_length', ascending=False).iloc[0]['question']
Explanation: Ok, so we're getting a pretty simple input format: Row-ID, Question-ID 1 and 2, the titles for question 1 and 2 and a marker if this question is a duplicate. According to the Kaggle competition question1 and question2 are the full text of the question. So let's see if this is really full text or just the title by looking at the longest sample we have.
I am wondering if this list is fully connected between all question or just randomly and if some of those questions are in the data multiple times.
End of explanation
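To confirm the issue suspected in the comment above (an assumption: missing question text is loaded as NaN, which is a float), we can list the affected rows:
df[df['question1'].isnull() | df['question2'].isnull()]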
question_ids = pd.concat([df['qid1'], df['qid2']])
df_combined = pd.Series(question_ids)
df_combined.value_counts().sort_values(ascending=False).head()
Explanation: From my experience with Quora, this is indeed the full text of the question. I am just wondering whether there is also a short title for this question. As far as I know, each question has a short title (and some additionally have a long description like this).
End of explanation
questions = pd.concat([df['question1'], df['question2']])
df_combined = pd.Series(questions)
df_combined.value_counts().sort_values(ascending=False).head()
Explanation: Yes, some of them are there multiple times, but not too often. Let's see if the IDs really match the texts.
End of explanation
question_title = 'What are the best ways to lose weight?'
df[(df['question1'] == question_title) & (df['question2'] == question_title)]
Explanation: We see that there seem to be some questions with different IDs but the same title. If we're lucky, some of those match each other and appear in the data set as duplicates.
End of explanation
ids1 = df[(df['question1'] == question_title)]['qid1'].value_counts()
ids2 = df[(df['question2'] == question_title)]['qid2'].value_counts()
ids1.add(ids2, fill_value=0)
Explanation: Unfortunately, they are not. So let's at least verify the ID of the second question with this title to make sure that there is nothing wrong with our counting code.
End of explanation
questions = len(pd.concat([df['qid1'], df['qid2']]).unique())
samples = len(df)
print('%d questions and %d samples' % (questions, samples))
Explanation: The question IDs are fine, there are four questions with the same title, but only one of them occurs in a lot of matches in this duplicate list.
Let's finally check how many unique questions and how many samples we got.
End of explanation
sns.countplot(df['is_duplicate']);
Explanation: There are two questions per sample, so there will be a lot of questions which only occur a single time in the whole data set.
Number of Duplicates
Let's see how often they decided that two questions are duplicates in this dataset and how often not. This is important to make sure that our model will not be biased by the data it has seen (e.g. get an accuracy score of 99% by just betting "no", just because 99% of the training data is "no").
End of explanation |
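A useful reference point is the accuracy a model would get by always predicting the majority class ("not a duplicate"); any real model has to beat this baseline:
baseline_accuracy = 1.0 - df['is_duplicate'].mean()
print('Majority-class baseline accuracy: %.3f' % baseline_accuracy)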
11,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ctools from Python
In this notebook you will learn how to use the ctools and cscripts from Python instead of typing the commands in the console.
ctools provides two Python modules that allow using all tools and scripts as Python classes. To use ctools from Python you have to import the ctools and cscripts modules into Python. You should also import the gammalib module, as ctools without GammaLib is generally not very useful.
Warning
Step1: Simulating events
As an example we will simulate an observation of an hour of the Crab nebula.
Step2: The first line generates an instance of the ctobssim tool as a Python class. User parameters are then set using the [ ] operator. After setting all parameters the execute() method is called to execute the ctobssim tool. On output the events.fits FITS file is created. Until now everything is analogous to running the tool from the command line, but in Python you can easily combine the different blocks into more complex workflows.
Remember that you can consult the manual of each tool to find out how it works and to discover all the input parameters that you can set.
Step3: In a Jupyter notebook a code line starting with ! is executed in the shell, so you can do the operation above just from the command line.
One of the advantages of using ctools from Python is that you can run a tool using
Step4: The main difference to the execute() method is that the run() method will not write the output (i.e., the simulated event list) to disk. Why is this useful? Well, after having typed sim.run() the ctobssim class still exists as an object in memory, including all the simulated events. You can always save to disk later using the save() method.
The ctobssim class has an obs() method that returns an observation container that holds the simulated CTA observation with its associated events. To visualise this container, type
Step5: There is one CTA observation in the container and to visualise that observation type
Step6: The observation contains a CTA event list that is implemented by the GammaLib class GCTAEventList. You can access the event list using the events() method. To visualise the individual events you can iterate over the events using a for loop. This will show the simulated celestial coordinates (RA, DEC), the coordinates in the camera system [DETX, DETY], the energies and the terrestrial times (TT) of all events. Let's peek at the first events of the list
Step7: We can use this feature to inspect some of the event properties, for example look at their energy spectrum. For this we will use the Python package matplotlib. If you do not have matplotlib you can use another plotting package of your choice or skip this step.
Step8: Fitting a model to the observations
We can use the observation in memory to directly run a likelihood fit.
Step9: This is very compact. Where do we define the model fit to the data? Where are the user parameters? ctlike doesn’t in fact need any parameters as all the relevant information is already contained in the observation container produced by the ctobssim class. Indeed, you constructed the ctlike instance by using the ctobssim observation container as constructor argument.
An observation container, implemented by the GObservations class of GammaLib, is the fundamental brick of any ctools analysis. An observation container can hold more than events, e.g., in this case it also holds the model that was used to generate the events.
Many tools and scripts handle observation containers, and accept them upon construction and return them after running the tool via the obs() method. Passing observation containers between ctools classes is a very convenient and powerful way of building in-memory analysis pipelines. However, this implies that you need some computing resources when dealing with large observation containers (for example if you want to analyse a few 100 hours of data at once). Also, if the script crashes the information is lost.
To check how the fit went you can inspect the optimiser used by ctlike by typing
Step10: You see that the fit converged after 2 iterations. Out of 10 parameters in the model 4 have been fitted (the others were kept fixed). To inspect the fit results you can print the model container that can be accessed using the models() method of the observation container
Step11: For example, in this way we can fetch the minimum of the optimized function (the negative of the natural logarithm of the likelihood) to compare different model hypotheses.
Step12: Suppose you want to repeat the fit by optimising also the position of the point source. This is easy from Python, as you can modify the model and fit interactively. Type the following
Step13: Now we can quantify the improvement of the model by comparing the new value of the optimized function with the previous one.
Step14: The test statistic (TS) is expected to be distributed as a $\chi^2_n$ with a number of degrees of freedom $n$ equal to the additional number of degrees of freedom of the (second) more complex model with respect to the (first) more parsimonious one, for our case two degrees of freedom. The chance probability that the likelihood improved that much because of pure statistical fluctuations is the integral from TS to infinity of the chi square with two degrees of freedom. In this case the improvement is negligible (i.e., the chance probability is very high), as expected since in the model the source is already at the true position.
Generating a log file
By default, tools and scripts run from Python will not generate a log file. The reason for this is that Python scripts are often used to build ctools analysis pipelines and workflows, and one generally does not want that such a script pollutes the workspace with log files. You can however instruct a ctool or cscript to generate a log file as follows | Python Code:
import gammalib
import ctools
import cscripts
Explanation: Using ctools from Python
In this notebook you will learn how to use the ctools and cscripts from Python instead of typing the commands in the console.
ctools provides two Python modules that allow using all tools and scripts as Python classes. To use ctools from Python you have to import the ctools and cscripts modules into Python. You should also import the gammalib module, as ctools without GammaLib is generally not very useful.
Warning: Always import gammalib before you import ctools and cscripts.
End of explanation
sim = ctools.ctobssim()
sim['inmodel'] = '${CTOOLS}/share/models/crab.xml'
sim['outevents'] = 'events.fits'
sim['caldb'] = 'prod2'
sim['irf'] = 'South_0.5h'
sim['ra'] = 83.5
sim['dec'] = 22.8
sim['rad'] = 5.0
sim['tmin'] = '2020-01-01T00:00:00'
sim['tmax'] = '2020-01-01T01:00:00'
sim['emin'] = 0.03
sim['emax'] = 150.0
sim.execute()
Explanation: Simulating events
As an example we will simulate an observation of an hour of the Crab nebula.
End of explanation
!ctobssim --help
Explanation: The first line generates an instance of the ctobssim tool as a Python class. User parameters are then set using the [ ] operator. After setting all parameters the execute() method is called to execute the ctobssim tool. On output the events.fits FITS file is created. Until now everything is analogous to running the tool from the command line, but in Python you can easily combine the different blocks into more complex workflows.
Remember that you can consult the manual of each tool to find out how it works and to discover all the input parameters that you can set.
End of explanation
sim.run()
Explanation: In a Jupyter notebook a code line starting with ! is executed in the shell, so you can do the operation above just from the command line.
One of the advantages of using ctools from Python is that you can run a tool using
End of explanation
print(sim.obs())
Explanation: The main difference to the execute() method is that the run() method will not write the output (i.e., the simulated event list) to disk. Why is this useful? Well, after having typed sim.run() the ctobssim class still exists as an object in memory, including all the simulated events. You can always save to disk later using the save() method.
The ctobssim class has an obs() method that returns an observation container that holds the simulated CTA observation with its associated events. To visualise this container, type:
End of explanation
print(sim.obs()[0])
Explanation: There is one CTA observation in the container and to visualise that observation type:
End of explanation
events = sim.obs()[0].events()
for event in events[:1]:
print(event)
Explanation: The observation contains a CTA event list that is implemented by the GammaLib class GCTAEventList. You can access the event list using the events() method. To visualise the individual events you can iterate over the events using a for loop. This will show the simulated celestial coordinates (RA, DEC), the coordinates in the camera system [DETX, DETY], the energies and the terrestrial times (TT) of all events. Let's peek at the first events of the list:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
#this will visualize plots inline
ax = plt.subplot()
ax.set_yscale('log')
ax.set_xlabel('Log10(Energy [TeV])')
energies = []
for event in events:
energies.append(event.energy().log10TeV())
n, bins, patches = plt.hist(energies)
Explanation: We can use this feature to inspect some of the event properties, for example look at their energy spectrum. For this we will use the Python package matplotlib. If you do not have matplotlib you can use another plotting package of your choice or skip this step.
End of explanation
like = ctools.ctlike(sim.obs())
like.run()
Explanation: Fitting a model to the observations
We can use the observation in memory to directly run a likelihood fit.
End of explanation
print(like.opt())
Explanation: This is very compact. Where do we define the model fit to the data? Where are the user parameters? ctlike doesn’t in fact need any parameters as all the relevant information is already contained in the observation container produced by the ctobssim class. Indeed, you constructed the ctlike instance by using the ctobssim observation container as constructor argument.
An observation container, implemented by the GObservations class of GammaLib, is the fundamental brick of any ctools analysis. An observation container can hold more than events, e.g., in this case it also holds the model that was used to generate the events.
Many tools and scripts handle observation containers, and accept them upon construction and return them after running the tool via the obs() method. Passing observation containers between ctools classes is a very convenient and powerful way of building in-memory analysis pipelines. However, this implies that you need some computing resources when dealing with large observation containers (for example if you want to analyse a few 100 hours of data at once). Also, if the script crashes the information is lost.
To check how the fit went you can inspect the optimiser used by ctlike by typing:
End of explanation
print(like.obs().models())
Explanation: You see that the fit converged after 2 iterations. Out of 10 parameters in the model 4 have been fitted (the others were kept fixed). To inspect the fit results you can print the model container that can be accessed using the models() method of the observation container:
End of explanation
like1 = like.opt().value()
print(like1)
Explanation: For example, in this way we can fetch the minimum of the optimized function (the negative of the natural logarithm of the likelihood) to compare different model hypotheses.
End of explanation
like.obs().models()['Crab']['RA'].free()
like.obs().models()['Crab']['DEC'].free()
like.run()
print(like.obs().models())
Explanation: Suppose you want to repeat the fit by optimising also the position of the point source. This is easy from Python, as you can modify the model and fit interactively. Type the following:
End of explanation
like2 = like.opt().value()
ts = -2.0 * (like2 - like1)
print(ts)
Explanation: Now we can quantify the improvement of the model by comparing the new value of the optimized function with the previous one.
End of explanation
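As a side note, the chance probability discussed in the next explanation can be computed from the chi-square survival function, for example with scipy (not required by ctools, so this step is optional):
from scipy.stats import chi2
p_value = chi2.sf(ts, 2)  # 2 extra degrees of freedom (RA, DEC)
print(p_value)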
like.logFileOpen()
like.run()
Explanation: The test statistic (TS) is expected to be distributed as a $\chi^2_n$ with a number of degrees of freedom $n$ equal to the additional number of degrees of freedom of the (second) more complex model with respect to the (first) more parsimonious one, for our case two degrees of freedom. The chance probability that the likelihood improved that much because of pure statistical fluctuations is the integral from TS to infinity of the chi square with two degrees of freedom. In this case the improvement is negligible (i.e., the chance probability is very high), as expected since in the model the source is already at the true position.
Generating a log file
By default, tools and scripts run from Python will not generate a log file. The reason for this is that Python scripts are often used to build ctools analysis pipelines and workflows, and one generally does not want that such a script pollutes the workspace with log files. You can however instruct a ctool or cscript to generate a log file as follows:
End of explanation |
11,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of CRUD in MongoDB
Example of a complete CRUD in MongoDB
Author
Step1: Defining a document to be inserted
Step2: Inserting the document into the database
Step3: Retrieving documents
Step4: Inserting other documents
Step5: Searching for all users
Step6: Searching for users who have Python among their courses
Step7: Searching for users younger than 28
Step8: Modifying a document
In the example below, we will add a city to a document
Step9: Deleting a document
In the example below, we will delete the document of the user Juliana | Python Code:
from pymongo import MongoClient
cli = MongoClient()
db = cli['treinamento']
col = db['cadastro']
Explanation: Example of CRUD in MongoDB
Example of a complete CRUD in MongoDB
Author: Christiano Anderson
Propus Data Science
Establishing the connection to the database
End of explanation
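An optional sanity check that the connection works (this assumes a MongoDB server is running locally on the default port):
print cli.server_info()['version']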
cad = {
'nome': 'Christiano Anderson',
'empresa': 'Propus Data Science',
'cursos': ['python','mongodb']
}
Explanation: Defining a document to be inserted
End of explanation
cad_id = col.insert_one(cad)
print cad_id.inserted_id
Explanation: Inserting the document into the database
End of explanation
cadastro = db['cadastro']
res = cadastro.find_one({})
print res['empresa'],res['nome']
Explanation: Retrieving documents
End of explanation
col.insert_one({
'nome':'Carolina',
'idade': 29,
'empresa': 'ACME',
'cursos': ['mongodb','ruby'],
'contatos': {
'celular':'5199998888',
'email': '[email protected]',
'ramal': '2222'
}
})
col.insert_one({
'nome':'Juliana',
'idade': 25,
'empresa': 'ACME',
'cursos': ['mongodb','ruby','python'],
'contatos': {
'celular':'5199554433',
'email': '[email protected]',
'ramal': '2221'
}
})
col.insert_one({
'nome':'Rafael',
'idade': 30,
'empresa': 'XYZ',
'cursos': ['mongodb','php'],
'contatos': {
'celular':'5188882222',
'email': '[email protected]',
'ramal': '5511',
'tel_residencial': '5122223333'
}
})
Explanation: Inserting other documents
End of explanation
resultados = col.find({})
for r in resultados:
print r['nome'], r['empresa']
Explanation: Searching for all users
End of explanation
resultados = col.find({'cursos':'python'})
for r in resultados:
print r['nome'], r['cursos']
Explanation: Searching for users who have Python among their courses
End of explanation
resultados = col.find({'idade': {'$lt': 28}})
for r in resultados:
print r['nome']
Explanation: Searching for users younger than 28
End of explanation
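As a further example, a query can combine comparison operators; here, users aged between 25 and 30 (inclusive):
resultados = col.find({'idade': {'$gte': 25, '$lte': 30}})
for r in resultados:
    print r['nome'], r['idade']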
res = col.update_one({'nome':'Christiano Anderson'},{'$set': {'cidade':'Porto Alegre'}})
print res
Explanation: Modifying a document
In the example below, we will add a city to a document
End of explanation
count = col.count({})
print count
res = col.delete_one({'nome':'Juliana'})
count = col.count({})
print count
Explanation: Deleting a document
In the example below, we will delete the document of the user Juliana
End of explanation |
11,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Stock Indices
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
When comparing the historical returns on stock indices, it is a common mistake to only consider a single time-period.
We will compare three well-known stock indices for USA
Step1: Load Data
We now load all the financial data we will be using.
Step3: Compare Total Returns
The first plot shows the so-called Total Return of the stock indices, which is the investor's return when dividends are reinvested in the same stock index and taxes are ignored.
Step5: This plot clearly shows that the S&P 400 (Mid-Cap) had a much higher Total Return than the S&P 500 (Large-Cap) and S&P 600 (Small-Cap), and the S&P 500 performed slightly worse than the S&P 600.
But this period was nearly 30 years. What if we consider shorter investment periods with different start and end-dates? We need more detailed statistics to answer these questions.
Calculate Annualized Returns
We calculate the annualized returns of the stock indices for all investment periods of durations from 1 to 10 years.
Step6: Examples of Annualized Returns
The lists we have created above contain the annualized returns for the stock indices as well as US Government Bonds and the US CPI inflation index.
Let us show the annualized returns of the S&P 500 for all 1-year periods. This is itself a time-series. It shows that the return was about 0.347 (or 34.7%) for the year between 3. January 1989 and 3. January 1990. The return was only about 31.6% between 4. January 1989 and 4. January 1990. And so on.
Step7: We can also show the summary statistics for the annualized returns of all 1-year periods of the S&P 500. Note that a mean of about 0.113 means an average 1-year return of 11.3%.
Step8: We can also show the annualized returns of the S&P 500 for all 10-year periods. This shows that between 3. January 1989 and 1999 the annualized return was about 19.3%. Between 4. January 1989 and 1999 it was about 19.1%.
Step9: These are the summary statistics for all 10-year periods of the S&P 500, which show that it returned about 8.2% per year on average, for all 10-year periods between 1989 and 2018.
Step10: For US Government Bonds we only consider bonds with 1-year maturity, so for multi-year periods we assume the return is reinvested in new 1-year bonds. Reinvesting in gov. bonds gave an average return of about 5.7% for all 10-year periods between 1962 and 2018.
Step12: Examples of Good and Bad Periods
Using the annualized returns we have just calculated, we can now easily find investment periods where one stock index was better than another.
Step13: First we show a 3-year period where the S&P 500 was better than the S&P 400.
Step14: Then we show a 3-year period where the S&P 400 was better than the S&P 500.
Step15: Then we show a 3-year period where the S&P 600 was better than the S&P 400.
Step16: Then we show a 3-year period where the S&P 400 was better than the S&P 600.
Step18: Statistics for Annualized Returns
We can also print summary statistics for the annualized returns.
Step19: When we print the summary statistics for the stock indices, we see that for 1-year investment periods the S&P 500 returned about 11.3% on average, while the S&P 400 returned about 14.0%, and the S&P 600 returned about 12.4%.
For longer investment periods the average returns decrease. For 10-year investment periods the S&P 500 returned about 8.2% per year on average, the S&P 400 returned about 11.6% on average, and the S&P 600 returned about 10.3% on average.
It can be a bit confusing to view all the summary statistics like this and it is better to show selected data in a table, as was done in the paper Comparison of U.S. Stock Indices.
Step22: Probability of Loss
Another useful statistic is the historical probability of loss for different investment periods.
Step23: This shows the probability of loss for the stock-indices for investment periods between 1 and 10 years.
For example, the S&P 500 had a loss in about 17.8% of all 1-year investment periods, while the S&P 400 had a loss in about 18.1% of all 1-year periods, and the S&P 600 had a loss in about 22.3% of all 1-year periods.
The probability of loss generally decreases as the investment period increases.
For example, the S&P 500 had a loss in about 9.6% of all 10-year investment periods, while the S&P 400 and S&P 600 did not have a loss in any of the 10-year periods.
Step26: Compared to Inflation
It is also useful to consider the probability of a stock index performing better than inflation.
Step27: This shows the probability of each stock index having a higher return than inflation for investment periods between 1 and 10 years. All taxes are ignored.
For example, both the S&P 500 and S&P 400 had a higher return than inflation in about 79% of all 1-year investment periods, while the S&P 600 only exceeded inflation in about 73.6% of all 1-year periods.
For investment periods of 6 years or more, the S&P 400 and S&P 600 performed better than inflation for almost all investment periods. But the S&P 500 only exceeded inflation in about 86% of all 10-year periods.
Step29: Compared to Bonds
It is also useful to compare the returns of the stock indices to risk-free government bonds.
Step30: This shows the probability of each stock index having a higher return than risk-free government bonds, for investment periods between 1 and 10 years. We consider annual reinvestment in bonds with 1-year maturity. All taxes are ignored.
For example, the S&P 500 returned more than government bonds in about 79% of all 1-year periods, while it was 78% for the S&P 400 and 73% for the S&P 600.
For investment periods of 6 years or more, the S&P 400 and S&P 600 nearly always returned more than government bonds. But the S&P 500 only returned more than bonds in about 84% of all 10-year periods.
Step32: Compared to Other Stock Indices
Now we will compare the stock indices directly against each other.
Step33: This shows the probability of one stock index performing better than another for investment periods between 1 and 10 years. All taxes are ignored.
For example, the S&P 500 (Large-Cap) performed better than the S&P 400 (Mid-Cap) in about 42% of all 1-year periods. Similarly, the S&P 500 performed better than the S&P 600 (Small-Cap) in almost 45% of all 1-year periods.
For longer investment periods the S&P 500 generally performed worse than the S&P 400 and S&P 600. For example, the S&P 500 only performed better than the S&P 400 in about 6% of all 10-year periods, and it was better than the S&P 600 in about 15% of the 10-year periods. Similarly, the S&P 600 was better than the S&P 400 in only about 21% of all 10-year periods.
This shows that for longer investment periods the S&P 400 (Mid-Cap) mostly had a higher return than both the S&P 500 (Large-Cap) and S&P 600 (Small-Cap).
Step35: Correlation
It is also useful to consider the statistical correlation between the returns of stock indices.
Step36: This shows the correlation coefficient (Pearson) between the returns on the stock indices for investment periods between 1 and 10 years.
For example, the correlation was about 0.88 between the S&P 500 and S&P 400 for all 1-year investment periods, while it was only 0.77 for the S&P 500 and S&P 600, and 0.92 for the S&P 600 and S&P 400.
For longer investment periods the correlation coefficient generally increases. For example, the correlation was about 0.93 between the S&P 500 and S&P 400 for all 10-year investment periods, while it was about 0.85 between the S&P 500 and S&P 600, and it was almost 0.94 between the S&P 600 and S&P 400.
This shows that the returns on these three stock indices are all highly correlated, so that they have a strong tendency to show losses or gains for the same periods.
It might also be useful to consider the correlation for shorter investment periods, e.g. monthly, weekly or even daily, because a low correlation between stock indices might be useful for rebalancing the investment portfolio when one stock index is down and another is up.
Step38: Recovery Times
It is also useful to consider how quickly the stock indices typically recover from losses.
Step39: This shows the probability that each stock index has recovered from losses within a given number of days.
For example, all three stock indices recovered from about 80-83% of all losses within just a week. The probability goes up for longer investment periods. For example, for 5-year investment periods the S&P 500 had recovered from about 99.8% of all losses, while the S&P 400 and S&P 600 had recovered from all losses in 5 years.
Note that this only measures the number of days until the stock index recovered the first time. It is possible that a stock index decreases again in the future. This can be seen from the non-zero probabilities of loss shown further above, where the S&P 400 and S&P 600 had losses in some 7, 8, and 9 year investment periods. | Python Code:
%matplotlib inline
# Imports from Python packages.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import pandas as pd
import numpy as np
import os
# Imports from FinanceOps.
from data_keys import *
from data import load_index_data, load_usa_cpi
from data import load_usa_gov_bond_1year, common_period
from returns import annualized_returns, bond_annualized_returns
from recovery import prob_recovery
Explanation: Comparing Stock Indices
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
When comparing the historical returns on stock indices, it is a common mistake to only consider a single time-period.
We will compare three well-known stock indices for the USA: the S&P 500 for large-cap stocks, the S&P 400 for mid-cap stocks, and the S&P 600 for small-cap stocks. We show that there are periods where each of these stock indices was better than the others.
So a better way of comparing stock indices is to consider all investment periods. For example, instead of just comparing the returns between 1. January 1990 and 1. January 2018, we consider all 1-year investment periods between 1990 and 2018. We also consider all 2-year investment periods, 3-year periods, and so on, all the way up to 10-year investment periods. We then calculate and compare various statistics to assess which stock index was best.
This methodology was also used in the paper Comparison of U.S. Stock Indices.
Python Imports
This Jupyter Notebook is implemented in Python v. 3.6 and requires various packages for numerical computations and plotting. See the installation instructions in the README-file.
End of explanation
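For reference, the annualized return used throughout this notebook is the geometric average yearly growth rate. A minimal sketch of the idea for a single period (the annualized_returns helper imported above works on whole time-series and may be implemented differently):
def annualized_return(start_value, end_value, years):
    # Geometric average growth rate per year over the whole period.
    return (end_value / start_value) ** (1.0 / years) - 1.0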
# Define the ticker-names for the stocks we consider.
ticker_SP500 = "S&P 500"
ticker_SP400 = "S&P 400"
ticker_SP600 = "S&P 600"
# All tickers for the stocks.
tickers = [ticker_SP500, ticker_SP400, ticker_SP600]
# Define longer names for the stocks.
name_SP500 = "S&P 500 (Large Cap)"
name_SP400 = "S&P 400 (Mid Cap)"
name_SP600 = "S&P 600 (Small Cap)"
# All names for the stocks.
names = [name_SP500, name_SP400, name_SP600]
# Load the financial data for the stock indices.
df_SP500 = load_index_data(ticker=ticker_SP500)
df_SP400 = load_index_data(ticker=ticker_SP400,
sales=False, book_value=False)
df_SP600 = load_index_data(ticker=ticker_SP600,
sales=False, book_value=False)
# All DataFrames for the stocks.
dfs = [df_SP500, df_SP400, df_SP600]
# Common date-range for the stocks.
start_date, end_date = common_period(dfs=dfs)
print(start_date, end_date)
# Load the US CPI inflation index.
cpi = load_usa_cpi()
# Load the yields for US Gov. Bonds with 1-year maturity.
bond_yields = load_usa_gov_bond_1year()
# Max number of investment years to consider.
num_years = 10
Explanation: Load Data
We now load all the financial data we will be using.
End of explanation
def plot_total_returns(dfs, names, start_date=None, end_date=None):
Plot and compare the Total Returns for the given DataFrames.
:param dfs: List of Pandas DataFrames with TOTAL_RETURN data.
:param names: Names of the stock indices.
:param start_date: Plot from this date.
:param end_date: Plot to this date.
:return: None.
# Create a new Pandas DataFrame which will be used
# to combine the time-series and plot them.
df2 = pd.DataFrame()
# For all the given DataFrames and their names.
for df, name in zip(dfs, names):
# Get the Total Return for the period.
tot_ret = df[TOTAL_RETURN][start_date:end_date]
# Normalize it to start at 1.0
tot_ret /= tot_ret[0]
# Add it to the DataFrame.
# It will be plotted with the given name.
df2[name] = tot_ret
# Plot it all.
df2.plot(title="Total Return")
plot_total_returns(dfs=dfs, names=names,
start_date=start_date, end_date=end_date)
Explanation: Compare Total Returns
The first plot shows the so-called Total Return of the stock indices, which is the investor's return when dividends are reinvested in the same stock index and taxes are ignored.
End of explanation
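A quick numeric summary of the same comparison (a sketch that reuses the TOTAL_RETURN data and the common period loaded above):
for df, name in zip(dfs, names):
    tot_ret = df[TOTAL_RETURN][start_date:end_date]
    print('{}: grew {:.1f}x over the period'.format(name, tot_ret.iloc[-1] / tot_ret.iloc[0]))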
def calc_ann_returns(df, start_date, end_date, num_years):
Calculate the annualized returns for the Total Return
of the given DataFrame.
A list is returned so that ann_ret[0] is a Pandas Series
with the ann.returns for 1-year periods, and ann_ret[1]
are the ann.returns for 2-year periods, etc.
:param df: Pandas DataFrame with TOTAL_RETURN data.
:param start_date: Start-date for the data.
:param end_date: End-date for the data.
:return: List of Pandas Series.
# Get the Total Return for the given period.
tot_ret = df[TOTAL_RETURN][start_date:end_date]
# Calculate the annualized returns for all
# investment periods between 1 and num_years.
ann_ret = [annualized_returns(series=tot_ret, years=years)
for years in range(1, num_years+1)]
return ann_ret
# Annualized returns for the S&P 500.
ann_ret_SP500 = calc_ann_returns(df=df_SP500,
start_date=start_date,
end_date=end_date,
num_years=num_years)
# Annualized returns for the S&P 400.
ann_ret_SP400 = calc_ann_returns(df=df_SP400,
start_date=start_date,
end_date=end_date,
num_years=num_years)
# Annualized returns for the S&P 600.
ann_ret_SP600 = calc_ann_returns(df=df_SP600,
start_date=start_date,
end_date=end_date,
num_years=num_years)
# Annualized returns for investing and reinvesting in
# US Gov. Bonds with 1-year maturity.
ann_ret_bond = bond_annualized_returns(df=bond_yields,
num_years=num_years)
# Annualized returns for the US CPI inflation index.
cpi2 = cpi[start_date:end_date]
ann_ret_cpi = [annualized_returns(series=cpi2, years=i+1)
for i in range(num_years)]
Explanation: This plot clearly shows that the S&P 400 (Mid-Cap) had a much higher Total Return than the S&P 500 (Large-Cap) and S&P 600 (Small-Cap), and the S&P 500 performed slightly worse than the S&P 600.
But this period was nearly 30 years. What if we consider shorter investment periods with different start and end-dates? We need more detailed statistics to answer these questions.
Calculate Annualized Returns
We calculate the annualized returns of the stock indices for all investment periods of durations from 1 to 10 years.
End of explanation
ann_ret_SP500[0].head(10)
Explanation: Examples of Annualized Returns
The lists we have created above contain the annualized returns for the stock indices as well as US Government Bonds and the US CPI inflation index.
Let us show the annualized returns of the S&P 500 for all 1-year periods. This is itself a time-series. It shows that the return was about 0.347 (or 34.7%) for the year between 3. January 1989 and 3. January 1990. The return was only about 31.6% between 4. January 1989 and 4. January 1990. And so on.
End of explanation
ann_ret_SP500[0].describe()
Explanation: We can also show the summary statistics for the annualized returns of all 1-year periods of the S&P 500. Note that a mean of about 0.113 means an average 1-year return of 11.3%.
End of explanation
ann_ret_SP500[9].head(10)
Explanation: We can also show the annualized returns of the S&P 500 for all 10-year periods. This shows that between 3. January 1989 and 1999 the annualized return was about 19.3%. Between 4. January 1989 and 1999 it was about 19.1%.
End of explanation
ann_ret_SP500[9].describe()
Explanation: These are the summary statistics for all 10-year periods of the S&P 500, which show that it returned about 8.2% per year on average, for all 10-year periods between 1989 and 2018.
End of explanation
ann_ret_bond[9].describe()
Explanation: For US Government Bonds we only consider bonds with 1-year maturity, so for multi-year periods we assume the return is reinvested in new 1-year bonds. Reinvesting in gov. bonds gave an average return of about 5.7% for all 10-year periods between 1962 and 2018.
End of explanation
def plot_better(df1, df2, ann_ret1, ann_ret2,
name1, name2, years):
Plot the Total Return for a period of the given number
of years where the return on stock 1 > stock 2.
If this does not exist, then plot for the period where
the return of stock 1 was closest to that of stock 2.
:param df1: Pandas DataFrame for stock 1.
:param df2: Pandas DataFrame for stock 2.
:param ann_ret1: List of ann.returns for stock 1.
:param ann_ret2: List of ann.returns for stock 2.
:param name1: Name of stock 1.
:param name2: Name of stock 2.
:param years: Investment period in years.
:return: None.
# Convert number of years to index.
i = years - 1
# Difference of annualized returns.
ann_ret_dif = ann_ret1[i] - ann_ret2[i]
# Find the biggest return difference and use its
# index as the start-date for the period to be plotted.
start_date = ann_ret_dif.idxmax()
# The end-date for the period to be plotted.
days = int(years * 365.25)
end_date = start_date + pd.Timedelta(days=days)
# Create a Pandas DataFrame with stock 1,
# whose Total Return is normalized to start at 1.0
df = pd.DataFrame()
tot_ret1 = df1[start_date:end_date][TOTAL_RETURN]
df[name1] = tot_ret1 / tot_ret1[0]
# Add stock 2 to the DataFrame.
tot_ret2 = df2[start_date:end_date][TOTAL_RETURN]
df[name2] = tot_ret2 / tot_ret2[0]
# Plot the two stocks' Total Return for this period.
df.plot(title="Total Return")
Explanation: Examples of Good and Bad Periods
Using the annualized returns we have just calculated, we can now easily find investment periods where one stock index was better than another.
End of explanation
plot_better(df1=df_SP500, df2=df_SP400,
ann_ret1=ann_ret_SP500,
ann_ret2=ann_ret_SP400,
name1=name_SP500,
name2=name_SP400,
years=3)
Explanation: First we show a 3-year period where the S&P 500 was better than the S&P 400.
End of explanation
plot_better(df1=df_SP400, df2=df_SP500,
ann_ret1=ann_ret_SP400,
ann_ret2=ann_ret_SP500,
name1=name_SP400,
name2=name_SP500,
years=3)
Explanation: Then we show a 3-year period where the S&P 400 was better than the S&P 500.
End of explanation
plot_better(df1=df_SP600, df2=df_SP400,
ann_ret1=ann_ret_SP600,
ann_ret2=ann_ret_SP400,
name1=name_SP600,
name2=name_SP400,
years=3)
Explanation: Then we show a 3-year period where the S&P 600 was better than the S&P 400.
End of explanation
plot_better(df1=df_SP400, df2=df_SP600,
ann_ret1=ann_ret_SP400,
ann_ret2=ann_ret_SP600,
name1=name_SP400,
name2=name_SP600,
years=3)
Explanation: Then we show a 3-year period where the S&P 400 was better than the S&P 600.
End of explanation
def print_return_stats():
Print basic statistics for the annualized returns.
# For each period-duration.
for i in range(num_years):
years = i + 1
print(years, "Year Investment Periods:")
# Create a new DataFrame.
df = pd.DataFrame()
# Add the basic statistics for each stock.
df[name_SP500] = ann_ret_SP500[i].describe()
df[name_SP400] = ann_ret_SP400[i].describe()
df[name_SP600] = ann_ret_SP600[i].describe()
# Print it.
print(df)
print()
Explanation: Statistics for Annualized Returns
We can also print summary statistics for the annualized returns.
End of explanation
print_return_stats()
Explanation: When we print the summary statistics for the stock indices, we see that for 1-year investment periods the S&P 500 returned about 11.3% on average, while the S&P 400 returned about 14.0%, and the S&P 600 returned about 12.4%.
For longer investment periods the average returns decrease. For 10-year investment periods the S&P 500 returned about 8.2% per year on average, the S&P 400 returned about 11.6% on average, and the S&P 600 returned about 10.3% on average.
It can be a bit confusing to view all the summary statistics like this and it is better to show selected data in a table, as was done in the paper Comparison of U.S. Stock Indices.
End of explanation
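One possible way to condense this into a single table of mean annualized returns (a sketch; the tables in the paper contain more statistics):
df_mean = pd.DataFrame()
df_mean[name_SP500] = [ann_ret_SP500[i].mean() for i in range(num_years)]
df_mean[name_SP400] = [ann_ret_SP400[i].mean() for i in range(num_years)]
df_mean[name_SP600] = [ann_ret_SP600[i].mean() for i in range(num_years)]
df_mean.index = ["{} Years".format(i + 1) for i in range(num_years)]
print(df_mean)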
def prob_loss(ann_ret):
Calculate the probability of negative ann.returns (losses).
# Remove rows with NA.
ann_ret = ann_ret.dropna()
# Calculate the probability using a boolean mask.
mask = (ann_ret < 0.0)
prob = np.sum(mask) / len(mask)
return prob
def print_prob_loss():
Print the probability of loss for increasing investment
periods for all the stocks.
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the probability of loss for S&P 500.
df[name_SP500] = [prob_loss(ann_ret_SP500[i])
for i in range(num_years)]
# Add a column with the probability of loss for S&P 400.
df[name_SP400] = [prob_loss(ann_ret_SP400[i])
for i in range(num_years)]
# Add a column with the probability of loss for S&P 600.
df[name_SP600] = [prob_loss(ann_ret_SP600[i])
for i in range(num_years)]
# Set the index.
df.index = ["{} Years".format(i+1) for i in range(num_years)]
print(df)
Explanation: Probability of Loss
Another useful statistic is the historical probability of loss for different investment periods.
End of explanation
print_prob_loss()
Explanation: This shows the probability of loss for the stock-indices for investment periods between 1 and 10 years.
For example, the S&P 500 had a loss in about 17.8% of all 1-year investment periods, while the S&P 400 had a loss in about 18.1% of all 1-year periods, and the S&P 600 had a loss in about 22.3% of all 1-year periods.
The probability of loss generally decreases as the investment period increases.
For example, the S&P 500 had a loss in about 9.6% of all 10-year investment periods, while the S&P 400 and S&P 600 did not have a loss in any of the 10-year periods.
End of explanation
def prob_better(ann_ret1, ann_ret2):
Calculate the probability that the ann.returns of stock 1
were better than the ann.returns of stock 2.
This does not assume the index-dates are identical.
:param ann_ret1: Pandas Series with ann.returns for stock 1.
:param ann_ret2: Pandas Series with ann.returns for stock 2.
:return: Probability.
# Create a new DataFrame.
df = pd.DataFrame()
# Add the ann.returns for the two stocks.
df["ann_ret1"] = ann_ret1
df["ann_ret2"] = ann_ret2
# Remove all rows with NA.
df.dropna(inplace=True)
# Calculate the probability using a boolean mask.
mask = (df["ann_ret1"] > df["ann_ret2"])
prob = np.sum(mask) / len(mask)
return prob
def print_prob_better_than_inflation():
Print the probability of the stocks performing better
than inflation for increasing investment periods.
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the probabilities for the S&P 500.
name = ticker_SP500 + " > CPI"
df[name] = [prob_better(ann_ret_SP500[i], ann_ret_cpi[i])
for i in range(num_years)]
# Add a column with the probabilities for the S&P 400.
name = ticker_SP400 + " > CPI"
df[name] = [prob_better(ann_ret_SP400[i], ann_ret_cpi[i])
for i in range(num_years)]
# Add a column with the probabilities for the S&P 600.
name = ticker_SP600 + " > CPI"
df[name] = [prob_better(ann_ret_SP600[i], ann_ret_cpi[i])
for i in range(num_years)]
# Set the index.
df.index = ["{} Years".format(i+1) for i in range(num_years)]
print(df)
Explanation: Compared to Inflation
It is also useful to consider the probability of a stock index performing better than inflation.
End of explanation
print_prob_better_than_inflation()
Explanation: This shows the probability of each stock index having a higher return than inflation for investment periods between 1 and 10 years. All taxes are ignored.
For example, both the S&P 500 and S&P 400 had a higher return than inflation in about 79% of all 1-year investment periods, while the S&P 600 only exceeded inflation in about 73.6% of all 1-year periods.
For investment periods of 6 years or more, the S&P 400 and S&P 600 performed better than inflation for almost all investment periods. But the S&P 500 only exceeded inflation in about 86% of all 10-year periods.
End of explanation
def print_prob_better_than_bonds():
Print the probability of the stocks performing better
than US Gov. Bonds for increasing investment periods.
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the probabilities for the S&P 500.
name = ticker_SP500 + " > Bonds"
df[name] = [prob_better(ann_ret_SP500[i], ann_ret_bond[i])
for i in range(num_years)]
# Add a column with the probabilities for the S&P 400.
name = ticker_SP400 + " > Bonds"
df[name] = [prob_better(ann_ret_SP400[i], ann_ret_bond[i])
for i in range(num_years)]
# Add a column with the probabilities for the S&P 600.
name = ticker_SP600 + " > Bonds"
df[name] = [prob_better(ann_ret_SP600[i], ann_ret_bond[i])
for i in range(num_years)]
# Set the index.
df.index = ["{} Years".format(i+1) for i in range(num_years)]
print(df)
Explanation: Compared to Bonds
It is also useful to compare the returns of the stock indices to risk-free government bonds.
End of explanation
print_prob_better_than_bonds()
Explanation: This shows the probability of each stock index having a higher return than risk-free government bonds, for investment periods between 1 and 10 years. We consider annual reinvestment in bonds with 1-year maturity. All taxes are ignored.
For example, the S&P 500 returned more than government bonds in about 79% of all 1-year periods, while it was 78% for the S&P 400 and 73% for the S&P 600.
For investment periods of 6 years or more, the S&P 400 and S&P 600 nearly always returned more than government bonds. But the S&P 500 only returned more than bonds in about 84% of all 10-year periods.
End of explanation
def print_prob_better():
Print the probability of one stock index performing better
than another stock index for increasing investment periods.
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the probabilities for S&P 500 > S&P 400.
name = ticker_SP500 + " > " + ticker_SP400
df[name] = [prob_better(ann_ret_SP500[i], ann_ret_SP400[i])
for i in range(num_years)]
# Add a column with the probabilities for S&P 500 > S&P 600.
name = ticker_SP500 + " > " + ticker_SP600
df[name] = [prob_better(ann_ret_SP500[i], ann_ret_SP600[i])
for i in range(num_years)]
# Add a column with the probabilities for S&P 600 > S&P 400.
name = ticker_SP600 + " > " + ticker_SP400
df[name] = [prob_better(ann_ret_SP600[i], ann_ret_SP400[i])
for i in range(num_years)]
# Set the index.
df.index = ["{} Years".format(i+1) for i in range(num_years)]
print(df)
Explanation: Compared to Other Stock Indices
Now we will compare the stock indices directly against each other.
End of explanation
print_prob_better()
Explanation: This shows the probability of one stock index performing better than another for investment periods between 1 and 10 years. All taxes are ignored.
For example, the S&P 500 (Large-Cap) performed better than the S&P 400 (Mid-Cap) in about 42% of all 1-year periods. Similarly, the S&P 500 performed better than the S&P 600 (Small-Cap) in almost 45% of all 1-year periods.
For longer investment periods the S&P 500 generally performed worse than the S&P 400 and S&P 600. For example, the S&P 500 only performed better than the S&P 400 in about 6% of all 10-year periods, and it was better than the S&P 600 in about 15% of the 10-year periods. Similarly, the S&P 600 was better than the S&P 400 in only about 21% of all 10-year periods.
This shows that for longer investment periods the S&P 400 (Mid-Cap) mostly had a higher return than both the S&P 500 (Large-Cap) and S&P 600 (Small-Cap).
End of explanation
def print_correlation():
Print the correlation between the stock indices
for increasing investment periods.
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the correlations for S&P 500 vs. S&P 400.
name = ticker_SP500 + " vs. " + ticker_SP400
df[name] = [ann_ret_SP500[i].corr(ann_ret_SP400[i])
for i in range(num_years)]
# Add a column with the correlations for S&P 500 vs. S&P 600.
name = ticker_SP500 + " vs. " + ticker_SP600
df[name] = [ann_ret_SP500[i].corr(ann_ret_SP600[i])
for i in range(num_years)]
# Add a column with the correlations for S&P 600 vs. S&P 400.
name = ticker_SP600 + " vs. " + ticker_SP400
df[name] = [ann_ret_SP600[i].corr(ann_ret_SP400[i])
for i in range(num_years)]
# Set the index.
df.index = ["{} Years".format(i+1) for i in range(num_years)]
print(df)
Explanation: Correlation
It is also useful to consider the statistical correlation between the returns of stock indices.
End of explanation
print_correlation()
Explanation: This shows the correlation coefficient (Pearson) between the returns on the stock indices for investment periods between 1 and 10 years.
For example, the correlation was about 0.88 between the S&P 500 and S&P 400 for all 1-year investment periods, while it was only 0.77 for the S&P 500 and S&P 600, and 0.92 for the S&P 600 and S&P 400.
For longer investment periods the correlation coefficient generally increases. For example, the correlation was about 0.93 between the S&P 500 and S&P 400 for all 10-year investment periods, while it was about 0.85 between the S&P 500 and S&P 600, and it was almost 0.94 between the S&P 600 and S&P 400.
This shows that the return on these three stock indices are all highly correlated, so that they have a strong tendency to show losses or gains for the same periods.
It might also be useful to consider the correlation for shorter investment periods, e.g. monthly, weekly or even daily, because a low correlation between stock indices might be useful for rebalancing the investment portfolio when one stock index is down and another is up.
End of explanation
def print_recovery_days():
Print the probability of the stocks recovering from losses
for increasing number of days.
# Print the probability for these days.
num_days = [7, 30, 90, 180, 365, 2*365, 5*365]
# Create a new DataFrame.
df = pd.DataFrame()
# Add a column with the probabilities for the S&P 500.
df[ticker_SP500] = prob_recovery(df=df_SP500, num_days=num_days,
start_date=start_date,
end_date=end_date)
# Add a column with the probabilities for the S&P 400.
df[ticker_SP400] = prob_recovery(df=df_SP400, num_days=num_days,
start_date=start_date,
end_date=end_date)
# Add a column with the probabilities for the S&P 600.
df[ticker_SP600] = prob_recovery(df=df_SP600, num_days=num_days,
start_date=start_date,
end_date=end_date)
# Set the index.
df.index = ["{} Days".format(days) for days in num_days]
print(df)
Explanation: Recovery Times
It is also useful to consider how quickly the stock indices typically recover from losses.
End of explanation
print_recovery_days()
Explanation: This shows the probability that each stock index has recovered from losses within a given number of days.
For example, all three stock indices recovered from about 80-83% of all losses within just a week. The probability goes up for longer investment periods. For example, for 5-year investment periods the S&P 500 had recovered from about 99.8% of all losses, while the S&P 400 and S&P 600 had recovered from all losses in 5 years.
Note that this only measures the number of days until the stock index recovered the first time. It is possible that a stock index decreases again in the future. This can be seen from the non-zero probabilities of loss shown further above, where the S&P 400 and S&P 600 had losses in some 7, 8, and 9 year investment periods.
End of explanation |
11,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Sounding
Use MetPy as straightforward as possible to make a Skew-T LogP plot.
Step1: We will pull the data out of the example dataset into individual variables and
assign units. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
# Change default to be better for skew-T
plt.rcParams['figure.figsize'] = (9, 9)
# Upper air data can be obtained using the siphon package, but for this example we will use
# some of MetPy's sample data.
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('jan20_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'
), how='all').reset_index(drop=True)
Explanation: Simple Sounding
Use MetPy as straightforward as possible to make a Skew-T LogP plot.
End of explanation
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
skew = SkewT()
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
skew.ax.set_ylim(1000, 100)
# Add the MetPy logo!
fig = plt.gcf()
add_metpy_logo(fig, 115, 100)
# Example of defining your own vertical barb spacing
skew = SkewT()
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
# Set spacing interval--Every 50 mb from 1000 to 100 mb
my_interval = np.arange(100, 1000, 50) * units('mbar')
# Get indexes of values closest to defined interval
ix = mpcalc.resample_nn_1d(p, my_interval)
# Plot only values nearest to defined interval values
skew.plot_barbs(p[ix], u[ix], v[ix])
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
skew.ax.set_ylim(1000, 100)
# Add the MetPy logo!
fig = plt.gcf()
add_metpy_logo(fig, 115, 100)
# Show the plot
plt.show()
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
End of explanation |
11,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here we want to place n events of transition between an ancestral lifestyle and a convergent lifestyle in a phylogeny.
We want these n events to be independent, not nested.
We return them in a format compatible with Bio++ input files for bppseqgen and bppml.
First, we generate a random topology
Step1: Now, let's number the nodes of the tree.
Step2: Now we create a function to place the n events of transition.
We don't care about branch lengths, meaning that we decide that a transition is not more likely on a long branch than on a short one.
Step3: And now we place the transition events.
Step4: Now we want to output the tree and the command line for bppseqgen and bppml.
Step5: We have a problem
Step6: Test of the function
Step7: Entire function to place convergent events in a tree and output the parameters for bppsuite. | Python Code:
from ete3 import Tree
import string
import scipy.stats as stats
import numpy as np
tl = Tree()
# We create a random tree topology
numTips = 20
candidateNames = list(string.ascii_lowercase)
tipNames = candidateNames[0:20]
tl.populate(numTips, names_library=tipNames)
print (tl)
#Alternatively we could read a tree from a file into a string "line", and then use:
# tl = Tree( line )
Explanation: Here we want to place n events of transition between an ancestral lifestyle and a convergent lifestyle in a phylogeny.
We want these n events to be independent, not nested.
We return them in a format compatible with Bio++ input files for bppseqgen and bppml.
First, we generate a random topology:
End of explanation
def reNumberNodes (tl):
nodeId = 0
for n in tl.traverse():
n.add_features(ND=nodeId)
if n.name=="":
n.name = str(nodeId)
nodeId = nodeId + 1
return
reNumberNodes(tl)
#Writing in NHX format
tl.write(features=['ND'])
print(tl.get_ascii(show_internal=True))
Explanation: Now, let's number the nodes of the tree.
End of explanation
# We could use some dynamic programming to be able to generate paths that yield n transitions exactly.
# INstead we randomly generate transitions on the tree until we get the desired number.
# We have two states: ancestral (0) and convergent (1).
# We count the numbers of transitions
def randomTransitions(numTransitions, tree):
numberOfNodes = len(tree.get_tree_root().get_descendants()) + 1
rate = float(numTransitions)/float(numberOfNodes)
ancestralTransition=dict()
totalNumberOfTransitions = 0
nodesWithTransitions = list()
for node in tree.traverse("levelorder"):
if node.is_root() :
ancestralTransition[node] = False
elif ( ancestralTransition[node.up] == True):
ancestralTransition[node] = True
else :
sisterHasAlreadyTransitioned=False
if ancestralTransition.__contains__(node.get_sisters()[0]): #Here we assume binary trees!
sisterHasAlreadyTransitioned=True
#randomly draw whether we do a transition or not
transitionBool = stats.bernoulli.rvs(rate, size=1)[0] == 1
if (transitionBool and not sisterHasAlreadyTransitioned):
ancestralTransition[node] = True
nodesWithTransitions.append(node)
totalNumberOfTransitions = totalNumberOfTransitions + 1
else:
ancestralTransition[node] = False
return nodesWithTransitions, totalNumberOfTransitions, ancestralTransition
def placeNTransitionsInTree(numTransitions, tree):
observedNumTransitions = 2*numTransitions
nodesWithTransitions = list()
numTries = 0
convergentModel = dict()
while observedNumTransitions != numTransitions and numTries < 100:
observedNumTransitions = 0
nodesWithTransitions, observedNumTransitions, convergentModel = randomTransitions(numTransitions, tree)
print ("Observed Number of Transitions: "+ str(observedNumTransitions ) + " compared to "+ str(numTransitions) + " wanted")
numTries = numTries + 1
if numTries < 100:
for n in nodesWithTransitions:
print(n.get_ascii())
else:
print("It seems like it is too difficult to place "+ str(numTransitions) + " events in this tree.")
return convergentModel
Explanation: Now we create a function to place the n events of transition.
We don't care about branch lengths, meaning that we decide that a transition is not more likely on a long branch than on a short one.
End of explanation
convergentModel = dict()
convergentModel = placeNTransitionsInTree(5, tl)
Explanation: And now we place the transition events.
End of explanation
# convergentModel is the ouput from thefunction that places transitions
# C1 and C2 are two profile numbers
# Nch is the number of characters
def getBppSeqGenCommandFromNodesWithTransitions(convergentModel, C1, C2, Nch):
#First, get the nodes with the convergent model and the nodes with the ancestral model.
nodesWithConvergentModel = list()
nodesWithAncestralModel = list()
for k,v in convergentModel.items():
if v == True:
nodesWithConvergentModel.append(k.ND)
if v == False:
nodesWithAncestralModel.append(k.ND)
n1="\""+ str(nodesWithAncestralModel[0])
for n in nodesWithAncestralModel[1:len(nodesWithAncestralModel)-1]:
n1 += "," + str(n)
n1 += "\""
n2="\""+ str(nodesWithConvergentModel[0])
for n in nodesWithConvergentModel[1:len(nodesWithConvergentModel)-1]:
n2 += "," + str(n)
n2 += "\""
#Dummy values:
command="bppseqgen param=CATseq.bpp mod1Nodes=%s mod2Nodes=%s Nch=%d Ns1=%d Ns2=%d Ne1=%d Ne2=%d"%(n1,n2,Nch,C1,C2,C1,C2)
return (command)
getBppSeqGenCommandFromNodesWithTransitions(convergentModel, 1, 2, 1000)
Explanation: Now we want to output the tree and the command line for bppseqgen and bppml.
End of explanation
# we load a small tree, for testing purpose
t = Tree('((((H,K)D,(F,I)G)B,E)A,((L,(N,Q)O)J,(P,S)M)C);', format=1)
print(t.write(format=1))
print(t.get_ascii(show_internal=True))
#for node in t.traverse("levelorder"):
# Do some analysis on node
#print (node.name)
def getCherries(leaves, tree):
length = len(leaves)
dist=list()
leaveslist = list(leaves)
for i in range(length-1):
for j in range(i+1,length):
di = tree.get_distance(leaveslist[i].name, leaveslist[j].name, topology_only=True)
dist.append([di, leaveslist[i].name, leaveslist[j].name])
#print(dist)
cherries = list()
for d in dist:
if (d[0] == 1.0):
cherries.append(d)
return (cherries)
def placeTransitionOnBranchIfSisterHasNotTransitioned (node, ancestralTransition, nodesWithTransitions):
if node.is_leaf():
if ancestralTransition.__contains__(node.get_sisters()[0]) and ancestralTransition[node.get_sisters()[0]]==True:
print("Problem: sister of node "+node.name+" has transitioned!")
return
ancestralTransition[ node ] = True
nodesWithTransitions.append(node)
print ("Adding transition on leaf node "+node.name)
return
descen = node.get_descendants()
SisterHasTransitioned = True
converg = 0
while (not SisterHasTransitioned):
converg = np.random.randint( 0, high=len(descen) )
if ancestralTransition.__contains__(descen[converg].get_sisters()[0] and ancestralTransition[descen[converg].get_sisters()[0]]==True):
pass
else:
SisterHasTransitioned = False
ancestralTransition[ descen[converg] ] = True
nodesWithTransitions.append(descen[converg])
print ("Adding transition on node "+node.name)
return
def randomTransitionsWithHypergeometricDistribution(numTransitions, tree):
numberOfNodes = len(tree.get_tree_root().get_descendants()) + 1
rate = float(numTransitions)/float(numberOfNodes)
node2leaves = tree.get_cached_content()
#Computing the maximum number of transitions possible
numLeaves = len(node2leaves[tree.get_tree_root()])
cherries = getCherries(node2leaves[tree.get_tree_root()], tree)
#print("cherries: "+str(cherries))
maxNumTrans = numLeaves - len(cherries)
print("Maximum number of transitions: "+str(maxNumTrans))
if maxNumTrans < numTransitions:
print ("Sorry, we cannot fit "+str(numTransitions)+ " in this tree, which can only accommodate "+str(maxNumTrans)+" transitions.")
# Now, we want to annotate all nodes with the number of available underlying branches.
listOfCherryPartners = list()
for c in cherries:
listOfCherryPartners.append( c[1] )
#print("listOfCherryPartners: "+str(listOfCherryPartners))
for node in tree.traverse("levelorder"):
numLea = len(node2leaves[node])
numCherries = 0
for n in node2leaves[node]:
if n.name in listOfCherryPartners:
numCherries = numCherries +1
node.add_feature("numberOfAvailableBranches", numLea - numCherries)
#print(node.name + " : "+ str(node.numberOfAvailableBranches))
ancestralTransition=dict()
for node in tree.traverse("levelorder"):
ancestralTransition[node] = False
nodesWithTransitions = list()
#Now we traverse the tree from the root, and at each node choose
#how many transitions we place in the right and left subtrees
tree.get_tree_root().add_feature("underlyingNumTransitions", numTransitions)
for node in tree.traverse("preorder"):
if (not node.is_leaf()):
rightChild = node.children[0]
leftChild = node.children[1]
if node.underlyingNumTransitions <= 1:
rightChild.add_feature("underlyingNumTransitions",
1)
leftChild.add_feature("underlyingNumTransitions",
1)
else:
numRight = np.random.hypergeometric(rightChild.numberOfAvailableBranches,
leftChild.numberOfAvailableBranches,
node.underlyingNumTransitions)
numLeft = node.underlyingNumTransitions - numRight
rightChild.add_feature("underlyingNumTransitions",
numRight)
leftChild.add_feature("underlyingNumTransitions",
numLeft)
if (numRight == 1):
#We randomly place the transition in one of the branches of the right subtree
placeTransitionOnBranchIfSisterHasNotTransitioned (rightChild,
ancestralTransition,
nodesWithTransitions)
if (numLeft == 1):
#We randomly place the transition in one of the branches of the left subtree
placeTransitionOnBranchIfSisterHasNotTransitioned (leftChild,
ancestralTransition,
nodesWithTransitions)
print ("\t\tTotal number of transitions placed in the tree: "+str(len(nodesWithTransitions)))
return nodesWithTransitions, ancestralTransition
Explanation: We have a problem: the random algorithm above does not work well for large numbers of convergent events: it needs to do a large number of trials and errors to get something that works, and often fails.
Therefore we need to use another algorithm.
Improved algorithm
End of explanation
for i in range (10):
n,a = randomTransitionsWithHypergeometricDistribution(5, t)
Explanation: Test of the function: 10 times we try to insert 5 transition in tree t.
End of explanation
def placeTransitionsAndGetBppSeqGenCommand(numTransitions, tree, C1, C2, Nch):
reNumberNodes(tree)
nodesWithTransitions,convergentModel = randomTransitionsWithHypergeometricDistribution(numTransitions, tree)
return( getBppSeqGenCommandFromNodesWithTransitions(convergentModel, C1, C2, Nch) )
# We try the function on the large tree simulated at the beginning
placeTransitionsAndGetBppSeqGenCommand(10, tl, 1, 2, 1000)
Explanation: Entire function to place convergent events in a tree and output the parameters for bppsuite.
End of explanation |
11,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data deduplication
Introduction
This example shows how to find records in datasets belonging to the same
entity. In our case,we try to deduplicate a dataset with records of
persons. We will try to link within the dataset based on attributes like
first name, surname, sex, date of birth, place and address. The data
used in this example is part of
Febrl and is fictitious.
First, start with importing the recordlinkage module. The submodule
recordlinkage.datasets contains several datasets that can be used
for testing. For this example, we use the Febrl dataset 1. This dataset
contains 1000 records of which 500 original and 500 duplicates, with
exactly one duplicate per original record. This dataset can be loaded
with the function load_febrl1.
Step1: The dataset is loaded with the following code. The returned datasets are
of type pandas.DataFrame. This makes it easy to manipulate the data
if desired. For details about data manipulation with pandas, see
their comprehensive documentation http
Step2: Make record pairs
It is very intuitive to start with comparing each record in DataFrame
dfA with all other records in DataFrame dfA. In fact, we want to
make record pairs. Each record pair should contain two different records
of DataFrame dfA. This process of making record pairs is also called
"indexing". With the recordlinkage module, indexing is easy. First,
load the recordlinkage.Index class and call the .full method.
This object generates a full index on a .index(...) call. In case of
deduplication of a single dataframe, one dataframe is sufficient as
input argument.
Step3: With the method index, all possible (and unique) record pairs are
made. The method returns a pandas.MultiIndex. The number of pairs is
equal to the number of records in dfA choose 2.
Step4: Many of these record pairs do not belong to the same person. The
recordlinkage toolkit has some more advanced indexing methods to
reduce the number of record pairs. Obvious non-matches are left out of
the index. Note that if a matching record pair is not included in the
index, it can not be matched anymore.
One of the most well known indexing methods is named blocking. This
method includes only record pairs that are identical on one or more
stored attributes of the person (or entity in general). The blocking
method can be used in the recordlinkage module.
Step5: The argument "given_name" is the blocking variable. This variable has
to be the name of a column in dfA. It is possible to
parse a list of columns names to block on multiple variables. Blocking
on multiple variables will reduce the number of record pairs even
further.
Another implemented indexing method is Sorted Neighbourhood Indexing
(recordlinkage.index.sortedneighbourhood). This method is very
useful when there are many misspellings in the string were used for
indexing. In fact, sorted neighbourhood indexing is a generalisation of
blocking. See the documentation for details about sorted neighbourd
indexing.
Compare records
Each record pair is a candidate match. To classify the candidate record
pairs into matches and non-matches, compare the records on all
attributes both records have in common. The recordlinkage module has
a class named Compare. This class is used to compare the records.
The following code shows how to compare attributes.
Step6: The comparing of record pairs starts when the compute method is
called. All attribute comparisons are stored in a DataFrame with
horizontally the features and vertically the record pairs. The first 10
comparison vectors are
Step7: The last step is to decide which records belong to the same person. In
this example, we keep it simple
Step8: Full code | Python Code:
import recordlinkage
from recordlinkage.datasets import load_febrl1
Explanation: Data deduplication
Introduction
This example shows how to find records in datasets belonging to the same
entity. In our case,we try to deduplicate a dataset with records of
persons. We will try to link within the dataset based on attributes like
first name, surname, sex, date of birth, place and address. The data
used in this example is part of
Febrl and is fictitious.
First, start with importing the recordlinkage module. The submodule
recordlinkage.datasets contains several datasets that can be used
for testing. For this example, we use the Febrl dataset 1. This dataset
contains 1000 records of which 500 original and 500 duplicates, with
exactly one duplicate per original record. This dataset can be loaded
with the function load_febrl1.
End of explanation
dfA = load_febrl1()
dfA
Explanation: The dataset is loaded with the following code. The returned datasets are
of type pandas.DataFrame. This makes it easy to manipulate the data
if desired. For details about data manipulation with pandas, see
their comprehensive documentation http://pandas.pydata.org/.
End of explanation
indexer = recordlinkage.Index()
indexer.full()
candidate_links = indexer.index(dfA)
Explanation: Make record pairs
It is very intuitive to start with comparing each record in DataFrame
dfA with all other records in DataFrame dfA. In fact, we want to
make record pairs. Each record pair should contain two different records
of DataFrame dfA. This process of making record pairs is also called
"indexing". With the recordlinkage module, indexing is easy. First,
load the recordlinkage.Index class and call the .full method.
This object generates a full index on a .index(...) call. In case of
deduplication of a single dataframe, one dataframe is sufficient as
input argument.
End of explanation
print (len(dfA), len(candidate_links))
# (1000*1000-1000)/2 = 499500
Explanation: With the method index, all possible (and unique) record pairs are
made. The method returns a pandas.MultiIndex. The number of pairs is
equal to the number of records in dfA choose 2.
End of explanation
indexer = recordlinkage.Index()
indexer.block("given_name")
candidate_links = indexer.index(dfA)
len(candidate_links)
Explanation: Many of these record pairs do not belong to the same person. The
recordlinkage toolkit has some more advanced indexing methods to
reduce the number of record pairs. Obvious non-matches are left out of
the index. Note that if a matching record pair is not included in the
index, it can not be matched anymore.
One of the most well known indexing methods is named blocking. This
method includes only record pairs that are identical on one or more
stored attributes of the person (or entity in general). The blocking
method can be used in the recordlinkage module.
End of explanation
compare_cl = recordlinkage.Compare()
compare_cl.exact("given_name", "given_name", label="given_name")
compare_cl.string("surname", "surname", method="jarowinkler", threshold=0.85, label="surname")
compare_cl.exact("date_of_birth", "date_of_birth", label="date_of_birth")
compare_cl.exact("suburb", "suburb", label="suburb")
compare_cl.exact("state", "state", label="state")
compare_cl.string("address_1", "address_1", threshold=0.85, label="address_1")
features = compare_cl.compute(candidate_links, dfA)
Explanation: The argument "given_name" is the blocking variable. This variable has
to be the name of a column in dfA. It is possible to
parse a list of columns names to block on multiple variables. Blocking
on multiple variables will reduce the number of record pairs even
further.
Another implemented indexing method is Sorted Neighbourhood Indexing
(recordlinkage.index.sortedneighbourhood). This method is very
useful when there are many misspellings in the string were used for
indexing. In fact, sorted neighbourhood indexing is a generalisation of
blocking. See the documentation for details about sorted neighbourd
indexing.
Compare records
Each record pair is a candidate match. To classify the candidate record
pairs into matches and non-matches, compare the records on all
attributes both records have in common. The recordlinkage module has
a class named Compare. This class is used to compare the records.
The following code shows how to compare attributes.
End of explanation
features.head(10)
features.describe()
Explanation: The comparing of record pairs starts when the compute method is
called. All attribute comparisons are stored in a DataFrame with
horizontally the features and vertically the record pairs. The first 10
comparison vectors are:
End of explanation
features.sum(axis=1).value_counts().sort_index(ascending=False)
matches = features[features.sum(axis=1) > 3]
matches
Explanation: The last step is to decide which records belong to the same person. In
this example, we keep it simple:
End of explanation
import recordlinkage
from recordlinkage.datasets import load_febrl1
dfA = load_febrl1()
# Indexation step
indexer = recordlinkage.Index()
indexer.block(left_on="given_name")
candidate_links = indexer.index(dfA)
# Comparison step
compare_cl = recordlinkage.Compare()
compare_cl.exact("given_name", "given_name", label="given_name")
compare_cl.string("surname", "surname", method="jarowinkler", threshold=0.85, label="surname")
compare_cl.exact("date_of_birth", "date_of_birth", label="date_of_birth")
compare_cl.exact("suburb", "suburb", label="suburb")
compare_cl.exact("state", "state", label="state")
compare_cl.string("address_1", "address_1", threshold=0.85, label="address_1")
features = compare_cl.compute(candidate_links, dfA)
# Classification step
matches = features[features.sum(axis=1) > 3]
print(len(matches))
Explanation: Full code
End of explanation |
11,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image classification with EANet (External Attention Transformer)
Author
Step1: Prepare the data
Step2: Configure the hyperparameters
Step3: Use data augmentation
Step4: Implement the patch extraction and encoding layer
Step5: Implement the external attention block
Step6: Implement the MLP block
Step7: Implement the Transformer block
Step8: Implement the EANet model
The EANet model leverages external attention.
The computational complexity of traditional self attention is O(d * N ** 2),
where d is the embedding size, and N is the number of patch.
the authors find that most pixels are closely related to just a few other
pixels, and an N-to-N attention matrix may be redundant.
So, they propose as an alternative an external
attention module where the computational complexity of external attention is O(d * S * N).
As d and S are hyper-parameters,
the proposed algorithm is linear in the number of pixels. In fact, this is equivalent
to a drop patch operation, because a lot of information contained in a patch
in an image is redundant and unimportant.
Step9: Train on CIFAR-100
Step10: Let's visualize the training progress of the model.
Step11: Let's display the final results of the test on CIFAR-100. | Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
Explanation: Image classification with EANet (External Attention Transformer)
Author: ZhiYong Chang<br>
Date created: 2021/10/19<br>
Last modified: 2021/10/19<br>
Description: Image classification with a Transformer that leverages external attention.
Introduction
This example implements the EANet
model for image classification, and demonstrates it on the CIFAR-100 dataset.
EANet introduces a novel attention mechanism
named external attention, based on two external, small, learnable, and
shared memories, which can be implemented easily by simply using two cascaded
linear layers and two normalization layers. It conveniently replaces self-attention
as used in existing architectures. External attention has linear complexity, as it only
implicitly considers the correlations between all samples.
This example requires TensorFlow 2.5 or higher, as well as
TensorFlow Addons package,
which can be installed using the following command:
python
pip install -U tensorflow-addons
Setup
End of explanation
num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")
Explanation: Prepare the data
End of explanation
weight_decay = 0.0001
learning_rate = 0.001
label_smoothing = 0.1
validation_split = 0.2
batch_size = 128
num_epochs = 50
patch_size = 2 # Size of the patches to be extracted from the input images.
num_patches = (input_shape[0] // patch_size) ** 2 # Number of patch
embedding_dim = 64 # Number of hidden units.
mlp_dim = 64
dim_coefficient = 4
num_heads = 4
attention_dropout = 0.2
projection_dropout = 0.2
num_transformer_blocks = 8 # Number of repetitions of the transformer layer
print(f"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} ")
print(f"Patches per image: {num_patches}")
Explanation: Configure the hyperparameters
End of explanation
data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.RandomFlip("horizontal"),
layers.RandomRotation(factor=0.1),
layers.RandomContrast(factor=0.1),
layers.RandomZoom(height_factor=0.2, width_factor=0.2),
],
name="data_augmentation",
)
# Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train)
Explanation: Use data augmentation
End of explanation
class PatchExtract(layers.Layer):
def __init__(self, patch_size, **kwargs):
super(PatchExtract, self).__init__(**kwargs)
self.patch_size = patch_size
def call(self, images):
batch_size = tf.shape(images)[0]
patches = tf.image.extract_patches(
images=images,
sizes=(1, self.patch_size, self.patch_size, 1),
strides=(1, self.patch_size, self.patch_size, 1),
rates=(1, 1, 1, 1),
padding="VALID",
)
patch_dim = patches.shape[-1]
patch_num = patches.shape[1]
return tf.reshape(patches, (batch_size, patch_num * patch_num, patch_dim))
class PatchEmbedding(layers.Layer):
def __init__(self, num_patch, embed_dim, **kwargs):
super(PatchEmbedding, self).__init__(**kwargs)
self.num_patch = num_patch
self.proj = layers.Dense(embed_dim)
self.pos_embed = layers.Embedding(input_dim=num_patch, output_dim=embed_dim)
def call(self, patch):
pos = tf.range(start=0, limit=self.num_patch, delta=1)
return self.proj(patch) + self.pos_embed(pos)
Explanation: Implement the patch extraction and encoding layer
End of explanation
def external_attention(
x, dim, num_heads, dim_coefficient=4, attention_dropout=0, projection_dropout=0
):
_, num_patch, channel = x.shape
assert dim % num_heads == 0
num_heads = num_heads * dim_coefficient
x = layers.Dense(dim * dim_coefficient)(x)
# create tensor [batch_size, num_patches, num_heads, dim*dim_coefficient//num_heads]
x = tf.reshape(
x, shape=(-1, num_patch, num_heads, dim * dim_coefficient // num_heads)
)
x = tf.transpose(x, perm=[0, 2, 1, 3])
# a linear layer M_k
attn = layers.Dense(dim // dim_coefficient)(x)
# normalize attention map
attn = layers.Softmax(axis=2)(attn)
# dobule-normalization
attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True))
attn = layers.Dropout(attention_dropout)(attn)
# a linear layer M_v
x = layers.Dense(dim * dim_coefficient // num_heads)(attn)
x = tf.transpose(x, perm=[0, 2, 1, 3])
x = tf.reshape(x, [-1, num_patch, dim * dim_coefficient])
# a linear layer to project original dim
x = layers.Dense(dim)(x)
x = layers.Dropout(projection_dropout)(x)
return x
Explanation: Implement the external attention block
End of explanation
def mlp(x, embedding_dim, mlp_dim, drop_rate=0.2):
x = layers.Dense(mlp_dim, activation=tf.nn.gelu)(x)
x = layers.Dropout(drop_rate)(x)
x = layers.Dense(embedding_dim)(x)
x = layers.Dropout(drop_rate)(x)
return x
Explanation: Implement the MLP block
End of explanation
def transformer_encoder(
x,
embedding_dim,
mlp_dim,
num_heads,
dim_coefficient,
attention_dropout,
projection_dropout,
attention_type="external_attention",
):
residual_1 = x
x = layers.LayerNormalization(epsilon=1e-5)(x)
if attention_type == "external_attention":
x = external_attention(
x,
embedding_dim,
num_heads,
dim_coefficient,
attention_dropout,
projection_dropout,
)
elif attention_type == "self_attention":
x = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embedding_dim, dropout=attention_dropout
)(x, x)
x = layers.add([x, residual_1])
residual_2 = x
x = layers.LayerNormalization(epsilon=1e-5)(x)
x = mlp(x, embedding_dim, mlp_dim)
x = layers.add([x, residual_2])
return x
Explanation: Implement the Transformer block
End of explanation
def get_model(attention_type="external_attention"):
inputs = layers.Input(shape=input_shape)
# Image augment
x = data_augmentation(inputs)
# Extract patches.
x = PatchExtract(patch_size)(x)
# Create patch embedding.
x = PatchEmbedding(num_patches, embedding_dim)(x)
# Create Transformer block.
for _ in range(num_transformer_blocks):
x = transformer_encoder(
x,
embedding_dim,
mlp_dim,
num_heads,
dim_coefficient,
attention_dropout,
projection_dropout,
attention_type,
)
x = layers.GlobalAvgPool1D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
Explanation: Implement the EANet model
The EANet model leverages external attention.
The computational complexity of traditional self attention is O(d * N ** 2),
where d is the embedding size, and N is the number of patch.
the authors find that most pixels are closely related to just a few other
pixels, and an N-to-N attention matrix may be redundant.
So, they propose as an alternative an external
attention module where the computational complexity of external attention is O(d * S * N).
As d and S are hyper-parameters,
the proposed algorithm is linear in the number of pixels. In fact, this is equivalent
to a drop patch operation, because a lot of information contained in a patch
in an image is redundant and unimportant.
End of explanation
model = get_model(attention_type="external_attention")
model.compile(
loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing),
optimizer=tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
),
metrics=[
keras.metrics.CategoricalAccuracy(name="accuracy"),
keras.metrics.TopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=validation_split,
)
Explanation: Train on CIFAR-100
End of explanation
plt.plot(history.history["loss"], label="train_loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Train and Validation Losses Over Epochs", fontsize=14)
plt.legend()
plt.grid()
plt.show()
Explanation: Let's visualize the training progress of the model.
End of explanation
loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test loss: {round(loss, 2)}")
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
Explanation: Let's display the final results of the test on CIFAR-100.
End of explanation |
11,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
La función
Esta ecuación por Glass y Pasternack (1978) sirve para modelar redes neuronales y de interacción génica.
$$x_{t+1}=\frac{\alpha x_{t}}{1+\beta x_{t}}$$
Donde $\alpha$ y $\beta$ son números positivos y $x_{t}\geq0$.
Step1: Búsqueda algebráica de puntos fijos
A continuación sustituiremos f(x) en x reiteradamente hasta obtener la cuarta iterada de f.
Step2: Punto fijo oscilatorio
Al configurar $$\alpha, \beta$$ de modo que haya un punto fijo la serie de tiempo revela una oscilación entre cero y el punto fijo.
Step3: ¿Qué pasará con infinitas iteraciones?
Todo parece indicar que la función converge a 1 si $\alpha=1$ y $\beta=1$.
Si no, converge a $\frac{\alpha}{\beta}$ | Python Code:
def g(x, alpha, beta):
assert alpha >= 0 and beta >= 0
return (alpha*x)/(1 + (beta * x))
def plot_cobg(x, alpha, beta):
y = np.linspace(x[0],x[1],300)
g_y = g(y, alpha, beta)
cobweb(lambda x: g(x, alpha, beta), y, g_y)
# configura gráfica interactiva
interact(plot_cobg,
x=widgets.FloatRangeSlider(min=0.01, max=3, step=0.01,
value=[0.02, 3],
continuous_update=False),
alpha=widgets.FloatSlider(min=0.001, max=30,step=0.01,
value=12, continuous_update=False),
beta=widgets.FloatSlider(min=0.001, max=30,step=0.01,
value=7, continuous_update=False))
Explanation: La función
Esta ecuación por Glass y Pasternack (1978) sirve para modelar redes neuronales y de interacción génica.
$$x_{t+1}=\frac{\alpha x_{t}}{1+\beta x_{t}}$$
Donde $\alpha$ y $\beta$ son números positivos y $x_{t}\geq0$.
End of explanation
# primera iterada
f0 = (alpha*x)/(1+beta*x)
Eq(f(x),f0)
# segunda iterada
# subs-tituye f0 en la x de f0 para generar f1
f1 = simplify(f0.subs(x, f0))
Eq(f(f(x)), f1)
# tercera iterada
f2 = simplify(f1.subs(x, f1))
Eq(f(f(f(x))), f2)
# cuarta iterada
f3 = simplify(f2.subs(x, f2))
Eq(f(f(f(f(x)))), f3)
# puntos fijos resolviendo la primera iterada
solveset(Eq(f1,x),x)
(alpha-1)/beta
Explanation: Búsqueda algebráica de puntos fijos
A continuación sustituiremos f(x) en x reiteradamente hasta obtener la cuarta iterada de f.
End of explanation
def solve_g(a, b):
y = list(np.linspace(0,float(list(solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)).pop()),2))
for t in range(30):
y.append(g(y[t], a, b))
zoom = plt.plot(y)
print("ultimos 15 de la serie:")
pprint(y[-15:])
print("\npuntos fijos:")
return solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)
# gráfica interactiva
interact(solve_g,
a=widgets.IntSlider(min=0, max=30, step=1,
value=11, continuous_update=False,
description='alpha'),
b=widgets.IntSlider(min=0, max=30, step=1,
value=5, continuous_update=False,
description='beta'))
Explanation: Punto fijo oscilatorio
Al configurar $$\alpha, \beta$$ de modo que haya un punto fijo la serie de tiempo revela una oscilación entre cero y el punto fijo.
End of explanation
# con alfa=1 y beta=1
Eq(collect(f3, x), x/(x+1))
def plot_g(x, alpha, beta):
pprint(x)
y = np.linspace(x[0],x[1],300)
g_y = g(y, alpha, beta)
fig1 = plt.plot(y, g_y)
fig1 = plt.plot(y, y, color='red')
plt.axis('equal')
interact(plot_g,
x=widgets.FloatRangeSlider(min=0, max=30, step=0.01, value=[0,1], continuous_update=False),
alpha=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False),
beta=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False))
Explanation: ¿Qué pasará con infinitas iteraciones?
Todo parece indicar que la función converge a 1 si $\alpha=1$ y $\beta=1$.
Si no, converge a $\frac{\alpha}{\beta}$
End of explanation |
11,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:47
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
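As an illustration only (the actual entry depends on the model being documented), a group whose dynamical core uses the shallow ice approximation could record that choice as follows; the string must be one of the valid choices listed in the cell above:
# Illustrative value only - replace with the approximation(s) actually used by your model.
DOC.set_value("SIA")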
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
11,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to run the /home/test/indras_net/models/flocking model.
First we import all necessary files.
Step1: We then initialize global variables.
Step2: Next we call the set_up function to set up the environment, groups, and agents of the model.
Step3: You can run the model N periods by typing the number you want in the following function and then running it.
Step4: You can view the position of all of the agents in space with the following command
Step5: You can view the line graph through the following command | Python Code:
from models.flocking import set_up
Explanation: How to run the /home/test/indras_net/models/flocking model.
First we import all necessary files.
End of explanation
from indra.agent import Agent, X, Y
from indra.composite import Composite
from indra.display_methods import BLUE, TREE
from indra.env import Env
from indra.registry import get_registration
from indra.space import DEF_HEIGHT, DEF_WIDTH, distance
from indra.utils import get_props
MODEL_NAME = "flocking"
DEBUG = False # turns debugging code on or off
DEBUG2 = False # turns deeper debugging code on or off
BIRD_GROUP = "Birds"
DEF_NUM_BIRDS = 2
DEF_DESIRED_DISTANCE = 2
ACCEPTABLE_DEV = .05
BIRD_MAX_MOVE = 1
HALF_CIRCLE = 180
FULL_CIRCLE = 360
flock = None
the_sky = None
Explanation: We then initialize global variables.
End of explanation
(the_sky, flock) = set_up()
Explanation: Next we call the set_up function to set up the environment, groups, and agents of the model.
End of explanation
the_sky.runN()
Explanation: You can run the model N periods by typing the number you want in the following function and then running it.
End of explanation
the_sky.scatter_graph()
Explanation: You can view the position of all of the agents in space with the following command:
End of explanation
the_sky.line_graph()
Explanation: You can view the line graph through the following command:
End of explanation |
11,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Rabi oscillation experiment
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following things
Step3: For this experiment you only need one qubit and you can just pick whichever one you like.
Step4: Once you've chosen your qubit you can build circuits that use it.
Step5: Now you can simulate sampling from your circuit using cirq.Simulator.
Step6: You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.
Step7: You can also examine the outputs from a noisy environment.
For example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit
Step8: 2. Parameterized Circuits and Sweeps
Now that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle $\theta$
Step9: In the above block you saw that there is a sympy.Symbol that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct cirq.Circuit objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with values later on.
Now if you wanted to use cirq.simulate or cirq.sample with the parameterized circuit you would also need to specify a value for theta.
Step10: You can also specify multiple values of theta, and get samples back for each value.
Step11: Cirq has shorthand notation you can use to sweep theta over a range of values.
Step12: The result value being returned by sim.sample is a pandas.DataFrame object.
Pandas is a common library for working with table data in python.
You can use standard pandas methods to analyze and summarize your results.
Step13: 3. The ReCirq experiment
ReCirq comes with a pre-written Rabi oscillation experiment recirq.benchmarks.rabi_oscillations, which performs the steps outlined at the start of this tutorial to create a circuit that exhibits Rabi Oscillations or Rabi Cycles.
This method takes a cirq.Sampler, which could be a simulator or a network connection to real hardware, as well as a qubit to test and two iteration parameters, num_points and repetitions. It then runs repetitions many experiments on the provided sampler, where each experiment is a circuit that rotates the chosen qubit by some $\theta$ Rabi angle around the $X$ axis (by applying an exponentiated $X$ gate). The result is a sequence of the expected probabilities of the chosen qubit at each of the Rabi angles.
Step14: Notice that you can tell from the plot that you used the noisy simulator you defined earlier.
You can also tell that the amount of depolarization is roughly 10%.
4. Exercise | Python Code:
# @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
import recirq
except ImportError:
!pip install -U pip
!pip install --quiet cirq
!pip install --quiet git+https://github.com/quantumlib/ReCirq
import cirq
import recirq
import numpy as np
import cirq_google
Explanation: Rabi oscillation experiment
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/benchmarks/rabi_oscillations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/benchmarks/rabi_oscillations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/benchmarks/rabi_oscillations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/benchmarks/rabi_oscillations.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
working_device = cirq_google.Sycamore
print(working_device)
Explanation: In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following things:
Prepare the $|0\rangle$ state.
Rotate by an angle $\theta$ around the $X$ axis.
Measure to see if the result is a 1 or a 0.
Repeat steps 1-3 $k$ times.
Report the fraction of $\frac{\text{Number of 1's}}{k}$
found in step 3.
1. Getting to know Cirq
Cirq emphasizes the details of implementing quantum algorithms on near term devices.
For example, when you work on a qubit in Cirq you don't operate on an unspecified qubit that will later be mapped onto a device by a hidden step.
Instead, you are always operating on specific qubits at specific locations that you specify.
Suppose you are working with a 54 qubit Sycamore chip.
This device is included in Cirq by default.
It is called cirq_google.Sycamore, and you can see its layout by printing it.
End of explanation
my_qubit = cirq.GridQubit(5, 6)
Explanation: For this experiment you only need one qubit and you can just pick whichever one you like.
End of explanation
from cirq.contrib.svg import SVGCircuit
# Create a circuit that rotates the qubit pi/2 radians around the X axis and then measures it.
my_circuit = cirq.Circuit(
# Rotate the qubit pi/2 radians around the X axis.
cirq.rx(np.pi / 2).on(my_qubit),
# Measure the qubit.
cirq.measure(my_qubit, key="out"),
)
SVGCircuit(my_circuit)
Explanation: Once you've chosen your qubit you can build circuits that use it.
End of explanation
sim = cirq.Simulator()
samples = sim.sample(my_circuit, repetitions=10)
Explanation: Now you can simulate sampling from your circuit using cirq.Simulator.
End of explanation
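To connect this with step 5 of the experiment described above, the fraction of 1s can be read straight off the returned pandas DataFrame. This is a small illustrative addition that assumes the measurement key 'out' defined when building the circuit:
# Fraction of repetitions that measured 1 (the 'out' column holds the measurement results).
fraction_of_ones = samples['out'].mean()
fraction_of_ones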
state_vector_before_measurement = sim.simulate(my_circuit[:-1])
sampled_state_vector_after_measurement = sim.simulate(my_circuit)
print(f"State before measurement:")
print(state_vector_before_measurement)
print(f"State after measurement:")
print(sampled_state_vector_after_measurement)
Explanation: You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.
End of explanation
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.1))
noisy_post_measurement_state = noisy_sim.simulate(my_circuit)
noisy_pre_measurement_state = noisy_sim.simulate(my_circuit[:-1])
print("Noisy state after measurement:" + str(noisy_post_measurement_state))
print("Noisy state before measurement:" + str(noisy_pre_measurement_state))
Explanation: You can also examine the outputs from a noisy environment.
For example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit:
End of explanation
import sympy
theta = sympy.Symbol("theta")
parameterized_circuit = cirq.Circuit(
cirq.rx(theta).on(my_qubit), cirq.measure(my_qubit, key="out")
)
SVGCircuit(parameterized_circuit)
Explanation: 2. Parameterized Circuits and Sweeps
Now that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle $\theta$:
End of explanation
sim.sample(parameterized_circuit, params={theta: 2}, repetitions=10)
Explanation: In the above block you saw that there is a sympy.Symbol that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct cirq.Circuit objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with values later on.
Now if you wanted to use cirq.simulate or cirq.sample with the parameterized circuit you would also need to specify a value for theta.
End of explanation
sim.sample(parameterized_circuit, params=[{theta: 0.5}, {theta: np.pi}], repetitions=10)
Explanation: You can also specify multiple values of theta, and get samples back for each value.
End of explanation
sim.sample(
parameterized_circuit,
params=cirq.Linspace(theta, start=0, stop=np.pi, length=5),
repetitions=5,
)
Explanation: Cirq has shorthand notation you can use to sweep theta over a range of values.
End of explanation
import pandas
big_results = sim.sample(
parameterized_circuit,
params=cirq.Linspace(theta, start=0, stop=np.pi, length=20),
repetitions=10_000,
)
# big_results is too big to look at. Plot cross tabulated data instead.
pandas.crosstab(big_results.theta, big_results.out).plot()
Explanation: The result value being returned by sim.sample is a pandas.DataFrame object.
Pandas is a common library for working with table data in python.
You can use standard pandas methods to analyze and summarize your results.
End of explanation
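An alternative summary, shown here only as a sketch, is to group by the rotation angle and average the measurement results, which estimates the excited-state probability at each theta:
# Empirical probability of measuring 1 for each value of theta.
prob_one = big_results.groupby('theta')['out'].mean()
prob_one.plot()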
import datetime
from recirq.benchmarks import rabi_oscillations
result = rabi_oscillations(
sampler=noisy_sim, qubit=my_qubit, num_points=50, repetitions=10000
)
result.plot()
Explanation: 3. The ReCirq experiment
ReCirq comes with a pre-written Rabi oscillation experiment recirq.benchmarks.rabi_oscillations, which performs the steps outlined at the start of this tutorial to create a circuit that exhibits Rabi Oscillations or Rabi Cycles.
This method takes a cirq.Sampler, which could be a simulator or a network connection to real hardware, as well as a qubit to test and two iteration parameters, num_points and repetitions. It then runs repetitions many experiments on the provided sampler, where each experiment is a circuit that rotates the chosen qubit by some $\theta$ Rabi angle around the $X$ axis (by applying an exponentiated $X$ gate). The result is a sequence of the expected probabilities of the chosen qubit at each of the Rabi angles.
End of explanation
import hashlib
class SecretNoiseModel(cirq.NoiseModel):
def noisy_operation(self, op):
# Hey! No peeking!
q = op.qubits[0]
v = hashlib.sha256(str(q).encode()).digest()[0] / 256
yield cirq.depolarize(v).on(q)
yield op
secret_noise_sampler = cirq.DensityMatrixSimulator(noise=SecretNoiseModel())
q = cirq_google.Sycamore.qubits[3]
print("qubit", repr(q))
rabi_oscillations(sampler=secret_noise_sampler, qubit=q).plot()
Explanation: Notice that you can tell from the plot that you used the noisy simulator you defined earlier.
You can also tell that the amount of depolarization is roughly 10%.
4. Exercise: Find the best qubit
As you have seen, you can use Cirq to perform a Rabi oscillation experiment.
You can either make the experiment yourself out of the basic pieces made available by Cirq, or use the prebuilt experiment method.
Now you're going to put this knowledge to the test.
There is some amount of depolarizing noise on each qubit.
Your goal is to characterize every qubit from the Sycamore chip using a Rabi oscillation experiment, and find the qubit with the lowest noise according to the secret noise model.
End of explanation |
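One possible starting point for the exercise (a sketch only, not the intended solution) is to loop over a few qubits of the chip, run the prebuilt experiment against the secret noisy sampler, and compare the resulting oscillation curves by eye; picking the best qubit programmatically is left to you:
# Sketch: characterize a handful of qubits and inspect their Rabi curves.
for candidate in cirq_google.Sycamore.qubits[:4]:
    print("qubit", repr(candidate))
    rabi_oscillations(sampler=secret_noise_sampler, qubit=candidate,
                      num_points=25, repetitions=1000).plot()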
11,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sell-short-in-may-and-go-away
see
Step1: Some global data
Step2: Define Strategy Class
Step3: Run Strategy
Step4: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step5: Plot Equity Curves
Step6: Plot Trades
Step7: Bar Graph | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
# Format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
pf.DEBUG = False
Explanation: sell-short-in-may-and-go-away
see: https://en.wikipedia.org/wiki/Sell_in_May
The reason for this example is to demonstrate short selling (algo) and short selling using the adjust_percent function (algo2).
algo - Sell short in May and go away, buy to cover in Nov
algo2 - first trading day of the month, adjust position to 50%
(Select the one you want to call in the Strategy.run() function.)
End of explanation
#symbol = '^GSPC'
symbol = 'SPY'
capital = 10000
start = datetime.datetime(2015, 10, 30)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now()
Explanation: Some global data
End of explanation
class Strategy:
def __init__(self, symbol, capital, start, end):
self.symbol = symbol
self.capital = capital
self.start = start
self.end = end
self.ts = None
self.tlog = None
self.dbal = None
self.stats = None
def _algo(self):
pf.TradeLog.cash = self.capital
for i, row in enumerate(self.ts.itertuples()):
date = row.Index.to_pydatetime()
close = row.close;
end_flag = pf.is_last_row(self.ts, i)
shares = 0
# Buy to cover (at the open on first trading day in Nov)
if self.tlog.shares > 0:
if (row.month == 11 and row.first_dotm) or end_flag:
shares = self.tlog.buy2cover(date, row.open)
# Sell short (at the open on first trading day in May)
else:
if row.month == 5 and row.first_dotm:
shares = self.tlog.sell_short(date, row.open)
if shares > 0:
pf.DBG("{0} SELL SHORT {1} {2} @ {3:.2f}".format(
date, shares, self.symbol, row.open))
elif shares < 0:
pf.DBG("{0} BUY TO COVER {1} {2} @ {3:.2f}".format(
date, -shares, self.symbol, row.open))
# Record daily balance
self.dbal.append(date, close)
def _algo2(self):
pf.TradeLog.cash = self.capital
for i, row in enumerate(self.ts.itertuples()):
date = row.Index.to_pydatetime()
close = row.close;
end_flag = pf.is_last_row(self.ts, i)
shares = 0
# On the first day of the month, adjust short position to 50%
if (row.first_dotm or end_flag):
weight = 0 if end_flag else 0.5
self.tlog.adjust_percent(date, close, weight, pf.Direction.SHORT)
# Record daily balance
self.dbal.append(date, close)
def run(self):
self.ts = pf.fetch_timeseries(self.symbol)
self.ts = pf.select_tradeperiod(self.ts, self.start, self.end,
use_adj=True)
# add calendar columns
self.ts = pf.calendar(self.ts)
self.tlog = pf.TradeLog(self.symbol)
self.dbal = pf.DailyBal()
self.ts, self.start = pf.finalize_timeseries(self.ts, self.start)
# Pick either algo or algo2
self._algo()
#self._algo2()
self._get_logs()
self._get_stats()
def _get_logs(self):
self.rlog = self.tlog.get_log_raw()
self.tlog = self.tlog.get_log()
self.dbal = self.dbal.get_log(self.tlog)
def _get_stats(self):
self.stats = pf.stats(self.ts, self.tlog, self.dbal, self.capital)
Explanation: Define Strategy Class
End of explanation
s = Strategy(symbol, capital, start, end)
s.run()
s.rlog.head()
s.tlog.head()
s.dbal.tail()
Explanation: Run Strategy
End of explanation
benchmark = pf.Benchmark(symbol, s.capital, s.start, s.end)
benchmark.run()
Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
End of explanation
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
Explanation: Plot Equity Curves: Strategy vs Benchmark
End of explanation
pf.plot_trades(s.dbal, benchmark=benchmark.dbal)
Explanation: Plot Trades
End of explanation
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
Explanation: Bar Graph: Strategy vs Benchmark
End of explanation |
11,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Acquire the Data
Finding Data Sources
There are three place to get onion price and quantity information by market.
Agmarket - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India and provides daily price and arrival data for all agricultural commodities at national and state level. Unfortunately, the link to get Market-wise Daily Report for Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is like to require an involved scraper to get the data. Too much effort - Move on. Here is the best link to go to get what is available - http
Step1: Exercise #2
Find the exact table of data we want in the list of AllTables?
Get the exact table
To read the exact table we need to pass in an identifier value which would identify the table. We can use the attrs parameter in read_html to do so. The parameter we will pass is the id variable
Step2: However, we have not got the header correctly in our dataframe. Let us see if we can fix this.
To get help on any function just use ?? before the function to help. Run this function and see what additional parameter you need to define to get the header correctly
Step3: Exercise #3
Read the html file again and ensure that the correct header is identifed by pandas?
Step4: Show the top five rows of the dataframe you have read to ensure the headers are now correct.
Step5: Dataframe Viewing
Step6: Downloading the Entire Month Wise Arrival Data | Python Code:
# Import the library we need, which is Pandas
import pandas as pd
# Read all the tables from the html document
AllTables = pd.read_html('MonthWiseMarketArrivalsJan2016.html')
# Let us find out how many tables it has found
len(AllTables)
Explanation: 2. Acquire the Data
Finding Data Sources
There are three places to get onion price and quantity information by market.
Agmarket - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India and provides daily price and arrival data for all agricultural commodities at national and state level. Unfortunately, the link to get Market-wise Daily Report for Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is likely to require an involved scraper to get the data. Too much effort - Move on. Here is the best link to go to get what is available - http://agmarknet.nic.in/agnew/NationalBEnglish/SpecificCommodityWeeklyReport.aspx?ss=1
Data.gov.in - This is normally a good place to get government data in a machine readable form like csv or xml. The Variety-wise Daily Market Prices Data of Onion is available for each year as an XML but unfortunately it does not include quantity information that is needed. It would be good to have both price and quantity - so even though this is easy, lets see if we can get both from a different source. Here is the best link to go to get what is available - https://data.gov.in/catalog/variety-wise-daily-market-prices-data-onion#web_catalog_tabs_block_10
NHRDF - This is the website of National Horticultural Research & Development Foundation and maintains a database on Market Arrivals and Price, Area and Production and Export Data for three commodities - Garlic, Onion and Potatoes. We are in luck! It also has data from 1996 onwards and has only got one form to fill to get the data in a tabular form. Further it also has production and export data. Excellent. Let's use this. Here is the best link to go to get all that is available - http://nhrdf.org/en-us/DatabaseReports
Scraping the Data
Ways to Scrape Data
Now we can do this in two different levels of sophistication
Automate the form filling process: The form on this page looks simple. But viewing the source in the browser shows the form has hidden fields, and we will need to access it as a browser to get the session fields and then submit the form. This is a little bit more complicated than simply scraping a table on a webpage.
Manually fill the form: What if we manually fill the form with the desired form fields and then save the page as a html file. Then we can read this file and just scrape the table from it. Lets go with the simple way for now.
Scraping - Manual Form Filling
So let us fill the form to get a small subset of data and test our scraping process. We will start by getting the Monthwise Market Arrivals.
Crop Name: Onion
Month: January
Market: All
Year: 2016
The saved webpage is available at MonthWiseMarketArrivalsJan2016.html
Understand the HTML Structure
We need to scrape data from this html page... So let us try to understand the structure of the page.
You can view the source of the page - typically Right Click and View Source on any browser and that would give you the source HTML for any page.
You can open the developer tools in your browser and investigate the structure as you mouse over the page
We can use a tool like Selector Gadget to understand the ids and classes used in the web page
Our data is under the <table> tag
Exercise #1
Find the number of tables in the HTML Structure of MonthWiseMarketArrivalsJan2016.html?
Find all the Tables
End of explanation
# So can we read our exact table
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html',
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
# So how many tables have we got now
len(OneTable)
# Show the table of data identifed by pandas with just the first five rows
OneTable[0].head()
Explanation: Exercise #2
Find the exact table of data we want in the list of AllTables?
Get the exact table
To read the exact table we need to pass in an identifier value which would identify the table. We can use the attrs parameter in read_html to do so. The parameter we will pass is the id variable
End of explanation
??pd.read_html
Explanation: However, we have not got the header correctly in our dataframe. Let us see if we can fix this.
To get help on any function, just use ?? before the function. Run this and see what additional parameter you need to define to get the header correctly
End of explanation
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', header = 0,
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
Explanation: Exercise #3
Read the html file again and ensure that the correct header is identifed by pandas?
End of explanation
OneTable[0].head()
Explanation: Show the top five rows of the dataframe you have read to ensure the headers are now correct.
End of explanation
# Let us store the dataframe in a df variable. You will see that as a very common convention in data science pandas use
df = OneTable[0]
# Shape of the dataset - number of rows & number of columns in the dataframe
df.shape
# Get the names of all the columns
df.columns
# Can we see sample rows - the top 5 rows
df.head()
# Can we see sample rows - the bottom 5 rows
df.tail()
# Can we access a specific column
df["Market"]
# Using the dot notation
df.Market
# Selecting specific column and rows
df[0:5]["Market"]
# Works both ways
df["Market"][0:5]
#Getting unique values of State
pd.unique(df['Market'])
Explanation: Dataframe Viewing
End of explanation
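As a small additional example (not in the original notebook), the usual pandas summaries also work on this table, for instance counting how many rows each market contributes in the January 2016 extract:
# How many rows does each market contribute?
df['Market'].value_counts().head()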
AllTable = pd.read_html('MonthWiseMarketArrivals.html', header = 0,
attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
AllTable[0].head()
??pd.DataFrame.to_csv
AllTable[0].columns
# Change the column names to simpler ones
AllTable[0].columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod']
AllTable[0].head()
# Save the dataframe to a csv file
AllTable[0].to_csv('MonthWiseMarketArrivals.csv', index = False)
Explanation: Downloading the Entire Month Wise Arrival Data
End of explanation |
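As a quick optional sanity check, the exported file can be read back to confirm the save worked and the shape matches the scraped table:
# Read the exported file back and confirm the shape.
check = pd.read_csv('MonthWiseMarketArrivals.csv')
check.shape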
11,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Category analysis based on analysis of variance (ANOVA)
When an independent variable in a regression analysis is categorical, the value of the continuous dependent variable y changes with the category value. In this case, analysis of variance (ANOVA) can be used to quantify the effect of the category value. Because this can also be viewed as the regression model changing with the category value, it can be used for model comparison as well.
Categorical independent variables and dummy variables
A category value represents one of several distinct states. For convenience it is coded as an integer such as 0 or 1 during analysis, but even when category values are written as numbers like 1, 2, 3 they are merely numeric stand-ins for labels such as "A", "B", "C" and carry no notion of magnitude. That is, a value of 2 does not mean twice as large as 1, and a value of 3 does not mean three times as large as 1.
Therefore, if category values are used directly as integers, the regression model risks treating them as numbers with magnitude, so they must be converted into dummy variables, for example via one-hot encoding.
A dummy variable is an independent variable that takes only the values 0 or 1 and indicates whether a certain factor is present or absent. It is also known by the following names:
indicator variable
design variable
Boolean indicator
binary variable
treatment
Step1: Dummy variables and model comparison
Using dummy variables is effectively the same as fitting several regression models at once.
Dummy variable example 1
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 $$
If $D_2 = 0, D_3 = 0$ then $Y = \alpha_{1}$
If $D_2 = 1, D_3 = 0$ then $Y = \alpha_{1} + \alpha_{2}$
If $D_2 = 0, D_3 = 1$ then $Y = \alpha_{1} + \alpha_{3}$
<img src="https
Step2: Model comparison using ANOVA
To assess the effect of a dummy variable with $K$ category values, we can use analysis of variance, which compares multiple models via an F-test.
In this case the variances used in the ANOVA have the following meanings.
ESS | Python Code:
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
x0 = np.random.choice(3, 10)
x0
encoder.fit(x0[:, np.newaxis])
X = encoder.transform(x0[:, np.newaxis]).toarray()
X
dfX = pd.DataFrame(X, columns=encoder.active_features_)
dfX
Explanation: Category analysis based on analysis of variance (ANOVA)
When an independent variable in a regression analysis is categorical, the value of the continuous dependent variable y changes with the category value. In this case, analysis of variance (ANOVA) can be used to quantify the effect of the category value. Because this can also be viewed as the regression model changing with the category value, it can be used for model comparison as well.
Categorical independent variables and dummy variables
A category value represents one of several distinct states. For convenience it is coded as an integer such as 0 or 1 during analysis, but even when category values are written as numbers like 1, 2, 3 they are merely numeric stand-ins for labels such as "A", "B", "C" and carry no notion of magnitude. That is, a value of 2 does not mean twice as large as 1, and a value of 3 does not mean three times as large as 1.
Therefore, if category values are used directly as integers, the regression model risks treating them as numbers with magnitude, so they must be converted into dummy variables, for example via one-hot encoding.
A dummy variable is an independent variable that takes only the values 0 or 1 and indicates whether a certain factor is present or absent. It is also known by the following names:
indicator variable
design variable
Boolean indicator
binary variable
treatment
End of explanation
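For comparison, pandas offers a one-line alternative to scikit-learn's OneHotEncoder; this is just an illustrative aside applied to the same x0 array:
# pd.get_dummies builds the same kind of dummy-variable matrix directly from the labels.
pd.get_dummies(x0)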
from sklearn.datasets import load_boston
boston = load_boston()
dfX0_boston = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"])
import statsmodels.api as sm
dfX_boston = sm.add_constant(dfX0_boston)
df_boston = pd.concat([dfX_boston, dfy_boston], axis=1)
df_boston.tail()
dfX_boston.CHAS.plot()
dfX_boston.CHAS.unique()
model = sm.OLS(dfy_boston, dfX_boston)
result = model.fit()
print(result.summary())
params1 = result.params.drop("CHAS")
params1
params2 = params1.copy()
params2["const"] += result.params["CHAS"]
params2
df_boston.boxplot("MEDV", "CHAS")
plt.show()
sns.stripplot(x="CHAS", y="MEDV", data=df_boston, jitter=True, alpha=.3)
sns.pointplot(x="CHAS", y="MEDV", data=df_boston, dodge=True, color='r')
plt.show()
Explanation: Dummy variables and model comparison
Using dummy variables is effectively the same as fitting several regression models at once.
Dummy variable example 1
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 $$
If $D_2 = 0, D_3 = 0$ then $Y = \alpha_{1}$
If $D_2 = 1, D_3 = 0$ then $Y = \alpha_{1} + \alpha_{2}$
If $D_2 = 0, D_3 = 1$ then $Y = \alpha_{1} + \alpha_{3}$
<img src="https://upload.wikimedia.org/wikipedia/commons/6/61/Anova_graph.jpg" style="width:70%; margin: 0 auto 0 auto;">
Dummy variable example 2
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 + \alpha_{4} X $$
If $D_2 = 0, D_3 = 0$ then $Y = \alpha_{1} + \alpha_{4} X$
If $D_2 = 1, D_3 = 0$ then $Y = \alpha_{1} + \alpha_{2} + \alpha_{4} X$
If $D_2 = 0, D_3 = 1$ then $Y = \alpha_{1} + \alpha_{3} + \alpha_{4} X$
<img src="https://upload.wikimedia.org/wikipedia/commons/2/20/Ancova_graph.jpg" style="width:70%; margin: 0 auto 0 auto;">
Dummy variable example 3
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 + \alpha_{4} X + \alpha_{5} D_4 X + \alpha_{6} D_5 X $$
If $D_2 = 0, D_3 = 0, D_4 = 0, D_5 = 0$ then $Y = \alpha_{1} + \alpha_{4} X$
If $D_2 = 1, D_3 = 0, D_4 = 1, D_5 = 0$ then $Y = \alpha_{1} + \alpha_{2} + (\alpha_{4} + \alpha_{5}) X$
If $D_2 = 0, D_3 = 1, D_4 = 0, D_5 = 1$ then $Y = \alpha_{1} + \alpha_{3} + (\alpha_{4} + \alpha_{6}) X$
<img src="https://docs.google.com/drawings/d/1U1ahMIzvOq74T90ZDuX5YOQJ0YnSJmUhgQhjhV4Xj6c/pub?w=1428&h=622" style="width:90%; margin: 0 auto 0 auto;">
Dummy variable example 4: Boston dataset
End of explanation
import statsmodels.api as sm
model = sm.OLS.from_formula("MEDV ~ C(CHAS)", data=df_boston)
result = model.fit()
table = sm.stats.anova_lm(result)
table
model1 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT", data=df_boston)
model2 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT + C(CHAS)", data=df_boston)
result1 = model1.fit()
result2 = model2.fit()
table = sm.stats.anova_lm(result1, result2)
table
Explanation: Model comparison using ANOVA
To assess the effect of a dummy variable with $K$ category values, we can use analysis of variance, which compares multiple models via an F-test.
In this case the variances used in the ANOVA have the following meanings.
ESS: variance of the group means (Between-Group Variance)
$$ BSS = \sum_{k=1}^K (\bar{x} - \bar{x}_k)^2 $$
RSS: sum of the error variances within each group (Within-Group Variance)
$$ WSS = \sum_{k=1}^K \sum_{i} (x_{i} - \bar{x}_k)^2 $$
TSS: variance of the total error
$$ TSS = \sum_{i} (x_{i} - \bar{x})^2 $$
| | source | degree of freedom | mean square | F statistics |
|-|-|-|-|-|
| Between | $$\text{BSS}$$ | $$K-1$$ | $$\dfrac{\text{ESS}}{K-1}$$ | $$F$$ |
| Within | $$\text{WSS}$$ | $$N-K$$ | $$\dfrac{\text{RSS}}{N-K}$$ |
| Total | $$\text{TSS}$$ | $$N-1$$ | $$\dfrac{\text{TSS}}{N-1}$$ |
| $R^2$ | $$\text{BSS} / \text{TSS}$$ |
The null hypothesis of this F-test is $\text{BSS}=0$, i.e. $\text{WSS}=\text{TSS}$, which is the case where there is no difference between the groups.
End of explanation |
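To make the table above concrete, the between-group and within-group sums of squares for MEDV split by CHAS can also be computed by hand. This is only a sketch and uses the usual group-size-weighted form of the between-group sum of squares:
grand_mean = df_boston["MEDV"].mean()
groups = df_boston.groupby("CHAS")["MEDV"]
bss = sum(len(g) * (g.mean() - grand_mean) ** 2 for _, g in groups)
wss = sum(((g - g.mean()) ** 2).sum() for _, g in groups)
tss = ((df_boston["MEDV"] - grand_mean) ** 2).sum()
K, N = groups.ngroups, len(df_boston)
F = (bss / (K - 1)) / (wss / (N - K))
bss, wss, tss, F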
11,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This IPython notebook illustrates how to perform matching using the rule-based matcher.
First, we need to import py_entitymatching package and other libraries as follows
Step1: Then, read the (sample) input tables for matching purposes.
Step2: Then, split the labeled data into development set and evaluation set. Use the development set to select the best learning-based matcher
Step3: Creating and Using a Rule-Based Matcher
This, typically involves the following steps
Step4: Creating Features
Next, we need to create a set of features for the development set. Magellan provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
Step5: We observe that there were 20 features generated. As a first step, lets say that we decide to use only 'year' related features.
Step6: Adding Rules
Before we can use the rule-based matcher, we need to create rules to evaluate tuple pairs. Each rule is a list of strings. Each string specifies a conjunction of predicates. Each predicate has three parts
Step7: Using the Matcher to Predict Results
Now that our rule-based matcher has some rules, we can use it to predict whether a tuple pair is actually a match. Each rule is is a conjunction of predicates and will return True only if all the predicates return True. The matcher is then a disjunction of rules and if any one of the rules return True, then the tuple pair will be a match. | Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
Explanation: Introduction
This IPython notebook illustrates how to perform matching using the rule-based matcher.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
path_A = datasets_dir + os.sep + 'dblp_demo.csv'
path_B = datasets_dir + os.sep + 'acm_demo.csv'
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
# Load the pre-labeled data
S = em.read_csv_metadata(path_labeled_data,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
S.head()
Explanation: Then, read the (sample) input tables for matching purposes.
End of explanation
# Split S into I and J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
Explanation: Then, split the labeled data into development set and evaluation set. Use the development set to select the best learning-based matcher
End of explanation
brm = em.BooleanRuleMatcher()
Explanation: Creating and Using a Rule-Based Matcher
This, typically involves the following steps:
1. Creating the rule-based matcher
2. Creating features
3. Adding Rules
4. Using the Matcher to Predict Results
Creating the Rule-Based Matcher
End of explanation
# Generate a set of features
F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
Explanation: Creating Features
Next, we need to create a set of features for the development set. Magellan provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
End of explanation
F.feature_name
Explanation: We observe that there were 20 features generated. As a first step, let's say that we decide to use only 'year' related features.
End of explanation
# Add two rules to the rule-based matcher
# The first rule has two predicates, one comparing the titles and the other looking for an exact match of the years
brm.add_rule(['title_title_lev_sim(ltuple, rtuple) > 0.4', 'year_year_exm(ltuple, rtuple) == 1'], F)
# This second rule compares the authors
brm.add_rule(['authors_authors_lev_sim(ltuple, rtuple) > 0.4'], F)
brm.get_rule_names()
# Rules can also be deleted from the rule-based matcher
brm.delete_rule('_rule_1')
Explanation: Adding Rules
Before we can use the rule-based matcher, we need to create rules to evaluate tuple pairs. Each rule is a list of strings. Each string specifies a conjunction of predicates. Each predicate has three parts: (1) an expression, (2) a comparison operator, and (3) a value. The expression is evaluated over a tuple pair, producing a numeric value.
End of explanation
brm.predict(S, target_attr='pred_label', append=True)
S
Explanation: Using the Matcher to Predict Results
Now that our rule-based matcher has some rules, we can use it to predict whether a tuple pair is actually a match. Each rule is a conjunction of predicates and will return True only if all the predicates return True. The matcher is then a disjunction of rules, and if any one of the rules returns True, then the tuple pair will be a match.
End of explanation |
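If the labeled set also carries the hand-labeled match decisions, the predictions can be scored with py_entitymatching's evaluation helper. The column name 'label' below is an assumption about how the gold labels are stored in labeled_data_demo.csv; adjust it to the actual column name:
# Evaluate predicted labels against the gold labels (gold column name assumed to be 'label').
eval_result = em.eval_matches(S, 'label', 'pred_label')
em.print_eval_summary(eval_result)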
11,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: Step 0 - hyperparams
Step2: Step 1 - collect data (and/or generate them)
Step3: Step 2 - Build model
Step4: Step 3 training the network
Step5: TODO Co integration
https | Python Code:
from __future__ import division
import tensorflow as tf
from os import path
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from IPython.display import Image
from IPython.core.display import HTML
from mylibs.tf_helper import getDefaultGPUconfig
from data_providers.binary_shifter_varlen_data_provider import \
BinaryShifterVarLenDataProvider
from data_providers.price_history_varlen_data_provider import PriceHistoryVarLenDataProvider
from models.model_05_price_history_rnn_varlen import PriceHistoryRnnVarlen
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
%matplotlib inline
from common import get_or_run_nn
Explanation: https://r2rt.com/recurrent-neural-networks-in-tensorflow-iii-variable-length-sequences.html
End of explanation
num_epochs = 10
series_max_len = 60
num_features = 1 #just one here, the function we are predicting is one-dimensional
state_size = 400
target_len = 30
batch_size = 47
Explanation: Step 0 - hyperparams
End of explanation
csv_in = '../price_history_03a_fixed_width.csv'
npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'
# XX, YY, sequence_lens, seq_mask = PriceHistoryVarLenDataProvider.createAndSaveDataset(
# csv_in=csv_in,
# npz_out=npz_path,
# input_seq_len=60, target_seq_len=30)
# XX.shape, YY.shape, sequence_lens.shape, seq_mask.shape
dp = PriceHistoryVarLenDataProvider(filteringSeqLens = lambda xx : xx >= target_len,
npz_path=npz_path)
dp.inputs.shape, dp.targets.shape, dp.sequence_lengths.shape, dp.sequence_masks.shape
Explanation: Step 1 - collect data (and/or generate them)
End of explanation
model = PriceHistoryRnnVarlen(rng=random_state, dtype=dtype, config=config)
graph = model.getGraph(batch_size=batch_size, state_size=state_size,
target_len=target_len, series_max_len=series_max_len)
show_graph(graph)
Explanation: Step 2 - Build model
End of explanation
num_epochs, state_size, batch_size
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
from os.path import isdir
data_folder = '../../../../Dropbox/data'
assert isdir(data_folder)
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='001_plain_rnn_60to30', nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
sns.tsplot(data=dp.inputs[ind].flatten())
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
Explanation: Step 3 training the network
End of explanation
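As a small addition not in the original run, the tuple returned by statsmodels' coint can be unpacked to make the test easier to read: the first element is the test statistic, the second the p-value, and the third the critical values.
# Unpack the Engle-Granger cointegration test output for the worst-scoring series.
t_stat, p_value, crit_values = coint(preds, reals)
t_stat, p_value, crit_values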
num_epochs, state_size, batch_size
cost_func = PriceHistoryRnnVarlen.COST_FUNCS.MSE
def experiment():
dynStats, predictions_dict = model.run(epochs=num_epochs,
cost_func= cost_func,
state_size=state_size,
series_max_len=series_max_len,
target_len=target_len,
npz_path=npz_path,
batch_size=batch_size)
return dynStats, predictions_dict
dyn_stats, preds_dict = get_or_run_nn(experiment,
filename='001_plain_rnn_60to30_mse')
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
Explanation: TODO Co integration
https://en.wikipedia.org/wiki/Cointegration
https://www.quora.com/What-are-some-methods-to-check-similarities-between-two-time-series-data-sets
https://stackoverflow.com/questions/11362943/efficient-cointegration-test-in-python
Mean Squared Error (instead of huber loss)
End of explanation |
11,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the template below, make a widget view that displays text, possibly 'Hello World'.
Step1: Using the template below, make a color picker widget. This can be done in a few steps | Python Code:
%%javascript
delete requirejs.s.contexts._.defined.CustomViewModule;
define('CustomViewModule', ['jquery', 'widgets/js/widget'], function($, widget) {
var CustomView = widget.DOMWidgetView.extend({
});
return {CustomView: CustomView};
});
from IPython.html.widgets import DOMWidget
from IPython.display import display
from IPython.utils.traitlets import Unicode
class CustomWidget(DOMWidget):
_view_module = Unicode('CustomViewModule', sync=True)
_view_name = Unicode('CustomView', sync=True)
display(CustomWidget())
answer('2_1.js')
answer('2_1.py')
Explanation: Using the template below, make a widget view that displays text, possibly 'Hello World'.
End of explanation
from IPython.html.widgets import DOMWidget
from IPython.display import display
from IPython.utils.traitlets import Unicode
class ColorWidget(DOMWidget):
_view_module = Unicode('ColorViewModule', sync=True)
_view_name = Unicode('ColorView', sync=True)
%%javascript
delete requirejs.s.contexts._.defined.ColorViewModule;
define('ColorViewModule', ['jquery', 'widgets/js/widget'], function($, widget) {
var ColorView = widget.DOMWidgetView.extend({
});
return {ColorView: ColorView};
});
answer('2_2.py')
answer('2_2_1.js')
answer('2_2_2.js')
answer('2_2.js')
w = ColorWidget()
display(w)
display(w)
w.value = '#00FF00'
w.value
Explanation: Using the template below, make a color picker widget. This can be done in a few steps:
1. Add a synced traitlet to the Python class.
2. Add a render method that inserts a input element, with attribute type='color'. The easiest way to do this is to use jQuery.
3. Add a method that updates the color picker's value to the model's value. Use listenTo listen to changes of the model.
4. Listen to changes of the color picker's value, and update the model accordingly.
End of explanation |
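A minimal sketch of step 1 on the Python side (steps 2-4 belong in the JavaScript view above): add a synced colour traitlet to the widget class. The default '#FF0000' is an arbitrary choice.
# Sketch of step 1 only: a synced traitlet holding the current colour.
class ColorWidget(DOMWidget):
    _view_module = Unicode('ColorViewModule', sync=True)
    _view_name = Unicode('ColorView', sync=True)
    value = Unicode('#FF0000', sync=True)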
11,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
minimask mosaic example
Construct a mosaic of squares over the sky
Step1: Specify the location of the mask file to write
Step2: Construct a mask using a tile pattern with centers specified by the healpix grid.
Step3: Load the file as a mask object
Step4: Plot the mask on a mollweide projection using healpy.
Step5: Pixelize the mask onto the healpix grid | Python Code:
%matplotlib notebook
import os
import numpy as np
import tempfile
import matplotlib.pyplot as pyplot
import logging
logging.basicConfig(level=logging.INFO)
import minimask.mask as mask
import minimask.healpix_projection as hp
import minimask.io.mosaic as mosaic
Explanation: minimask mosaic example
Construct a mosaic of squares over the sky
End of explanation
filename = "masks/mosaic.txt"
try:
os.mkdir(os.path.dirname(filename))
except:
pass
Explanation: Specify the location of the mask file to write
End of explanation
tile = np.array([[[-0.5, -0.5],[0.5, -0.5],[0.5,0.5],[-0.5,0.5]]])*8
grid = hp.HealpixProjector(nside=4)
lon, lat = grid.pix2ang(np.arange(grid.npix))
centers = np.transpose([lon, lat])
mosaic.Mosaic(tile, centers).write(filename)
Explanation: Construct a mask using a tile pattern with centers specified by the healpix grid.
End of explanation
M = mask.Mask(filename)
print "The number of polygons in the mask is {}.".format(len(M))
Explanation: Load the file as a mask object
End of explanation
import healpy
healpy.mollview(title="")
for x,y in M.render(1):
healpy.projplot(x,y,lonlat=True)
Explanation: Plot the mask on a mollweide projection using healpy.
End of explanation
map = M.pixelize(nside=64, n=10, weight=False)
healpy.mollview(map, title="")
Explanation: Pixelize the mask onto the healpix grid
End of explanation |
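Since HEALPix pixels are equal-area, the fraction of the sky covered by the mosaic can be estimated directly from the pixelized map. This is an illustrative extra step that assumes pixelize returns one value per HEALPix pixel, as the mollview call above suggests:
# Fraction of sky touched by the mask (equal-area pixels).
sky_fraction = (map > 0).mean()
sky_fraction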
11,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module4- Lab1
Step1: Every 100 samples in the dataset, we save 1. If things run too slow, try increasing this number. If things run too fast, try decreasing it... =)
Step2: Load up the scanned armadillo
Step3: PCA
In the method below, write code to import the libraries required for PCA.
Then, train a PCA model on the passed in armadillo dataframe parameter. Lastly, project the armadillo down to the two principal components, by dropping one dimension.
NOTE-1
Step4: Preview the Data
Step5: Time Execution Speeds
Let's see how long it takes PCA to execute
Step6: Render the newly transformed PCA armadillo!
Step7: Let's also take a look at the speed of the randomized solver on the same dataset. It might be faster, it might be slower, or it might take exactly the same amount of time to execute
Step8: Let's see what the results look like | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
from plyfile import PlyData, PlyElement
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
Explanation: DAT210x - Programming with Python for DS
Module4- Lab1
End of explanation
reduce_factor = 100
Explanation: Every 100 samples in the dataset, we save 1. If things run too slow, try increasing this number. If things run too fast, try decreasing it... =)
End of explanation
plyfile = PlyData.read('Datasets/stanford_armadillo.ply')
armadillo = pd.DataFrame({
'x':plyfile['vertex']['z'][::reduce_factor],
'y':plyfile['vertex']['x'][::reduce_factor],
'z':plyfile['vertex']['y'][::reduce_factor]
})
Explanation: Load up the scanned armadillo:
End of explanation
def do_PCA(armadillo, svd_solver):
# .. your code here ..
return None
Explanation: PCA
In the method below, write code to import the libraries required for PCA.
Then, train a PCA model on the passed in armadillo dataframe parameter. Lastly, project the armadillo down to the two principal components, by dropping one dimension.
NOTE-1: Be sure to RETURN your projected armadillo rather than None! This projection will be stored in a NumPy NDArray rather than a Pandas dataframe. This is something Pandas does for you automatically =).
NOTE-2: Regarding the svd_solver parameter, simply pass that into your PCA model constructor as-is, e.g. svd_solver=svd_solver.
For additional details, please read through Decomposition - PCA.
End of explanation
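For reference, one possible completion of the exercise, assuming scikit-learn's PCA is acceptable here (try writing your own before peeking). It is given a different name so it does not overwrite your stub:
# A possible implementation sketch - project down to 2 principal components.
def do_PCA_example(armadillo, svd_solver):
    from sklearn.decomposition import PCA
    pca = PCA(n_components=2, svd_solver=svd_solver)
    return pca.fit_transform(armadillo)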
# Render the Original Armadillo
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Armadillo 3D')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.scatter(armadillo.x, armadillo.y, armadillo.z, c='green', marker='.', alpha=0.75)
Explanation: Preview the Data
End of explanation
%timeit pca = do_PCA(armadillo, 'full')
Explanation: Time Execution Speeds
Let's see how long it takes PCA to execute:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Full PCA')
ax.scatter(pca[:,0], pca[:,1], c='blue', marker='.', alpha=0.75)
plt.show()
Explanation: Render the newly transformed PCA armadillo!
End of explanation
%timeit rpca = do_PCA(armadillo, 'randomized')
Explanation: Let's also take a look at the speed of the randomized solver on the same dataset. It might be faster, it might be slower, or it might take exactly the same amount of time to execute:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Randomized PCA')
ax.scatter(rpca[:,0], rpca[:,1], c='red', marker='.', alpha=0.75)
plt.show()
Explanation: Let's see what the results look like:
End of explanation |
11,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Apache Beam works better with Python 2 at the moment, so we're going to work within the Python 2 kernel.
Step1: After doing a pip install, you have to Reset Session so that the new packages are picked up. Please click on the button in the above menu.
Step2: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads
Step4: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
Step5: Try the query above in https
Step6: Run pipeline locally
Step7: Run pipleline on cloud on a larger sample size.
Step8: Once the job completes, observe the files created in Google Cloud Storage
Step9: <h2> 4. Develop model with new inputs </h2>
Download the first shard of the preprocessed data to enable local development.
Step10: Complete the TODOs in taxifare/trainer/model.py so that the code below works.
Step11: <h2> 5. Train on cloud </h2> | Python Code:
%%bash
source activate py2env
conda install -y pytz
pip uninstall -y google-cloud-dataflow
pip install --upgrade apache-beam[gcp]==2.9.0
Explanation: <h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Apache Beam works better with Python 2 at the moment, so we're going to work within the Python 2 kernel.
End of explanation
import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__)
Explanation: After doing a pip install, you have to Reset Session so that the new packages are picked up. Please click on the button in the above menu.
End of explanation
import os
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
PROJECT = 'cloud-training-demos' # CHANGE THIS
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
## ensure we're using python2 env
os.environ['CLOUDSDK_PYTHON'] = 'python2'
%%bash
## ensure gcloud is up to date
gcloud components update
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available) </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
def create_query(phase, EVERY_N):
if EVERY_N == None:
EVERY_N = 4 #use full dataset
#select and pre-process fields
base_query =
SELECT
(tolls_amount + fare_amount) AS fare_amount,
DAYOFWEEK(pickup_datetime) AS dayofweek,
HOUR(pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
#add subsampling criteria by modding with hashkey
if phase == 'train':
query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
elif phase == 'valid':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
elif phase == 'test':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
return query
print create_query('valid', 100) #example query using 1% of data
Explanation: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
End of explanation
%%bash
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
import datetime
####
# Arguments:
# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
# which each row is represented as a python dictionary
# Returns:
# -rowstring: a comma separated string representation of the record with dayofweek
# converted from int to string (e.g. 3 --> Tue)
####
def to_csv(rowdict):
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
rowdict['dayofweek'] = days[rowdict['dayofweek']]
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
####
# Arguments:
# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
# Larger values will yield smaller sample
# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specfy to run the pipeline
# locally or on Google Cloud respectively.
# Side-effects:
# -Creates and executes dataflow pipeline.
# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
####
def preprocess(EVERY_N, RUNNER):
job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
print 'Launching Dataflow job {} ... hang on'.format(job_name)
OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
#dictionary of pipeline options
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'runner': RUNNER
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags=[], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ['train', 'valid']:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
(
p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))  # read from BigQuery
| 'tocsv_{}'.format(phase) >> beam.Map(to_csv)  # apply the to_csv function to every row
| 'write_{}'.format(phase) >> beam.io.WriteToText(outfile)  # write to outfile
)
print("Done")
Explanation: Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
<h2> 3. Preprocessing Dataflow job from BigQuery </h2>
This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that preprocessing during inference. It is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.
While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch.
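If you prefer the command line, you can also check on the job from a terminal or a %%bash cell (a minimal sketch, assuming the gcloud SDK configured earlier):
gcloud dataflow jobs list
Look for a job whose name starts with preprocess-taxifeatures.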
End of explanation
preprocess(50*10000, 'DirectRunner')
Explanation: Run pipeline locally
End of explanation
preprocess(50*100, 'DataflowRunner')
#change first arg to None to preprocess full dataset
Explanation: Run the pipeline on the cloud on a larger sample size.
End of explanation
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
#print first 10 lines of first shard of train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
Explanation: Once the job completes, observe the files created in Google Cloud Storage
End of explanation
%%bash
mkdir sample
gsutil cp "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" sample/train.csv
gsutil cp "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" sample/valid.csv
Explanation: <h2> 4. Develop model with new inputs </h2>
Download the first shard of the preprocessed data to enable local development.
End of explanation
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths=${PWD}/sample/train.csv \
--eval_data_paths=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=1000 \
--job-dir=/tmp
!ls taxi_trained/export/exporter/
%%writefile /tmp/test.json
{"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ai-platform local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
#if gcloud ai-platform local predict fails, might need to update gcloud
#!gcloud --quiet components update
Explanation: Complete the TODOs in taxifare/trainer/model.py so that the code below works.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--train_steps=5000 \
--output_dir=$OUTDIR
Explanation: <h2> 5. Train on cloud </h2>
End of explanation |
11,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#A-short-study-of-Rényi-entropy" data-toc-modified-id="A-short-study-of-Rényi-entropy-1"><span class="toc-item-num">1 </span>A short study of Rényi entropy</a></div><div class="lev2 toc-item"><a href="#Requirements" data-toc-modified-id="Requirements-11"><span class="toc-item-num">1.1 </span>Requirements</a></div><div class="lev2 toc-item"><a href="#Utility-functions" data-toc-modified-id="Utility-functions-12"><span class="toc-item-num">1.2 </span>Utility functions</a></div><div class="lev2 toc-item"><a href="#Definition,-common-and-special-cases" data-toc-modified-id="Definition,-common-and-special-cases-13"><span class="toc-item-num">1.3 </span>Definition, common and special cases</a></div><div class="lev2 toc-item"><a href="#Plotting-some-values" data-toc-modified-id="Plotting-some-values-14"><span class="toc-item-num">1.4 </span>Plotting some values</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5 </span>Conclusion</a></div>
# A short study of Rényi entropy
I want to study here the Rényi entropy, using [Python](https
Step1: Utility functions
We start by giving three examples of such vectors $X=(p_i)_{1\leq i \leq n}$, a discrete probability distributions on $n$ values.
Step3: We need a function to safely compute $x \mapsto x \log_2(x)$, with special care in case $x=0$. This one will accept a numpy array or a single value as argument
Step4: For examples
Step5: and with vectors, slots with $p_i=0$ are handled without error
Step6: Definition, common and special cases
From the mathematical definition, an issue will happen if $\alpha=1$ or $\alpha=\infty$, so we deal with these special cases manually.
$X$ is here given as the vector of $(p_i)_{1\leq i \leq n}$.
Step7: Plotting some values | Python Code:
!pip install watermark matplotlib numpy
%load_ext watermark
%watermark -v -m -a "Lilian Besson" -g -p matplotlib,numpy
import numpy as np
import matplotlib.pyplot as plt
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#A-short-study-of-Rényi-entropy" data-toc-modified-id="A-short-study-of-Rényi-entropy-1"><span class="toc-item-num">1 </span>A short study of Rényi entropy</a></div><div class="lev2 toc-item"><a href="#Requirements" data-toc-modified-id="Requirements-11"><span class="toc-item-num">1.1 </span>Requirements</a></div><div class="lev2 toc-item"><a href="#Utility-functions" data-toc-modified-id="Utility-functions-12"><span class="toc-item-num">1.2 </span>Utility functions</a></div><div class="lev2 toc-item"><a href="#Definition,-common-and-special-cases" data-toc-modified-id="Definition,-common-and-special-cases-13"><span class="toc-item-num">1.3 </span>Definition, common and special cases</a></div><div class="lev2 toc-item"><a href="#Plotting-some-values" data-toc-modified-id="Plotting-some-values-14"><span class="toc-item-num">1.4 </span>Plotting some values</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5 </span>Conclusion</a></div>
# A short study of Rényi entropy
I want to study here the Rényi entropy, using [Python](https://www.python.org/).
I will define a function implementing $H_{\alpha}(X)$, from the given formula, for discrete random variables, and check the influence of the parameter $\alpha$,
$$ H_{\alpha}(X) := \frac{1}{1-\alpha} \log_2\left(\sum_{i=1}^{n} p_i^{\alpha}\right),$$
where $X$ has $n$ possible values, and the $i$-th outcome has probability $p_i\in[0,1]$.
- *Reference*: [this blog post by John D. Cook](https://www.johndcook.com/blog/2018/11/21/renyi-entropy/), [this Wikipédia page](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy) and [this page on MathWorld](http://mathworld.wolfram.com/RenyiEntropy.html),
- *Author*: [Lilian Besson](https://perso.crans.org/besson/)
- *License*: [MIT License](https://lbesson.mit-license.org/)
- *Date*: 22nd of November, 2018
## Requirements
End of explanation
X1 = [0.25, 0.5, 0.25]
X2 = [0.1, 0.25, 0.3, 0.45]
X3 = [0, 0.5, 0.5]
X4 = np.full(100, 1/100)
X5 = np.full(1000, 1/1000)
X6 = np.arange(100, dtype=float)
X6 /= np.sum(X6)
Explanation: Utility functions
We start by giving a few examples of such vectors $X=(p_i)_{1\leq i \leq n}$, i.e. discrete probability distributions on $n$ values.
End of explanation
np.seterr(all="ignore")
def x_log2_x(x):
"""Return x * log2(x), and 0 if x is 0."""
results = x * np.log2(x)
if np.size(x) == 1:
if np.isclose(x, 0.0):
results = 0.0
else:
results[np.isclose(x, 0.0)] = 0.0
return results
Explanation: We need a function to safely compute $x \mapsto x \log_2(x)$, with special care in case $x=0$. This one will accept a numpy array or a single value as argument:
End of explanation
x_log2_x(0)
x_log2_x(0.5)
x_log2_x(1)
x_log2_x(2)
x_log2_x(10)
Explanation: For examples:
End of explanation
x_log2_x(X1)
x_log2_x(X2)
x_log2_x(X3)
x_log2_x(X4)[:10]
x_log2_x(X5)[:10]
x_log2_x(X6)[:10]
Explanation: and with vectors, slots with $p_i=0$ are handled without error:
End of explanation
def renyi_entropy(alpha, X):
assert alpha >= 0, "Error: renyi_entropy only accepts values of alpha >= 0, but alpha = {}.".format(alpha) # DEBUG
if np.isinf(alpha):
# XXX Min entropy!
return - np.log2(np.max(X))
elif np.isclose(alpha, 0):
# XXX Max entropy!
return np.log2(len(X))
elif np.isclose(alpha, 1):
# XXX Shannon entropy!
return - np.sum(x_log2_x(X))
else:
return (1.0 / (1.0 - alpha)) * np.log2(np.sum(X ** alpha))
# Curryfied version
def renyi_entropy_2(alpha):
def re(X):
return renyi_entropy(alpha, X)
return re
# Curryfied version
def renyi_entropy_3(alphas, X):
res = np.zeros_like(alphas)
for i, alpha in enumerate(alphas):
res[i] = renyi_entropy(alpha, X)
return res
Explanation: Definition, common and special cases
From the mathematical definition, an issue will happen if $\alpha=1$ or $\alpha=\infty$, so we deal with these special cases manually.
$X$ is here given as the vector of $(p_i)_{1\leq i \leq n}$.
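As a quick consistency check on these special cases (a worked consequence of the definition, not part of the original notebook): for the uniform distribution $p_i = 1/n$ every order gives the same value,
$$H_\alpha(X) = \frac{1}{1-\alpha}\log_2\left(n \cdot n^{-\alpha}\right) = \log_2 n,$$
which is why the curves for X4 and X5 come out flat in the plots below; and taking the limit $\alpha \to 1$ of the general formula (L'Hôpital) recovers the Shannon entropy $-\sum_{i=1}^{n} p_i \log_2 p_i$, matching the np.isclose(alpha, 1) branch above.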
End of explanation
alphas = np.linspace(0, 10, 1000)
renyi_entropy_3(alphas, X1)[:10]
def plot_renyi_entropy(alphas, X):
fig = plt.figure()
plt.plot(alphas, renyi_entropy_3(alphas, X))
plt.xlabel(r"Value for $\alpha$")
plt.ylabel(r"Value for $H_{\alpha}(X)$")
plt.title(r"Rényi entropy for $X={}$".format(X[:10]))
plt.show()
# return fig
plot_renyi_entropy(alphas, X1)
plot_renyi_entropy(alphas, X2)
plot_renyi_entropy(alphas, X3)
plot_renyi_entropy(alphas, X4)
plot_renyi_entropy(alphas, X5)
plot_renyi_entropy(alphas, X6)
Explanation: Plotting some values
End of explanation |
11,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combining and Merging Streams
Step1: Combining Streams with Binary Operators
For streams x, y, and a binary operator, op
Step2: Examples of zip_stream and zip_map
zip_stream is similar to zip except that zip operates on lists and zip_stream operates on streams.
zip_map applies a specified function to the lists obtained by zipping streams.
Step3: Defining Aggregating Functions on Streams
The example below shows how you can create aggregators, such as sum_streams, on streams.
Step4: Merging Windows
In the example,
<br>
merge_window(func=f, in_streams=[x,y], out_stream=z, window_size=2, step_size=2)
<br>
creates windows of window_size and step_size for each of the input streams. Thus the windows for the two input streams are
Step5: Asynchronous Merges
merge_asynch(f, in_streams, out_stream)
<br>
Function f operates on a 2-tuple
Step6: blend
blend(func, in_streams, out_stream)
<br>
blend executes func on each element of an in_stream when the element arrives at the agent and puts the result on the out_stream.
<br>
blend is nondeterministic because different executions of a program may results in elements of input streams arriving at the agent in different orders.
<br>
blend is similar to merge_asynch except that in blend func operates on an element of any in_stream whereas in merge_asynch func operates on a pair (index, element) where index identifies the input stream.
<br>
<br>
In this example, func doubles its argument. Initially, the only elements to arrive at the agent are [0, 1 2] on stream x, and so the agent puts [0, 2, 4] on the output stream. Then the next elements to arrive at the agent are [3, 4] also on stream x, and so the agent appends [6, 8] to the output. Then the next elements to arrive at the agent are [100, 110, 120] on stream y, and do the agent extends the output with [200, 220, 240]. | Python Code:
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.op import map_element
from IoTPy.helper_functions.recent_values import recent_values
Explanation: Combining and Merging Streams
End of explanation
w = Stream('w')
x = Stream('x')
y = Stream('y')
z = (x+y)*w
# z[n] = (x[n] + y[n])*w[n]
w.extend([1, 10, 100])
x.extend(list(range(10, 20, 1)))
y.extend(list(range(5)))
run()
print ('recent_values of z are:')
print(recent_values(z))
from IoTPy.agent_types.basics import fmap_e
# Decorate terminating function to specify non-terminating agent.
@fmap_e
def f(v): return v+10
@fmap_e
def g(w): return w * 2
w = Stream('w')
x = Stream('x')
y = Stream('y')
z = f(x+y)*g(w)
# z[n] = f(x[n]+y[n])*g(w[n])
w.extend([1, 10, 100])
x.extend(list(range(10, 20, 1)))
y.extend(list(range(5)))
run()
print ('recent_values of z are:')
print(recent_values(z))
Explanation: Combining Streams with Binary Operators
For streams x, y, and a binary operator, op:
x op y
is a stream whose n-th value is x[n] op y[n].
The following example illustrates how you can combine streams using binary operators such as + and *.
The example after the next one illustrates a functional form for stream definitions.
End of explanation
from IoTPy.agent_types.merge import zip_stream
def example_of_zip_stream():
x = Stream('x')
y = Stream('y')
z = Stream('z')
zip_stream(in_streams=[x,y], out_stream=z)
x.extend(['A', 'B', 'C'])
y.extend(list(range(100, 1000, 100)))
run()
print ('recent values of x are')
print (recent_values(x))
print ('recent values of y are')
print (recent_values(y))
print ('recent values of z are')
print (recent_values(z))
example_of_zip_stream()
from IoTPy.agent_types.basics import zip_map
def example_of_zip_map():
x = Stream('x')
y = Stream('y')
z = Stream('z')
zip_map(func=sum, in_streams=[x,y], out_stream=z)
x.extend(list(range(5)))
y.extend(list(range(100, 1000, 100)))
run()
print ('recent values of x are')
print (recent_values(x))
print ('recent values of y are')
print (recent_values(y))
print ('recent values of z are')
print (recent_values(z))
example_of_zip_map()
Explanation: Examples of zip_stream and zip_map
zip_stream is similar to zip except that zip operates on lists and zip_stream operates on streams.
zip_map applies a specified function to the lists obtained by zipping streams.
End of explanation
import numpy as np
def merge_function(func, streams):
out_stream = Stream()
zip_map(func, streams, out_stream)
return out_stream
def sum_streams(streams): return merge_function(sum, streams)
def median_streams(streams): return merge_function(np.median, streams)
w = Stream('w')
x = Stream('x')
y = Stream('y')
sums = sum_streams([w,x,y])
medians = median_streams([w,x,y])
w.extend([4, 8, 12, 16])
x.extend([0, 16, -16])
y.extend([2, 9, 28, 81, 243])
run()
print ('recent values of sum of streams are')
print (recent_values(sums))
print ('recent values of medians of streams are')
print (recent_values(medians))
Explanation: Defining Aggregating Functions on Streams
The example below shows how you can create aggregators, such as sum_streams, on streams.
End of explanation
from IoTPy.agent_types.merge import merge_window
def f(two_windows):
first_window, second_window = two_windows
return max(first_window) - min(second_window)
x = Stream('x')
y = Stream('y')
z = Stream('z')
merge_window(func=f, in_streams=[x,y], out_stream=z, window_size=2, step_size=2)
x.extend(list(range(4, 10, 1)))
y.extend(list(range(0, 40, 4)))
run()
print ('recent values of z are')
print (recent_values(z))
Explanation: Merging Windows
In the example,
<br>
merge_window(func=f, in_streams=[x,y], out_stream=z, window_size=2, step_size=2)
<br>
creates windows of window_size and step_size for each of the input streams. Thus the windows for the two input streams are:
<br>
[x[0], x[1]], [x[2], x[3]], [x[4], x[5]], ....
<br>
[y[0], y[1]], [y[2], y[3]], [y[4], y[5]], ....
<br>
Calls to function f return:
<br>
max([x[0], x[1]]) - min([y[0], y[1]]), max([x[2], x[3]]) - min([y[2], y[3]]), ...
End of explanation
from IoTPy.agent_types.merge import merge_asynch
Fahrenheit = Stream('Fahrenheit')
Celsius = Stream('Celsius')
Kelvin = Stream('Kelvin')
def convert_to_Kelvin(index_and_temperature):
index, temperature = index_and_temperature
result = 273 + (temperature if index == 1
else (temperature - 32.0)/1.8)
return result
merge_asynch(func=convert_to_Kelvin,
in_streams=[Fahrenheit, Celsius], out_stream=Kelvin)
Fahrenheit.extend([32, 50])
Celsius.extend([0.0, 10.0])
run()
Fahrenheit.extend([14.0])
Celsius.extend([-273.0, 100.0])
run()
print ('Temperatures in Kelvin are')
print (recent_values(Kelvin))
Explanation: Asynchronous Merges
merge_asynch(f, in_streams, out_stream)
<br>
Function f operates on a 2-tuple: an index and a value of an input stream, and f returns a single value which is an element of the output stream.
<br>
Elements from the input streams arrive asynchronously and nondeterministically at this merge agent. The index identifies the input stream on which the element arrived.
<br>
<br>
In this example, the agent merges streams of Fahrenheit and Celsius temperatures to produce an output stream of Kelvin temperatures. The list of input streams is [Fahrenheit, Celsius], and so the indices associated with Fahrenheit and Celsius are 0 and 1 respectively.
<br>
To convert Celsius to Kelvin add 273 and to convert Fahrenheit convert to Celsius and then add 273.
End of explanation
from IoTPy.agent_types.merge import blend
def test_blend():
x = Stream('x')
y = Stream('y')
z = Stream('z')
blend(func=lambda v: 2*v, in_streams=[x,y], out_stream=z)
x.extend(list(range(3)))
run()
print (recent_values(z))
x.extend(list(range(3, 5, 1)))
run()
print (recent_values(z))
y.extend(list(range(100, 130, 10)))
run()
print (recent_values(z))
x.extend(list(range(5, 10, 1)))
run()
print (recent_values(z))
test_blend()
from IoTPy.core.stream import StreamArray
from IoTPy.agent_types.merge import merge_list
def test_merge_list_with_stream_array():
x = StreamArray()
y = StreamArray()
z = StreamArray(dtype='bool')
# Function that is encapsulated
def f(two_arrays):
x_array, y_array = two_arrays
return x_array > y_array
# Create agent
merge_list(f, [x,y], z)
x.extend(np.array([3.0, 5.0, 7.0, 11.0, 30.0]))
y.extend(np.array([4.0, 3.0, 10.0, 20.0, 25.0, 40.0]))
run()
print('recent values of z are:')
print (recent_values(z))
test_merge_list_with_stream_array()
from IoTPy.agent_types.merge import timed_zip
def test_timed_zip():
x = Stream('x')
y = Stream('y')
z = Stream('z')
# timed_zip_agent(in_streams=[x,y], out_stream=z, name='a')
timed_zip(in_streams=[x, y], out_stream=z)
x.extend([(1, "A"), (5, "B"), (9, "C"), (12, "D"), (13, "E")])
y.extend([(5, "a"), (7, "b"), (9, "c"), (12, "d"), (14, 'e'), (16, 'f')])
run()
print ('recent values of z are')
print (recent_values(z))
test_timed_zip()
from IoTPy.agent_types.merge import timed_mix
def test_timed_mix_agents():
x = Stream('x')
y = Stream('y')
z = Stream('z')
timed_mix([x,y], z)
x.append((0, 'a'))
run()
# time=0, value='a', in_stream index is 0
assert recent_values(z) == [(0, (0, 'a'))]
x.append((1, 'b'))
run()
assert recent_values(z) == [(0, (0, 'a')), (1, (0, 'b'))]
y.append((2, 'A'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A'))]
y.append((5, 'B'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((3, 'c'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((4, 'd'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((8, 'e'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B')), (8, (0, 'e'))]
print (recent_values(z))
test_timed_mix_agents()
Explanation: blend
blend(func, in_streams, out_stream)
<br>
blend executes func on each element of an in_stream when the element arrives at the agent and puts the result on the out_stream.
<br>
blend is nondeterministic because different executions of a program may result in elements of input streams arriving at the agent in different orders.
<br>
blend is similar to merge_asynch except that in blend func operates on an element of any in_stream whereas in merge_asynch func operates on a pair (index, element) where index identifies the input stream.
<br>
<br>
In this example, func doubles its argument. Initially, the only elements to arrive at the agent are [0, 1, 2] on stream x, and so the agent puts [0, 2, 4] on the output stream. Then the next elements to arrive at the agent are [3, 4], also on stream x, and so the agent appends [6, 8] to the output. Then the next elements to arrive at the agent are [100, 110, 120] on stream y, and so the agent extends the output with [200, 220, 240].
End of explanation |
11,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Exercises
This notebook is for programming exercises in python using
Step1: Python Statistics
Step2: Some simpler exercises based on common python function
Question
Step3: Question
Step4: Question
Step5: ``
Question
Step6: Question
Step7: Question
Step8: Panda based exercies
Some exercises related to using pandas for dataframe operations
The source of this exercises is at
Step9: DataFrames
Step10: Numpy Exercises
The problems have been taken from following resources | Python Code:
import math
import numpy as np
import pandas as pd
import re
from operator import itemgetter, attrgetter
Explanation: Python Exercises
This notebook is for programming exercises in python using :
Statistics
Inbuilt Functions and Libraries
Pandas
Numpy
End of explanation
def median(dataPoints):
"computer median of given data points"
if not dataPoints:
raise 'no datapoints passed'
sortedpoints=sorted(dataPoints)
mid=len(dataPoints)//2
#even
#print mid , sortedpoints
if len(dataPoints)%2==0:
return (sortedpoints[mid-1] + sortedpoints[mid])/2.0
else:
# odd
return sortedpoints[mid]
def data_range(dataPoints):
    "compute the range (max - min) of given data points"
    # renamed from `range` so the Python builtin range() is not shadowed later on
    if not dataPoints:
        raise ValueError('no datapoints passed')
    return max(dataPoints) - min(dataPoints)
def quartiles(dataPoints):
"computer first and last quartile in the datalist"
if not dataPoints:
raise 'no datapoints passed'
sortedpoints=sorted(dataPoints)
print sortedpoints
mid=len(dataPoints)//2
#even
if(len(dataPoints)%2==0):
print sortedpoints[:mid]
print sortedpoints[mid:]
lowerQ=median(sortedpoints[:mid])
upperQ=median(sortedpoints[mid:])
else:
lowerQ=median(sortedpoints[:mid])
upperQ=median(sortedpoints[mid+1:])
return lowerQ,upperQ
def summary(dataPoints):
"print stat summary of data"
if not dataPoints:
raise 'no datapoints passed'
print "Summary Statistics:"
print ("Min : " , min(dataPoints))
print ("First Quartile : ",quartiles(dataPoints)[0] )
print ("median : ", median(dataPoints))
print ("Second Quartile : ", quartiles(dataPoints)[1])
print ("max : ", max(dataPoints))
return ""
datapoints=[68, 83, 58, 84, 100, 64]
#quartiles(datapoints)
print summary(datapoints)
Explanation: Python Statistics
End of explanation
C=50
H=30
def f1(inputList):
answer= [math.sqrt((2*C*num*1.0)/H) for num in inputList]
return ','.join(str (int(round(num))) for num in answer)
string='100,150,180'
nums=[int(num ) for num in string.split(',')]
type(nums)
print f1(nums)
Explanation: Some simpler exercises based on common python function
Question:
Write a program that calculates and prints the value according to the given formula:
Q = Square root of [(2 * C * D)/H]
Following are the fixed values of C and H:
C is 50. H is 30.
D is the variable whose values should be input to your program in a comma-separated sequence.
Example
Let us assume the following comma separated input sequence is given to the program:
100,150,180
The output of the program should be:
18,22,24
End of explanation
dimensions=[3,5]
rows=dimensions[0]
columns=dimensions[1]
array=np.zeros((rows,columns))
#print array
for row in range(rows):
for column in range(columns):
array[row][column]=row*column
print array
Explanation: Question:
Write a program which takes 2 digits, X,Y as input and generates a 2-dimensional array. The element value in the i-th row and j-th column of the array should be i*j.
Note: i=0,1.., X-1; j=0,1,¡Y-1.
Example
Suppose the following inputs are given to the program:
3,5
Then, the output of the program should be:
[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]
End of explanation
string='without,hello,bag,world'
wordList=string.split(',')
wordList.sort()
#print wordList
print ','.join(word for word in wordList)
Explanation: Question:
Write a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.
Suppose the following input is supplied to the program:
without,hello,bag,world
Then, the output should be:
bag,hello,without,world
End of explanation
def check_password(items):
values=[]
for string in items:
if len(string) < 6 or len(string) > 12:
continue
else :
pass
if not re.search('[a-z]',string):
continue
elif not re.search('[0-9]',string):
continue
elif not re.search('[A-Z]',string):
continue
elif not re.search('[$#@]',string):
continue
elif re.search('\s',string):
continue
else :pass
values.append(string)
return ','.join(pwd for pwd in values)
string='ABd1234@1,a F1#,2w3E*,2We3345 '
items=string.split(',')
print check_password(items)
Explanation: ``
Question:
A website requires the users to input username and password to register. Write a program to check the validity of password input by users.
Following are the criteria for checking the password:
1. At least 1 letter between [a-z]
2. At least 1 number between [0-9]
1. At least 1 letter between [A-Z]
3. At least 1 character from [$#@]
4. Minimum length of transaction password: 6
5. Maximum length of transaction password: 12
Your program should accept a sequence of comma separated passwords and will check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.
Example
If the following passwords are given as input to the program:
ABd1234@1,a F1#,2w3E*,2We3345
Then, the output of the program should be:
ABd1234@1
``
End of explanation
string= 'Tom,19,80 John,20,90 Jony,17,91 Jony,17,93 Json,21,85'
items= [ tuple(item.split(',')) for item in string.split(' ')]
print sorted(items, key=itemgetter(0,1,2))
Explanation: Question:
You are required to write a program to sort the (name, age, height) tuples by ascending order where name is string, age and height are numbers. The tuples are input by console. The sort criteria is:
1: Sort based on name;
2: Then sort based on age;
3: Then sort by score.
The priority is that name > age > score.
If the following tuples are given as input to the program:
Tom,19,80
John,20,90
Jony,17,91
Jony,17,93
Json,21,85
Then, the output of the program should be:
[('John', '20', '90'), ('Jony', '17', '91'), ('Jony', '17', '93'), ('Json', '21', '85'), ('Tom', '19', '80')]
End of explanation
string='New to Python or choosing between Python 2 and Python 3? Read Python 2 or Python 3.'
freq={}
for word in string.split(' '):
freq[word]=freq.get(word,0)+1
words=freq.keys()
for item in sorted(words):
print "%s:%d" %(item,freq.get(item))
Explanation: Question:
Write a program to compute the frequency of the words from the input. The output should output after sorting the key alphanumerically.
Suppose the following input is supplied to the program:
New to Python or choosing between Python 2 and Python 3? Read Python 2 or Python 3.
Then, the output should be:
2:2
3.:1
3?:1
New:1
Python:5
Read:1
and:1
between:1
choosing:1
or:2
to:1
End of explanation
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
# Create a DataFrame df from this dictionary data which has the index labels.
df = pd.DataFrame(data,index=labels)
#display summary of the basic information
df.info()
df.describe()
# return first 3 , last 3 rows of dataframe
print df.head(3)
#df.iloc[:3]
print ' '
print df.iloc[-3:]
#print df.tail(3)
# Select just the 'animal' and 'age' columns from the DataFrame df.
df[['animal','age']]
#df.loc[:,['animal','age']]
#Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
df.loc[df.index[[3,4,8]], ['animal','age']]
# Select only the rows where the number of visits is greater than 3.
df[df['visits']>3]
# Select the rows where the age is missing, i.e. is NaN.
df[df['age'].isnull()]
#Select the rows where the animal is a cat and the age is less than 3.
df[ (df['animal']=='cat') & (df['age'] <3) ]
#Select the rows the age is between 2 and 4 (inclusive).
df[df['age'].between(2,4)]
#Change the age in row 'f' to 1.5
df.loc['f','age']=1.5
#Calculate the sum of all visits (the total number of visits).
df['visits'].sum()
#Calculate the mean age for each different animal in df.
df.groupby('animal')['age'].mean()
# Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
df.loc['k'] = [5.5, 'dog', 'no', 2]
# and then deleting the new row...
df = df.drop('k')
# Count the number of each type of animal in df.
df['animal'].value_counts()
#Sort df first by the values in the 'age' in decending order, then by the value in the 'visit' column in ascending order.
df.sort_values(by=['age','visits'], ascending=[False,True])
# The 'priority' column contains the values 'yes' and 'no'.
#Replace this column with a column of boolean values: 'yes' should be True and 'no' should be False.
df['priority']=df['priority'].map({'yes': True, 'no':False})
# In the 'animal' column, change the 'snake' entries to 'python'.
df['animal']= df['animal'].replace({'snake': 'python'})
# For each animal type and each number of visits, find the mean age.
#In other words, each row is an animal, each column is a number of visits and the values are the mean ages
#(hint: use a pivot table).
df.pivot_table(index='animal', columns='visits', values='age' , aggfunc='mean')
Explanation: Panda based exercies
Some exercises related to using pandas for dataframe operations
The source of this exercises is at : https://github.com/ajcr/100-pandas-puzzles/blob/master/100-pandas-puzzles-with-solutions.ipynb
End of explanation
# You have a DataFrame df with a column 'A' of integers. For example:
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
#How do you filter out rows which contain the same integer as the row immediately above?
df.loc[df['A'].shift() != df['A']]
#Given a DataFrame of numeric values, say
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
#how do you subtract the row mean from each element in the row?
#print df
# axis=1 means row wise , axis=0 means columnwise
df.sub(df.mean(axis=1), axis=0)
#Suppose you have DataFrame with 10 columns of real numbers, for example:
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
#Which column of numbers has the smallest sum? (Find that column's label.)
#print df.sum(axis=0)
df.sum(axis=0).idxmin()
# How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
len(df) - df.duplicated(keep=False).sum()
# better is
print len(df.drop_duplicates(keep=False))
#You have a DataFrame that consists of 10 columns of floating--point numbers.
#Suppose that exactly 5 entries in each row are NaN values.
#For each row of the DataFrame, find the column which contains the third NaN value.
#(You should return a Series of column labels.)
(df.isnull().cumsum(axis=1)==3).idxmax(axis=1)
# A DataFrame has a column of groups 'grps' and and column of numbers 'vals'. For example:
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
#For each group, find the sum of the three greatest values.
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
#A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive).
#For each group of 10 consecutive integers in 'A' (i.e. (0, 10], (10, 20], ...),
#calculate the sum of the corresponding values in column 'B'.
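# One possible answer (a sketch added here, not part of the original puzzle set):
# bin 'A' into the intervals (0, 10], (10, 20], ... with pd.cut and sum 'B' per bin.
# It assumes a DataFrame df with integer columns 'A' (values 1..100) and 'B',
# which is not actually constructed in this notebook, so the line stays commented.
# df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()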
Explanation: DataFrames: beyond the basics
End of explanation
# 1. Write a Python program to print the NumPy version in your system.
print (np.__version__)
#2. Write a Python program to convert a list of numeric values into a one-dimensional NumPy array.
l = [12.23, 13.32, 100, 36.32]
print 'original list: ' , l
print 'numpy array : ', np.array(l)
#Create a 3x3 matrix with values ranging from 2 to 10.
np.arange(2,11).reshape(3,3)
Explanation: Numpy Exercises
The problems have been taken from following resources :
http://www.w3resource.com/python-exercises/numpy/index.php
End of explanation |
11,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Processing
This ipython (sorry, Jupyter) notebook contains the examples that I'll be covering in the 1-dimensional part of my image processing session. There are no external data files to worry about.
I should warn you that this is tested on python 2; I think it'll run on python 3 but I haven't actually checked.
Task 0
Import numpy and matplotlib, and make a simple plot of a sine wave
N.b. I usually start with a piece of boilerplate roughly like
Step1: Answer 0
Step2: A Toy 1-D Model
We'll make a simple one dimensional model of a star field. Images of real stars are complicated, but we'll assume that the profile is a Gaussian. I write a Gaussian with mean $\mu$ and variance $\sigma^2$ as $N(\mu, \sigma^2)$.
Almost all stars are essentially point sources as viewed from the Earth, so stars look like the Point Spread Function (PSF) produced by a combination of the atmosphere, the telescope, and the detector. There's no standard notation for the PSF, but I always call it $\phi$. The Full Width Half Maximum (FWHM) is a measure of the PSF's width.
In addition to flux from the stars that we care about there's a smooth background of extra light that comes from a number of sources (atmospheric emission, scattering of star and moonlight, dark current in the detector...); for now we'll just treat this as an annoying constant that I'll call `The Sky'.
In reality CCD data is only measured where we have pixels, but under certain conditions (band-limited; satisfying the Nyquist condition) it turns out that the pixellation is unimportant, so we'll ignore it for this session.
Task 1
Write a simulator that simulates noise-free 1-D data. Provide a function with signature
def phi(x, xc=0.0, fwhm=2)
Step4: Task 2
There are lots of sources of noise in astronomical data, but for now let's assume that the only thing that matters is the finite number of photons that we detect. Detecting $n$ photons in a pixel results in a Poisson distribution; if
$n$'s mean value is $\mu$, then its variance is also $\mu$. If $\mu \gg 1$, $P(\mu) \sim N(\mu, \mu)$. You can do better than this if you need to; if $\mu > 4$ (!), Anscombe showed that $x \equiv 2\sqrt{x + 3/8}$ is approximately Gaussian, $N(2\sqrt{\mu + 3/8} - 1/(4\sqrt\mu), 1)$ -- but you probably only care if you're looking at the tails of the distribution.
Add noise to your simulation. You can get Poisson variates from numpy by saying e.g.
Step5: Answer 2
Step6: Task 3
Let's investigate measuring the flux in a single isolated star. Start my modifying your previous answer to simulate a noisy single star with $\beta = 2$ (a FWHM of c. 5 pixels), centred at x=0, with total flux F0. In reality estimating the sky background is not trivial, but for now let's simply subtract its known value. Plot one of your simulations, with F0=1000 and S=100.
The simplest way to measure the flux in a star is to sum the counts within an aperture. Modify your code to estimate the flux within 5 pixels of the centre, then run a Monte-Carlo simulation to estimate the mean and variance of your estimator.
Your mean should be close to F0 --- if it isn't, take another look at your code. Your value won't be exactly F0; is this a statistical anomaly, or is it something more interesting?
Answer 3
Step7: Task 4
Package your estimator into a function and estimate the mean and variance as a function of $R$; plot your results. What value of $R$ gives the smallest variance in $F_0$?
For small $R$ are we measuring $F_0$ correctly? If not, make appropriate corrections and remake your plots. Does your conclusion change?
Step9: We can calculate the `curve of growth' (the variation in the mean with radius) analytically for our PSF.
It's also possible to do this by passing the function phi to a different implementation of curveOfGrowth, but that requires some hacky and/or more advanced python (currying, or more accurately partial function evaluation); so I won't do that.
Step10: The 0.5*(x[1] - x[0]) is to correct for the location of the sample points.
Step11: Are those mean values significantly different from F0? We can calculate the reduced $\chi^2$; note that we're not estimating the mean (F0) from the data, so we divide by $N$ not $N - 1$
Step12: Task 5
The lecture notes imply that we'd do better to use the PSF to measure the flux. Write a Monte-Carlo simulation, and compare your results with the previous task
Answer 5 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# A nice alternative to inline (which allows interaction with the plots, but can be confusing) is
# %matplotlib notebook
Explanation: Image Processing
This ipython (sorry, Jupyter) notebook contains the examples that I'll be covering in the 1-dimensional part of my image processing session. There are no external data files to worry about.
I should warn you that this is tested on python 2; I think it'll run on python 3 but I haven't actually checked.
Task 0
Import numpy and matplotlib, and make a simple plot of a sine wave
N.b. I usually start with a piece of boilerplate roughly like:
End of explanation
x = np.linspace(0, 10, 100)
y = np.sin(x)
plt.plot(x, y, '-o')
plt.ylim(-1.1, 1.1)
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('My brilliant plot')
plt.show()
Explanation: Answer 0
End of explanation
def phi(x, xc=0.0, FWHM=2):
beta = FWHM/(2*np.sqrt(2*np.log(2)))
I = np.exp(-0.5*((x - xc)/beta)**2)
I /= I.sum()
return I
x = np.linspace(0, 20, 41, dtype=float)
S = 100
I = S + np.zeros_like(x)
for F, xc in [(500, 7), (2000, 15)]:
I += F*phi(x, xc)
plt.plot(x, I,'-o')
plt.show()
def simulate(x, S=100):
I = S + np.zeros_like(x)
for F, xc in [(500, 7), (2000, 15)]:
I += F*phi(x, xc)
return I
x = np.linspace(0, 20, 41, dtype=float)
I = simulate(x)
plt.plot(x, I,'-o')
plt.show()
Explanation: A Toy 1-D Model
We'll make a simple one dimensional model of a star field. Images of real stars are complicated, but we'll assume that the profile is a Gaussian. I write a Gaussian with mean $\mu$ and variance $\sigma^2$ as $N(\mu, \sigma^2)$.
Almost all stars are essentially point sources as viewed from the Earth, so stars look like the Point Spread Function (PSF) produced by a combination of the atmosphere, the telescope, and the detector. There's no standard notation for the PSF, but I always call it $\phi$. The Full Width Half Maximum (FWHM) is a measure of the PSF's width.
In addition to flux from the stars that we care about there's a smooth background of extra light that comes from a number of sources (atmospheric emission, scattering of star and moonlight, dark current in the detector...); for now we'll just treat this as an annoying constant that I'll call `The Sky'.
In reality CCD data is only measured where we have pixels, but under certain conditions (band-limited; satisfying the Nyquist condition) it turns out that the pixellation is unimportant, so we'll ignore it for this session.
Task 1
Write a simulator that simulates noise-free 1-D data. Provide a function with signature
def phi(x, xc=0.0, fwhm=2):
Return a numpy array with a star centred at the point xc and FWHM evaluated at the points x
Use a Gaussian PSF, and set the sky level to S; for a Gaussian $N(0, \beta^2)$, the FWHM is $2\sqrt{2\ln(2)}\,\beta \sim 2.3548\beta$
Plot a few of your beautiful simulations. Once you've got it working, wrap it up in a function
simulate(x, S=100)
Answer 1
End of explanation
mu = 100
print(np.random.poisson(lam=mu))
print(np.random.normal(loc=mu, scale=np.sqrt(mu))) # Here's how to get a Gaussian approximation
Explanation: Task 2
There are lots of sources of noise in astronomical data, but for now let's assume that the only thing that matters is the finite number of photons that we detect. Detecting $n$ photons in a pixel results in a Poisson distribution; if
$n$'s mean value is $\mu$, then its variance is also $\mu$. If $\mu \gg 1$, $P(\mu) \sim N(\mu, \mu)$. You can do better than this if you need to; if $\mu > 4$ (!), Anscombe showed that $x \equiv 2\sqrt{x + 3/8}$ is approximately Gaussian, $N(2\sqrt{\mu + 3/8} - 1/(4\sqrt\mu), 1)$ -- but you probably only care if you're looking at the tails of the distribution.
Add noise to your simulation. You can get Poisson variates from numpy by saying e.g.:
mu = 100
print(np.random.poisson(lam=mu))
print(np.random.normal(loc=mu, scale=np.sqrt(mu))) # Here's how to get a Gaussian approximation
If you want to set the random number seed so that the noise added is always the same, say something like
np.random.seed(666)
If you can't see your stars you might want to make them brighter --- remember that we added a background noise of 10 when we set the sky level to 100
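If you're curious, here is a two-line check of the Anscombe claim (a sketch, not required for the task):
y = 2*np.sqrt(np.random.poisson(lam=20, size=100000) + 3/8.)
print(y.std())   # the standard deviation of the transformed counts should come out close to 1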
End of explanation
x = np.linspace(0, 20, 41, dtype=float)
np.random.seed(1000)
sim = simulate(x)
I = np.random.poisson(sim)
plt.plot(x, sim)
plt.errorbar(x, I, I - sim)
plt.show()
Explanation: Answer 2
End of explanation
def simulate(x, F0 = 1000, S=100, beta=2.5):
sim = S + np.zeros_like(x) + F0*phi(x, FWHM=2*np.sqrt(2*np.log(2))*beta)
if True: # I'm going to use a Gaussian approximation to the noise
sim += np.random.normal(0, np.sqrt(S), size=len(sim))
else:
sim = np.random.poisson(sim).astype(float)  # sim already includes the sky level S
return sim - S
#np.random.seed(1000) # uncomment to make your noise realisation repeatable
nSample = 4000
flux = np.empty(nSample)
x = np.linspace(-20, 20, 81, dtype=float)
R = 5
beta, F0 = 2.5, 1000
for i in range(nSample):
sim = simulate(x, F0, beta=beta)
flux[i] = np.sum(sim[np.abs(x < R)])
plt.plot(x, sim)
for r in (-R, R):
plt.axvline(r, ls=':', color='black')
plt.show()
plt.hist(flux, 20)
plt.axvline(F0, ls=':', color='black')
plt.xlabel(r"$F_0$")
plt.ylabel("N")
mean, std = np.mean(flux), np.std(flux)
plt.title(r"R/$\beta$ = %g %.2f $\pm$ %.2f (spread %.2f)" %
(R/beta, mean, std/np.sqrt(len(flux)), std))
plt.show()
Explanation: Task 3
Let's investigate measuring the flux in a single isolated star. Start my modifying your previous answer to simulate a noisy single star with $\beta = 2$ (a FWHM of c. 5 pixels), centred at x=0, with total flux F0. In reality estimating the sky background is not trivial, but for now let's simply subtract its known value. Plot one of your simulations, with F0=1000 and S=100.
The simplest way to measure the flux in a star is to sum the counts within an aperture. Modify your code to estimate the flux within 5 pixels of the centre, then run a Monte-Carlo simulation to estimate the mean and variance of your estimator.
Your mean should be close to F0 --- if it isn't, take another look at your code. Your value won't be exactly F0; is this a statistical anomaly, or is it something more interesting?
Answer 3
End of explanation
#np.random.seed(1000)
nSample = 4000
def estimateApertureStats(x, R):
flux = np.empty(nSample)
for i in range(nSample):
sim = simulate(x, F0=F0, beta=beta)
flux[i] = np.sum(sim[np.abs(x) < R])
return np.mean(flux), np.std(flux)
RR = np.arange(1, 16, dtype=float)
mean = np.empty_like(RR)
std = np.empty_like(RR)
x = np.linspace(-20, 20, 81)
for i, R in enumerate(RR):
mean[i], std[i] = estimateApertureStats(x, R)
plt.errorbar(RR/beta, mean, std)
plt.axhline(F0, ls=':', color='black')
plt.xlabel(r"$R/\beta$")
plt.ylabel(r"$F_0$")
plt.show()
plt.plot(RR/beta, std)
#plt.axhline(F0, ls=':', color='black')
plt.xlabel(r"$R/\beta$")
plt.ylabel(r"d$F_0$")
plt.show()
Explanation: Task 4
Package your estimator into a function and estimate the mean and variance as a function of $R$; plot your results. What value of $R$ gives the smallest variance in $F_0$?
For small $R$ are we measuring $F_0$ correctly? If not, make appropriate corrections and remake your plots. Does your conclusion change?
End of explanation
import scipy.special
def curveOfGrowth(R, beta):
"""Return the curve of growth, evaluated at the points R, for a Gaussian N(0, beta^2) PSF"""
return 0.5*(1 + scipy.special.erf(R/beta/np.sqrt(2)))
Explanation: We can calculate the `curve of growth' (the variation in the mean with radius) analytically for our PSF.
It's also possible to do this by passing the function phi to a different implementation of curveOfGrowth, but that requires some hacky and/or more advanced python (currying, or more accurately partial function evaluation); so I won't do that.
End of explanation
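# For the curious: the "hacky and/or more advanced python" alluded to above could look
# roughly like this sketch (not part of the original notebook): fix the PSF parameters
# with functools.partial and build a curve of growth numerically by summing the PSF
# samples inside |x| < R.
import functools
def empiricalCurveOfGrowth(R, psf):
    xx = np.linspace(-20, 20, 81)
    prof = psf(xx)
    return np.array([np.sum(prof[np.abs(xx) < r]) for r in np.atleast_1d(R)])
myPsf = functools.partial(phi, xc=0.0, FWHM=2*np.sqrt(2*np.log(2))*beta)
# empiricalCurveOfGrowth(RR, myPsf) should track curveOfGrowth(RR - 0.5*(x[1] - x[0]), beta)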
cog = curveOfGrowth(RR - 0.5*(x[1] - x[0]), beta)
plt.errorbar(RR/beta, mean/cog, std/np.sqrt(nSample))
plt.axhline(F0, ls=':', color='black')
plt.ylim(F0 + 5*np.array([-1, 1]))
plt.xlabel(r"$R/\beta$")
plt.ylabel(r"$F_0$")
plt.show()
plt.plot(RR/beta, std/cog, label='Theory')
plt.plot(RR/beta, std/(mean/F0), label='Empirical')
plt.xlabel(r"$R/\beta$")
plt.ylabel(r"d$F_0$")
plt.legend(loc='best')
plt.show()
Explanation: The 0.5*(x[1] - x[0]) is to correct for the location of the sample points.
End of explanation
print("chi^2/nu = %.3f" % (sum(((mean/cog - F0)/(std/cog/np.sqrt(nSample)))**2)/len(RR)))
Explanation: Are those mean values significantly different from F0? We can calculate the reduced $\chi^2$; note that we're not estimating the mean (F0) from the data, so we divide by $N$ not $N - 1$
End of explanation
nSample = 4000
flux = np.empty(nSample)
filt = phi(x, FWHM=2*np.sqrt(2*np.log(2))*beta)
filt /= np.sum(filt**2)
R = 5
beta, F0 = 2.5, 1000
for i in range(nSample):
sim = simulate(x, F0, beta=beta)
flux[i] = np.sum(filt*sim)
plt.plot(x, sim)
plt.plot(x, filt/np.max(filt)*np.max(sim), ls=':', color='black')
plt.show()
plt.hist(flux, 20)
plt.axvline(F0, ls=':', color='black')
plt.xlabel(r"$F_0$")
plt.ylabel("N")
plt.title(r"%.2f $\pm$ %.2f (%.2f)" % (np.mean(flux), np.std(flux), np.sqrt(S*np.sqrt(4*np.pi)*beta/(x[1] - x[0]))))
plt.show()
Explanation: Task 5
The lecture notes imply that we'd do better to use the PSF to measure the flux. Write a Monte-Carlo simulation, and compare your results with the previous task
Answer 5
End of explanation |
11,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep CNN Models
Constructing and training your own ConvNet from scratch can be Hard and a long task.
A common trick used in Deep Learning is to use a pre-trained model and finetune it to the specific data it will be used for.
Famous Models with Keras
This notebook contains code and reference for the following Keras models (gathered from https
Step1: If you're wondering where this HDF5 files with weights is stored, please take a look at ~/.keras/models/
HandsOn VGG16 - Pre-trained Weights
Step2: <img src="imgs/imagenet/strawberry_1157.jpeg" >
Step3: <img src="imgs/imagenet/apricot_696.jpeg" >
Step4: <img src="imgs/imagenet/apricot_565.jpeg" >
Step5: Hands On
Step6: Residual Networks
<img src="imgs/resnet_bb.png" >
ResNet 50
<img src="imgs/resnet34.png" > | Python Code:
from keras.applications import VGG16
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
import os
# -- Jupyter/IPython way to see documentation
# please focus on parameters (e.g. include top)
VGG16??
vgg16 = VGG16(include_top=True, weights='imagenet')
Explanation: Deep CNN Models
Constructing and training your own ConvNet from scratch can be Hard and a long task.
A common trick used in Deep Learning is to use a pre-trained model and finetune it to the specific data it will be used for.
Famous Models with Keras
This notebook contains code and reference for the following Keras models (gathered from https://github.com/fchollet/keras/tree/master/keras/applications)
VGG16
VGG19
ResNet50
Inception v3
Xception
... more to come
References
Very Deep Convolutional Networks for Large-Scale Image Recognition - please cite this paper if you use the VGG models in your work.
Deep Residual Learning for Image Recognition - please cite this paper if you use the ResNet model in your work.
Rethinking the Inception Architecture for Computer Vision - please cite this paper if you use the Inception v3 model in your work.
All architectures are compatible with both TensorFlow and Theano, and upon instantiation the models will be built according to the image dimension ordering set in your Keras configuration file at ~/.keras/keras.json.
For instance, if you have set image_data_format="channels_last", then any model loaded from this repository will get built according to the TensorFlow dimension ordering convention, "Width-Height-Depth".
VGG16
<img src="imgs/vgg16.png" >
VGG19
<img src="imgs/vgg19.png" >
keras.applications
End of explanation
IMAGENET_FOLDER = 'imgs/imagenet' #in the repo
!ls imgs/imagenet
Explanation: If you're wondering where this HDF5 files with weights is stored, please take a look at ~/.keras/models/
HandsOn VGG16 - Pre-trained Weights
End of explanation
from keras.preprocessing import image
import numpy as np
img_path = os.path.join(IMAGENET_FOLDER, 'strawberry_1157.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
Explanation: <img src="imgs/imagenet/strawberry_1157.jpeg" >
End of explanation
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_696.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
Explanation: <img src="imgs/imagenet/apricot_696.jpeg" >
End of explanation
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_565.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
Explanation: <img src="imgs/imagenet/apricot_565.jpeg" >
End of explanation
# from keras.applications import VGG19
Explanation: Hands On:
Try to do the same with VGG19 Model
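A minimal sketch of one way to do it (assuming the same imports, IMAGENET_FOLDER and test images used above):
from keras.applications import VGG19
vgg19 = VGG19(include_top=True, weights='imagenet')
img = image.load_img(os.path.join(IMAGENET_FOLDER, 'apricot_565.jpeg'), target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print('Predicted:', decode_predictions(vgg19.predict(x)))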
End of explanation
## from keras.applications import ...
Explanation: Residual Networks
<img src="imgs/resnet_bb.png" >
ResNet 50
<img src="imgs/resnet34.png" >
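A sketch of how the placeholder cell above could be completed (same caveats as the VGG19 example):
from keras.applications import ResNet50
resnet50 = ResNet50(include_top=True, weights='imagenet')
print('Predicted:', decode_predictions(resnet50.predict(x)))  # reusing the preprocessed batch x from the cells above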
End of explanation |
11,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
자료 안내
Step1: 주의
위 모듈을 임포트하면 아래 모듈 또한 자동으로 임포트 된다.
GongSu21_Statistics_Averages.py
주요 내용
상관분석
공분산
상관관계와 인과관계
주요 예제
21장에서 다룬 미국의 51개 주에서 거래되는 담배(식물)의 도매가격 데이터를 보다 상세히 분석한다.
특히, 캘리포니아 주에서 거래된 담배(식물) 도매가와 뉴욕 주에서 거래된 담배(식물) 도매가의 상관관계를 다룬다.
오늘 사용할 데이터
주별 담배(식물) 도매가격 및 판매일자
Step2: 상관분석 설명
상관분석은 두 데이터 집단이 어떤 관계를 갖고 있는 지를 분석하는 방법이다.
두 데이터 집단이 서로 관계가 있을 때 상관관계를 계산할 수도 있으며, 상관관계의 정도를 파악하기 위해서 대표적으로 피어슨 상관계수가 사용된다. 또한 상관계수를 계산하기 위해 공분산을 먼저 구해야 한다.
공분산(Covariance)
공분산은 두 종류의 데이터 집단 x와 y가 주어졌을 때 한쪽에서의 데이터의 변화와
다른쪽에서의 데이터의 변화가 서로 어떤 관계에 있는지를 설명해주는 개념이다.
공분산은 아래 공식에 따라 계산한다.
$$Cov(x, y) = \frac{\Sigma_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n-1}$$
캘리포니아 주와 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 공분산
준비 작업
Step3: 이제 정수 인덱싱을 사용하여 상품(HighQ)에 대한 정보만을 가져오도록 하자.
Step4: 위 코드에 사용된 정수 인덱싱은 다음과 같다.
[
Step5: 준비 작업
Step6: 준비 작업
Step7: 캘리포니아 주의 HighQ 열의 이름을 CA_HighQ로 변경한다.
Step8: 준비 작업
Step9: 이제 ca_ny_pd 테이블에 새로운 열(column)을 추가한다. 추가되는 열의 이름은 ca_dev와 ny_dev이다.
ca_dev
Step10: 캘리포니아 주와 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 공분산
이제 공분산을 쉽게 계산할 수 있다.
주의
Step11: 피어슨 상관계수
피어슨 상관계수(Pearson correlation coefficient)는 두 변수간의 관련성 정도를 나타낸다.
두 변수 x와 y의 상관계수(r) = x와 y가 함께 변하는 정도와 x와 y가 따로 변하는 정도 사이의 비율
즉, $$r = \frac{Cov(X, Y)}{s_x\cdot s_y}$$
의미
Step12: 상관관계(Correlation)와 인과관계(Causation)
상관관계
Step13: 연습
공분산에 대한 점추정 값을 계산하는 기능이 이미 Pandas 모듈의 DataFrame 자료형의 메소드로 구현되어 있다.
cov() 메소드를 캘리포니아 주와 뉴욕 주에서 거래된 담배(식물)의 도매가 표본을 담고 있는 ca_ny_pd에서 실행하면 아래와 같은 결과를 보여준다.
Step14: 위 테이블에서 CA_HighQ와 NY_HighQ가 만나는 부분의 값을 보면 앞서 계산한 공분산 값과 일치함을 확인할 수 있다.
연습
상관계수에 대한 점추정 값을 계산하는 기능이 이미 Pandas 모듈의 DataFrame 자료형의 메소드로 구현되어 있다.
corr() 메소드를 캘리포니아 주와 뉴욕 주에서 거래된 담배(식물)의 도매가 표본을 담고 있는 ca_ny_pd에서 실행하면 아래와 같은 결과를 보여준다. | Python Code:
from GongSu22_Statistics_Population_Variance import *
Explanation: 자료 안내: 여기서 다루는 내용은 아래 사이트의 내용을 참고하여 생성되었음.
https://github.com/rouseguy/intro2stats
상관분석
안내사항
지난 시간에 다룬 21장과 22장 내용을 활용하고자 한다.
따라서 아래와 같이 21장과 22장 내용을 모듈로 담고 있는 파이썬 파일을 임포트 해야 한다.
주의: 아래 두 개의 파일이 동일한 디렉토리에 위치해야 한다.
* GongSu21_Statistics_Averages.py
* GongSu22_Statistics_Population_Variance.py
End of explanation
prices_pd.head()
Explanation: 주의
위 모듈을 임포트하면 아래 모듈 또한 자동으로 임포트 된다.
GongSu21_Statistics_Averages.py
주요 내용
상관분석
공분산
상관관계와 인과관계
주요 예제
21장에서 다룬 미국의 51개 주에서 거래되는 담배(식물)의 도매가격 데이터를 보다 상세히 분석한다.
특히, 캘리포니아 주에서 거래된 담배(식물) 도매가와 뉴욕 주에서 거래된 담배(식물) 도매가의 상관관계를 다룬다.
오늘 사용할 데이터
주별 담배(식물) 도매가격 및 판매일자: Weed_Price.csv
아래 그림은 미국의 주별 담배(식물) 판매 데이터를 담은 Weed_Price.csv 파일를 엑셀로 읽었을 때의 일부를 보여준다.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png" style="width:600">
</td>
</tr>
</table>
</p>
주의: 언급된 파일이 GongSu21_Statistics_Averages 모듈에서 prices_pd 라는 변수에 저장되었음.
또한 주(State)별, 거래날짜별(date) 기준으로 이미 정렬되어 있음.
따라서 아래에서 볼 수 있듯이 예를 들어, prices_pd의 첫 다섯 줄의 내용은 알파벳순으로 가장 빠른 이름을 가진 알라바마(Alabama) 주에서 거래된 데이터 중에서 가정 먼저 거래된 5개의 거래내용을 담고 있다.
End of explanation
ny_pd = prices_pd[prices_pd['State'] == 'New York'].copy(True)
ny_pd.head(10)
Explanation: 상관분석 설명
상관분석은 두 데이터 집단이 어떤 관계를 갖고 있는 지를 분석하는 방법이다.
두 데이터 집단이 서로 관계가 있을 때 상관관계를 계산할 수도 있으며, 상관관계의 정도를 파악하기 위해서 대표적으로 피어슨 상관계수가 사용된다. 또한 상관계수를 계산하기 위해 공분산을 먼저 구해야 한다.
공분산(Covariance)
공분산은 두 종류의 데이터 집단 x와 y가 주어졌을 때 한쪽에서의 데이터의 변화와
다른쪽에서의 데이터의 변화가 서로 어떤 관계에 있는지를 설명해주는 개념이다.
공분산은 아래 공식에 따라 계산한다.
$$Cov(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n-1}$$
캘리포니아 주와 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 공분산
준비 작업: 뉴욕 주 데이터 정리하기
먼저 뉴욕 주에서 거래된 담배(식물) 도매가의 정보를 따로 떼서 ny_pd 변수에 저장하자.
방식은 california_pd를 구할 때와 동일하게 마스크 인덱싱을 사용한다.
End of explanation
ny_pd_HighQ = ny_pd.iloc[:, [1, 7]]
Explanation: 이제 정수 인덱싱을 사용하여 상품(HighQ)에 대한 정보만을 가져오도록 하자.
End of explanation
ny_pd_HighQ.columns = ['NY_HighQ', 'date']
ny_pd_HighQ.head()
Explanation: 위 코드에 사용된 정수 인덱싱은 다음과 같다.
[:, [1, 7]]
':' 부분 설명: 행 전체를 대상으로 한다.
'[1, 7]' 부분 설명: 1번 열과 7번 열을 대상으로 한다.
결과적으로 1번 열과 7번 열 전체만을 추출하는 슬라이싱을 의미한다.
이제 각 열의 이름을 새로 지정하고자 한다. 뉴욕 주에서 거래된 상품(HighQ) 이기에 NY_HighQ라 명명한다.
End of explanation
ca_pd_HighQ = california_pd.iloc[:, [1, 7]]
ca_pd_HighQ.head()
Explanation: 준비 작업: 캘리포니아 주 데이터 정리하기
비슷한 일을 캘리포니아 주에서 거래된 상품(HighQ) 담배(식물) 도매가에 대해서 실행한다.
End of explanation
ca_ny_pd = pd.merge(ca_pd_HighQ, ny_pd_HighQ, on="date")
ca_ny_pd.head()
Explanation: 준비 작업: 정리된 두 데이터 합치기
이제 두 개의 테이블을 date를 축으로 하여, 즉 기준으로 삼아 합친다.
End of explanation
ca_ny_pd.rename(columns={"HighQ": "CA_HighQ"}, inplace=True)
ca_ny_pd.head()
Explanation: 캘리포니아 주의 HighQ 열의 이름을 CA_HighQ로 변경한다.
End of explanation
ny_mean = ca_ny_pd.NY_HighQ.mean()
ny_mean
Explanation: 준비 작업: 합친 데이터를 이용하여 공분산 계산 준비하기
먼저 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 평균값을 계산한다.
End of explanation
ca_ny_pd['ca_dev'] = ca_ny_pd['CA_HighQ'] - ca_mean
ca_ny_pd.head()
ca_ny_pd['ny_dev'] = ca_ny_pd['NY_HighQ'] - ny_mean
ca_ny_pd.head()
Explanation: 이제 ca_ny_pd 테이블에 새로운 열(column)을 추가한다. 추가되는 열의 이름은 ca_dev와 ny_dev이다.
ca_dev: 공분산 계산과 관련된 캘리포니아 주의 데이터 연산 중간 결과값
ny_dev: 공분산 계산과 관련된 뉴욕 주의 데이터 연산 중간 결과값
즉, 아래 공식에서의 분자에 사용된 값들의 리스트를 계산하는 과정임.
$$Cov(x, y) = \frac{\Sigma_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n-1}$$
End of explanation
ca_ny_cov = (ca_ny_pd['ca_dev'] * ca_ny_pd['ny_dev']).sum() / (ca_count - 1)
ca_ny_cov
Explanation: 캘리포니아 주와 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 공분산
이제 공분산을 쉽게 계산할 수 있다.
주의:
* DataFrame 자료형의 연산은 넘파이 어레이의 연산처럼 항목별로 실행된다.
* sum 메소드의 활용을 기억한다.
End of explanation
ca_highq_std = ca_ny_pd.CA_HighQ.std()
ny_highq_std = ca_ny_pd.NY_HighQ.std()
ca_ny_corr = ca_ny_cov / (ca_highq_std * ny_highq_std)
ca_ny_corr
Explanation: 피어슨 상관계수
피어슨 상관계수(Pearson correlation coefficient)는 두 변수간의 관련성 정도를 나타낸다.
두 변수 x와 y의 상관계수(r) = x와 y가 함께 변하는 정도와 x와 y가 따로 변하는 정도 사이의 비율
즉, $$r = \frac{Cov(X, Y)}{s_x\cdot s_y}$$
의미:
r = 1: X와 Y 가 완전히 동일하다.
r = 0: X와 Y가 아무 연관이 없다
r = -1: X와 Y가 반대방향으로 완전히 동일 하다.
선형관계 설명에도 사용된다.
-1.0 <= r < -0.7: 강한 음적 선형관계
-0.7 <= r < -0.3: 뚜렷한 음적 선형관계
-0.3 <= r < -0.1: 약한 음적 선형관계
-0.1 <= r <= 0.1: 거의 무시될 수 있는 관계
0.1 < r <= +0.3: 약한 양적 선형관계
0.3 < r <= 0.7: 뚜렷한 양적 선형관계
0.7 < r <= 1.0: 강한 양적 선형관계
주의
위 선형관계 설명은 일반적으로 통용되지만 예외가 존재할 수도 있다.
예를 들어, 아래 네 개의 그래프는 모두 피어슨 상관계수가 0.816이지만, 전혀 다른 상관관계를 보여주고 있다.
(출처: https://en.wikipedia.org/wiki/Correlation_and_dependence)
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/pearson_relation.png" style="width:600">
</td>
</tr>
</table>
</p>
캘리포니아 주와 뉴욕 주에서 거래된 상품(HighQ) 담배(식물) 도매가의 상관계수 계산하기
End of explanation
california_pd.describe()
Explanation: 상관관계(Correlation)와 인과관계(Causation)
상관관계: 두 변수 사이의 상관성을 보여주는 관계. 즉, 두 변수 사이에 존재하는 모종의 관련성을 의미함.
예를 들어, 캘리포니아 주의 상품 담배(식물) 도매가와 뉴육 주의 상품 담배(식물) 도매가 사이에는 모종의 관계가 있어 보임.
캘리포니아 주에서의 가격이 오르면 뉴욕 주에서의 가격도 비슷하게 오른다. 상관정도는 0.979 정도로 매우 강한 양적 선형관계를 보인다.
인과관계: 두 변수 사이에 서로 영향을 주거나 실제로 연관되어 있음을 보여주는 관계.
주의: 두 변수 사이에 상관관계가 있다고 해서 그것이 반드시 어느 변수가 다른 변수에 영향을 준다든지, 아니면 실제로 연관되어 있음을 뜻하지는 않는다.
예를 들어, 캘리포니아 주의 담배(식물) 도매가와 뉴욕 주의 담배(식물) 도매가 사이에 모종의 관계가 있는 것은 사실이지만, 그렇다고 해서 한 쪽에서의 가격 변동이 다른 쪽에서의 가격변동에 영향을 준다는 근거는 정확하게 알 수 없다.
연습문제
연습
모집단의 분산과 표준편차에 대한 점추정 값을 계산하는 기능이 이미 Pandas 모듈의 DataFrame 자료형의 메소드로 구현되어 있다.
describe() 메소드를 캘리포니아 주에서 거래된 담배(식물)의 도매가 표본을 담고 있는 california_pd에서 실행하면 아래와 같은 결과를 보여준다.
count: 총 빈도수, 즉 표본의 크기
mean: 평균값
std: 모집단 표준편차 점추정 값
min: 표본의 최소값
25%: 하한 사분위수 (하위 4분의 1을 구분하는 위치에 자리하는 수)
50%: 중앙값
75%: 상한 사분위수 (상위 4분의 1을 구분하는 위치에 자리하는 수)
max: 최대값
End of explanation
ca_ny_pd.cov()
Explanation: 연습
공분산에 대한 점추정 값을 계산하는 기능이 이미 Pandas 모듈의 DataFrame 자료형의 메소드로 구현되어 있다.
cov() 메소드를 캘리포니아 주와 뉴욕 주에서 거래된 담배(식물)의 도매가 표본을 담고 있는 ca_ny_pd에서 실행하면 아래와 같은 결과를 보여준다.
End of explanation
ca_ny_pd.corr()
Explanation: 위 테이블에서 CA_HighQ와 NY_HighQ가 만나는 부분의 값을 보면 앞서 계산한 공분산 값과 일치함을 확인할 수 있다.
연습
상관계수에 대한 점추정 값을 계산하는 기능이 이미 Pandas 모듈의 DataFrame 자료형의 메소드로 구현되어 있다.
corr() 메소드를 캘리포니아 주와 뉴욕 주에서 거래된 담배(식물)의 도매가 표본을 담고 있는 ca_ny_pd에서 실행하면 아래와 같은 결과를 보여준다.
End of explanation |
11,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes from David Beazley's Python3 Metaprogramming tutorial (2013)
"ported" to Python 2.7, unless noted otherwise
A Debugging Decorator
Step1: Decorators with arguments
Calling convention
python
@decorator(args)
def func()
Step2: Decorators with arguments
Step3: Debug with arguments
Step4: Decorators with arguments
Step5: Class decorators
decorate all methods of a class at once
NOTE
Step6: Class decoration
Step7: Debug all the classes?
TODO
Step8: Can we inject the debugging code into all known classes? | Python Code:
from functools import wraps
def debug(func):
msg = func.__name__
# wraps is used to keep the metadata of the original function
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
@debug
def add(x,y):
return x+y
add(2,3)
def add(x,y):
return x+y
debug(add)
debug(add)(2,3)
Explanation: Notes from David Beazley's Python3 Metaprogramming tutorial (2013)
"ported" to Python 2.7, unless noted otherwise
A Debugging Decorator
End of explanation
def debug_with_args(prefix=''):
def decorate(func):
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
return decorate
@debug_with_args(prefix='***')
def mul(x,y):
return x*y
mul(2,3)
def mul(x,y):
return x*y
debug_with_args(prefix='***')
debug_with_args(prefix='***')(mul)
debug_with_args(prefix='***')(mul)(2,3)
Explanation: Decorators with arguments
Calling convention
python
@decorator(args)
def func():
pass
Evaluation
python
func = decorator(args)(func)
End of explanation
from functools import wraps, partial
def debug_with_args2(func=None, prefix=''):
if func is None: # no function was passed
return partial(debug_with_args2, prefix=prefix)
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
@debug_with_args2(prefix='***')
def div(x,y):
return x / y
div(4,2)
def div(x,y):
return x / y
debug_with_args2(prefix='***')
debug_with_args2(prefix='***')(div)
debug_with_args2(prefix='***')(div)(4,2)
f = debug_with_args2(prefix='***')
def div(x,y):
return x / y
debug_with_args2(prefix='***')(div)
Explanation: Decorators with arguments: a reformulation
TODO: show what happens without the partial application to itself!
End of explanation
def debug_with_args_nonpartial(func, prefix=''):
msg = prefix + func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
print(msg)
return func(*args, **kwargs)
return wrapper
def plus1(x):
return x+1
debug_with_args_nonpartial(plus1, prefix='***')(23)
@debug_with_args_nonpartial
def plus1(x):
return x+1
plus1(23)
@debug_with_args_nonpartial(prefix='***')
def plus1(x):
return x+1
Explanation: Debug with arguments: without partial()
this won't work with arguments
End of explanation
def debug_with_args3(*args, **kwargs):
def inner(func, **kwargs):
if 'prefix' in kwargs:
msg = kwargs['prefix'] + func.__name__
else:
msg = func.__name__
print(msg)
return func
# decorator without arguments
if len(args) == 1 and callable(args[0]):
func = args[0]
return inner(func)
# decorator with keyword arguments
else:
return partial(inner, prefix=kwargs['prefix'])
def plus2(x):
return x+2
debug_with_args3(plus2)(23)
debug_with_args3(prefix='***')(plus2)(23)
@debug_with_args3 # WRONG: this shouldn't print anything during creation
def plus2(x):
return x+2
plus2(12) # WRONG: this should print the function name and the prefix
@debug_with_args3(prefix='###') # WRONG: this shouldn't print anything during creation
def plus2(x):
return x+2
plus2(12) # WRONG: this should print the function name and the prefix
Explanation: Decorators with arguments: memprof-style
this doesn't work at all
```python
def memprof(args, kwargs):
def inner(func):
return MemProf(func, args, **kwargs)
# To allow @memprof with parameters
if len(args) and callable(args[0]):
func = args[0]
args = args[1:]
return inner(func)
else:
return inner
```
End of explanation
def debugmethods(cls):
for name, val in vars(cls).items():
if callable(val):
setattr(cls, name, debug(val))
return cls
@debugmethods
class Spam(object):
def foo(self):
pass
def bar(self):
pass
s = Spam()
s.foo()
s.bar()
Explanation: Class decorators
decorate all methods of a class at once
NOTE: only instance methods will be wrapped, i.e. this won't work with static- or class methods
End of explanation
def debugattr(cls):
orig_getattribute = cls.__getattribute__
def __getattribute__(self, name):
print('Get:', name)
return orig_getattribute(self, name)
cls.__getattribute__ = __getattribute__
return cls
@debugattr
class Ham(object):
def foo(self):
pass
def bar(self):
pass
h = Ham()
h.foo()
h.bar
Explanation: Class decoration: debug access to attributes
End of explanation
class debugmeta(type):
def __new__(cls, clsname, bases, clsdict):
clsobj = super(cls).__new__(cls, clsname, bases, clsdict)
clsobj = debugmethods(clsobj)
return clsobj
# class Base(metaclass=debugmeta): # won't work in Python 2.7
# pass
# class Bam(Base):
# pass
# cf. minute 27
Explanation: Debug all the classes?
TODO: this looks Python3-specific
Solution: A Metaclass
End of explanation
class Spam:
pass
s = Spam()
from copy import deepcopy
current_vars = deepcopy(globals())
for var in current_vars:
if callable(current_vars[var]):
print var,
frozendict
for var in current_vars:
cls = getattr(current_vars[var], '__class__')
if cls:
print var, cls
print current_vars['Spam']
type(current_vars['Spam'])
callable(Spam)
callable(s)
isinstance(Spam, classobj)
__name__
sc = s.__class__
type('Foo', (), {})
Explanation: Can we inject the debugging code into all known classes?
End of explanation |
11,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Geospatial data models
Resources
Step1: In startup pannel set GIS Data Directory to path to datasets,
for example on MS Windows, C
Step2: The -p flag for g.region is used to print the region
we just set.
Then we display the 30m resolution NED elevation raster.
Step3: To resample it to 10m resolution, first set the computational region to resolution 10m,
then resample the raster using the nearest neighbor method.
Hint
Step4: Display the resampled map by adding "elev_ned10m_nn" to Layer Manager
in case you don't have it in the Layer Manager already.
Alternatively, use in command line the following
Step5: The elevation map "elev_ned10m_nn" looks the same as the original one,
so now check the resampled elevation surface using the aspect map
Step6: Display the resampled map by adding "aspect_ned10m_nn" to Layer Manager
or in command line using
Step7: To save the map, click in Map Display to on the button
Save display to graphic file" or alternatively,
use the following command
Step8: Now, reinterpolate DEMs using bilinear and bicubic interpolation.
Check the interpolated elevation surfaces using aspect maps.
Step9: Save the displayed maps and in your report, compare the results with
the previously computed nearest neighbor result.
In Map Display click button Save display to graphic file,
or use the following in the command line
Step10: Why is the aspect of elevation raster map computed by the nearest neighbor method different from the one computed by bilinear interpolation?
Resampling to lower resolution
Resample to lower resolution (30m -> 100m).
First, display the original elevation and land use maps
Step11: Then change the region resolution and resample
elevation (which is a continuous field)
and land use (which has discrete categories).
Explain selection of aggregation method. Can we use average also for landuse?
What does mode mean?
Step12: Before the next computation, remove all map layers from the Layer Manager
because we don't need to see them anymore.
Step13: Remove or switch off the land use, elevation and aspect maps.
Converting between vector data types
Convert census blocks polygons to points using their centroids
(useful for interpolating a population density trend surface)
Step14: Display census boundaries using GUI
Step15: Convert contour lines to points (useful for computing DEM from contours)
Step16: Display the "elev_ned_contpts" points vector and zoom-in to very small area
to see the actual points.
Step17: Convert from vector to raster
Convert vector data to raster for use in raster-based analysis.
First, adjust the computational region to resolution 200m
Step18: Then remove all layers from the Layer Manager.
Convert vector points "schools" to raster.
As value for raster use attribute column "CORECAPACI" for core capacity.
To add legend in GUI use
Add map elements > Show/hide legend
and select "schools_cap_200m".
Step19: Now convert lines in "streets" vector to raster.
Set the resolution to 30m and use speed limit attribute.
Step20: If you haven't done this already, add remove all other map layers
from Layer Manager and add "streets_speed_30m" raster layer.
Add legend for "streets_speed_30m" raster using GUI in Map Display
Step21: Save the displayed map.
In Map Display click button Save display to graphic file,
or use the following.
Step22: Convert from raster to vector
Convert raster lines to vector lines.
First, set the region and remove map layers from Layer Manager.
Then do the conversion.
Explain why we are using r.thin module.
You may want to remove all previously used layers from the Layer Manager
before you start these new computations.
Step23: Visually compare the result with streams digitized from airphotos.
Step24: Save the displayed map (in Map Display click button Save display to graphic file).
Step25: Convert raster areas representing basins to vector polygons.
Use raster value as category number (flag -v) and
display vector polygons filled with random colors.
In GUI
Step26: Save the displayed map either using GUI or using the following in case
you are working in the command line. | Python Code:
# Obtain sample data and set new Grass mapset
import urllib
from zipfile import ZipFile
import os.path
zip_path = "/home/jovyan/work/tmp/nc_spm_08_grass7.zip"
mapset_path = "/home/jovyan/grassdata"
if not os.path.exists(zip_path):
urllib.urlretrieve("https://grass.osgeo.org/sampledata/north_carolina/nc_spm_08_grass7.zip", zip_path)
if not os.path.exists(mapset_path):
with ZipFile(zip_path, 'r') as zip_ref:
zip_ref.extractall(mapset_path)
# using Python to initialize GRASS GIS
import os
import sys
import subprocess
from IPython.display import Image
# create GRASS GIS runtime environment
gisbase = subprocess.check_output(["grass", "--config", "path"]).strip()
os.environ['GISBASE'] = gisbase
sys.path.append(os.path.join(gisbase, "etc", "python"))
# do GRASS GIS imports
import grass.script as gs
import grass.script.setup as gsetup
# set GRASS GIS session data
rcfile = gsetup.init(gisbase, "/home/jovyan/grassdata", "nc_spm_08_grass7", "user1")
# using Python to initialize GRASS GIS
# default font displays
os.environ['GRASS_FONT'] = 'sans'
# overwrite existing maps
os.environ['GRASS_OVERWRITE'] = '1'
gs.set_raise_on_error(True)
gs.set_capture_stderr(True)
# using Python to initialize GRASS GIS
# set display modules to render into a file (named map.png by default)
os.environ['GRASS_RENDER_IMMEDIATE'] = 'cairo'
os.environ['GRASS_RENDER_FILE_READ'] = 'TRUE'
os.environ['GRASS_LEGEND_FILE'] = 'legend.txt'
Explanation: Geospatial data models
Resources:
GRASS GIS overview and manual
Recommendations
and tutorial
how to use GUI from the first assignment
Start GRASS GIS
Start GRASS - click on GRASS icon or type
End of explanation
!g.region region=swwake_30m -p
Explanation: In startup pannel set GIS Data Directory to path to datasets,
for example on MS Windows, C:\Users\myname\grassdata.
For Project location select nc_spm_08_grass7 (North Carolina, State Plane, meters) and
for Accessible mapset create a new mapset (called e.g. HW_data_models).
Click Start GRASS.
If you prefer to work in GUI, you should be able to find out yourself
the GUI equivalents for the tasks below.
Some hints for GUI are included, but
from now on, most of the instructions will be provided as commands for command line.
Hint for running most of the commands in GUI - type or paste the name of the module
into the command console in the Console tab and then hit Enter to open the GUI dialog.
Read the manual page for each command you are using for the first time to learn
what it is doing and what the parameters mean.
Resampling to higher resolution
Resample the given raster map to higher and lower resolution
(30m->10m, 30m->100m) and compare resampling by nearest neighbor
with bilinear and bicubic method.
First, set the computation region extent to our study area
and set resolution to 30 meters.
The computational region (region for short) is set using
g.region module.
Here for convenience we use named region which defines both the extent and the resolution.
This named region is included in the data (location) we are using
but it is possible to create new named regions and use them to bookmark different study areas.
End of explanation
!d.rast elev_ned_30m
Image(filename="map.png")
Explanation: The -p flag for g.region is used to print the region
we just set.
Then we display the 30m resolution NED elevation raster.
End of explanation
!g.region res=10 -p
!r.resamp.interp elev_ned_30m out=elev_ned10m_nn method=nearest
Explanation: To resample it to 10m resolution, first set the computational region to resolution 10m,
then resample the raster using the nearest neighbor method.
Hint: To open the r.resamp.interp in GUI, type or paste the module name
into the Console tab, then Enter to open the GUI dialog,
don't forget to set the method to nearest under Optional tab.
End of explanation
!d.rast elev_ned10m_nn
Image(filename="map.png")
Explanation: Display the resampled map by adding "elev_ned10m_nn" to Layer Manager
in case you don't have it in the Layer Manager already.
Alternatively, use in command line the following:
End of explanation
!r.slope.aspect elevation=elev_ned10m_nn aspect=aspect_ned10m_nn
Explanation: The elevation map "elev_ned10m_nn" looks the same as the original one,
so now check the resampled elevation surface using the aspect map:
End of explanation
!d.rast aspect_ned10m_nn
Image(filename="map.png")
Explanation: Display the resampled map by adding "aspect_ned10m_nn" to Layer Manager
or in command line using:
End of explanation
Image(filename="map.png")
Explanation: To save the map, click in Map Display to on the button
Save display to graphic file" or alternatively,
use the following command:
End of explanation
!r.resamp.interp elev_ned_30m out=elev_ned10m_bil meth=bilinear
!r.resamp.interp elev_ned_30m out=elev_ned10m_bic meth=bicubic
!r.slope.aspect elevation=elev_ned10m_bil aspect=aspect_ned10m_bil
!r.slope.aspect elevation=elev_ned10m_bic aspect=aspect_ned10m_bic
!d.rast aspect_ned10m_bil
!d.rast aspect_ned10m_bic
Image(filename="map.png")
Explanation: Now, reinterpolate DEMs using bilinear and bicubic interpolation.
Check the interpolated elevation surfaces using aspect maps.
End of explanation
Image(filename="map.png")
Explanation: Save the displayed maps and in your report, compare the results with
the previously computed nearest neighbor result.
In Map Display click button Save display to graphic file,
or use the following in the command line:
End of explanation
!d.rast elev_ned_30m
!d.rast landuse96_28m
Image(filename="map.png")
Explanation: Why is the aspect of elevation raster map computed by the nearest neighbor method different from the one computed by bilinear interpolation?
Resampling to lower resolution
Resample to lower resolution (30m -> 100m).
First, display the original elevation and land use maps:
End of explanation
!g.region res=100 -p
!r.resamp.stats elev_ned_30m out=elev_new100m_avg method=average
!d.rast elev_new100m_avg
Image(filename="map.png")
Explanation: Then change the region resolution and resample
elevation (which is a continuous field)
and land use (which has discrete categories).
Explain selection of aggregation method. Can we use average also for landuse?
What does mode mean?
End of explanation
!d.erase
!r.resamp.stats landuse96_28m out=landuse96_100m method=mode
!d.rast landuse96_100m
Image(filename="map.png")
Explanation: Before the next computation, remove all map layers from the Layer Manager
because we don't need to see them anymore.
End of explanation
!v.to.points census_wake2000 type=centroid out=census_centr use=vertex
Explanation: Remove or switch off the land use, elevation and aspect maps.
Converting between vector data types
Convert census blocks polygons to points using their centroids
(useful for interpolating a population density trend surface):
End of explanation
!d.vect census_centr icon=basic/circle fill_color=green size=10
!d.vect census_wake2000 color=red fill_color=none
!d.legend.vect
Image(filename="map.png")
Explanation: Display census boundaries using GUI:
Add vector "census_wake2000"
Selection > Feature type > boundary
(switch off the other types).
Save the displayed map in Map Display click button
Save display to graphic file.
Alternatively, use the following commands to control display.
Note that in both command line and GUI you must either enter the full path
to the file you are saving the image in, or you must know the current working
directory.
End of explanation
!v.to.points input=elev_ned10m_cont10m output=elev_ned_contpts type=line use=vertex
Explanation: Convert contour lines to points (useful for computing DEM from contours):
End of explanation
!d.vect elev_ned_contpts co=brown icon=basic/point size=3
Image(filename="map.png")
Explanation: Display the "elev_ned_contpts" points vector and zoom-in to very small area
to see the actual points.
End of explanation
!g.region swwake_30m res=200 -p
Explanation: Convert from vector to raster
Convert vector data to raster for use in raster-based analysis.
First, adjust the computational region to resolution 200m:
End of explanation
!d.vect schools_wake
!v.info -c schools_wake
!v.to.rast schools_wake out=schools_cap_200m use=attr attrcol=CORECAPACI type=point
!d.rast schools_cap_200m
!d.vect streets_wake co=grey
!d.legend schools_cap_200m at=70,30,2,6
Image(filename="map.png")
Explanation: Then remove all layers from the Layer Manager.
Convert vector points "schools" to raster.
As value for raster use attribute column "CORECAPACI" for core capacity.
To add legend in GUI use
Add map elements > Show/hide legend
and select "schools_cap_200m".
End of explanation
!g.region res=30 -p
!v.to.rast streets_wake out=streets_speed_30m use=attr attrcol=SPEED type=line
Explanation: Now convert lines in "streets" vector to raster.
Set the resolution to 30m and use speed limit attribute.
End of explanation
!d.erase
!d.rast streets_speed_30m
!d.legend streets_speed_30m at=5,30,2,5 use=25,35,45,55,65
Image(filename="map.png")
Explanation: If you haven't done this already, add remove all other map layers
from Layer Manager and add "streets_speed_30m" raster layer.
Add legend for "streets_speed_30m" raster using GUI in Map Display:
Add legend > Set Options > Advanced > List of discrete cat numbers
and type in speed limits 25,35,45,55,65; move legend with mouse as needed.
Alternatively, use the following commands:
End of explanation
Image(filename="map.png")
Explanation: Save the displayed map.
In Map Display click button Save display to graphic file,
or use the following.
End of explanation
!d.erase
!g.region raster=streams_derived -p
!d.rast streams_derived
!r.thin streams_derived output=streams_derived_t
!r.to.vect streams_derived_t output=streams_derived_t type=line
Image(filename="map.png")
Explanation: Convert from raster to vector
Convert raster lines to vector lines.
First, set the region and remove map layers from Layer Manager.
Then do the conversion.
Explain why we are using r.thin module.
You may want to remove all previously used layers from the Layer Manager
before you start these new computations.
End of explanation
!d.vect streams_derived_t color=blue
!d.vect streams color=red
Image(filename="map.png")
Explanation: Visually compare the result with streams digitized from airphotos.
End of explanation
Image(filename="map.png")
Explanation: Save the displayed map (in Map Display click button Save display to graphic file).
End of explanation
!g.region raster=basin_50K -p
!d.erase
!d.rast basin_50K
!r.to.vect -sv basin_50K output=basin_50Kval type=area
!d.vect -c basin_50Kval
!d.vect streams color=blue
Image(filename="map.png")
Explanation: Convert raster areas representing basins to vector polygons.
Use raster value as category number (flag -v) and
display vector polygons filled with random colors.
In GUI: Add vector > Colors > Switch on Random colors.
You may want to remove all previously used layers from the Layer Manager
before you start these new computations.
End of explanation
Image(filename="map.png")
# end the GRASS session
os.remove(rcfile)
Explanation: Save the displayed map either using GUI or using the following in case
you are working in the command line.
End of explanation |
11,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing Linear Regression in TensorFlow
I gathered this data for current real estate listing prices in North Bergen from Zillow. Let's see if we can use it to develop a model for housing costs based on home size.
Step1: It seems a linear model could be appropriate in this case. How can we build it with TensorFlow?
Step2: And here's where all the magic will happen | Python Code:
%matplotlib inline
#Typical imports
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import pandas as pd
# plots on fleek
matplotlib.style.use('ggplot')
# Read the housing data from the csv file into a pandas dataframe
# the names keyword allows us to name the columns,
# while the dtype sets the data type.
df = pd.read_csv('data/nb home sales.csv', names=['Square Feet', 'Price'],
dtype=np.float32)
# Display the dataframe
df
# Visualize the data as a scatter plot
# with sq. ft. as the independent variable.
df.plot(x='Square Feet', y='Price', kind='scatter')
Explanation: Performing Linear Regression in TensorFlow
I gathered this data for current real estate listing prices in North Bergen from Zillow. Let's see if we can use it to develop a model for housing costs based on home size.
End of explanation
# First we declare our placeholders
x = tf.placeholder(tf.float32, [None, 1])
y_ = tf.placeholder(tf.float32, [None, 1])
# Then our variables
W = tf.Variable(tf.zeros([1,1]))
b = tf.Variable(tf.zeros([1]))
# And now we can make our linear model: y = Wx + b
y = tf.matmul(x, W) + b
# Finally we choose our cost function (SSE in this case)
cost = tf.reduce_sum(tf.square(y_-y))
Explanation: It seems a linear model could be appropriate in this case. How can we build it with TensorFlow?
End of explanation
# Call tf's gradient descent function with a learning rate and instructions to minimize the cost
learn_rate = .0000000001
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
# Prepare our data to be read into the training session. The data needs to match the
# shape we specified earlier -- in this case (n, 1) where n is the number of data points.
xdata = np.asarray([[i] for i in df['Square Feet']])
y_data = np.asarray([[i] for i in df['Price']])
# Create a tensorflow session, initialize the variables, and run gradient descent
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(10000):
# This is the actual training step - feed_dict specifies the data to be read into
# the placeholders x and y_ respectively.
sess.run(train, feed_dict={x:xdata, y_:y_data})
# Convert our variables from tensors to scalars so we can use them outside tf
price_sqft = np.asscalar(sess.run(W))
cost_0 = np.asscalar(sess.run(b))
print("Model: y = %sx + %s" % (round(price_sqft,2), round(cost_0,2)))
# Create the empty plot
fig, axes = plt.subplots()
# Draw the scatter plot on the axes we just created
df.plot(x='Square Feet', y='Price', kind='scatter', ax=axes)
# Create a range of x values to plug into our model
sqft = np.arange(500, 3000, 1)
# Plot the model
plt.plot(sqft, price_sqft*sqft + cost_0)
plt.show()
Explanation: And here's where all the magic will happen:
End of explanation |
11,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Linear Regression
Learning Objectives
Analyze a Pandas Dataframe
Create Seaborn plots for Exporatory Data Analysis
Train a Linear Regression Model using Scikit-Learn
Introduction
This lab is in introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algortithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Import Libraries
Step1: Load the Dataset
We will use the USA housing prices dataset found on Kaggle. The data contains the following columns
Step2: Let's check for any null values.
Step3: Let's take a peek at the first and last five rows of the data for all columns.
Lab Task 1
Step4: Exploratory Data Analysis (EDA)
Let's create some simple plots to check out the data!
Step5: Lab Task 2
Step6: Training a Linear Regression Model
Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, we try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (x). Simply stated, If you need to predict a number, then use regression.
Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use.
X and y arrays
Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
Step7: Train - Test - Split
Now let's split the data into a training set and a testing set. We will train out model on the training set and then use the test set to evaluate the model. Note that we are using 40% of the data for testing.
What is Random State?
If an integer for random state is not specified in the code, then every time the code is executed, a new random value is generated and the train and test datasets will have different values each time. However, if a fixed value is assigned -- like random_state = 0 or 1 or 101 or any other integer, then no matter how many times you execute your code the result would be the same, e.g. the same values will be in the train and test datasets. Thus, the random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.
Step8: Creating and Training the Model
Step9: Lab Task 3
Step10: Model Evaluation
Let's evaluate the model by checking out it's coefficients and how we can interpret them.
Step11: Interpreting the coefficients
Step12: Residual Histogram
Step13: Regression Evaluation Metrics
Here are three common evaluation metrics for regression problems | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib.
%matplotlib inline
Explanation: Introduction to Linear Regression
Learning Objectives
Analyze a Pandas Dataframe
Create Seaborn plots for Exporatory Data Analysis
Train a Linear Regression Model using Scikit-Learn
Introduction
This lab is in introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algortithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Import Libraries
End of explanation
df_USAhousing = pd.read_csv("../USA_Housing.csv")
# Show the first five row.
df_USAhousing.head()
Explanation: Load the Dataset
We will use the USA housing prices dataset found on Kaggle. The data contains the following columns:
'Avg. Area Income': Avg. Income of residents of the city house is located in.
'Avg. Area House Age': Avg Age of Houses in same city
'Avg. Area Number of Rooms': Avg Number of Rooms for Houses in same city
'Avg. Area Number of Bedrooms': Avg Number of Bedrooms for Houses in same city
'Area Population': Population of city house is located in
'Price': Price that the house sold at
'Address': Address for the house
Next, we read the dataset into a Pandas dataframe.
End of explanation
df_USAhousing.isnull().sum()
df_USAhousing.describe()
df_USAhousing.info()
Explanation: Let's check for any null values.
End of explanation
# TODO 1 -- your code goes here
Explanation: Let's take a peek at the first and last five rows of the data for all columns.
Lab Task 1: Print the first and last five rows of the data for all columns.
End of explanation
sns.pairplot(df_USAhousing)
sns.distplot(df_USAhousing["Price"])
Explanation: Exploratory Data Analysis (EDA)
Let's create some simple plots to check out the data!
End of explanation
# TODO 2 -- your code goes here
Explanation: Lab Task 2: Create the plots using heatmap():
End of explanation
X = df_USAhousing[
[
"Avg. Area Income",
"Avg. Area House Age",
"Avg. Area Number of Rooms",
"Avg. Area Number of Bedrooms",
"Area Population",
]
]
y = df_USAhousing["Price"]
Explanation: Training a Linear Regression Model
Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, we try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (x). Simply stated, If you need to predict a number, then use regression.
Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use.
X and y arrays
Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=101
)
Explanation: Train - Test - Split
Now let's split the data into a training set and a testing set. We will train out model on the training set and then use the test set to evaluate the model. Note that we are using 40% of the data for testing.
What is Random State?
If an integer for random state is not specified in the code, then every time the code is executed, a new random value is generated and the train and test datasets will have different values each time. However, if a fixed value is assigned -- like random_state = 0 or 1 or 101 or any other integer, then no matter how many times you execute your code the result would be the same, e.g. the same values will be in the train and test datasets. Thus, the random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.
End of explanation
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
Explanation: Creating and Training the Model
End of explanation
# TODO 3 -- your code goes here
Explanation: Lab Task 3: Training the Model using fit():
End of explanation
# print the intercept
print(lm.intercept_)
coeff_df = pd.DataFrame(lm.coef_, X.columns, columns=["Coefficient"])
coeff_df
Explanation: Model Evaluation
Let's evaluate the model by checking out it's coefficients and how we can interpret them.
End of explanation
predictions = lm.predict(X_test)
plt.scatter(y_test, predictions)
Explanation: Interpreting the coefficients:
Holding all other features fixed, a 1 unit increase in Avg. Area Income is associated with an increase of \$21.52 .
Holding all other features fixed, a 1 unit increase in Avg. Area House Age is associated with an increase of \$164883.28 .
Holding all other features fixed, a 1 unit increase in Avg. Area Number of Rooms is associated with an increase of \$122368.67 .
Holding all other features fixed, a 1 unit increase in Avg. Area Number of Bedrooms is associated with an increase of \$2233.80 .
Holding all other features fixed, a 1 unit increase in Area Population is associated with an increase of \$15.15 .
Predictions from our Model
Let's grab predictions off our test set and see how well it did!
End of explanation
sns.distplot((y_test - predictions), bins=50);
Explanation: Residual Histogram
End of explanation
from sklearn import metrics
print("MAE:", metrics.mean_absolute_error(y_test, predictions))
print("MSE:", metrics.mean_squared_error(y_test, predictions))
print("RMSE:", np.sqrt(metrics.mean_squared_error(y_test, predictions)))
Explanation: Regression Evaluation Metrics
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$
Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
All of these are loss functions, because we want to minimize them.
End of explanation |
11,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pandasのデータフレーム
Step1: 2次元の array を DataFrame に変換する例です。
Step2: columns オプションで、各列の column 名を指定します。
Step3: Series オブジェクトから DataFrame を作成する例です。
Step4: 各列の column 名と対応する Series オブジェクトのディクショナリを与えて、DataFrame を生成します。
Step5: Series オブジェクトの代わりに、リストから DataFrame を作成する例です。
Step6: 空の DataFrame に行を追加する例です。
はじめに、column 名だけを指定した DataFrame を作成します。
Step7: 対応するデータを Series オブジェクトとして用意します。この際、index オプションで column 名に対応する名前を付けておきます。
Step8: 用意した DataFrame の append メソッドで、Series オブジェクトを追加します。
Step9: 2個のサイコロを 1000 回振った結果をシュミレーションする例です。
Step10: DataFrameのdescribeメソッドで、記法的な統計値を確認することができます。
Step11: DataFrame の append メソッドで、2つの DataFrame を結合する例です。
Step12: ignore_index=True を指定すると、index は通し番号になるように再割当てが行われます。
Step13: DataFrame に列を追加する例です。
配列の index 記法で、まだ存在しない column 名を指定すると、新しい列が用意されます。
Step14: pd.concat 関数で複数の Series を列として結合できます。(axis=1 は列方向での結合を意味します。)
Step15: pd.concat 関数で既存の DataFrame に Series を追加することもできます。 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import Series, DataFrame
Explanation: pandasのデータフレーム
End of explanation
from numpy.random import randint
dices = randint(1,7,(5,2))
dices
Explanation: 2次元の array を DataFrame に変換する例です。
End of explanation
diceroll = DataFrame(dices, columns=['dice1','dice2'])
diceroll
Explanation: columns オプションで、各列の column 名を指定します。
End of explanation
city = Series(['Tokyo','Osaka','Nagoya','Okinawa'], name='City')
city
temp = Series([25.0,28.2,27.3,30.9], name='Temperature')
temp
humid = Series([44,42,np.nan,62], name='Humidity')
humid
Explanation: Series オブジェクトから DataFrame を作成する例です。
End of explanation
cities = DataFrame({'City':city, 'Temperature':temp, 'Humidity':humid})
cities
Explanation: 各列の column 名と対応する Series オブジェクトのディクショナリを与えて、DataFrame を生成します。
End of explanation
data = {'City': ['Tokyo','Osaka','Nagoya','Okinawa'],
'Temperature': [25.0,28.2,27.3,30.9],
'Humidity': [44,42,np.nan,62]}
cities = DataFrame(data)
cities
Explanation: Series オブジェクトの代わりに、リストから DataFrame を作成する例です。
End of explanation
diceroll = DataFrame(columns=['dice1','dice2'])
diceroll
Explanation: 空の DataFrame に行を追加する例です。
はじめに、column 名だけを指定した DataFrame を作成します。
End of explanation
oneroll = Series(randint(1,7,2), index=['dice1','dice2'])
oneroll
Explanation: 対応するデータを Series オブジェクトとして用意します。この際、index オプションで column 名に対応する名前を付けておきます。
End of explanation
diceroll = diceroll.append(oneroll, ignore_index=True)
diceroll
Explanation: 用意した DataFrame の append メソッドで、Series オブジェクトを追加します。
End of explanation
diceroll = DataFrame(columns=['dice1','dice2'])
for i in range(1000):
diceroll = diceroll.append(
Series(randint(1,7,2), index=['dice1','dice2']),
ignore_index = True)
diceroll[:5]
Explanation: 2個のサイコロを 1000 回振った結果をシュミレーションする例です。
End of explanation
diceroll.describe()
Explanation: DataFrameのdescribeメソッドで、記法的な統計値を確認することができます。
End of explanation
diceroll1 = DataFrame(randint(1,7,(5,2)),
columns=['dice1','dice2'])
diceroll1
diceroll2 = DataFrame(randint(1,7,(3,2)),
columns=['dice1','dice2'])
diceroll2
diceroll3 = diceroll1.append(diceroll2)
diceroll3
Explanation: DataFrame の append メソッドで、2つの DataFrame を結合する例です。
End of explanation
diceroll4 = diceroll1.append(diceroll2, ignore_index=True)
diceroll4
Explanation: ignore_index=True を指定すると、index は通し番号になるように再割当てが行われます。
End of explanation
diceroll = DataFrame()
diceroll['dice1'] = randint(1,7,5)
diceroll
diceroll['dice2'] = randint(1,7,5)
diceroll
Explanation: DataFrame に列を追加する例です。
配列の index 記法で、まだ存在しない column 名を指定すると、新しい列が用意されます。
End of explanation
dice1 = Series(randint(1,7,5),name='dice1')
dice2 = Series(randint(1,7,5),name='dice2')
diceroll = pd.concat([dice1, dice2], axis=1)
diceroll
Explanation: pd.concat 関数で複数の Series を列として結合できます。(axis=1 は列方向での結合を意味します。)
End of explanation
dice3 = Series(randint(1,7,5),name='dice3')
diceroll = pd.concat([diceroll, dice3], axis=1)
diceroll
Explanation: pd.concat 関数で既存の DataFrame に Series を追加することもできます。
End of explanation |
11,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: NumPy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of NumPy
Step2: Built-in Methods
There are lots of built-in ways to generate arrays.
arange
Return evenly spaced values within a given interval. [reference]
Step3: zeros and ones
Generate arrays of zeros or ones. [reference]
Step4: linspace
Return evenly spaced numbers over a specified interval. [reference]
Step5: <font color=green>Note that .linspace() includes the stop value. To obtain an array of common fractions, increase the number of items
Step6: eye
Creates an identity matrix [reference]
Step7: Random
Numpy also has lots of ways to create random number arrays
Step8: randn
Returns a sample (or samples) from the "standard normal" distribution [σ = 1]. Unlike rand which is uniform, values closer to zero are more likely to appear. [reference]
Step9: randint
Returns random integers from low (inclusive) to high (exclusive). [reference]
Step10: seed
Can be used to set the random state, so that the same "random" results can be reproduced. [reference]
Step11: Array Attributes and Methods
Let's discuss some useful attributes and methods for an array
Step12: Reshape
Returns an array containing the same data with a new shape. [reference]
Step13: max, min, argmax, argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
Step14: Shape
Shape is an attribute that arrays have (not a method)
Step15: dtype
You can also grab the data type of the object in the array | Python Code:
import numpy as np
Explanation: <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
NumPy
NumPy is a powerful linear algebra library for Python. What makes it so important is that almost all of the libraries in the <a href='https://pydata.org/'>PyData</a> ecosystem (pandas, scipy, scikit-learn, etc.) rely on NumPy as one of their main building blocks. Plus we will use it to generate data for our analysis examples later on!
NumPy is also incredibly fast, as it has bindings to C libraries. For more info on why you would want to use arrays instead of lists, check out this great StackOverflow post.
We will only learn the basics of NumPy. To get started we need to install it!
Installation Instructions
NumPy is already included in your environment! You are good to go if you are using the course environment!
For those not using the provided environment:
It is highly recommended you install Python using the Anaconda distribution to make sure all underlying dependencies (such as Linear Algebra libraries) all sync up with the use of a conda install. If you have Anaconda, install NumPy by going to your terminal or command prompt and typing:
conda install numpy
If you do not have Anaconda and can not install it, please refer to Numpy's official documentation on various installation instructions.
Using NumPy
Once you've installed NumPy you can import it as a library:
End of explanation
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
Explanation: NumPy has many built-in functions and capabilities. We won't cover them all but instead we will focus on some of the most important aspects of NumPy: vectors, arrays, matrices and number generation. Let's start by discussing arrays.
NumPy Arrays
NumPy arrays are the main way we will use NumPy throughout the course. NumPy arrays essentially come in two flavors: vectors and matrices. Vectors are strictly 1-dimensional (1D) arrays and matrices are 2D (but you should note a matrix can still have only one row or one column).
Let's begin our introduction by exploring how to create NumPy arrays.
Creating NumPy Arrays
From a Python List
We can create an array by directly converting a list or list of lists:
End of explanation
np.arange(0,10)
np.arange(0,11,2)
Explanation: Built-in Methods
There are lots of built-in ways to generate arrays.
arange
Return evenly spaced values within a given interval. [reference]
End of explanation
np.zeros(3)
np.zeros((5,5))
np.ones(3)
np.ones((3,3))
Explanation: zeros and ones
Generate arrays of zeros or ones. [reference]
End of explanation
np.linspace(0,10,3)
np.linspace(0,5,20)
Explanation: linspace
Return evenly spaced numbers over a specified interval. [reference]
End of explanation
np.linspace(0,5,21)
Explanation: <font color=green>Note that .linspace() includes the stop value. To obtain an array of common fractions, increase the number of items:</font>
End of explanation
np.eye(4)
Explanation: eye
Creates an identity matrix [reference]
End of explanation
np.random.rand(2)
np.random.rand(5,5)
Explanation: Random
Numpy also has lots of ways to create random number arrays:
rand
Creates an array of the given shape and populates it with random samples from a uniform distribution over [0, 1). [reference]
End of explanation
np.random.randn(2)
np.random.randn(5,5)
Explanation: randn
Returns a sample (or samples) from the "standard normal" distribution [σ = 1]. Unlike rand which is uniform, values closer to zero are more likely to appear. [reference]
End of explanation
np.random.randint(1,100)
np.random.randint(1,100,10)
Explanation: randint
Returns random integers from low (inclusive) to high (exclusive). [reference]
End of explanation
np.random.seed(42)
np.random.rand(4)
np.random.seed(42)
np.random.rand(4)
Explanation: seed
Can be used to set the random state, so that the same "random" results can be reproduced. [reference]
End of explanation
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
arr
ranarr
Explanation: Array Attributes and Methods
Let's discuss some useful attributes and methods for an array:
End of explanation
arr.reshape(5,5)
Explanation: Reshape
Returns an array containing the same data with a new shape. [reference]
End of explanation
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
Explanation: max, min, argmax, argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
End of explanation
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(25,1)
arr.reshape(25,1).shape
Explanation: Shape
Shape is an attribute that arrays have (not a method): [reference]
End of explanation
arr.dtype
arr2 = np.array([1.2, 3.4, 5.6])
arr2.dtype
Explanation: dtype
You can also grab the data type of the object in the array: [reference]
End of explanation |
11,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
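# Download the MNIST data if it isn't cached locally; images come back flattened to 784 floats scaled to [0, 1]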
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None,real_dim))
inputs_z = tf.placeholder(tf.float32, (None, z_dim))
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope("generator", reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
out = tf.nn.tanh(logits, "output")
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope("discriminator", reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
logits = tf.layers.dense(h1, 1)
out = tf.nn.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_logits_real)*(1-smooth),
logits=d_logits_real))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_logits_fake),
logits=d_logits_fake))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_logits_fake)*(1-smooth),
logits=d_logits_fake))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith("generator")]
d_vars = [var for var in t_vars if var.name.startswith("discriminator")]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
#losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
# Test: Show the shape of samples
np.array(samples).shape
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
11,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input Data
Step1: Test Frame
Nodes
Table nodes (file nodes.csv) provides the $x$-$y$ coordinates of each node. Other columns, such
as the $z$- coordinate are optional, and ignored if given.
Step2: Supports
Table supports (file supports.csv) specifies the support fixity, by indicating the constrained
direction for each node. There can be 1, 2 or 3 constraints, selected from the set 'FX', 'FY' or 'MZ',
in any order for each constrained node. Directions not mentioned are 'free' or unconstrained.
Step3: Members
Table members (file members.csv) specifies the member incidences. For each member, specify
the id of the nodes at the 'j-' and 'k-' ends. These ends are used to interpret the signs of various values.
Step4: Releases
Table releases (file releases.csv) is optional and specifies internal force releases in some members.
Currently only moment releases at the 'j-' end ('MZJ') and 'k-' end ('MZK') are supported. These specify
that the internal bending moment at those locations are zero. You can only specify one release per line,
but you can have more than one line for a member.
Step5: Properties
Table properties (file properties.csv) specifies the member properties for each member.
If the 'SST' library is available, you may specify the size of the member by using the
designation of a shape in the CISC Structural Section Tables. If either IX or A is missing,
it is retrieved from the sst library using the provided size. If the values on any line are missing, they
are copied from the line above.
Step6: Node Loads
Table node_loads (file node_loads.csv) specifies the forces applied directly to the nodes.
DIRN (direction) may be one of 'FX,FY,MZ'. 'LOAD' is an identifier of the kind of load
being applied and F is the value of the load, normally given as a service or specified load.
A later input table will specify load combinations and factors.
Step7: Support Displacements
Table support_displacements (file support_displacements.csv) is optional and specifies imposed displacements
of the supports. DIRN (direction) is one of 'DX, DY, RZ'. LOAD is as for Node Loads, above.
Of course, in this example the frame is statically determinate and so the support displacement
will have no effect on the reactions or member end forces.
Step8: Member Loads
Table member_loads (file member_loads.csv) specifies loads acting on members. Current
types are PL (concentrated transverse, ie point load), CM (concentrated moment), UDL (uniformly
distributed load over entire span), LVL (linearly varying load over a portion of the span) and PLA (point load applied parallel to member coincident with centroidal axis). Values W1 and W2 are loads or
load intensities and A, B, and C are dimensions appropriate to the kind of load.
Step9: Load Combinations
Table load_combinations (file load_combinations.csv) is optional and specifies
factored combinations of loads. By default, there is always a load combination
called all that includes all loads with a factor of 1.0. A frame solution (see below)
indicates which CASE to use.
Step10: Load Iterators
Step11: Number the DOFs
Step12: Input Everything
Step13: Input From Files | Python Code:
from salib import extend, NBImporter
from Tables import Table, DataSource
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from MemberLoads import makeMemberLoad
from collections import OrderedDict, defaultdict
import numpy as np
from Frame2D_Base import Frame2D
@extend
class Frame2D:
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False,optional=False):
columns = getattr(self,'COLUMNS_'+tablename)
t = DataSource.read_table(tablename,columns=columns,optional=optional)
return t
Explanation: Input Data
End of explanation
%%Table nodes
NODEID,X,Y,Z
A,0.,0.,5000.
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@extend
class Frame2D:
COLUMNS_nodes = ['NODEID','X','Y']
def input_nodes(self):
node_table = self.get_table('nodes')
for ix,r in node_table.iterrows():
if r.NODEID in self.nodes:
raise Exception('Multiply defined node: {}'.format(r.NODEID))
n = Node(r.NODEID,r.X,r.Y)
self.nodes[n.id] = n
self.rawdata.nodes = node_table
def get_node(self,id):
try:
return self.nodes[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f = Frame2D()
##test:
f.input_nodes()
##test:
f.nodes
##test:
f.get_node('C')
Explanation: Test Frame
Nodes
Table nodes (file nodes.csv) provides the $x$-$y$ coordinates of each node. Other columns, such
as the $z$- coordinate are optional, and ignored if given.
End of explanation
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
def isnan(x):
if x is None:
return True
try:
return np.isnan(x)
except TypeError:
return False
@extend
class Frame2D:
COLUMNS_supports = ['NODEID','C0','C1','C2']
def input_supports(self):
table = self.get_table('supports')
for ix,row in table.iterrows():
node = self.get_node(row.NODEID)
for c in [row.C0,row.C1,row.C2]:
if not isnan(c):
node.add_constraint(c)
self.rawdata.supports = table
##test:
f.input_supports()
##test:
vars(f.get_node('D'))
Explanation: Supports
Table supports (file supports.csv) specifies the support fixity, by indicating the constrained
direction for each node. There can be 1, 2 or 3 constraints, selected from the set 'FX', 'FY' or 'MZ',
in any order for each constrained node. Directions not mentioned are 'free' or unconstrained.
End of explanation
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
CD,C,D
@extend
class Frame2D:
COLUMNS_members = ['MEMBERID','NODEJ','NODEK']
def input_members(self):
table = self.get_table('members')
for ix,m in table.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.input_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
Explanation: Members
Table members (file members.csv) specifies the member incidences. For each member, specify
the id of the nodes at the 'j-' and 'k-' ends. These ends are used to interpret the signs of various values.
End of explanation
%%Table releases
MEMBERID,RELEASE
AB,MZK
CD,MZJ
@extend
class Frame2D:
COLUMNS_releases = ['MEMBERID','RELEASE']
def input_releases(self):
table = self.get_table('releases',optional=True)
for ix,r in table.iterrows():
memb = self.get_member(r.MEMBERID)
memb.add_release(r.RELEASE)
self.rawdata.releases = table
##test:
f.input_releases()
##test:
vars(f.get_member('AB'))
Explanation: Releases
Table releases (file releases.csv) is optional and specifies internal force releases in some members.
Currently only moment releases at the 'j-' end ('MZJ') and 'k-' end ('MZK') are supported. These specify
that the internal bending moment at those locations are zero. You can only specify one release per line,
but you can have more than one line for a member.
End of explanation
try:
from sst import SST
__SST = SST()
get_section = __SST.section
except ImportError:
def get_section(dsg,fields):
raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
##return [1.] * len(fields.split(',')) # in case you want to do it that way
%%Table properties
MEMBERID,SIZE,IX,A
BC,W460x106,,
AB,W310x97,,
CD,,
@extend
class Frame2D:
COLUMNS_properties = ['MEMBERID','SIZE','IX','A']
def input_properties(self):
table = self.get_table('properties')
table = self.fill_properties(table)
for ix,row in table.iterrows():
memb = self.get_member(row.MEMBERID)
memb.size = row.SIZE
memb.Ix = row.IX
memb.A = row.A
self.rawdata.properties = table
def fill_properties(self,table):
prev = None
for ix,row in table.iterrows():
nf = 0
if type(row.SIZE) in [type(''),type(u'')]:
if isnan(row.IX) or isnan(row.A):
Ix,A = get_section(row.SIZE,'Ix,A')
if isnan(row.IX):
nf += 1
table.loc[ix,'IX'] = Ix
if isnan(row.A):
nf += 1
table.loc[ix,'A'] = A
elif isnan(row.SIZE):
table.loc[ix,'SIZE'] = '' if nf == 0 else prev
prev = table.loc[ix,'SIZE']
table = table.fillna(method='ffill')
return table
##test:
f.input_properties()
##test:
vars(f.get_member('CD'))
Explanation: Properties
Table properties (file properties.csv) specifies the member properties for each member.
If the 'SST' library is available, you may specify the size of the member by using the
designation of a shape in the CISC Structural Section Tables. If either IX or A is missing,
it is retrieved from the sst library using the provided size. If the values on any line are missing, they
are copied from the line above.
End of explanation
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@extend
class Frame2D:
COLUMNS_node_loads = ['LOAD','NODEID','DIRN','F']
def input_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
if row.DIRN in n.constraints:
raise ValueError("Constrained node {} {} must not have load applied."
.format(row.NODEID,row.DIRN))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.input_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
Explanation: Node Loads
Table node_loads (file node_loads.csv) specifies the forces applied directly to the nodes.
DIRN (direction) may be one of 'FX,FY,MZ'. 'LOAD' is an identifier of the kind of load
being applied and F is the value of the load, normally given as a service or specified load.
A later input table will specify load combinations and factors.
End of explanation
%%Table support_displacements
LOAD,NODEID,DIRN,DELTA
Other,A,DY,-10
@extend
class Frame2D:
COLUMNS_support_displacements = ['LOAD','NODEID','DIRN','DELTA']
def input_support_displacements(self):
table = self.get_table('support_displacements',optional=True)
forns = {'DX':'FX','DY':'FY','RZ':'MZ'}
for ix,row in table.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in forns:
raise ValueError("Invalid support displacements direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(forns.keys())))
fd = forns[row.DIRN]
if fd not in n.constraints:
raise ValueError("Support displacement, load: '{}' node: '{}' dirn: '{}' must be for a constrained node."
.format(row.LOAD,row.NODEID,row.DIRN))
l = makeNodeLoad({fd:row.DELTA})
self.nodedeltas.append(row.LOAD,n,l)
self.rawdata.support_displacements = table
##test:
f.input_support_displacements()
##test:
list(f.nodedeltas)[0]
Explanation: Support Displacements
Table support_displacements (file support_displacements.csv) is optional and specifies imposed displacements
of the supports. DIRN (direction) is one of 'DX, DY, RZ'. LOAD is as for Node Loads, above.
Of course, in this example the frame is statically determinate and so the support displacement
will have no effect on the reactions or member end forces.
End of explanation
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000
@extend
class Frame2D:
COLUMNS_member_loads = ['LOAD','MEMBERID','TYPE','W1','W2','A','B','C']
def input_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.input_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Member Loads
Table member_loads (file member_loads.csv) specifies loads acting on members. Current
types are PL (concentrated transverse, ie point load), CM (concentrated moment), UDL (uniformly
distributed load over entire span), LVL (linearly varying load over a portion of the span) and PLA (point load applied parallel to member coincident with centroidal axis). Values W1 and W2 are loads or
load intensities and A, B, and C are dimensions appropriate to the kind of load.
End of explanation
%%Table load_combinations
CASE,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@extend
class Frame2D:
COLUMNS_load_combinations = ['CASE','LOAD','FACTOR']
def input_load_combinations(self):
table = self.get_table('load_combinations',optional=True)
if len(table) > 0:
for ix,row in table.iterrows():
self.loadcombinations.append(row.CASE,row.LOAD,row.FACTOR)
if 'all' not in self.loadcombinations:
all = self.nodeloads.names.union(self.memberloads.names)
all = self.nodedeltas.names.union(all)
for l in all:
self.loadcombinations.append('all',l,1.0)
self.rawdata.load_combinations = table
##test:
f.input_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Load Combinations
Table load_combinations (file load_combinations.csv) is optional and specifies
factored combinations of loads. By default, there is always a load combination
called all that includes all loads with a factor of 1.0. A frame solution (see below)
indicates which CASE to use.
End of explanation
@extend
class Frame2D:
def iter_nodeloads(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.nodeloads):
yield o,l,f
def iter_nodedeltas(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.nodedeltas):
yield o,l,f
def iter_memberloads(self,casename):
for o,l,f in self.loadcombinations.iterloads(casename,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact)
Explanation: Load Iterators
End of explanation
@extend
class Frame2D:
def number_dofs(self):
self.ndof = (3*len(self.nodes))
self.ncons = sum([len(node.constraints) for node in self.nodes.values()])
self.nfree = self.ndof - self.ncons
ifree = 0
icons = self.nfree
self.dofdesc = [None] * self.ndof
for node in self.nodes.values():
for dirn,ix in node.DIRECTIONS.items():
if dirn in node.constraints:
n = icons
icons += 1
else:
n = ifree
ifree += 1
node.dofnums[ix] = n
self.dofdesc[n] = (node,dirn)
##test:
f.number_dofs()
f.ndof, f.ncons, f.nfree
##test:
f.dofdesc
##test:
f.get_node('D').dofnums
Explanation: Number the DOFs
End of explanation
@extend
class Frame2D:
def input_all(self):
self.input_nodes()
self.input_supports()
self.input_members()
self.input_releases()
self.input_properties()
self.input_node_loads()
self.input_support_displacements()
self.input_member_loads()
self.input_load_combinations()
self.input_finish()
def input_finish(self):
self.number_dofs()
##test:
f.reset()
f.input_all()
Explanation: Input Everything
End of explanation
##test:
f.reset()
DataSource.set_source('frame-1')
f.input_all()
##test:
vars(f.rawdata)
##test:
f.rawdata.nodes
##test:
f.members
##test:
DataSource.DATASOURCE.celldata
##test:
DataSource.DATASOURCE.tables
Explanation: Input From Files
End of explanation |
11,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: tf.keras ではKerasと互換性のあるコードを実行できますが、注意点もあります:
最新リリースのTensorFlowに同梱されている tf.keras のバージョンと、pipインストールした最新の keras のバージョンが同一とは限りません。バージョンは tf.keras.__version__ の出力をご確認ください。
モデルの重みを保存する場合、
tf.keras のデフォルトの保存形式は チェックポイント形式です。
HDF5形式にて保存する場合は、 save_format='h5' オプションを指定してください。
単純なモデルの構築
シーケンシャル モデル
Kerasでは、<b>層</b>を組み合わせて<b>モデル</b>を構築します。
モデルは通常、複数の層から成るグラフ構造をしています。
最も一般的なモデルは、単純に層を積み重ねる類の tf.keras.Sequential モデルです。
単純な全結合ネットワーク(いわゆる マルチ レイヤー パーセプトロン)を構築してみましょう:
Step3: 層の設定
tf.keras.layers はさまざまな層を提供していますが、共通のコンストラクタ引数があります:
activation: 層の活性化関数を設定します。組み込み関数、もしくは呼び出し可能オブジェクトの名前で指定します。デフォルト値は、活性化関数なし。
kernel_initializer ・ bias_initializer: 層の重み(カーネルとバイアス)の初期化方式。名前、もしくは呼び出し可能オブジェクトで指定します。デフォルト値は、 "Glorot uniform" 。
kernel_regularizer ・ bias_regularizer:層の重み(カーネルとバイアス)に適用する、L1やL2等の正則化方式。デフォルト値は、正則化なし。
コンストラクタ引数を使って tf.keras.layers.Dense 層をインスタンス化する例を以下に示します:
Step4: 学習と評価
学習の準備
モデルを構築したあとは、compile メソッドを呼んで学習方法を構成します:
Step5: tf.keras.Model.compile には3つの重要な引数があります:
optimizer
Step6: NumPy データの入力
小規模なデータセットであれば、モデルを学習・評価する際にインメモリの NumPy配列を使いましょう。
モデルは fit メソッドを使って学習データに適合させます。
Step7: tf.keras.Model.fit は3つの重要な引数があります:
epochs
Step8: tf.data データセットの入力
大規模なデータセット、もしくは複数デバイスを用いた学習を行う際は Datasets API を使いましょう。 fitメソッドにtf.data.Dataset インスタンスを渡します:
Step9: fit メソッドの引数 steps_per_epoch には、1エポックあたりの学習ステップ数を指定します。
Dataset がバッチを生成するため batch_sizeの指定は不要です。
Dataset は評価データにも使えます:
Step10: 評価と推論
tf.keras.Model.evaluate と tf.keras.Model.predict メソッドは、NumPyデータとtf.data.Datasetに使えます。
推論モードでデータの損失と評価指標を評価する例を示します:
Step11: 推論 結果を最終層のNumPy配列として出力する例を示します
Step12: 高度なモデルの構築
Functional API
tf.keras.Sequential モデルは層を積み重ねる単純なつくりであり、あらゆるモデルに対応しているわけではありません。
以下に挙げる複雑な構成のモデルを構築するには
Keras functional API
を使いましょう:
入力ヘッドが複数あるモデル
出力ヘッドが複数あるモデル
共有層(おなじ層が複数回呼び出される)を含むモデル
(残差結合のように)データの流れが分岐するモデル
Functional API を用いたモデル構築の流れ:
層のインスタンスは呼び出し可能で、テンソルを返します。
入力テンソルと出力テンソルを使ってtf.keras.Modelインスタンスを定義します。
モデルはSequentialモデルと同様の方法で学習します。
Functional API を使って全結合ネットワークを構築する例を示します:
Step13: inputsとoutputsを引数にモデルをインスタンス化します。
Step14: モデルの派生
tf.keras.Model を継承し順伝播を定義することでカスタムモデルを構築できます。
__init__ メソッドにクラス インスタンスの属性として層をつくります。
call メソッドに順伝播を定義します。
順伝播を命令型で記載できるため、モデルの派生は
Eagerモード でより威力を発揮します。
キーポイント:目的にあったAPIを選択しましょう。派生モデルは柔軟性を与えてくれますが、その代償にモデルはより複雑になりエラーを起こしやすくなります。目的がFunctional APIで賄えるのであれば、そちらを使いましょう。
tf.keras.Modelを継承して順伝播をカスタマイズした例を以下に示します:
Step15: 今定義した派生モデルをインスンス化します。
Step16: 層のカスタマイズ
tf.keras.layers.Layerを継承して層をカスタマイズするには、以下のメソッドを実装します:
build: 層の重みを定義します。add_weightメソッドで重みを追加します。
call: 順伝播を定義します。
compute_output_shape
Step17: カスタマイズした層を使ってモデルを構築します:
Step18: コールバック
コールバックは、学習中のモデルの挙動をカスタマイズするためにモデルに渡されるオブジェクトです。
コールバック関数は自作する、もしくは以下に示すtf.keras.callbacksが提供する組み込み関数を利用できます:
tf.keras.callbacks.ModelCheckpoint:モデルのチェックポイントを一定間隔で保存します。
tf.keras.callbacks.LearningRateScheduler:学習率を動的に変更します。
tf.keras.callbacks.EarlyStopping:評価パフォーマンスが向上しなくなったら学習を中断させます。
tf.keras.callbacks.TensorBoard: モデルの挙動を
TensorBoardで監視します。
tf.keras.callbacks.Callbackを使用するには、モデルの fit メソッドにコールバック関数を渡します:
Step19: <a id='weights_only'></a>
保存と復元
重みのみ
tf.keras.Model.save_weightsを使ってモデルの重みの保存やロードを行います。
Step20: デフォルトでは、モデルの重みは
TensorFlow チェックポイント 形式で保存されます。
重みはKerasのHDF5形式でも保存できます(マルチバックエンド実装のKerasではHDF5形式がデフォルト):
Step21: 構成のみ
モデルの構成も保存可能です。
モデル構造を重み抜きでシリアライズします。
元のモデルのコードがなくとも、保存された構成で再構築できます。
Kerasがサポートしているシリアライズ形式は、JSONとYAMLです。
Step22: JSONから(新たに初期化して)モデルを再構築します:
Step23: YAML形式でモデルを保存するには、
TensorFlowをインポートする前に あらかじめpyyamlをインストールしておく必要があります:
Step24: YAMLからモデルを再構築します:
Step25: 注意:callメソッド内ににPythonコードでモデル構造を定義するため、派生モデルはシリアライズできません。
モデル全体
モデルの重み、構成からオプティマイザ設定までモデル全体をファイルに保存できます。
そうすることで、元のコードなしに、チェックポイントで保存したときと全く同じ状態から学習を再開できます。
Step26: Eagerモード
Eagerモード は、オペレーションを即時に評価する命令型のプログラミング環境です。
Kerasでは必要ありませんが、tf.kerasでサポートされておりプログラムを検査しデバッグするのに便利です。
すべてのtf.kerasモデル構築用APIは、Eagerモード互換性があります。
Sequential や Functional APIも使用できますが、
Eagerモードは特に派生モデル の構築や
層のカスタマイズに有益です。
(既存の層の組み合わせでモデルを作成するAPIの代わりに)
順伝播をコードで実装する必要があります。
詳しくは Eagerモード ガイド
(カスタマイズした学習ループとtf.GradientTapeを使ったKerasモデルの適用事例)をご参照ください。
分散
Estimators
Estimators は分散学習を行うためのAPIです。
実運用に耐えるモデルを巨大なデータセットを用いて分散学習するといった産業利用を目的にすえています。
tf.keras.Modelでtf.estimator APIによる学習を行うには、
tf.keras.estimator.model_to_estimatorを使ってKerasモデルを tf.estimator.Estimatorオブジェクトに変換する必要があります。
KerasモデルからEstimatorsを作成するをご参照ください。
Step27: 注意:Estimator input functionsをデバッグしてデータの検査を行うにはEagerモードで実行してください。
マルチGPU
tf.kerasモデルはtf.contrib.distribute.DistributionStrategyを使用することでマルチGPU上で実行できます。
このAPIを使えば、既存コードをほとんど改変することなく分散学習へ移行できます。
目下、分散方式としてtf.contrib.distribute.MirroredStrategyのみサポートしています。
MirroredStrategy は、シングルマシン上でAllReduce を使った同期学習によりin-grapnレプリケーションを行います。
KerasでDistributionStrategyを使用する場合は、tf.keras.estimator.model_to_estimatorを使って
tf.keras.Model をtf.estimator.Estimatorに変換し、Estimatorインスタンスを使って分散学習を行います。
以下の例では、シングルマシンのマルチGPUにtf.keras.Modelを分散します。
まず、単純なモデルを定義します:
Step28: 入力パイプラインを定義します。input_fn は、複数デバイスにデータを配置するのに使用する tf.data.Dataset を返します。
各デバイスは、入力バッチの一部(デバイス間で均等に分割)を処理します。
Step29: 次に、 tf.estimator.RunConfigを作成し、 train_distribute 引数にtf.contrib.distribute.MirroredStrategy インスタンスを設定します。MirroredStrategyを作成する際、デバイスの一覧を指定する、もしくは引数でnum_gpus(GPU数)を設定することができます。デフォルトでは、使用可能なすべてのGPUを使用する設定になっています:
Step30: Kerasモデルを tf.estimator.Estimator インスタンスへ変換します。
Step31: 最後に、input_fn と steps引数を指定して Estimator インスタンスを学習します: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
!pip install tensorflow=="1.*"
!pip install pyyaml # YAML形式でモデルを保存する際に必要です。
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
Explanation: Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/r1/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳はベストエフォートであるため、この翻訳が正確であることや英語の公式ドキュメントの 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリtensorflow/docsにプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [email protected] メーリングリストにご連絡ください。
Kerasは、深層学習モデルを構築・学習するための高水準APIです。
迅速なプロトタイピングから先端研究、実運用にも使用されており、3つの特徴があります:
<b>ユーザーフレンドリー</b><br>
一般的なユースケースに最適化したKerasのAPIは、シンプルで統一性があります。誤った使い方をした場合のエラー出力も明快で、どう対応すべきか一目瞭然です。
<b>モジュール性</b><br>
Kerasのモデルは、設定可能なモジュールをつなぎ合わせて作られます。モジュールのつなぎ方には、ほとんど制約がありません。
<b>拡張性</b><br>
簡単にモジュールをカスタマイズできるため、研究の新しいアイデアを試すのに最適です。新しい層、損失関数を自作し、最高水準のモデルを開発しましょう。
tf.keras のインポート
tf.keras は、TensorFlow版 Keras API 仕様 です。
モデルを構築・学習するための高水準APIであり、TensorFlow特有の機能である
Eagerモードやtf.data パイプライン、 Estimators にも正式に対応しています。
tf.keras は、TensorFlowの柔軟性やパフォーマンスを損ねることなく使いやすさを向上しています。
TensorFlowプログラムの準備として、先ずは tf.keras をインポートしましょう:
End of explanation
model = tf.keras.Sequential()
# ユニット数が64の全結合層をモデルに追加します:
model.add(layers.Dense(64, activation='relu'))
# 全結合層をもう一つ追加します:
model.add(layers.Dense(64, activation='relu'))
# 出力ユニット数が10のソフトマックス層を追加します:
model.add(layers.Dense(10, activation='softmax'))
Explanation: tf.keras ではKerasと互換性のあるコードを実行できますが、注意点もあります:
最新リリースのTensorFlowに同梱されている tf.keras のバージョンと、pipインストールした最新の keras のバージョンが同一とは限りません。バージョンは tf.keras.__version__ の出力をご確認ください。
モデルの重みを保存する場合、
tf.keras のデフォルトの保存形式は チェックポイント形式です。
HDF5形式にて保存する場合は、 save_format='h5' オプションを指定してください。
単純なモデルの構築
シーケンシャル モデル
Kerasでは、<b>層</b>を組み合わせて<b>モデル</b>を構築します。
モデルは通常、複数の層から成るグラフ構造をしています。
最も一般的なモデルは、単純に層を積み重ねる類の tf.keras.Sequential モデルです。
単純な全結合ネットワーク(いわゆる マルチ レイヤー パーセプトロン)を構築してみましょう:
End of explanation
# シグモイド層を1層作る場合:
layers.Dense(64, activation='sigmoid')
# 別の記法:
layers.Dense(64, activation=tf.sigmoid)
# カーネル行列に係数0,01のL1正則化を課した全結合層:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# バイアスベクトルに係数0,01のL2正則化を課した全結合層:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# カーネルをランダム直交行列で初期化した全結合層:
layers.Dense(64, kernel_initializer='orthogonal')
# バイアスベクトルを2.0で初期化した全結合層:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
Explanation: 層の設定
tf.keras.layers はさまざまな層を提供していますが、共通のコンストラクタ引数があります:
activation: 層の活性化関数を設定します。組み込み関数、もしくは呼び出し可能オブジェクトの名前で指定します。デフォルト値は、活性化関数なし。
kernel_initializer ・ bias_initializer: 層の重み(カーネルとバイアス)の初期化方式。名前、もしくは呼び出し可能オブジェクトで指定します。デフォルト値は、 "Glorot uniform" 。
kernel_regularizer ・ bias_regularizer:層の重み(カーネルとバイアス)に適用する、L1やL2等の正則化方式。デフォルト値は、正則化なし。
コンストラクタ引数を使って tf.keras.layers.Dense 層をインスタンス化する例を以下に示します:
End of explanation
model = tf.keras.Sequential([
# ユニット数64の全結合層をモデルに追加する:
layers.Dense(64, activation='relu', input_shape=(32,)),
# もう1層追加する:
layers.Dense(64, activation='relu'),
# 出力ユニット数10のソフトマックス層を追加する:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: 学習と評価
学習の準備
モデルを構築したあとは、compile メソッドを呼んで学習方法を構成します:
End of explanation
# 平均二乗誤差 回帰モデルを構成する。
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # 平均二乗誤差
metrics=['mae']) # 平均絶対誤差
# 多クラス分類モデルを構成する。
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
Explanation: tf.keras.Model.compile には3つの重要な引数があります:
optimizer: このオブジェクトが訓練方式を規定します。 tf.train モジュールから
tf.train.AdamOptimizerや tf.train.RMSPropOptimizer、
tf.train.GradientDescentOptimizer等のオプティマイザ インスタンスを指定します。
loss: 最適化の過程で最小化する関数を指定します。平均二乗誤差(mse)やcategorical_crossentropy、
binary_crossentropy等が好んで使われます。損失関数は名前、もしくは tf.keras.losses モジュールから呼び出し可能オブジェクトとして指定できます。
metrics: 学習の監視に使用します。 名前、もしくはtf.keras.metrics モジュールから呼び出し可能オブジェクトとして指定できます。
学習用モデルの構成例を2つ、以下に示します:
End of explanation
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
Explanation: NumPy データの入力
小規模なデータセットであれば、モデルを学習・評価する際にインメモリの NumPy配列を使いましょう。
モデルは fit メソッドを使って学習データに適合させます。
End of explanation
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
Explanation: tf.keras.Model.fit は3つの重要な引数があります:
epochs: エポック は学習の構成単位で、(バッチに分割した)全入力データを一巡したものを1エポックと換算します。
batch_size: NumPyデータを渡されたモデルは、データをバッチに分割し、それを順繰りに舐めて学習を行います。一つのバッチに配分するサンプル数を、バッチサイズとして整数で指定します。全サンプル数がバッチサイズで割り切れない場合、最後のバッチだけ小さくなる可能性があることに注意しましょう。
validation_data: モデルの試作中に評価データを使って簡単にパフォーマンスを監視したい場合は、この引数に入力とラベルの対を渡すことで、各エポックの最後に推論モードで評価データの損失と評価指標を表示することができます。
validation_data の使用例:
End of explanation
# データセットのインスタンス化の例:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# `fit` にデータセットを渡す際は、`steps_per_epoch` の指定をお忘れなく:
model.fit(dataset, epochs=10, steps_per_epoch=30)
Explanation: tf.data データセットの入力
大規模なデータセット、もしくは複数デバイスを用いた学習を行う際は Datasets API を使いましょう。 fitメソッドにtf.data.Dataset インスタンスを渡します:
End of explanation
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
Explanation: fit メソッドの引数 steps_per_epoch には、1エポックあたりの学習ステップ数を指定します。
Dataset がバッチを生成するため batch_sizeの指定は不要です。
Dataset は評価データにも使えます:
End of explanation
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
Explanation: 評価と推論
tf.keras.Model.evaluate と tf.keras.Model.predict メソッドは、NumPyデータとtf.data.Datasetに使えます。
推論モードでデータの損失と評価指標を評価する例を示します:
End of explanation
result = model.predict(data, batch_size=32)
print(result.shape)
Explanation: 推論 結果を最終層のNumPy配列として出力する例を示します:
End of explanation
inputs = tf.keras.Input(shape=(32,)) # プレイスホルダのテンソルを返します。
# 層のインスタンスは呼び出し可能で、テンソルを返します。
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
Explanation: 高度なモデルの構築
Functional API
tf.keras.Sequential モデルは層を積み重ねる単純なつくりであり、あらゆるモデルに対応しているわけではありません。
以下に挙げる複雑な構成のモデルを構築するには
Keras functional API
を使いましょう:
入力ヘッドが複数あるモデル
出力ヘッドが複数あるモデル
共有層(おなじ層が複数回呼び出される)を含むモデル
(残差結合のように)データの流れが分岐するモデル
Functional API を用いたモデル構築の流れ:
層のインスタンスは呼び出し可能で、テンソルを返します。
入力テンソルと出力テンソルを使ってtf.keras.Modelインスタンスを定義します。
モデルはSequentialモデルと同様の方法で学習します。
Functional API を使って全結合ネットワークを構築する例を示します:
End of explanation
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# コンパイル時に学習方法を指定します。
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# 5エポック学習します。
model.fit(data, labels, batch_size=32, epochs=5)
Explanation: inputsとoutputsを引数にモデルをインスタンス化します。
End of explanation
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# 層をここに定義します。
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# (`__init__`)にてあらかじめ定義した層を使って
# 順伝播をここに定義します。
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# 派生モデルを使用する場合、
# このメソッドをオーバーライドすることになります。
# 派生モデルを使用しない場合、このメソッドは省略可能です。
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
Explanation: モデルの派生
tf.keras.Model を継承し順伝播を定義することでカスタムモデルを構築できます。
__init__ メソッドにクラス インスタンスの属性として層をつくります。
call メソッドに順伝播を定義します。
順伝播を命令型で記載できるため、モデルの派生は
Eagerモード でより威力を発揮します。
キーポイント:目的にあったAPIを選択しましょう。派生モデルは柔軟性を与えてくれますが、その代償にモデルはより複雑になりエラーを起こしやすくなります。目的がFunctional APIで賄えるのであれば、そちらを使いましょう。
tf.keras.Modelを継承して順伝播をカスタマイズした例を以下に示します:
End of explanation
model = MyModel(num_classes=10)
# コンパイル時に学習方法を指定します。
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# 5エポック学習します。
model.fit(data, labels, batch_size=32, epochs=5)
Explanation: 今定義した派生モデルをインスンス化します。
End of explanation
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# 学習可能な重みを指定します。
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# 最後に`build` メソッドを呼ぶのをお忘れなく。
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
Explanation: 層のカスタマイズ
tf.keras.layers.Layerを継承して層をカスタマイズするには、以下のメソッドを実装します:
build: 層の重みを定義します。add_weightメソッドで重みを追加します。
call: 順伝播を定義します。
compute_output_shape: 入力の形状をもとに出力の形状を算出する方法を指定します。
必須ではありませんが、get_configメソッド と from_config クラスメソッドを実装することで層をシリアライズすることができます。
入力のカーネル行列を matmul (行列乗算)するカスタム層の実装例:
End of explanation
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# コンパイル時に学習方法を指定します。
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# 5エポック学習します。
model.fit(data, labels, batch_size=32, epochs=5)
Explanation: カスタマイズした層を使ってモデルを構築します:
End of explanation
callbacks = [
# `val_loss` が2エポック経っても改善しなければ学習を中断させます。
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# TensorBoard用ログを`./logs` ディレクトリに書き込みます。
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
Explanation: コールバック
コールバックは、学習中のモデルの挙動をカスタマイズするためにモデルに渡されるオブジェクトです。
コールバック関数は自作する、もしくは以下に示すtf.keras.callbacksが提供する組み込み関数を利用できます:
tf.keras.callbacks.ModelCheckpoint:モデルのチェックポイントを一定間隔で保存します。
tf.keras.callbacks.LearningRateScheduler:学習率を動的に変更します。
tf.keras.callbacks.EarlyStopping:評価パフォーマンスが向上しなくなったら学習を中断させます。
tf.keras.callbacks.TensorBoard: モデルの挙動を
TensorBoardで監視します。
tf.keras.callbacks.Callbackを使用するには、モデルの fit メソッドにコールバック関数を渡します:
End of explanation
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# TensorFlow チェックポイント ファイルに重みを保存します。
model.save_weights('./weights/my_model')
# モデルの状態を復元します。
# 復元対象のモデルと保存されていた重みのモデル構造が同一である必要があります。
model.load_weights('./weights/my_model')
Explanation: <a id='weights_only'></a>
保存と復元
重みのみ
tf.keras.Model.save_weightsを使ってモデルの重みの保存やロードを行います。
End of explanation
# 重みをHDF5形式で保存します。
model.save_weights('my_model.h5', save_format='h5')
# モデルの状態を復元します。
model.load_weights('my_model.h5')
Explanation: デフォルトでは、モデルの重みは
TensorFlow チェックポイント 形式で保存されます。
重みはKerasのHDF5形式でも保存できます(マルチバックエンド実装のKerasではHDF5形式がデフォルト):
End of explanation
# JSON形式にモデルをシリアライズします
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
Explanation: 構成のみ
モデルの構成も保存可能です。
モデル構造を重み抜きでシリアライズします。
元のモデルのコードがなくとも、保存された構成で再構築できます。
Kerasがサポートしているシリアライズ形式は、JSONとYAMLです。
End of explanation
fresh_model = tf.keras.models.model_from_json(json_string)
Explanation: JSONから(新たに初期化して)モデルを再構築します:
End of explanation
yaml_string = model.to_yaml()
print(yaml_string)
Explanation: YAML形式でモデルを保存するには、
TensorFlowをインポートする前に あらかじめpyyamlをインストールしておく必要があります:
End of explanation
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
Explanation: YAMLからモデルを再構築します:
End of explanation
# 層の浅いモデルを構築します。
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# HDF5ファイルにモデル全体を保存します。
model.save('my_model.h5')
# 重みとオプティマイザを含む 全く同一のモデルを再構築します。
model = tf.keras.models.load_model('my_model.h5')
Explanation: 注意:callメソッド内ににPythonコードでモデル構造を定義するため、派生モデルはシリアライズできません。
モデル全体
モデルの重み、構成からオプティマイザ設定までモデル全体をファイルに保存できます。
そうすることで、元のコードなしに、チェックポイントで保存したときと全く同じ状態から学習を再開できます。
End of explanation
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
Explanation: Eagerモード
Eagerモード は、オペレーションを即時に評価する命令型のプログラミング環境です。
Kerasでは必要ありませんが、tf.kerasでサポートされておりプログラムを検査しデバッグするのに便利です。
すべてのtf.kerasモデル構築用APIは、Eagerモード互換性があります。
Sequential や Functional APIも使用できますが、
Eagerモードは特に派生モデル の構築や
層のカスタマイズに有益です。
(既存の層の組み合わせでモデルを作成するAPIの代わりに)
順伝播をコードで実装する必要があります。
詳しくは Eagerモード ガイド
(カスタマイズした学習ループとtf.GradientTapeを使ったKerasモデルの適用事例)をご参照ください。
分散
Estimators
Estimators は分散学習を行うためのAPIです。
実運用に耐えるモデルを巨大なデータセットを用いて分散学習するといった産業利用を目的にすえています。
tf.keras.Modelでtf.estimator APIによる学習を行うには、
tf.keras.estimator.model_to_estimatorを使ってKerasモデルを tf.estimator.Estimatorオブジェクトに変換する必要があります。
KerasモデルからEstimatorsを作成するをご参照ください。
End of explanation
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
Explanation: 注意:Estimator input functionsをデバッグしてデータの検査を行うにはEagerモードで実行してください。
マルチGPU
tf.kerasモデルはtf.contrib.distribute.DistributionStrategyを使用することでマルチGPU上で実行できます。
このAPIを使えば、既存コードをほとんど改変することなく分散学習へ移行できます。
目下、分散方式としてtf.contrib.distribute.MirroredStrategyのみサポートしています。
MirroredStrategy は、シングルマシン上でAllReduce を使った同期学習によりin-grapnレプリケーションを行います。
KerasでDistributionStrategyを使用する場合は、tf.keras.estimator.model_to_estimatorを使って
tf.keras.Model をtf.estimator.Estimatorに変換し、Estimatorインスタンスを使って分散学習を行います。
以下の例では、シングルマシンのマルチGPUにtf.keras.Modelを分散します。
まず、単純なモデルを定義します:
End of explanation
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
Explanation: 入力パイプラインを定義します。input_fn は、複数デバイスにデータを配置するのに使用する tf.data.Dataset を返します。
各デバイスは、入力バッチの一部(デバイス間で均等に分割)を処理します。
End of explanation
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
Explanation: 次に、 tf.estimator.RunConfigを作成し、 train_distribute 引数にtf.contrib.distribute.MirroredStrategy インスタンスを設定します。MirroredStrategyを作成する際、デバイスの一覧を指定する、もしくは引数でnum_gpus(GPU数)を設定することができます。デフォルトでは、使用可能なすべてのGPUを使用する設定になっています:
End of explanation
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
Explanation: Kerasモデルを tf.estimator.Estimator インスタンスへ変換します。
End of explanation
keras_estimator.train(input_fn=input_fn, steps=10)
Explanation: 最後に、input_fn と steps引数を指定して Estimator インスタンスを学習します:
End of explanation |
11,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Two Envelope Paradox - Simulated
Introduction
Bayesian statistics can most naively be described as the art of thinking conditionally. Conditional probability often leads to unexpected outcomes and violates our basic intuition in the most surprising ways. One of the most profound violations is the famous two envelope paradox.
What's the paradox?
The paradox can be stated in a very simple way:
You are given two identical envelopes $a$ and $b$ containing an unknown amount of money $X$. You are being told that one envelope contains twice as much money as the other, but you don't know which one. After choosing one envelope you are allowed to open it and decide if you want to switch.
* Strategy 1: By symmetry it should not matter what you do. Either envelope is equally likely to hold the larger amount, so the probability of ending up with more money is $1/2$ whether you switch or not.
* Strategy 2: If your chosen envelope contains the amount $X$, the other envelope contains $Y=2X$ or $Y=X/2$ with equal probability, so naively $E(Y) = \frac{1}{2}2X + \frac{1}{2}\frac{X}{2} = \frac{5}{4}X > X$ and you should always switch.
Step1: Helper methods
Switching the envelopes is easy
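For instance, such a helper could be as small as the following sketch, representing the chosen envelope by its index 0 or 1 (the function name is my own assumption, not necessarily the notebook's):
def switch(chosen_index):
    """Switching simply means taking the other one of the two envelopes."""
    return 1 - chosen_index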
Step2: We also need some helper methods to evaluate the success of our strategy
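A possible sketch of such a helper, under the assumption that the two amounts are kept in a tuple or list called envelopes:
def evaluate_success(envelopes, chosen_index):
    """Return 1 if the chosen envelope holds the larger amount, 0 otherwise."""
    return int(envelopes[chosen_index] == max(envelopes))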
Step3: Implementing the actual strategy
The strategy consists of four steps.
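The four steps are not spelled out here, but from the later discussion they amount to: pick one envelope at random, open it, draw a random threshold $T$, and switch exactly when the observed amount is smaller than $T$. A minimal sketch reusing the helpers above (names and signatures are my own assumptions):
from numpy.random import choice

def threshold_strategy(envelopes, threshold):
    """Play one round of the game and return 1 for success, 0 for failure."""
    chosen = choice([0, 1])            # 1. pick an envelope at random
    observed = envelopes[chosen]       # 2. open it
    if observed < threshold:           # 3. compare the content with the threshold
        chosen = switch(chosen)        # 4. switch if the amount looks small
    return evaluate_success(envelopes, chosen)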
Step4: Performing the experiment
To keep it simple we draw the monetary values from a uniform distribution, whereas our random threshold is exponentially distributed.
Step5: Let us run the experiment $n_{trials}$ times. Since we return values $1$ and $0$ for success and failure, respectively, we can find a good approximation of the success probability by computing the mean.
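A minimal version of that experiment might look as follows; the money range and the exponential scale are placeholder values of mine, chosen only so that the two scales are comparable:
import numpy as np

n_trials = 100000
money_high = 100000.0                            # assumed upper limit of the money
results = []
for _ in range(n_trials):
    x = np.random.uniform(0, money_high)         # smaller amount, drawn uniformly
    envelopes = (x, 2 * x)                       # the other envelope holds twice as much
    t = np.random.exponential(scale=50000.0)     # threshold T ~ Expo(lambda), scale = 1/lambda
    results.append(threshold_strategy(envelopes, t))
print(np.mean(results))                          # empirical success probability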
Step6: It seems strategy 2 from above is correct! The probability is significantly higher than $0.5$ and switching seems like a good choice. How is that possible?
Resolution of the paradox
Looking back at strategy 2, we silently dropped the conditions when evaluating the expectation values. In fact if we let $I$ be an indicator random variable of the event $Y=2X$, then we have
\begin{align}
E( Y | Y = 2X) = E(2X | I = 1) \neq E(2X)
\end{align}
There is no reason we can drop the condition $I=1$ without ensuring that these events are independent. In fact, the above simulation shows, that they are not independent. This leads us to the surprising result that observing the result of one envelope gives us information about the amount of money in the second envelope! It also shows that a naive assumption of symmetry can lead to false intuition. The symmetry argument of strategy 1 secretly assumes independence of the observations, which is clearly violated.
Mathematical resolution of the paradox
To put the resolution on a slightly more formal footing, we denote the observed amounts of money as $x_a$ and $x_b$ and can safely assume $x_a<x_b$. We switch the envelopes in the event $T>x_a$. Let us denote the success of the strategy as event $S$ and choosing envelope $a$ as event $A$. Assuming $T\sim Expo(1)$, we have $P(S|A) = P(T>x_a) = 1-(1-e^{-x_a}) = e^{-x_a}$ and correspondingly $P(S|A^c) = P(T<x_b) = 1- e^{-x_b}$. Adding the two contributions correctly, we obtain
\begin{align}
\frac{1}{2} P(S|A) + \frac{1}{2} P(S|A^c) = \frac{1}{2} + \frac{1}{2} \left( e^{-x_a} - e^{-x_b}\right) > \frac{1}{2}
\end{align}
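As a quick numerical sanity check of this expression (the particular values of $x_a$ and $x_b$ are arbitrary):
import numpy as np

x_a, x_b = 1.0, 2.0                                    # any pair with x_a < x_b
p_success = 0.5 + 0.5 * (np.exp(-x_a) - np.exp(-x_b))
print(p_success)                                       # about 0.616, comfortably above 0.5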
Dependency on the choice of the threshold
The above example was very specific on the choice of our threshold. However, it is now easy to investigate how our success rate depends on the rate parameter $\lambda$ of our distribution $T\sim Expo(\lambda)$ or on the choice of the money amounts in the envelopes.
In this section we use the global money distribution
Step7: Dependence on the rate parameter
We investigate the rate parameter first. To have a continuous curve we need a method to repeatedly compute the success rate.
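One way to package the simulation into a reusable function of the rate parameter; this is a sketch that reuses threshold_strategy from above, and the default money range is again an assumption:
def success_rate(rate, n_trials=10000, money_high=100000.0):
    """Empirical success probability of the switching strategy for T ~ Expo(rate)."""
    wins = 0
    for _ in range(n_trials):
        x = np.random.uniform(0, money_high)
        envelopes = (x, 2 * x)
        t = np.random.exponential(scale=1.0 / rate)    # numpy parametrises by scale = 1/rate
        wins += threshold_strategy(envelopes, t)
    return wins / float(n_trials)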
Step8: Now let's create a couple of rates and plot the final result
Step9: We see that this curve is pretty noisy. To smoothen the noise, we can create a bunch of curves and average them. Alternatively we can create a bunch of success rates for every threshold rate and average them.
Step10: Let's plot the results!
Step11: We see that after the noise reduction we still have a pronounced feature. To fully appreciate that let's see what the deviation from the mean is for an eyeballed optimal rate of $\lambda = 2\, 10^{-5}$
Step12: We win! Even within a standard deviation the success rate is significant.
However we also see that not every value of the threshold leads to a successful strategy, and for the wrong values of the threshold we gain no advantage and the two envelopes seem uncorrelated again.
Dependency on the distribution of the money
Let us see how the distribution range for the money comes into play here. Let us do the above analysis for various different upper ranges of the money distribution.
Uniform money distribution
Step13: We can clearly see that with increasing the spread of the distribution the maximum is moving towars smaller rates. In fact you can roughly say that the maximum occurs at at a rate close to the inverse of the upper boundary. This makes sense since the expectation value $E(T) = 1/\lambda$ and thus an optimal threshold choice is given by the inverse of the upper limit of the money distribution.
Different distributions
The previous choices of the money distribution have been motivated by the idea that we have no knowledge about the value of the money in the envelope at all. This means that every value is equally likely and the natural choice is a uniform distribution.
However, what happens if we do gain some insight into the distribution? A good assumption is that the person handing out the money will do so according to a normal distribution $X\sim N(\mu, \sigma^2)$. Let's see what happens
Step14: Our advantage is gone! The fact that we do know something about the money distribution mysteriously decorrelates the envelopes and renders our strategy obsolete. We might as well stick with any envelope independent of the amount of money we find in it. However, this might not be a bad trade, since we now have a rough idea of what is in either envelope before we even open it.
Summary
The two-envelope paradox is quite surprising. At first glance we would assume that the situation is quite symmetric and that there is no way we could possibly gain knowledge from opening an envelope. However, it turns out that we can devise a strategy that allows us to make an optimal choice about whether to switch envelopes or not. The two envelopes are correlated, despite what meets the eye! However, our optimal choice hinges on the range and type of the distribution. We only gain an advantage if we have zero prior knowledge of the money distribution, i.e. for a uniform distribution. If we do gain knowledge in the form of the money distribution, as is the case for the normal, our strategy is rendered useless. This might not be too bad as it probably eases our minds by making the choice easier.
I hope you enjoy this notebook and have a lot of fun playing some more with it, such as exploring different distributions and maybe strategies. Contributions are more than welcome!
Miscellanea
References
Stylesheets and idea from
- [1] Probabilistic Programming and Bayesian Methods for Hackers
Stylesheet | Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
from numpy.random import choice
%matplotlib inline
matplotlib.style.use('ggplot')
matplotlib.rc_params_from_file("../styles/matplotlibrc" ).update()
Explanation: The Two Envelope Paradox - Simulated
Introduction
Bayesian statistics can most naively be described as the art of thinking conditionally. Conditional probability often leads to unexpected outcomes and violates our basic intuition in the most surprising ways. One of the most profound violations is the famous two envelope paradox.
What's the paradox?
The paradox can be stated in a very simple way
You are given two identical envelopes $a$ and $b$ containing an unknown amount of money $X$. You are told that one envelope contains twice as much money as the other, but you don't know which one. After choosing one envelope you are allowed to open it and decide if you want to switch.
* Strategy 1: The situation seems absolutely symmetric, independent of which envelope you choose. As we don't know any details about the distribution we might argue that it doesn't matter if we switch - hence
\begin{align}
E( Y ) = E(X)
\end{align}
However, after a while you start to doubt yourself, and you start thinking along a different line.
* Strategy 2: Say you choose envelope $a$ and observe an amount $X$. Then you know that with probability $p=0.5$ the amount in envelope $b$ is $Y=2X$ or with $p=0.5$ it is $Y=X/2$. A quick calculation using conditional probability then reveals
\begin{align}
E( Y ) = \frac{1}{2} E(Y | Y = 2X ) + \frac{1}{2} E(Y | Y = X /2 )
\end{align}
and one might quickly come to the conclusion that
\begin{align}
\frac{1}{2} E( 2X ) + \frac{1}{2} E( X /2 )= \frac{5}{4} E(X) > E(X)
\end{align}
and you should switch the envelope as you have a statistical 25% gain from switching to envelope $b$.
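As a quick numeric illustration of this (deliberately naive) line of reasoning, suppose the observed amount is 100; the value is hypothetical and only serves to make the claimed 25% gain concrete:
python
x = 100.0                                    # hypothetical observed amount in the chosen envelope
naive_other = 0.5 * (2 * x) + 0.5 * (x / 2)  # the naive conditional expectation from strategy 2
print(naive_other)                           # 125.0, i.e. an apparent 25% gain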
Which strategy is right? It seems that the second strategy can be argued just as easily with the roles reversed, which would then lead to constant switching back and forth between the envelopes.
To answer this question let's simulate the problem. First of all we need to devise a rule by which we decide when to switch. This can be done quite simply by choosing an arbitrary threshold $T$. In the event $X<T$ we switch envelopes, otherwise we stay with the original choice
Numerical solution of the problem
Python imports of the important packages
End of explanation
def switch_envelope(chosen_envelope):
if chosen_envelope == 'a':
return 'b'
else:
return 'a'
Explanation: Helper methods
Switching the envelopes is easy
End of explanation
'''Find out if the final envelope _actually_ contains the highest value'''
def isLargerAmount(chosen_envelope, envelope_contents):
inverted_contents = create_inverted_envelope(envelope_contents)
if chosen_envelope == inverted_contents.get(np.max(inverted_contents.keys())):
return 1 # success
else:
return 0 # failure
'''We need an inverse lookup table to associate the highest value with an envelope'''
def create_inverted_envelope(envelope_contents):
dct = {}
for key in envelope_contents.keys():
dct[envelope_contents.get(key)] = key
return dct
Explanation: We also need some helper methods to evaluate the success of our strategy
End of explanation
def singleExperiment(money_distribution, threshold_distribution):
# create two identical envelopes with a random amount of money
envelope_contents = {'a': money_distribution.random().item(),
'b': money_distribution.random().item()}
#choose an envelope
chosen_envelope = choice(['a','b'])
#check for the amount and switch if necessary
if (threshold_distribution.random().item() >= envelope_contents[chosen_envelope]):
chosen_envelope = switch_envelope(chosen_envelope)
#evaluate whether strategy was a success
return isLargerAmount(chosen_envelope, envelope_contents)
Explanation: Implementing the actual strategy
The strategy consists of four steps:
create two envelopes containing iid amounts of money
choose one of them at random
check for the money and switch if $X<T$, where $T$ is drawn from a different distribution
evaluate if the strategy was successful
End of explanation
money = pm.DiscreteUniform('money', 100, 100000)
threshold = pm.Exponential("threshold", 0.00005)
Explanation: Performing the experiment
To keep it simple we draw the monetary values from a uniform distribution, whereas our random threshold is exponentially distributed.
End of explanation
def run_n_experiments(n_trials, money, threshold):
lst = []
for idx in range(n_trials):
lst.append(singleExperiment(money, threshold))
return np.mean(lst)
print 'The success probability is approximately p = %0.3f' % run_n_experiments(100, money, threshold)
Explanation: Let us run the experiment $n_{trials}$ times. Since we return values $1$ and $0$ for success and failure, respectively, we can find a good approximation of the success probability by computing the mean.
End of explanation
money = pm.DiscreteUniform('money', 100, 100000)
Explanation: It seems strategy 2 from above is correct! The probability is significantly higher than $0.5$ and switching seems like a good choice. How is that possible?
Resolution of the paradox
Looking back at strategy 2, we silently dropped the conditions when evaluating the expectation values. In fact if we let $I$ be an indicator random variable of the event $Y=2X$, then we have
\begin{align}
E( Y | Y = 2X) = E(2X | I = 1) \neq E(2X)
\end{align}
There is no reason we can drop the condition $I=1$ without ensuring that these events are independent. In fact, the simulation above shows that they are not independent. This leads us to the surprising result that observing the contents of one envelope gives us information about the amount of money in the second envelope! It also shows that a naive assumption of symmetry can lead to false intuition. The symmetry argument of strategy 1 secretly assumes independence of the observations, which is clearly violated.
Mathematical resolution of the paradox
To put the resolution on a slightly more formal footing, we denote the observed amounts of money as $x_a$ and $x_b$ and can safely assume $x_a<x_b$. We switch the envelopes in the event $T>x_a$. Let us denote the success of the strategy as event $S$ and choosing envelope $a$ as event $A$. Assuming $T\sim Expo(1)$, we have $P(S|A) = P(T>x_a) = 1-(1-e^{-x_a}) = e^{-x_a}$ and correspondingly $P(S|A^c) = P(T<x_b) = 1- e^{-x_b}$. Adding the events correctly we obtain
\begin{align}
\frac{1}{2} P(S|A) + \frac{1}{2} P(S|A^c) = \frac{1}{2} + \frac{1}{2} \left( e^{-x_a} - e^{-x_b}\right) > \frac{1}{2}
\end{align}
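As a quick numerical sanity check of this inequality (the concrete amounts $x_a=1$ and $x_b=2$ are only an illustrative assumption, not values used anywhere else in this notebook):
python
import numpy as np
x_a, x_b = 1.0, 2.0                                    # assumed example amounts with x_a < x_b
p_success = 0.5 + 0.5 * (np.exp(-x_a) - np.exp(-x_b))  # the formula above
print(p_success)                                       # roughly 0.616, strictly larger than 0.5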
Dependency on the choice of the threshold
The above example was very specific on the choice of our threshold. However, it is now easy to investigate how our success rate depends on the rate parameter $\lambda$ of our distribution $T\sim Expo(\lambda)$ or on the choice of the money amounts in the envelopes.
In this section we use the global money distribution
End of explanation
def createSuccessValuesFrom(rates):
success_values = []
for rate in rates:
threshold = pm.Exponential("threshold", rate)
success_values.append(run_n_experiments(1000, money, threshold))
return success_values
Explanation: Dependence on the rate parameter
We investigate the rate parameter first. To have a continuous curve we need a method to repeatedly compute the success rate.
End of explanation
array_of_rates = np.logspace(-8, 0, num=100)
plt.semilogx(array_of_rates, createSuccessValuesFrom(array_of_rates))
Explanation: Now let's create a couple of rates and plot the final result
End of explanation
def averageSuccessRate(threshold, number_of_repetitions):
trial = 0
lst = []
while trial < number_of_repetitions:
lst.append(run_n_experiments(100, money, threshold))
trial += 1
return np.mean(lst), np.std(lst)
def createSmoothSuccessValuesFrom(rates, number_of_repetitions):
success_values = []
stddev = []
for rate in rates:
threshold = pm.Exponential("threshold", rate)
success_values.append(averageSuccessRate(threshold, number_of_repetitions)[0])
stddev.append(averageSuccessRate(threshold, number_of_repetitions)[1])
return success_values, stddev
Explanation: We see that this curve is pretty noisy. To smooth out the noise, we can create a bunch of curves and average them. Alternatively we can create a bunch of success rates for every threshold rate and average them.
End of explanation
array_of_rates = np.logspace(-8, 0, num=50)
smoothened_rates = createSmoothSuccessValuesFrom(array_of_rates, 25)
plt.semilogx(array_of_rates, smoothened_rates[0])
Explanation: Let's plot the results!
End of explanation
threshold = pm.Exponential('threshold', 0.00002)
print 'The success probability is approximately p = %0.3f +/- %0.3f ' % averageSuccessRate(threshold, 100)
Explanation: We see that after the noise reduction we still have a pronounced feature. To fully appreciate that, let's see what the deviation from the mean is for an eyeballed optimal rate of $\lambda = 2\cdot 10^{-5}$
End of explanation
def createSuccessValuesWithMoneyRangeFrom(rates, money):
success_values = []
for rate in rates:
threshold = pm.Exponential("threshold", rate)
success_values.append(run_n_experiments(1000, money, threshold))
return success_values
money_1 = pm.DiscreteUniform('money', 100, np.power(10, 3))
money_2 = pm.DiscreteUniform('money', 100, np.power(10, 6))
money_3 = pm.DiscreteUniform('money', 100000, np.power(10, 9))
array_of_rates = np.logspace(-10, 0, num=100)
plt.semilogx(array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_1),
array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_2),
array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_3))
Explanation: We win! Even within a standard deviation the success rate is significant.
However we also see that not every value of the threshold leads to a successful strategy, and for the wrong values of the threshold we gain no advantage and the two envelopes seem uncorrelated again.
Dependency on the distribution of the money
Let us see how the distribution range for the money comes into play here. Let us do the above analysis for various different upper ranges of the money distribution.
Uniform money distribution
End of explanation
money_1 = pm.Normal('money', np.power(10,5), 100)
money_2 = pm.Normal('money', np.power(10,5), 500)
money_3 = pm.Normal('money', np.power(10,5), 10000)
array_of_rates = np.logspace(-8, 0, num=100)
plt.semilogx(array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_1),
array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_2),
array_of_rates, createSuccessValuesWithMoneyRangeFrom(array_of_rates, money_3))
Explanation: We can clearly see that as the spread of the distribution increases, the maximum moves towards smaller rates. In fact you can roughly say that the maximum occurs at a rate close to the inverse of the upper boundary. This makes sense since the expectation value $E(T) = 1/\lambda$ and thus an optimal threshold choice is given by the inverse of the upper limit of the money distribution.
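As a small sketch of this rule of thumb (purely illustrative, reusing the upper limits of the three uniform money distributions defined above):
python
upper_limits = [10**3, 10**6, 10**9]               # upper bounds of money_1, money_2 and money_3
suggested_rates = [1.0 / u for u in upper_limits]  # rates whose mean 1/lambda matches the upper bound
print(suggested_rates)                             # [0.001, 1e-06, 1e-09]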
Different distributions
The previous choices of the money distribution have been motivated by the idea that we have no knowledge about the value of the money in the envelope at all. This means that every value is equally likely and the natural choice is a uniform distribution.
However, what happens if we do gain some insight into the distribution? A good assumption is that the person handing out the money will do so according to a normal distribution $X\sim N(\mu, \sigma^2)$. Let's see what happens:
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Our advantage is gone! The fact that we do know something about the money distribution mysteriously decorrelates the envelopes and renders our strategy obsolete. We might as well stick with any envelope independent of the amount of money we find in it. However, this might not be a bad trade, since we now have a rough idea of what is in either envelope before we even open it.
Summary
The two-envelope paradox is quite surprising. At first glance we would assume that the situation is quite symmetric and that there is no way we could possibly gain knowledge from opening an envelope. However, it turns out that we can devise a strategy that allows us to make an optimal choice about whether to switch envelopes or not. The two envelopes are correlated, despite what meets the eye! However, our optimal choice hinges on the range and type of the distribution. We only gain an advantage if we have zero prior knowledge of the money distribution, i.e. for a uniform distribution. If we do gain knowledge in the form of the money distribution, as is the case for the normal, our strategy is rendered useless. This might not be too bad as it probably eases our minds by making the choice easier.
I hope you enjoy this notebook and have a lot of fun playing some more with it, such as exploring different distributions and maybe strategies. Contributions are more than welcome!
Miscellanea
References
Stylesheets and idea from
- [1] Probabilistic Programming and Bayesian Methods for Hackers
Stylesheet
End of explanation |
11,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimensionality Reduction with the Shogun Machine Learning Toolbox
By Sergey Lisitsyn (lisitsyn) and Fernando J. Iglesias Garcia (iglesias).
This notebook illustrates <a href="http
Step1: The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or a helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is
Step2: As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the intrinsic dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the polar angle and distance from the centre, or height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE), to embed the helix
import numpy
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
def generate_data(curve_type, num_points=1000):
if curve_type=='swissroll':
tt = numpy.array((3*numpy.pi/2)*(1+2*numpy.random.rand(num_points)))
height = numpy.array((numpy.random.rand(num_points)-0.5))
X = numpy.array([tt*numpy.cos(tt), 10*height, tt*numpy.sin(tt)])
return X,tt
if curve_type=='scurve':
tt = numpy.array((3*numpy.pi*(numpy.random.rand(num_points)-0.5)))
height = numpy.array((numpy.random.rand(num_points)-0.5))
X = numpy.array([numpy.sin(tt), 10*height, numpy.sign(tt)*(numpy.cos(tt)-1)])
return X,tt
if curve_type=='helix':
tt = numpy.linspace(1, num_points, num_points).T / num_points
tt = tt*2*numpy.pi
X = numpy.r_[[(2+numpy.cos(8*tt))*numpy.cos(tt)],
[(2+numpy.cos(8*tt))*numpy.sin(tt)],
[numpy.sin(8*tt)]]
return X,tt
Explanation: Dimensionality Reduction with the Shogun Machine Learning Toolbox
By Sergey Lisitsyn (lisitsyn) and Fernando J. Iglesias Garcia (iglesias).
Explanation: Dimensionality Reduction with the Shogun Machine Learning Toolbox
Hands-on introduction to dimension reduction
First of all, let us start right away by showing what the purpose of dimensionality reduction actually is. To this end, we will begin by creating a function that provides us with some data:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def plot(data, embedded_data, colors='m'):
fig = plt.figure()
fig.set_facecolor('white')
ax = fig.add_subplot(121,projection='3d')
ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
ax = fig.add_subplot(122)
ax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
plt.show()
import shogun as sg
# wrap data into Shogun features
data, colors = generate_data('swissroll')
feats = sg.features(data)
# create instance of Isomap converter and configure it
isomap = sg.transformer('Isomap')
isomap.put('target_dim', 2)
# set the number of neighbours used in kNN search
isomap.put('k', 20)
# create instance of Multidimensional Scaling converter and configure it
mds = sg.transformer('MultidimensionalScaling')
mds.put('target_dim', 2)
# embed Swiss roll data
embedded_data_mds = mds.transform(feats).get('feature_matrix')
embedded_data_isomap = isomap.transform(feats).get('feature_matrix')
plot(data, embedded_data_mds, colors)
plot(data, embedded_data_isomap, colors)
Explanation: The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or a helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep some important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.
Just to start, let's pick some algorithm and one of the data sets; for example, let's see what embedding of the Swiss roll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds an embedding as the solution of the following optimization problem:
$$
\min_{x'_1, x'_2, \dots} \sum_i \sum_j \| d'(x'_i, x'_j) - d(x_i, x_j)\|^2,
$$
with given $x_1, x_2, \dots \in X~~$ and unknown variables $x'_1, x'_2, \dots \in X'~~$ while $\text{dim}(X') < \text{dim}(X)~~~$,
$d: X \times X \to \mathbb{R}~~$ and $d': X' \times X' \to \mathbb{R}~~$ are defined as arbitrary distance functions (for example Euclidean).
Speaking less math, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as possible. The Isomap algorithm changes one small detail: the distance - instead of using local pairwise relationships, it takes a global factor into account via the shortest path on the neighborhood graph (the so-called geodesic distance). The neighborhood graph is defined as a graph with datapoints as nodes and weighted edges (with weight equal to the distance between points). The edge between points $x_i~$ and $x_j~$ exists if and only if $x_j~$ is among the $k~$ nearest neighbors of $x_i$. Later we will see that this 'global factor' changes the game for the swissroll dataset.
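To make the geodesic-distance idea concrete, here is a rough standalone sketch (not part of the Shogun pipeline, and assuming scipy is available; Shogun's Isomap performs the equivalent steps internally):
python
import numpy
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(points, k=10):
    # points: (n_samples, n_features) array; start from plain Euclidean distances
    d = cdist(points, points)
    # neighborhood graph: keep only the edges to the k nearest neighbours of each point
    graph = numpy.zeros_like(d)                 # 0 marks "no edge" in the dense graph convention
    for i in range(d.shape[0]):
        nearest = numpy.argsort(d[i])[1:k + 1]  # skip the point itself
        graph[i, nearest] = d[i, nearest]
    # shortest paths on this graph approximate geodesic distances along the manifold
    return shortest_path(graph, method='D', directed=False)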
However, first we prepare a small function to plot any of the original data sets together with its embedding.
End of explanation
# wrap data into Shogun features
data, colors = generate_data('helix')
features = sg.features(data)
# create MDS instance
converter = sg.transformer('StochasticProximityEmbedding')
converter.put('target_dim', 2)
# embed helix data
embedded_features = converter.transform(features)
embedded_data = embedded_features.get('feature_matrix')
plot(data, embedded_data, colors)
Explanation: As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the intrinsic dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the polar angle and distance from the centre, or height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE), to embed the helix:
End of explanation |
11,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name = "inputs_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope("generator", reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
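Here is a minimal sketch of the reuse mechanics; the scope and variable names below are made up for illustration and are not part of this notebook's model:
python
with tf.variable_scope('demo_scope'):
    v = tf.get_variable('w', shape=[2, 3])   # creates a variable named demo_scope/w
with tf.variable_scope('demo_scope', reuse=True):
    v_again = tf.get_variable('w')           # returns the existing demo_scope/w instead of a new one
print(v.name, v_again.name)                  # both print demo_scope/w:0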
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
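A one-line sketch of that rescaling (images here stands for any array of MNIST pixels in [0, 1]; the training loop later in this notebook applies exactly this to each batch):
python
images_rescaled = images * 2 - 1  # maps pixel values from [0, 1] to [-1, 1] to match the tanh range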
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope("discriminator", reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, reuse=False, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
# print(train_loss_d.shape)
# print(train_loss_g.shape)
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
11,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https
Step2: (you can double-ckick on collapsed cells to view the non-essential code inside)
TPU or GPU detection
Step3: Parameters
Step4: tf.data.Dataset
Step5: Let's have a look at the data
Step6: Keras model
Step7: Train and validate the model
Step8: Visualize predictions
Step9: Deploy the trained model to AI Platform prediction
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS (Google Cloud Storage) bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no AI Platform charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
Step10: Colab-only auth
Step11: Export the model for serving from AI Platform
Step12: Deploy the model
This uses the command-line interface. You can do the same thing through the AI Platform UI at https
Step13: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ai-platform"
command line tool but any tool that can send a JSON payload to a REST endpoint will work. | Python Code:
import os, re, time, json
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
AUTOTUNE = tf.data.AUTOTUNE
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
batch_train_ds = training_dataset.unbatch().batch(N)
# eager execution: loop through datasets normally
for validation_digits, validation_labels in validation_dataset:
validation_digits = validation_digits.numpy()
validation_labels = validation_labels.numpy()
break
for training_digits, training_labels in batch_train_ds:
training_digits = training_digits.numpy()
training_labels = training_labels.numpy()
break
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/keras-tensorflow-tpu300px.png" width="300" alt="Keras+Tensorflow+Cloud TPU"></td></tr></table>
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Datset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
<h3><a href="https://cloud.google.com/gpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/gpu-hexagon.png" width="50"></a> Train on GPU or TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
Select a GPU or TPU backend (Runtime > Change runtime type)
Run all cells up to and including "Train and validate the model" and "Visualize predictions".
<h3><a href="https://cloud.google.com/ml-engine/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/mlengine-hexagon.png" width="50"></a> Deploy to AI Platform</h3>
Configure a Google cloud project and bucket as well as the desired model name in "Deploy the trained model".
Run the remaining cells to the end to deploy your model to Cloud AI Platform Prediction and test the deployment.
TPUs are located in Google Cloud, for optimal performance, they read data directly from Google Cloud Storage (GCS).
Imports
End of explanation
try: # detect TPUs
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError: # detect GPUs
strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines
#strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
#strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines
print("Number of accelerators: ", strategy.num_replicas_in_sync)
Explanation: (you can double-click on collapsed cells to view the non-essential code inside)
TPU or GPU detection
End of explanation
BATCH_SIZE = 64 * strategy.num_replicas_in_sync # Gobal batch size.
# The global batch size will be automatically sharded across all
# replicas by the tf.data.Dataset API. A single TPU has 8 cores.
# The best practice is to scale the batch size by the number of
# replicas (cores). The learning rate should be increased as well.
LEARNING_RATE = 0.01
LEARNING_RATE_EXP_DECAY = 0.6 if strategy.num_replicas_in_sync == 1 else 0.7
# Learning rate computed later as LEARNING_RATE * LEARNING_RATE_EXP_DECAY**epoch
# 0.7 decay instead of 0.6 means a slower decay, i.e. a faster learnign rate.
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: Parameters
End of explanation
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(AUTOTUNE) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM
dataset = dataset.batch(10000)
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4% accuracy in 10 epochs (with a batch size of 64)
def make_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1), name="image"),
tf.keras.layers.Conv2D(filters=12, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
tf.keras.layers.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
tf.keras.layers.Activation('relu'), # activation after batch norm
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(filters=32, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=False),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.4), # Dropout on dense layer only
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
# Going back and forth between TPU and host is expensive. Better to run 128 batches on the TPU before reporting back.
return model
with strategy.scope():
model = make_model()
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: LEARNING_RATE * LEARNING_RATE_EXP_DECAY**epoch,
verbose=True)
Explanation: Keras model: 3 convolutional layers, 2 dense layers
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
EPOCHS = 10
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
print("Steps per epoch: ", steps_per_epoch)
history = model.fit(training_dataset,
steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1,
callbacks=[lr_decay])
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
PROJECT = "" #@param {type:"string"}
BUCKET = "gs://" #@param {type:"string", default:"jddj"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "mnist" #@param {type:"string"}
MODEL_VERSION = "v1" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', BUCKET), 'For this part, you need a GCS bucket. Head to http://console.cloud.google.com/storage and create one.'
Explanation: Deploy the trained model to AI Platform prediction
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS (Google Cloud Storage) bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no AI Platform charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
End of explanation
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the Colab machine to access your private GCS buckets.
Explanation: Colab-only auth
End of explanation
export_path = os.path.join(BUCKET, 'keras_export', str(time.time()))
# The serving function performig data pre- and post-processing.
# The model itself is captured by this function by closure.
# Pre-processing: images are received in uint8 format converted
# to float32 before being sent to through the model.
# Post-processing: the Keras model outputs digit probabilities. We want
# the detected digits. An additional tf.argmax is needed.
# @tf.function turns the code in this function into a Tensorflow graph that
# can be exported. This way, the model itself, as well as its pre- and post-
# processing steps are exported in the SavedModel and deployed in a single step.
@tf.function(input_signature=[tf.TensorSpec([None, 28*28], dtype=tf.uint8)])
def my_serve(images):
images = tf.cast(images, tf.float32)/255 # pre-processing
probabilities = model(images, training=False) # prediction from model (inference graph only)
classes = tf.argmax(probabilities, axis=-1) # post-processing
return {'digits': classes}
# exporting in the Tensorflow standard SavedModel format with a serving input function
model.save(export_path, signatures={'serving_default': my_serve}, save_format="tf")
print("Model exported to: ", export_path)
# saved_model_cli: a useful too for troubleshooting SavedModels (the tool is part of the Tensorflow installation)
!saved_model_cli show --dir {export_path}
!saved_model_cli show --dir {export_path} --tag_set serve
!saved_model_cli show --dir {export_path} --tag_set serve --signature_def serving_default
# A note on naming:
# The "serve" tag set (i.e. serving functionality) is the only one exported by tf.saved_model.save
# All the other names are defined by the user in the fllowing lines of code:
# def myserve(self, images):
# ******
# return {'digits': classes}
# ******
# tf.saved_model.save(..., signatures={'serving_default': serving_model.myserve})
# ***************
Explanation: Export the model for serving from AI Platform
End of explanation
# Create the model
if NEW_MODEL:
!gcloud ai-platform models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.models.versions
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/ai-platform/models/{MODEL_NAME}"
!gcloud ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=2.1 --python-version=3.7
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the AI Platform UI at https://console.cloud.google.com/ai-platform/models
End of explanation
# prepare digits to send to online prediction endpoint
digits_float32 = np.concatenate((font_digits, validation_digits[:100-N])) # pixel values in [0.0, 1.0] float range
digits_uint8 = np.round(digits_float32*255).astype(np.uint8) # pixel values in [0, 255] int range
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits_uint8:
# the format for AI Platform online predictions is: one JSON object per line
data = json.dumps({"images": digit.tolist()}) # "images" because that was the name you gave this parametr in the serving funtion my_serve
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ai-platform predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
predictions = np.array([int(p) for p in predictions if p.isdigit()])
display_top_unrecognized(digits_float32, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ai-platform"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
End of explanation |
11,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load necessary packages
Step1: Define functions for filtering, moving averages, and normalizing data
Step2: Read bandwidth and rain/temperature data and normalize them
Step3: Smoothing data (11-year running average)
Step4: Calculate correlation and p-values with considering autocorrelation, and the autocorrelations (coef)
Step5: Check the correlation results | Python Code:
%matplotlib inline
from scipy import interpolate
from scipy import special
from scipy.signal import butter, lfilter, filtfilt
import matplotlib.pyplot as plt
import numpy as np
from numpy import genfromtxt
from nitime import algorithms as alg
from nitime import utils
from scipy.stats import t
import pandas as pd
Explanation: Load necessary packages
End of explanation
def butter_lowpass(cutoff, fs, order=3):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def filter(x, cutoff, axis, fs=1.0, order=3):
b, a = butter_lowpass(cutoff, fs, order=order)
y = filtfilt(b, a, x, axis=axis)
return y
def movingaverage(interval, window_size):
window = np.ones(int(window_size))/float(window_size)
return np.convolve(interval, window, 'valid')
def owncorr(x,y,n):
x_ano=np.ma.anomalies(x)
x_sd=np.sum(x_ano**2,axis=0)
y_ano=np.ma.anomalies(y)
y_sd=np.sum(y_ano**2,axis=0)
nomi = np.dot(x_ano,y_ano)
corr = nomi/np.sqrt(np.dot(x_sd[None],y_sd[None]))
# When using AR_est_YW, we should substract mean from
# time series first
x_coef, x_sigma = alg.AR_est_YW (x_ano, 1)
y_coef, y_sigma = alg.AR_est_YW (y_ano, 1)
if x_coef > 1:
eps = np.spacing(1.0)
x_coef = 1.0 - eps**(1/4)
elif x_coef < 0:
x_coef = 0.0
if y_coef > 1:
eps = np.spacing(1.0)
y_coef = 1.0 - eps**(1/4)
elif y_coef < 0:
y_coef = 0.0
neff = n*(1-x_coef*y_coef)/(1+x_coef*y_coef)
if neff <3:
neff = 3
coef = []
coef.append(x_coef)
coef.append(y_coef)
tval = corr/np.sqrt(1-corr**2)*np.sqrt(neff-2)
pval = t.sf(abs(tval),neff-2)*2
return corr,pval,coef
def gaussianize(X):
n = X.shape[0]
#p = X.shape[1]
Xn = np.empty((n,))
Xn[:] = np.NAN
nz = np.logical_not(np.isnan(X))
index = np.argsort(X[nz])
rank = np.argsort(index)
CDF = 1.*(rank+1)/(1.*n) -1./(2*n)
Xn[nz] = np.sqrt(2)*special.erfinv(2*CDF -1)
return Xn
Explanation: Define functions for filtering, moving averages, and normalizing data
End of explanation
data = genfromtxt('data/scotland.csv', delimiter=',')
bandw = data[0:115,4] # band width (1879-1993), will be correlated with T/P
bandwl = data[3:129,4] # band width (1865-1990), will be correlation with winter NAO
bandwn = gaussianize(bandw) #normalized band width
bandwln = gaussianize(bandwl) #normalized band width
rain = genfromtxt('data/Assynt_P.txt') #precipitaiton
temp = genfromtxt('data/Assynt_T.txt') #temperature
wnao = genfromtxt('data/wnao.txt') #winter NAO
wnao = wnao[::-1]
rainn = gaussianize(rain)
tempn = gaussianize(temp)
#calculate the ratio of temperature over precipitation
ratio = temp/rain
ration = gaussianize(ratio)
Explanation: Read bandwidth and rain/temperature data and normalize them
End of explanation
bandw_fil = movingaverage(bandw, 11)
bandwn_fil = movingaverage(bandwn, 11)
bandwl_fil = movingaverage(bandwl, 11)
rain_fil = movingaverage(rain, 11)
rainn_fil = movingaverage(rainn, 11)
ratio_fil = movingaverage(ratio, 11)
wnao_fil = movingaverage(wnao, 11)
Explanation: Smoothing data (11-year running average)
End of explanation
corr_ratio,pval_ratio,coef = owncorr(bandw_fil,ratio_fil,115) #correlation between smoothed bandwidth and ratio
corr_nao,pval_nao,coef_nao = owncorr(bandwl_fil,wnao_fil,126) #correlation between smoothed bandwidth and winter NAO
corr_n,pval_n,coef_n = owncorr(bandwn,ration,115) #correlation between normalized bandwidth and ratio
corr_naon,pval_naon,coef_naon = owncorr(bandwln,wnao,126) #correlation between normalized bandwidtha and winter NAO
Explanation: Calculate correlation and p-values with considering autocorrelation, and the autocorrelations (coef)
End of explanation
print(corr_ratio)
print(pval_ratio)
print(coef)
print(corr_nao)
print(pval_nao)
print(coef_nao)
print(corr_n)
print(pval_n)
print(coef_n)
print(corr_naon)
print(pval_naon)
print(coef_naon)
Explanation: Check the correlation results
End of explanation |
11,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Практическое задание к уроку 1 (2 неделя).
Линейная регрессия
Step1: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных
Step2: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных)
Step3: Блок 1. Ответьте на вопросы (каждый 0.5 балла)
Step4: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
Step5: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов
Step6: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая
Step7: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
Step8: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов
Step9: Блок 2. Поясните, каким образом введение регуляризации решает проблему с весами и мультиколлинеарностью.
Ваш ответ (1 балл)
Step10: Визуализируем динамику весов при увеличении параметра регуляризации
Step11: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 3. Ответьте на вопросы (каждый 0.25 балла)
Step12: Итак, мы выбрали некоторый параметр регуляризации. Давайте посмотрим, какие бы мы выбирали alpha, если бы делили выборку только один раз на обучающую и тестовую, то есть рассмотрим траектории MSE, соответствующие отдельным блокам выборки. | Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Практическое задание к уроку 1 (2 неделя).
Линейная регрессия: переобучение и регуляризация
В этом задании мы на примерах увидим, как переобучаются линейные модели, разберем, почему так происходит, и выясним, как диагностировать и контролировать переобучение.
Во всех ячейках, где написан комментарий с инструкциями, нужно написать код, выполняющий эти инструкции. Остальные ячейки с кодом (без комментариев) нужно просто выполнить. Кроме того, в задании требуется отвечать на вопросы; ответы нужно вписывать после выделенного слова "Ответ:".
Напоминаем, что посмотреть справку любого метода или функции (узнать, какие у нее аргументы и что она делает) можно с помощью комбинации Shift+Tab. Нажатие Tab после имени объекта и точки позволяет посмотреть, какие методы и переменные есть у этого объекта.
End of explanation
# (0 баллов)
# Считайте данные и выведите первые 5 строк
df = pd.read_csv('bikes_rent.csv', header=0)
df.head()
Explanation: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных:
End of explanation
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(15, 10))
for idx, feature in enumerate(df.columns[:-1]):
df.plot(feature, "cnt", subplots=True, kind="scatter", ax=axes[idx / 4, idx % 4])
Explanation: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных):
* season: 1 - весна, 2 - лето, 3 - осень, 4 - зима
* yr: 0 - 2011, 1 - 2012
* mnth: от 1 до 12
* holiday: 0 - нет праздника, 1 - есть праздник
* weekday: от 0 до 6
* workingday: 0 - нерабочий день, 1 - рабочий день
* weathersit: оценка благоприятности погоды от 1 (чистый, ясный день) до 4 (ливень, туман)
* temp: температура в Цельсиях
* atemp: температура по ощущениям в Цельсиях
* hum: влажность
* windspeed(mph): скорость ветра в милях в час
* windspeed(ms): скорость ветра в метрах в секунду
* cnt: количество арендованных велосипедов (это целевой признак, его мы будем предсказывать)
Итак, у нас есть вещественные, бинарные и номинальные (порядковые) признаки, и со всеми из них можно работать как с вещественными. С номинальныеми признаками тоже можно работать как с вещественными, потому что на них задан порядок. Давайте посмотрим на графиках, как целевой признак зависит от остальных
End of explanation
# Код 1.1 (0.5 балла)
# Посчитайте корреляции всех признаков, кроме последнего, с последним с помощью метода corrwith:
df[df.columns[:-1]].corrwith(df['cnt'])
Explanation: Блок 1. Ответьте на вопросы (каждый 0.5 балла):
1. Каков характер зависимости числа прокатов от месяца?
* ответ: в летние месяцы число прокатов в среднем возрастает, в зимние убывает
1. Укажите один или два признака, от которых число прокатов скорее всего зависит линейно
* ответ: temp (температура в Цельсиях), atemp (температура по ощущениям в Цельсиях)
Давайте более строго оценим уровень линейной зависимости между признаками и целевой переменной. Хорошей мерой линейной зависимости между двумя векторами является корреляция Пирсона. В pandas ее можно посчитать с помощью двух методов датафрейма: corr и corrwith. Метод df.corr вычисляет матрицу корреляций всех признаков из датафрейма. Методу df.corrwith нужно подать еще один датафрейм в качестве аргумента, и тогда он посчитает попарные корреляции между признаками из df и этого датафрейма.
End of explanation
# Код 1.2 (0.5 балла)
# Посчитайте попарные корреляции между признаками temp, atemp, hum, windspeed(mph), windspeed(ms) и cnt
# с помощью метода corr:
df.loc[:, ['temp', 'atemp', 'hum', 'windspeed(mph)', 'windspeed(ms)', 'cnt']].corr()
Explanation: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
End of explanation
# Код 1.3 (0.5 балла)
# Выведите средние признаков
df.mean()
Explanation: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов: temp и atemp (коррелируют по своей природе) и два windspeed (потому что это просто перевод одних единиц в другие). Далее мы увидим, что этот факт негативно сказывается на обучении линейной модели.
Напоследок посмотрим средние признаков (метод mean), чтобы оценить масштаб признаков и доли 1 у бинарных признаков.
End of explanation
from sklearn.preprocessing import scale
from sklearn.utils import shuffle
df_shuffled = shuffle(df, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["cnt"]
Explanation: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая: коллинеарные признаки
Итак, в наших данных один признак дублирует другой, и есть еще два очень похожих. Конечно, мы могли бы сразу удалить дубликаты, но давайте посмотрим, как бы происходило обучение модели, если бы мы не заметили эту проблему.
Для начала проведем масштабирование, или стандартизацию признаков: из каждого признака вычтем его среднее и поделим на стандартное отклонение. Это можно сделать с помощью метода scale.
Кроме того, нужно перемешать выборку, это потребуется для кросс-валидации.
End of explanation
from sklearn.linear_model import LinearRegression
# Код 2.1 (1 балл)
# Создайте объект линейного регрессора, обучите его на всех данных и выведите веса модели
# (веса хранятся в переменной coef_ класса регрессора).
# Можно выводить пары (название признака, вес), воспользовавшись функцией zip, встроенной в язык python
# Названия признаков хранятся в переменной df.columns
lr = LinearRegression()
lr.fit(X, y)
zip(df.columns, lr.coef_)
Explanation: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
End of explanation
from sklearn.linear_model import Lasso, Ridge
# Код 2.2 (0.5 балла)
# Обучите линейную модель с L1-регуляризацией
lasso = Lasso()
lasso.fit(X, y)
zip(df.columns, lasso.coef_)
# Код 2.3 (0.5 балла)
# Обучите линейную модель с L2-регуляризацией
ridge = Ridge()
ridge.fit(X, y)
zip(df.columns, ridge.coef_)
Explanation: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов:
$w = (X^TX)^{-1} X^T y$.
Если в X есть коллинеарные (линейно-зависимые) столбцы, матрица $X^TX$ становится вырожденной, и формула перестает быть корректной. Чем более зависимы признаки, тем меньше определитель этой матрицы и тем хуже аппроксимация $Xw \approx y$. Такая ситуацию называют проблемой мультиколлинеарности, вы обсуждали ее на лекции.
С парой temp-atemp чуть менее коррелирующих переменных такого не произошло, однако на практике всегда стоит внимательно следить за коэффициентами при похожих признаках.
Решение проблемы мультиколлинеарности состоит в регуляризации линейной модели. К оптимизируемому функционалу прибавляют L1 или L2 норму весов, умноженную на коэффициент регуляризации $\alpha$. В первом случае метод называется Lasso, а во втором --- Ridge. Подробнее об этом также рассказано в лекции.
Обучите регрессоры Ridge и Lasso с параметрами по умолчанию и убедитесь, что проблема с весами решилась.
End of explanation
# Код 3.1 (1 балл)
alphas = np.arange(1, 500, 50)
coefs_lasso = np.zeros((alphas.shape[0], X.shape[1])) # матрица весов размера (число регрессоров) x (число признаков)
coefs_ridge = np.zeros((alphas.shape[0], X.shape[1]))
# Для каждого значения коэффициента из alphas обучите регрессор Lasso
# и запишите веса в соответствующую строку матрицы coefs_lasso (вспомните встроенную в python функцию enumerate),
# а затем обучите Ridge и запишите веса в coefs_ridge.
for idx, a in enumerate(alphas):
lasso = Lasso(alpha=a)
lasso.fit(X, y)
coefs_lasso[idx] = lasso.coef_
for idx, a in enumerate(alphas):
ridge = Ridge(alpha=a)
ridge.fit(X, y)
coefs_ridge[idx] = ridge.coef_
Explanation: Блок 2. Поясните, каким образом введение регуляризации решает проблему с весами и мультиколлинеарностью.
Ваш ответ (1 балл): введение регуляризации штрафует модель за сложность, накладывая ограничение на веса (по сумме квадратов весов или сумме модулей весов). Таким образом, отсекаются алгоритмы с большими весами, что приводит к уменьшению мультиколлинеарности. L1-регуляризация позволяет также проводить отбор признаков, поскольку веса могут обратиться в ноль.
Проблема вторая: неинформативные признаки
В отличие от L2-регуляризации, L1 обнуляет веса при некоторых признаках. Объяснение данному факту дается в одной из лекций курса.
Давайте пронаблюдаем, как меняются веса при увеличении коэффициента регуляризации $\alpha$ (в лекции коэффициент при регуляризаторе мог быть обозначен другой буквой).
End of explanation
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_lasso.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Lasso")
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_ridge.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Ridge")
Explanation: Визуализируем динамику весов при увеличении параметра регуляризации:
End of explanation
from sklearn.linear_model import LassoCV
# Код 3.2 (1 балл)
# Обучите регрессор LassoCV на всех параметрах регуляризации из alpha
# Постройте график _усредненного_ по строкам MSE в зависимости от alpha.
# Выведите выбранное alpha, а также пары "признак-коэффициент" для обученного вектора коэффициентов
alphas = np.arange(1, 100, 5)
lasso_cv = LassoCV(alphas=alphas)
lasso_cv.fit(X, y)
# График
mse = [mse_row.mean() for mse_row in lasso_cv.mse_path_]
plt.plot(lasso_cv.alphas_, mse)
plt.xlabel("alpha")
plt.ylabel("mse")
# Выбранное alpha
print 'alpha = %d' % lasso_cv.alpha_
# Пары "признак-коэффициент" для обученного вектора коэффициентов
zip(df.columns, lasso_cv.coef_)
#zip(lasso_cv.alphas_, lasso_cv.mse_path_)
Explanation: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 3. Ответьте на вопросы (каждый 0.25 балла):
1. Какой регуляризатор (Ridge или Lasso) агрессивнее уменьшает веса при одном и том же alpha?
* Ответ: Lasso
1. Что произойдет с весами Lasso, если alpha сделать очень большим? Поясните, почему так происходит.
* Ответ: все веса обнулятся, потому что L1-регуляризация постепенно обнуляет веса признаков и при большом alpha сумма модулей весов должна будет стремиться к нулю.
1. Можно ли утверждать, что Lasso исключает один из признаков windspeed при любом значении alpha > 0? А Ridge? Ситается, что регуляризатор исключает признак, если коэффициент при нем < 1e-3.
* Ответ: Lasso исключает один из признаков windspeed при любом значении alpha > 0, а Ridge нет.
1. Какой из регуляризаторов подойдет для отбора неинформативных признаков?
* Ответ: Lasso подойдет, т.к. он производит отбор признаков и одними из первых (после сильнокоррелирующих) будут обнулены веса у неинформативных признаков.
Далее будем работать с Lasso.
Итак, мы видим, что при изменении alpha модель по-разному подбирает коэффициенты признаков. Нам нужно выбрать наилучшее alpha.
Для этого, во-первых, нам нужна метрика качества. Будем использовать в качестве метрики сам оптимизируемый функционал метода наименьших квадратов, то есть Mean Square Error.
Во-вторых, нужно понять, на каких данных эту метрику считать. Нельзя выбирать alpha по значению MSE на обучающей выборке, потому что тогда мы не сможем оценить, как модель будет делать предсказания на новых для нее данных. Если мы выберем одно разбиение выборки на обучающую и тестовую (это называется holdout), то настроимся на конкретные "новые" данные, и вновь можем переобучиться. Поэтому будем делать несколько разбиений выборки, на каждом пробовать разные значения alpha, а затем усреднять MSE. Удобнее всего делать такие разбиения кросс-валидацией, то есть разделить выборку на K частей, или блоков, и каждый раз брать одну из них как тестовую, а из оставшихся блоков составлять обучающую выборку.
Делать кросс-валидацию для регрессии в sklearn совсем просто: для этого есть специальный регрессор, LassoCV, который берет на вход список из alpha и для каждого из них вычисляет MSE на кросс-валидации. После обучения (если оставить параметр cv=3 по умолчанию) регрессор будет содержать переменную mse_path_, матрицу размера len(alpha) x k, k = 3 (число блоков в кросс-валидации), содержащую значения MSE на тесте для соответствующих запусков. Кроме того, в переменной alpha_ будет храниться выбранное значение параметра регуляризации, а в coef_, традиционно, обученные веса, соответствующие этому alpha_.
Обратите внимание, что регрессор может менять порядок, в котором он проходит по alphas; для сопоставления с матрицей MSE лучше использовать переменную регрессора alphas_.
End of explanation
# Код 3.3 (1 балл)
# Выведите значения alpha, соответствующие минимумам MSE на каждом разбиении (то есть по столбцам).
# На трех отдельных графиках визуализируйте столбцы .mse_path_
print lasso_cv.alphas_[lasso_cv.mse_path_[:, 0].argmin()]
print lasso_cv.alphas_[lasso_cv.mse_path_[:, 1].argmin()]
print lasso_cv.alphas_[lasso_cv.mse_path_[:, 2].argmin()]
plt.figure()
plt.plot(lasso_cv.alphas_, lasso_cv.mse_path_[:, 0], color='red')
plt.legend(loc='upper right', bbox_to_anchor=(1.4, 0.95))
plt.xlabel('alpha')
plt.ylabel('mse')
plt.title('MSE for split 1')
plt.figure()
plt.plot(lasso_cv.alphas_, lasso_cv.mse_path_[:, 1], color='red')
plt.legend(loc='"upper right', bbox_to_anchor=(1.4, 0.95))
plt.xlabel('alpha')
plt.ylabel('mse')
plt.title('MSE for split 2')
plt.figure()
plt.plot(lasso_cv.alphas_, lasso_cv.mse_path_[:, 2], color='red')
plt.legend(loc='upper right', bbox_to_anchor=(1.4, 0.95))
plt.xlabel('alpha')
plt.ylabel('mse')
plt.title('MSE for split 3')
Explanation: Итак, мы выбрали некоторый параметр регуляризации. Давайте посмотрим, какие бы мы выбирали alpha, если бы делили выборку только один раз на обучающую и тестовую, то есть рассмотрим траектории MSE, соответствующие отдельным блокам выборки.
End of explanation |
11,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assembly of system with multiple domains, variables and numerics
This tutorial has the dual purpose of illustrating parameter assigment in PorePy, and also showing how to set up problems in (mixed-dimensional) geometries. It contains two examples, one covering a simple setup (the pressure equation), and a second illustrating the full generality of the coupling scheme.
Step1: Data assignment
We will mainly use default values for parameters, while overriding some of the values. Sets of default parameters are available for flow, transport and mechanics (elasticity). For example, initialize_default_data initializes a second order permeability tensor in the flow data, and a fourth order stiffness tensor in the mechanics data. For more details, and definitions of what the defaults are, see the modules pp/params/data.py and pp/params/parameter_dictionaries.py <br>
The parameters are stored in a class pp.Parameters. This class is again stored in the data dictionary on each node and edge in the GridBucket, that is, in the variable d in this loop. The Paramater object can be accessed by d[pp.PARAMETERS]. To allow storage of parameters for several problems simultaneously (say, we want to solve a combined flow and transport problem), the Parameter class uses keywords to identify sets of parameters. This keyword must also be provided to the discretization method.
When the parameter class is initialized with default values, the default behavior is to identify the parameters by the same keyword as is used to choose the type of default parameters (default_parameter_type). While this is usually good practice, we here override this behavior for illustrative purposes, using the keyword_param_storage.
Step2: Example 1
The practical way of setting up a problem with a single variable is described here. For explanations, and hints on how to consider a more general setting, see the expanded Example 2 below. <br>
As shown in the tutorial on single-phase flow, the equation in the mono-dimensional case is
$$ - \nabla \cdot K \nabla p = f. $$
We expand to the mixed-dimensional version of the single-phase flow problem by solving the problem in each of the subdomains (here
Step3: Now, we define the variables on grids and edges and identify the individual terms of the equation we want to solve. We have an equation for the pressure on each grid (node of the GridBucket), and an equation for the mortar flux between them (edge of the bucket). The terms to be discretized are the diffusion term on the nodes ($- \nabla \cdot K \nabla p$) and the coupling term $- \kappa (p_{fracture} - \texttt{tr }p_{matrix})$ on the edges.
Step4: The task of assembling the linear system is left to a dedicated object, called an Assembler, whih again relies on a DofManager to keep track of the ordering of unknowns (for more, see below).
Discretization and assembly of the global linear system can in this case be carried out by a single function call. Note that for some problems, notably poro-elasticity, this is not possible, then discretization must be carried out first.
Below, A is the global linear system, and b is the corresponding right hand side, and we obtain the pressure solution by solving the system.
Step5: The parameters assigned above will not yield a well-posed problem, thus the solve will likely produce a warning about the matrix being singular. This can be ignored in this case. <br>
The ordering of the unknowns in the global linear system will vary depending on how the components in the GridBucket and the unknowns are traversed. The DofManager has methods to map from combinations of the relevant component in the GridBucket (either the grid or the edge between grids) with variables to the corresponding degrees of freedom.
Step6: Example 2
The first example showed how to work with the assembler in reletively simple cases. In this second example, we aim to illustrate the full scope of the assembler, including
Step7: Primary variables must be defined on each component of the GridBucket.
On the first grid we use a cell centered method which has one primary variable "pressure".
On the second grid, we use a mixed method with both pressure and fluxes combined into one primary variable.
The temperature is tagged with the same keyword on both grids.
Step8: Next we assign a keyword to the coupling terms between the grid. We will have three coupling variables;
one for the fluid flux, and one for each of the diffusive terms in the temperature equation.
Step9: We now give a keyword to the operators.
Step10: So far we have only defined the keywords needed for the discretizations to obtain the correct parameters
and couplings. Next, we create the discretization objects
Step11: Discretization operators on the coupling conditions, chosen to illustrate the framework.
Note that in all cases, the coupling conditions need a separate keyword, which should
correspond to an assigned set of data
Step12: Loop over the nodes in the GridBucket, define primary variables and discretization schemes
Step13: Loop over the edges in the GridBucket, define primary variables and discretizations.
Notice how coupling discretizations are assigned as a dictionary, one per coupling term on each edge. For each term, the coupling contains an inner dictionary, with the keys being the edge and the two neighboring grids. For the edge, the values are the name of the mortar variable, and the discretization object to be applied. For the grids, the values are the variable name on the grid, and the keyword identifying the discretization operator, as specified in the loop over nodes.
Step14: We have now assigned all the data. The task of assembling the linear system is left to a dedicated object
Step15: Discretization and assembly of the global linear system can again be carried out by separate function calls. | Python Code:
import numpy as np
import scipy.sparse as sps
import porepy as pp
Explanation: Assembly of system with multiple domains, variables and numerics
This tutorial has the dual purpose of illustrating parameter assigment in PorePy, and also showing how to set up problems in (mixed-dimensional) geometries. It contains two examples, one covering a simple setup (the pressure equation), and a second illustrating the full generality of the coupling scheme.
End of explanation
def assign_data(gb, keyword_param_storage):
# Method to assign data.
for g, d in gb:
# This keyword is used to define which set of default parameters to pick
# Replace with 'transport' or 'mechanics' if needed
default_parameter_type = 'flow'
# Assign a non-default permeability, for illustrative purposes
if g.dim == 2:
kxx = 10 * np.ones(g.num_cells)
else:
kxx = 0.1 * np.ones(g.num_cells)
perm = pp.SecondOrderTensor(kxx)
# We also set Dirichlet conditions, as the default Neumann condition
# gives a singular problem
bc = pp.BoundaryCondition(g, g.get_boundary_faces(), 'dir')
# Create a dictionary to override the default parameters
specified_parameters = {'second_order_tensor': perm, 'bc': bc}
# Define the
pp.initialize_default_data(g, d, default_parameter_type, specified_parameters,
keyword_param_storage)
# Internally to the Parameter class, the parameters are stored as dictionaries.
# To illustrate how to access specific sets of parameters, print the keywords
# for one of the grids
if g.dim == 2:
print('The assigned parameters for the 2d grid are')
print(d[pp.PARAMETERS][keyword_param_storage].keys())
for e, d in gb.edges():
# On edges in the GridBucket, there is currently no methods for default initialization.
data = {"normal_diffusivity": 2e1}
# Add parameters: We again use keywords to identify sets of parameters.
d[pp.PARAMETERS] = pp.Parameters(keywords=['flow_param_edge'], dictionaries=[data])
return gb
Explanation: Data assignment
We will mainly use default values for parameters, while overriding some of the values. Sets of default parameters are available for flow, transport and mechanics (elasticity). For example, initialize_default_data initializes a second order permeability tensor in the flow data, and a fourth order stiffness tensor in the mechanics data. For more details, and definitions of what the defaults are, see the modules pp/params/data.py and pp/params/parameter_dictionaries.py <br>
The parameters are stored in a class pp.Parameters. This class is again stored in the data dictionary on each node and edge in the GridBucket, that is, in the variable d in this loop. The Paramater object can be accessed by d[pp.PARAMETERS]. To allow storage of parameters for several problems simultaneously (say, we want to solve a combined flow and transport problem), the Parameter class uses keywords to identify sets of parameters. This keyword must also be provided to the discretization method.
When the parameter class is initialized with default values, the default behavior is to identify the parameters by the same keyword as is used to choose the type of default parameters (default_parameter_type). While this is usually good practice, we here override this behavior for illustrative purposes, using the keyword_param_storage.
End of explanation
gb, _ = pp.grid_buckets_2d.single_horizontal([2, 2], simplex=False)
parameter_keyword = 'flow_param'
gb = assign_data(gb, parameter_keyword)
Explanation: Example 1
The practical way of setting up a problem with a single variable is described here. For explanations, and hints on how to consider a more general setting, see the expanded Example 2 below. <br>
As shown in the tutorial on single-phase flow, the equation in the mono-dimensional case is
$$ - \nabla \cdot K \nabla p = f. $$
We expand to the mixed-dimensional version of the single-phase flow problem by solving the problem in each of the subdomains (here: fracture and matrix) and adding the flux between the subdomains
$$ \lambda = - \kappa (p_{fracture} - \texttt{tr }p_{matrix}), $$
with $\kappa$ denoting the normal permeability of the fractures. For details, refer to the tutorial on single-phase flow and published papers, e.g. this one.<br><br>
We start by defining the grid bucket and assigning parameters, tagging them with a keyword. This keyword ensures that the discretizer (here tpfa, defined below) uses the right set of parameters.
End of explanation
# Define the pressure variable with the same keyword on all grids
grid_variable = 'pressure'
# Variable name for the flux between grids, that is, the primary variable
# on the edges in the GridBucket.
mortar_variable = 'mortar_flux'
# Identifier of the discretization operator on each grid
operator_keyword = 'diffusion'
# Identifier of the discretization operator between grids
coupling_operator_keyword = 'coupling_operator'
# Use a two-point flux approximation on all grids.
# Note the keyword here: It must be the same as used when assigning the
# parameters.
tpfa = pp.Tpfa(parameter_keyword)
# Between the grids we use a Robin type coupling (resistance to flow over a fracture).
# Again, the keyword must be the same as used to assign data to the edge
# The edge discretization also needs access to the corresponding discretizations
# on the neighboring nodes
edge_discretization = pp.RobinCoupling('flow_param_edge', tpfa, tpfa)
# Loop over the nodes in the GridBucket, define primary variables and discretization schemes
for g, d in gb:
# Assign primary variables on this grid. It has one degree of freedom per cell.
d[pp.PRIMARY_VARIABLES] = {grid_variable: {"cells": 1, "faces": 0}}
# Assign discretization operator for the variable.
# If the discretization is composed of several terms, they can be assigned
# by multiple entries in the inner dictionary, e.g.
# {operator_keyword_1: method_1, operator_keyword_2: method_2, ...}
d[pp.DISCRETIZATION] = {grid_variable: {operator_keyword: tpfa}}
# Loop over the edges in the GridBucket, define primary variables and discretizations
for e, d in gb.edges():
g1, g2 = gb.nodes_of_edge(e)
# The mortar variable has one degree of freedom per cell in the mortar grid
d[pp.PRIMARY_VARIABLES] = {mortar_variable: {"cells": 1}}
# The coupling discretization links an edge discretization with variables
# and discretization operators on each neighboring grid
d[pp.COUPLING_DISCRETIZATION] = {
coupling_operator_keyword: {
g1: (grid_variable, operator_keyword),
g2: (grid_variable, operator_keyword),
e: (mortar_variable, edge_discretization),
}
}
d[pp.DISCRETIZATION_MATRICES] = {'flow_param_edge': {}}
Explanation: Now, we define the variables on grids and edges and identify the individual terms of the equation we want to solve. We have an equation for the pressure on each grid (node of the GridBucket), and an equation for the mortar flux between them (edge of the bucket). The terms to be discretized are the diffusion term on the nodes ($- \nabla \cdot K \nabla p$) and the coupling term $- \kappa (p_{fracture} - \texttt{tr }p_{matrix})$ on the edges.
End of explanation
dof_manager = pp.DofManager(gb)
assembler = pp.Assembler(gb, dof_manager)
assembler.discretize()
# Assemble the linear system, using the information stored in the GridBucket
A, b = assembler.assemble_matrix_rhs()
pressure = sps.linalg.spsolve(A, b)
Explanation: The task of assembling the linear system is left to a dedicated object, called an Assembler, whih again relies on a DofManager to keep track of the ordering of unknowns (for more, see below).
Discretization and assembly of the global linear system can in this case be carried out by a single function call. Note that for some problems, notably poro-elasticity, this is not possible, then discretization must be carried out first.
Below, A is the global linear system, and b is the corresponding right hand side, and we obtain the pressure solution by solving the system.
End of explanation
# Getting the grids is easy, there is one in each dimension
g_2d = gb.grids_of_dimension(2)[0]
g_1d = gb.grids_of_dimension(1)[0]
# Formally loop over the edges, there is a single one
for e, _ in gb.edges():
continue
# Get 2d dofs
global_dof_2d = dof_manager.grid_and_variable_to_dofs(g_2d, grid_variable)
# Print the relevant part of the system matrix
print(A.toarray()[global_dof_2d, :][:, global_dof_2d])
Explanation: The parameters assigned above will not yield a well-posed problem, thus the solve will likely produce a warning about the matrix being singular. This can be ignored in this case. <br>
The ordering of the unknowns in the global linear system will vary depending on how the components in the GridBucket and the unknowns are traversed. The DofManager has methods to map from combinations of the relevant component in the GridBucket (either the grid or the edge between grids) with variables to the corresponding degrees of freedom.
End of explanation
def assign_data_2(gb, keyword_param_storage, keyword_param_storage_2=None):
# Method to assign data.
for g, d in gb:
# This keyword is used to define which set of default parameters to pick
# Replace with 'transport' or 'mechanics' if needed
default_parameter_type = 'flow'
# Assign a non-default permeability, for illustrative purposes
if g.dim == 2:
kxx = 10 * np.ones(g.num_cells)
else:
kxx = 0.1 * np.ones(g.num_cells)
perm = pp.SecondOrderTensor(kxx)
# Create a dictionary to override the default parameters
specified_parameters = {'second_order_tensor': perm}
#
# Define the
pp.initialize_default_data(g, d, default_parameter_type, specified_parameters,
keyword_param_storage)
# Internally to the Parameter class, the parameters are stored as dictionaries.
# To illustrate how to access specific sets of parameters, print the keywords
# for one of the grids
if g.dim == 2 and not keyword_param_storage_2:
print('The assigned parameters for the 2d grid are')
print(d[pp.PARAMETERS][keyword_param_storage].keys())
# For one example below, we will need two different parameter sets.
# Define a second set, with default values only.
if keyword_param_storage_2:
pp.initialize_default_data(g, d, default_parameter_type, keyword = keyword_param_storage_2)
for e, d in gb.edges():
# On edges in the GridBucket, there is currently no methods for default initialization.
data = {"normal_diffusivity": 2e1}
# Add parameters: We again use keywords to identify sets of parameters.
if keyword_param_storage_2 is not None:
# There are actually three parameters here ('two_parameter_sets' refers to the nodes)
# since we plan on using in total three mortar variables in this case
d[pp.PARAMETERS] = pp.Parameters(keywords=['flow_param_edge',
'second_flow_param_edge',
'third_flow_param_edge'],
dictionaries=[data, data, data])
else:
d[pp.PARAMETERS] = pp.Parameters(keywords=['flow_param_edge'], dictionaries=[data])
return gb
# Define a grid
gb, _ = pp.grid_buckets_2d.single_horizontal([4, 4], simplex=False)
parameter_keyword = 'flow_param'
parameter_keyword_2 = 'second_flow_param'
gb = assign_data_2(gb, parameter_keyword, parameter_keyword_2)
Explanation: Example 2
The first example showed how to work with the assembler in reletively simple cases. In this second example, we aim to illustrate the full scope of the assembler, including:
* General assignment of variables on different grid components (fracture, matrix, etc.):
* Different number of variables on each grid component
* Different names for variables (a relevant case could be to use 'temperature' on one domain, 'enthalpy' on another, with an appropriate coupling)
* General coupling schemes between different grid components:
* Multiple coupling variables
* Couplings related to different variables and discretization schemes on the neighboring grids.
* Multiple discretization operators applied to the same term / equation on different grid components
The example that incorporates all these features are necessarily quite complex and heavy on notation. As such it should be considered as a reference for how to use the functionality, more than a simulation of any real physical system.
We define two primary variables on the nodes and three coupling variables. The resulting system will be somewhat arbitrary, in that it may not reflect any standard physics, but it should better illustrate what is needed for a multi-physics problem.
First we extend the data assignment method.
End of explanation
# Variable keywords first grid
grid_1_pressure_variable = 'pressure'
grid_1_temperature_variable = 'temperature'
# Variable keywords second grid
grid_2_pressure_variable = 'flux_pressure'
grid_2_temperature_variable = 'temperature'
Explanation: Primary variables must be defined on each component of the GridBucket.
On the first grid we use a cell centered method which has one primary variable "pressure".
On the second grid, we use a mixed method with both pressure and fluxes combined into one primary variable.
The temperature is tagged with the same keyword on both grids.
End of explanation
# Coupling variable for pressure
mortar_variable_pressure = 'mortar_flux_pressure'
# Coupling variable for advective temperature flux
mortar_variable_temperature_1 = 'mortar_flux_diffusion'
mortar_variable_temperature_2 = 'mortar_flux_diffusion_2'
Explanation: Next we assign a keyword to the coupling terms between the grid. We will have three coupling variables;
one for the fluid flux, and one for each of the diffusive terms in the temperature equation.
End of explanation
# Identifier of the discretization operator for pressure discretizaiton
operator_keyword_pressure = 'pressure_diffusion'
# identifier for the temperature discretizations.
# THIS IS WEIRD: The intention is to illustrate the use of two discretization operators for
# a single variable. The natural option in this setting is advection-diffusion, but that
# requires either the existence of a Darcy flux, or tighter coupling with the pressure equation.
# Purely for illustrative purposes, we instead use a double diffusion model. There you go.
operator_keyword_temperature_1 = 'diffusion'
operator_keyword_temperature_2 = 'diffusion_2'
# Identifier of the discretization operator between grids
coupling_pressure_keyword = 'coupling_operator_pressure'
Explanation: We now give a keyword to the operators.
End of explanation
# Pressure diffusion discretization
tpfa_flow = pp.Tpfa(parameter_keyword)
vem_flow = pp.MVEM(parameter_keyword)
# Temperature diffusion discretization
tpfa_temperature = pp.Tpfa(parameter_keyword_2)
mpfa_temperature = pp.Mpfa(parameter_keyword_2)
Explanation: So far we have only defined the keywords needed for the discretizations to obtain the correct parameters
and couplings. Next, we create the discretization objects
End of explanation
# One term couples two pressure / flow variables
edge_discretization_flow = pp.RobinCoupling('flow_param_edge', tpfa_flow, vem_flow)
# The second coupling is of mpfa on one domain, and tpfa on the other, both for temperature
edge_discretization_temperature_diffusion_1 = pp.RobinCoupling('second_flow_param_edge',
mpfa_temperature, tpfa_temperature)
# The third coupling is of tpfa for flow with mpfa for temperature
edge_discretization_temperature_diffusion_2 = pp.RobinCoupling('third_flow_param_edge',
tpfa_flow, mpfa_temperature)
Explanation: Discretization operators on the coupling conditions, chosen to illustrate the framework.
Note that in all cases, the coupling conditions need a separate keyword, which should
correspond to an assigned set of data
End of explanation
for g, d in gb:
# Assign primary variables on this grid.
if g.dim == 2:
# Both pressure and temperature are represented as cell centered variables
d[pp.PRIMARY_VARIABLES] = {grid_1_pressure_variable: {"cells": 1, "faces": 0},
grid_1_temperature_variable: {"cells": 1}}
# The structure of the discretization assignment is: For each variable, give a
# pair of operetor identifications (usually a string) and a discretizaiton method.
# If a variable is identified with several discretizations, say, advection and diffusion,
# several pairs can be assigned.
# For pressure, use tpfa.
# For temperature, use two discretizations, respectively tpfa and mpfa
d[pp.DISCRETIZATION] = {grid_1_pressure_variable: {operator_keyword_pressure: tpfa_flow},
grid_1_temperature_variable: {operator_keyword_temperature_1: tpfa_temperature,
operator_keyword_temperature_2: mpfa_temperature}}
else: #g.dim == 1
# Pressure is discretized with flux-pressure combination, temperature with cell centered variables
d[pp.PRIMARY_VARIABLES] = {grid_2_pressure_variable: {"cells": 1, "faces": 1},
grid_2_temperature_variable: {"cells": 1}}
# For pressure, use vem.
# For temperature, only discretize once, with tpfa
d[pp.DISCRETIZATION] = {grid_2_pressure_variable: {operator_keyword_pressure: vem_flow},
grid_2_temperature_variable: {operator_keyword_temperature_1: tpfa_temperature}}
Explanation: Loop over the nodes in the GridBucket, define primary variables and discretization schemes
End of explanation
for e, d in gb.edges():
#
g1, g2 = gb.nodes_of_edge(e)
# The syntax used in the problem setup assumes that g1 has dimension 2
if g1.dim < g2.dim:
g2, g1 = g1, g2
# The mortar variable has one degree of freedom per cell in the mortar grid
d[pp.PRIMARY_VARIABLES] = {mortar_variable_pressure: {"cells": 1},
mortar_variable_temperature_1: {"cells": 1},
mortar_variable_temperature_2: {"cells": 1},
}
# Coupling discretizations
d[pp.COUPLING_DISCRETIZATION] = {
# The flow discretization couples tpfa on one domain with vem on the other
'edge_discretization_flow': {
g1: (grid_1_pressure_variable, operator_keyword_pressure),
g2: (grid_2_pressure_variable, operator_keyword_pressure),
e: (mortar_variable_pressure, edge_discretization_flow),
},
# The first temperature mortar couples one of the temperature discretizations on grid 1
# with the single tempearture discretization on the second grid
# As a side remark, the keys in the outer dictionary are never used, except from debugging,
# but a dictionary seemed a more natural option than a list.
'the_keywords_in_this_dictionary_can_have_any_value': {
g1: (grid_1_temperature_variable, operator_keyword_temperature_2),
g2: (grid_2_temperature_variable, operator_keyword_temperature_1),
e: (mortar_variable_temperature_1, edge_discretization_temperature_diffusion_1),
},
# Finally, the third coupling
'second_edge_discretization_temperature': {
# grid_1_variable_1 gives pressure variable, then identify the discretization object
g1: (grid_1_pressure_variable, operator_keyword_pressure),
# grid_2_variable_2 gives temperature, then use the keyword that was used to identify mpfa
# (and not the one for tpfa, would have been operator_keyword_temperature_1)
g2: (grid_2_temperature_variable, operator_keyword_temperature_2),
e: (mortar_variable_temperature_2, edge_discretization_temperature_diffusion_2),
}
}
d[pp.DISCRETIZATION_MATRICES] = {'flow_param_edge': {},
'second_flow_param_edge': {},
'third_flow_param_edge': {}
}
Explanation: Loop over the edges in the GridBucket, define primary variables and discretizations.
Notice how coupling discretizations are assigned as a dictionary, one per coupling term on each edge. For each term, the coupling contains an inner dictionary, with the keys being the edge and the two neighboring grids. For the edge, the values are the name of the mortar variable, and the discretization object to be applied. For the grids, the values are the variable name on the grid, and the keyword identifying the discretization operator, as specified in the loop over nodes.
End of explanation
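As a quick sanity check of the assignments above, one can loop over the edges once more and print the coupling terms and mortar variables that were just registered; this is only an illustrative snippet reusing the keys defined in this notebook.
for e, d in gb.edges():
    # List the coupling terms and mortar variables stored on each edge
    print("Coupling terms on this edge:", list(d[pp.COUPLING_DISCRETIZATION].keys()))
    print("Mortar variables on this edge:", list(d[pp.PRIMARY_VARIABLES].keys()))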
dof_manager = pp.DofManager(gb)
assembler = pp.Assembler(gb, dof_manager)
Explanation: We have now assigned all the data. The task of assembling the linear system is left to a dedicated object:
End of explanation
# Discretize, then Assemble the linear system, using the information stored in the GridBucket
assembler.discretize()
A, b = assembler.assemble_matrix_rhs()
# Pick out part of the discretization associated with the third mortar variable
g_2d = gb.grids_of_dimension(2)[0]
# Formally loop over the edges, there is a single one
for e, _ in gb.edges():
continue
# Get 2d dofs
global_dof_2d_pressure = dof_manager.grid_and_variable_to_dofs(g_2d, grid_1_pressure_variable)
global_dof_e_temperature = dof_manager.grid_and_variable_to_dofs(e, mortar_variable_temperature_2)
# Print the relevant part of the system matrix
print(A.toarray()[global_dof_2d_pressure, :][:, global_dof_e_temperature])
Explanation: Discretization and assembly of the global linear system can again be carried out by separate function calls.
End of explanation |
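Having assembled A and b, the coupled flow-temperature system can be solved with any sparse direct solver; the lines below are a minimal sketch using scipy, and only hint at how the solution would be pushed back to the grids, since the exact helper for that step depends on the PorePy version.
import scipy.sparse.linalg as spla
# Solve the global linear system assembled above
x = spla.spsolve(A, b)
print("Solution vector has", x.size, "degrees of freedom")
# Depending on the PorePy version, the solution can be distributed back onto the
# GridBucket, e.g. via assembler.distribute_variable(x) (name/signature may vary).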
11,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying dogs vs cats with Alex Net
In this notebook, we try to implement a (somewhat simplified) version of Alex Net to solve the Cats vs Dogs problem from Kaggle.
Following indications from course.fast.ai, the first step is clearly to download the dataset from Kaggle, after accepting the terms of use
Step1: Preprocess
OpenCV Histogram Equalization. This was how it was originally done in AlexNet. We could implement this as a layer in Keras, but for simplicity and consistency with https
Step2: First of all, for each of the two classes (cats and dogs), move 1000 (out of 12500) images from train to validation.
Step3: Compute the mean of all images
Because we'll want to center them
Step5: It's pretty much uniform gray, with a slightly lighter shade in the middle
Start the keras stuff
Step6: The following is essentially Alex Net architecture
The only simplification (in terms of code; the resulting network is actually denser) is to use 5 "complete" convolutional layers instead of splitting two of them in half. The original choice was made to facilitate learning on 2 GPUs in parallel, but since we're using one GPU anyway...
The transpose in center is done to convert images from the usual (channels_last) layout to Theano's (channels_first) layout.
Step7: Helper functions to get batches and fit models. They are both borrowed from course.fast.ai
Step8: Load batches for both training and validation.
batch_size=64 was chosen due to my small GPU with 2GB of vRAM. It can be increased proportionally (or even more, since this value is not optimized) for GPUs with more vRAM.
Step9: Analyze how much we learnt | Python Code:
from __future__ import division, print_function
from matplotlib import pyplot as plt
%matplotlib inline
import os, errno
import numpy as np
from tqdm import tqdm
from shutil import copy
import pandas as pd
import cv2
import bcolz
IMAGE_WIDTH = 227
IMAGE_HEIGHT = 227
Explanation: Classifying dogs vs cats with Alex Net
In this notebook, we try to implement a (somewhat simplified) version of Alex Net to solve the Cats vs Dogs problem from Kaggle.
Following indications from course.fast.ai, the first step is clearly to download the dataset from Kaggle, after accepting the terms of use:
Dogs vs. Cats Redux: Kernels Edition
After downloading the test.zip and train.zip files, one should create a subfolder called 'data' in this notebook, and unzip the two folders in there. The resulting setup is something like
dogs_vs_cats_with_AlexNet.ipynb
data/
train/
cat.437.jpg
dog.9924.jpg
cat.1029.jpg
dog.4374.jpg
test/
231.jpg
325.jpg
1235.jpg
9923.jpg
During the first step, we will pre-process the files by splitting train and validation and also moving them into subfolders to be Keras-friendly (one "cats" and one "dogs" subfolder each)
End of explanation
def transform_img(img, img_width=IMAGE_WIDTH, img_height=IMAGE_HEIGHT):
#Histogram Equalization
img[:, :, 0] = cv2.equalizeHist(img[:, :, 0])
img[:, :, 1] = cv2.equalizeHist(img[:, :, 1])
img[:, :, 2] = cv2.equalizeHist(img[:, :, 2])
#Image Resizing
img = cv2.resize(img, (img_width, img_height), interpolation = cv2.INTER_CUBIC)
return img
def create_dir(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
train_path = 'input/train'
valid_path = 'input/valid'
test_path = 'input/test'
train_raw_path = 'data/train/'
test_raw_path = 'data/test/'
valid_raw_path = 'data/valid'
create_dir(valid_raw_path)
Explanation: Preprocess
OpenCV Histogram Equalization. This was how it was originally done in AlexNet. We could implement this as a layer in Keras, but for simplicity and consistency with https://github.com/adilmoujahid/deeplearning-cats-dogs-tutorial we'll do it beforehand
End of explanation
cat_names = [c for c in os.listdir(train_raw_path) if c.startswith('cat')]
dog_names = [c for c in os.listdir(train_raw_path) if c.startswith('dog')]
for c in np.random.choice(cat_names, 1000, replace=False):
os.rename(os.path.join(train_raw_path, c), os.path.join(valid_raw_path, c))
for d in np.random.choice(dog_names, 1000, replace=False):
os.rename(os.path.join(train_raw_path, d), os.path.join(valid_raw_path, d))
# Function that applies transform_img to every element in input_folder, recursively,
# and saves the result in output_folder.
def transform_folder(input_folder, output_folder):
create_dir(os.path.join(output_folder, 'cats'))
create_dir(os.path.join(output_folder, 'dogs'))
files = [(os.path.join(path, name), name) for path, subdirs, files in os.walk(input_folder) for name in files]
for t, n in tqdm(files):
img = cv2.imread(t, cv2.IMREAD_COLOR)
img = transform_img(img, img_width=IMAGE_WIDTH, img_height=IMAGE_HEIGHT)
if 'cat' in n:
cv2.imwrite(os.path.join(output_folder, 'cats', n), img)
elif 'dog' in n:
cv2.imwrite(os.path.join(output_folder, 'dogs', n), img)
else:
cv2.imwrite(os.path.join(output_folder, n), img)
transform_folder(valid_raw_path, valid_path)
transform_folder(train_raw_path, train_path)
transform_folder(test_raw_path, test_path)
Explanation: First of all, for each of the two classes (cats and dogs), move 1000 (out of 12500) images from train to validation.
End of explanation
from scipy import misc
img_mean = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 3))
train_files = [os.path.join(path, name) for path, subdirs, files in os.walk(train_path) for name in files]
for t in tqdm(train_files):
img_mean += misc.imread(t) / len(train_files)
plt.imshow(img_mean.astype(np.int8))
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
save_array('input/img_mean.bz', img_mean)
Explanation: Compute the mean of all images
Because we'll want to center them
End of explanation
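Since computing the mean requires a full pass over the training set, it is convenient to be able to reload it later. A minimal counterpart to save_array (in the spirit of the course.fast.ai helpers) could look like this:
# Sketch of a loader for the bcolz array saved above
def load_array(fname):
    return bcolz.open(fname)[:]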
import theano
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten, Lambda, Activation
from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD
from keras.preprocessing import image
from keras.layers.core import Layer
from keras.layers import merge
from keras.callbacks import CSVLogger
# Courtesy of https://github.com/heuritech/
# The only change (to adapt to newer versions of Keras / Theano) is to
# change the spatial_2d_padding arguments from "0, half" to
# "((0,0), (half,half))"
def crosschannelnormalization(alpha=1e-4, k=2, beta=0.75, n=5, **kwargs):
    """This is the function used for cross channel normalization in the original
    Alexnet"""
def f(X):
b, ch, r, c = X.shape
half = n // 2
square = K.square(X)
extra_channels = K.spatial_2d_padding(K.permute_dimensions(square, (0, 2, 3, 1))
, ((0,0), (half,half)))
extra_channels = K.permute_dimensions(extra_channels, (0, 3, 1, 2))
scale = k
for i in range(n):
scale += alpha * extra_channels[:, i:i + ch, :, :]
scale = scale ** beta
return X / scale
return Lambda(f, output_shape=lambda input_shape: input_shape, **kwargs)
Explanation: It's pretty much uniform gray, with a slightly lighter shade in the middle
Start the keras stuff
End of explanation
def center(img):
return img - img_mean.astype(np.float32).transpose([2,0,1])
alexnet = Sequential([
Lambda(center, input_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH), output_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH)),
Conv2D(96, 11, strides=(4,4), activation='relu'),
MaxPooling2D(pool_size=(3,3), strides=(2,2)),
crosschannelnormalization(),
ZeroPadding2D((2,2)),
Conv2D(256, 5, activation='relu'),
MaxPooling2D(pool_size=(3,3), strides=(2,2)),
crosschannelnormalization(),
ZeroPadding2D((1,1)),
Conv2D(384, 3, activation='relu'),
ZeroPadding2D((1,1)),
Conv2D(384, 3, activation='relu'),
ZeroPadding2D((1,1)),
Conv2D(256, 3, activation='relu'),
MaxPooling2D(pool_size=(3,3), strides=(2,2)),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(4096, activation='relu'),
Dropout(0.5),
Dense(2, activation='softmax')
])
Explanation: The following is essentially Alex Net architecture
The only simplification (in terms of code; the resulting network is actually denser) is to use 5 "complete" convolutional layers instead of splitting two of them in half. The original choice was made to facilitate learning on 2 GPUs in parallel, but since we're using one GPU anyway...
The transpose in center is done to convert images from the usual (channels_last) layout to Theano's (channels_first) layout.
End of explanation
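To verify that the layer shapes come out as intended (for instance, 96 feature maps of size 55x55 after the first convolution on 3x227x227 channels_first inputs), it is enough to print the model summary:
# Layer-by-layer output shapes and parameter counts
alexnet.summary()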
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4, class_mode='categorical',
target_size=(IMAGE_HEIGHT, IMAGE_WIDTH)):
return gen.flow_from_directory(dirname, target_size=target_size,
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
def fit_model(model, batches, val_batches, nb_epoch=1, verbose=1, callbacks=None):
model.fit_generator(batches, batches.n//batches.batch_size, epochs=nb_epoch, callbacks=callbacks,
validation_data=val_batches, validation_steps=val_batches.n//val_batches.batch_size, verbose=verbose)
Explanation: Helper functions to get batches and fit models. They are both borrowed from course.fast.ai
End of explanation
batches = get_batches(train_path, batch_size=64)
val_batches = get_batches(valid_path, batch_size=64)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
alexnet.compile(optimizer=sgd,
loss='categorical_crossentropy', metrics=['accuracy'])
csv_logger = CSVLogger('training.log')
# valid_batches and batches are wrongly named - inverted...
fit_model(alexnet, batches, val_batches, nb_epoch=20, callbacks=[csv_logger], verbose=0)
# Make a backup copy to avoid overwriting
copy('training.log', 'training_first_part.log')
csv_logger = CSVLogger('training2.log')
# valid_batches and batches are wrongly named - inverted...
fit_model(alexnet, batches, val_batches, nb_epoch=50, callbacks=[csv_logger], verbose=0)
# Make a backup copy to avoid overwriting
copy('training2.log', 'training_second_part.log')
Explanation: Load batches for both training and validation.
batch_size=64 was chosen due to my small GPU with 2GB of vRAM. It can be increased proportionally (or even more, since this value is not optimized) for GPUs with more vRAM.
End of explanation
training_results = pd.concat((
pd.read_csv('training_first_part.log'), pd.read_csv('training_second_part.log')
)).reset_index(drop=True)
print(training_results.shape)
training_results.head()
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 22})
training_results[['acc', 'val_acc']].plot(figsize=(15,10))
plt.ylim([0, 1])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
Explanation: Analyze how much we learnt
End of explanation |
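As a small numeric complement to the plot, the best validation accuracy and the epoch at which it was reached can be read directly from the same training_results frame:
# Best validation accuracy over all epochs
best_epoch = training_results['val_acc'].idxmax()
print("Best val_acc: %.4f at epoch %d" % (training_results['val_acc'].max(), best_epoch))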
11,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Tutorial
Now you are ready to start creating your own AutoML image segmentation model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step11: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step12: Now save the unique dataset identifier for the Dataset resource instance you created.
Step13: Data preparation
The Vertex Dataset resource for images has some requirements for your data
Step14: Quick peek at your data
You will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
Step15: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following
Step16: Train the model
Now train an AutoML image segmentation model using your Vertex Dataset resource. To train the model, do the following steps
Step17: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step18: Now save the unique identifier of the training pipeline you created.
Step19: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the information for this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter
Step20: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step21: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step22: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step23: Now get the unique identifier for the Endpoint resource you created.
Step24: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step25: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step26: Make an online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step27: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step28: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step29: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML image segmentation model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create image segmentation models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the TODO. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML image segmentation model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_segmentation_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_segmentation_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML image segmentation model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("unknown-" + TIMESTAMP, DATA_SCHEMA)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
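For completeness, the operation methods listed in the table above can also be used to wait for a long-running operation by explicit polling instead of the blocking result(timeout=...) call inside the helper; a small sketch of that pattern:
def wait_for_operation(operation, poll_seconds=5):
    # Poll until the long-running operation reports completion, then fetch its result
    while not operation.done():
        time.sleep(poll_seconds)
    return operation.result()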
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/isg_data.jsonl"
Explanation: Data preparation
The Vertex Dataset resource for images has some requirements for your data:
Images must be stored in a Cloud Storage bucket.
Each image file must be in an image format (PNG, JPEG, BMP, ...).
There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
The index file must be either CSV or JSONL.
JSONL
For image segmentation, the JSONL index file has the requirements:
Each data item is a separate JSON object, on a separate line.
The key/value pair image_gcs_uri is the Cloud Storage path to the image.
The key/value pair category_mask_uri is the Cloud Storage path to the mask image in PNG format.
The key/value pair 'annotation_spec_colors' is a list mapping mask colors to a label.
The key/value pair pair display_name is the label for the pixel color mask.
The key/value pair pair color are the RGB normalized pixel values (between 0 and 1) of the mask for the corresponding label.
{ 'image_gcs_uri': image, 'segmentation_annotations': { 'category_mask_uri': mask_image, 'annotation_spec_colors' : [ { 'display_name': label, 'color': {"red": value, "blue", value, "green": value} }, ...] }
Note: The dictionary key fields may alternatively be in camelCase. For example, 'image_gcs_uri' can also be 'imageGcsUri'.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
You will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
End of explanation
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The human readable name you give to the Dataset resource (e.g., unknown).
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML image segmentation model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
PIPE_NAME = "unknown_pipe-" + TIMESTAMP
MODEL_NAME = "unknown_model-" + TIMESTAMP
task = json_format.ParseDict(
{"budget_milli_node_hours": 2000, "model_type": "CLOUD_LOW_ACCURACY_1"}, Value()
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
model_type: The type of deployed model:
CLOUD_HIGH_ACCURACY_1: For deploying to Google Cloud and optimizing for accuracy.
CLOUD_LOW_LATENCY_1: For deploying to Google Cloud and optimizing for latency (response time),
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the information for this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confidenceMetricsEntries", metrics["confidenceMetricsEntries"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably only have one, we then print all the key names for each metric in the evaluation, and for a small set (confidenceMetricsEntries) you will print the result.
End of explanation
ENDPOINT_NAME = "unknown_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
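Purely for illustration, an auto-scaling configuration would simply use different bounds than the single-instance setting above; the values below are arbitrary and not used elsewhere in this notebook.
# Hypothetical auto-scaling bounds (not used in the deployment below)
AUTO_SCALING_MIN_NODES = 1
AUTO_SCALING_MAX_NODES = 4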
DEPLOYED_NAME = "unknown_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
automatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
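As a small illustration of the traffic-split discussion above, splitting requests between an already-deployed model and the new upload would use a dictionary like the one below; the existing model id is a hypothetical placeholder.
# Hypothetical split: 90% to an existing deployed model, 10% to the newly uploaded one
existing_deployed_model_id = "1234567890"  # placeholder id of a model already on the endpoint
example_traffic_split = {"0": 10, existing_deployed_model_id: 90}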
import json
test_items = !gsutil cat $IMPORT_FILE | head -n1
test_data = test_items[0].replace("'", '"')
test_data = json.loads(test_data)
try:
test_item = test_data["image_gcs_uri"]
test_label = test_data["segmentation_annotation"]["annotation_spec_colors"]
except:
test_item = test_data["imageGcsUri"]
test_label = test_data["segmentationAnnotation"]["annotationSpecColors"]
print(test_item, test_label)
Explanation: Make an online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import base64
import tensorflow as tf
def predict_item(filename, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
with tf.io.gfile.GFile(filename, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": base64.b64encode(content).decode("utf-8")}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(test_item, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (encoded images) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
Request
Since in this example your test item is in a Cloud Storage bucket, you will open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, we will encode the bytes into base64 -- this makes binary data safe from modification while it is transferred over the Internet.
The format of each instance is:
{ 'content': { 'b64': [base64_encoded_bytes] } }
Since the predict() method can take multiple items (instances), you send our single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in our case there is just one:
confidenceMask: Confidence level in the prediction.
categoryMask: The predicted label per pixel.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
11,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents and Objectives
Implementation of the water-filling algorithm
Interactive illustration of the water-filling principle
Step1: Specify total power p_tot as well as the noise levels of each channel
Step2: Illustration of the water-filling algorithm for 3 channels with configurable noise powers.
Step3: Interactive version with more channels and adjustable water level | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(15, 8))
Explanation: Contents and Objectives
Implementation of the water-filling algorithm
Interactive illustration of the water-filling principle
End of explanation
# Function returns the water-level P_max
def get_waterlevel(sigma_nq, p_tot):
# Sort noise values from lowest to largest
sigma_nq_sort = np.append(np.sort(sigma_nq), 9e99)
index = 0
# start filling from bottom until we reach the next channel
while index < len(sigma_nq):
waterlevel = (p_tot + np.sum(sigma_nq_sort[0:(index+1)]))/(index+1)
if waterlevel < sigma_nq_sort[index+1]:
return waterlevel
else:
index = index + 1
Explanation: Specify total power p_tot as well as the noise levels of each channel
End of explanation
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
p_tot = 2
sigma_nq = np.array([0.1,3,0.8])
waterlevel = get_waterlevel(sigma_nq, p_tot)
water = np.maximum(waterlevel - sigma_nq,0)
print("Water level P_max: ", waterlevel)
print("Powers per channel: ", water)
plt.figure(1,figsize=(9,6))
plt.rcParams.update({'font.size': 14})
x = np.arange(0.5,len(sigma_nq)+0.5, 1/100)
y1 = np.repeat(sigma_nq,100)
y2 = np.repeat(water,100)
plt.stackplot(x,y1,y2,colors=('#A22223','#009682'), edgecolor='black')
plt.xlim(0.5,len(sigma_nq)+0.5)
plt.ylim(0,max(sigma_nq+water)*1.1)
nzindex = (water != 0).argmax(axis=0)
plt.text(nzindex+1,sigma_nq[nzindex]+water[nzindex],r'$P_{\max} = %1.2f$' % waterlevel, horizontalalignment='center', verticalalignment='bottom')
plt.xticks(np.arange(1,len(sigma_nq)+1))
plt.xlabel("Kanalindex $i$")
plt.ylabel("")
plt.show()
Explanation: Illustration of the water-filling algorithm for 3 channels with configurable noise powers.
End of explanation
sigma_nq = np.array([0.2, 0.3, 0.17, 1.6, 0.6, 0.25, 0.93, 0.78, 1.3, 1.2, 0.66, 0.1, 0.25, 0.29, 0.19, 0.73])
def interactive_waterfilling_stack(p_tot):
waterlevel = get_waterlevel(sigma_nq, p_tot)
water = np.maximum(waterlevel - sigma_nq,0)
plt.figure(1,figsize=(13,6))
plt.rcParams.update({'font.size': 18})
x = np.arange(0.5,len(sigma_nq)+0.5, 1/100)
y1 = np.repeat(sigma_nq,100)
y2 = np.repeat(water,100)
plt.stackplot(x,y1,y2,colors=('#A22223','#009682'), edgecolor='black')
plt.xlim(0.5,len(sigma_nq)+0.5)
plt.ylim(0,max(sigma_nq+water)*1.1)
nzindex = (water != 0).argmax(axis=0)
plt.text(nzindex+0.8,sigma_nq[nzindex]+water[nzindex],r'$P_{\max} = %1.2f$' % waterlevel, horizontalalignment='left', verticalalignment='bottom')
plt.xticks(np.arange(1,len(sigma_nq)+1))
plt.xlabel("Kanalindex $i$")
plt.ylabel("")
plt.legend([r'$\sigma_{n,i}^2$','$P_i$'])
plt.show()
interactive_update = interactive(interactive_waterfilling_stack, \
p_tot = widgets.FloatSlider(min=0.1,max=15.0,step=0.1,value=3, continuous_update=False, description='P_s',layout=widgets.Layout(width='70%')))
output = interactive_update.children[-1]
output.layout.height = '400px'
interactive_update
Explanation: Interactive version with more channels and adjustable water level
End of explanation |
11,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Near-duplicate image search
Author
Step1: Load the dataset and create a training set of 1,000 images
To keep the run time of the example short, we will be using a subset of 1,000 images from
the tf_flowers dataset (available through
TensorFlow Datasets)
to build our vocabulary.
Step2: Load a pre-trained model
In this section, we load an image classification model that was trained on the
tf_flowers dataset. 85% of the total images were used to build the training set. For
more details on the training, refer to
this notebook.
The underlying model is a BiT-ResNet (proposed in
Big Transfer (BiT)
Step3: Create an embedding model
To retrieve similar images given a query image, we need to first generate vector
representations of all the images involved. We do this via an
embedding model that extracts output features from our pretrained classifier and
normalizes the resulting feature vectors.
Step4: Take note of the normalization layer inside the model. It is used to project the
representation vectors to the space of unit-spheres.
Hashing utilities
Step5: The shape of the vectors coming out of embedding_model is (2048,), and considering practical
aspects (storage, retrieval performance, etc.) it is quite large. So, there arises a need
to reduce the dimensionality of the embedding vectors without reducing their information
content. This is where random projection comes into the picture.
It is based on the principle that if the
distance between a group of points on a given plane is approximately preserved, the
dimensionality of that plane can further be reduced.
Inside hash_func(), we first reduce the dimensionality of the embedding vectors. Then
we compute the bitwise hash values of the images to determine their hash buckets. Images
having same hash values are likely to go into the same hash bucket. From a deployment
perspective, bitwise hash values are cheaper to store and operate on.
Query utilities
The Table class is responsible for building a single hash table. Each entry in the hash
table is a mapping between the reduced embedding of an image from our dataset and a
unique identifier. Because our dimensionality reduction technique involves randomness, it
can so happen that similar images are not mapped to the same hash bucket everytime the
process run. To reduce this effect, we will take results from multiple tables into
consideration -- the number of tables and the reduction dimensionality are the key
hyperparameters here.
Crucially, you wouldn't reimplement locality-sensitive hashing yourself when working with
real world applications. Instead, you'd likely use one of the following popular libraries
Step6: In the following LSH class we will pack the utilities to have multiple hash tables.
Step7: Now we can encapsulate the logic for building and operating with the master LSH table (a
collection of many tables) inside a class. It has two methods
Step8: Create LSH tables
With our helper utilities and classes implemented, we can now build our LSH table. Since
we will be benchmarking performance between optimized and unoptimized embedding models, we
will also warm up our GPU to avoid any unfair comparison.
Step9: Now we can first do the GPU warm-up and proceed to build the master LSH table with
embedding_model.
Step10: At the time of writing, the wall time was 54.1 seconds on a Tesla T4 GPU. This timing may
vary based on the GPU you are using.
Optimize the model with TensorRT
For NVIDIA-based GPUs, the
TensorRT framework
can be used to dramatically enhance the inference latency by using various model
optimization techniques like pruning, constant folding, layer fusion, and so on. Here we
will use the
tf.experimental.tensorrt
module to optimize our embedding model.
Step11: Notes on the parameters inside of tf.experimental.tensorrt.ConversionParams()
Step12: Build LSH tables with optimized model
Step13: Notice the difference in the wall time which is 13.1 seconds. Earlier, with the
unoptimized model it was 54.1 seconds.
We can take a closer look into one of the hash tables and get an idea of how they are
represented.
Step14: Visualize results on validation images
In this section we will first writing a couple of utility functions to visualize the
similar image parsing process. Then we will benchmark the query performance of the models
with and without optimization.
First, we take 100 images from the validation set for testing purposes.
Step15: Now we write our visualization utilities.
Step16: Non-TRT model
Step17: TRT model
Step18: As you may have noticed, there are a couple of incorrect results. This can be mitigated in
a few ways | Python Code:
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
Explanation: Near-duplicate image search
Author: Sayak Paul<br>
Date created: 2021/09/10<br>
Last modified: 2021/09/10<br>
Description: Building a near-duplicate image search utility using deep learning and locality-sensitive hashing.
Introduction
Fetching similar images in (near) real time is an important use case of information
retrieval systems. Some popular products utilizing it include Pinterest, Google Image
Search, etc. In this example, we will build a similar image search utility using
Locality Sensitive Hashing
(LSH) and random projection on top
of the image representations computed by a pretrained image classifier.
This kind of search engine is also known
as a near-duplicate (or near-dup) image detector.
We will also look into optimizing the inference performance of
our search utility on GPU using TensorRT.
There are other examples under keras.io/examples/vision
that are worth checking out in this regard:
Metric learning for image similarity search
Image similarity estimation using a Siamese Network with a triplet loss
Finally, this example uses the following resource as a reference and as such reuses some
of its code:
Locality Sensitive Hashing for Similar Item Search.
Note that in order to optimize the performance of our parser,
you should have a GPU runtime available.
Imports
End of explanation
train_ds, validation_ds = tfds.load(
"tf_flowers", split=["train[:85%]", "train[85%:]"], as_supervised=True
)
IMAGE_SIZE = 224
NUM_IMAGES = 1000
images = []
labels = []
for (image, label) in train_ds.take(NUM_IMAGES):
image = tf.image.resize(image, (IMAGE_SIZE, IMAGE_SIZE))
images.append(image.numpy())
labels.append(label.numpy())
images = np.array(images)
labels = np.array(labels)
Explanation: Load the dataset and create a training set of 1,000 images
To keep the run time of the example short, we will be using a subset of 1,000 images from
the tf_flowers dataset (available through
TensorFlow Datasets)
to build our vocabulary.
End of explanation
!wget -q https://git.io/JuMq0 -O flower_model_bit_0.96875.zip
!unzip -qq flower_model_bit_0.96875.zip
bit_model = tf.keras.models.load_model("flower_model_bit_0.96875")
bit_model.count_params()
Explanation: Load a pre-trained model
In this section, we load an image classification model that was trained on the
tf_flowers dataset. 85% of the total images were used to build the training set. For
more details on the training, refer to
this notebook.
The underlying model is a BiT-ResNet (proposed in
Big Transfer (BiT): General Visual Representation Learning).
The BiT-ResNet family of models is known to provide excellent transfer performance across
a wide variety of different downstream tasks.
End of explanation
embedding_model = tf.keras.Sequential(
[
tf.keras.layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)),
tf.keras.layers.Rescaling(scale=1.0 / 255),
bit_model.layers[1],
tf.keras.layers.Normalization(mean=0, variance=1),
],
name="embedding_model",
)
embedding_model.summary()
Explanation: Create an embedding model
To retrieve similar images given a query image, we need to first generate vector
representations of all the images involved. We do this via an
embedding model that extracts output features from our pretrained classifier and
normalizes the resulting feature vectors.
End of explanation
def hash_func(embedding, random_vectors):
embedding = np.array(embedding)
# Random projection.
bools = np.dot(embedding, random_vectors) > 0
return [bool2int(bool_vec) for bool_vec in bools]
def bool2int(x):
y = 0
for i, j in enumerate(x):
if j:
y += 1 << i
return y
Explanation: Take note of the normalization layer inside the model. It is used to project the
representation vectors to the space of unit-spheres.
Hashing utilities
End of explanation
class Table:
def __init__(self, hash_size, dim):
self.table = {}
self.hash_size = hash_size
self.random_vectors = np.random.randn(hash_size, dim).T
def add(self, id, vectors, label):
# Create a unique indentifier.
entry = {"id_label": str(id) + "_" + str(label)}
# Compute the hash values.
hashes = hash_func(vectors, self.random_vectors)
# Add the hash values to the current table.
for h in hashes:
if h in self.table:
self.table[h].append(entry)
else:
self.table[h] = [entry]
def query(self, vectors):
# Compute hash value for the query vector.
hashes = hash_func(vectors, self.random_vectors)
results = []
# Loop over the query hashes and determine if they exist in
# the current table.
for h in hashes:
if h in self.table:
results.extend(self.table[h])
return results
Explanation: The shape of the vectors coming out of embedding_model is (2048,), and considering practical
aspects (storage, retrieval performance, etc.) it is quite large. So, there arises a need
to reduce the dimensionality of the embedding vectors without reducing their information
content. This is where random projection comes into the picture.
It is based on the principle that if the
distance between a group of points on a given plane is approximately preserved, the
dimensionality of that plane can further be reduced.
Inside hash_func(), we first reduce the dimensionality of the embedding vectors. Then
we compute the bitwise hash values of the images to determine their hash buckets. Images
having same hash values are likely to go into the same hash bucket. From a deployment
perspective, bitwise hash values are cheaper to store and operate on.
Query utilities
The Table class is responsible for building a single hash table. Each entry in the hash
table is a mapping between the reduced embedding of an image from our dataset and a
unique identifier. Because our dimensionality reduction technique involves randomness, it
can so happen that similar images are not mapped to the same hash bucket everytime the
process run. To reduce this effect, we will take results from multiple tables into
consideration -- the number of tables and the reduction dimensionality are the key
hyperparameters here.
Crucially, you wouldn't reimplement locality-sensitive hashing yourself when working with
real world applications. Instead, you'd likely use one of the following popular libraries:
ScaNN
Annoy
Vald
End of explanation
class LSH:
def __init__(self, hash_size, dim, num_tables):
self.num_tables = num_tables
self.tables = []
for i in range(self.num_tables):
self.tables.append(Table(hash_size, dim))
def add(self, id, vectors, label):
for table in self.tables:
table.add(id, vectors, label)
def query(self, vectors):
results = []
for table in self.tables:
results.extend(table.query(vectors))
return results
Explanation: In the following LSH class we will pack the utilities to have multiple hash tables.
End of explanation
class BuildLSHTable:
def __init__(
self,
prediction_model,
concrete_function=False,
hash_size=8,
dim=2048,
num_tables=10,
):
self.hash_size = hash_size
self.dim = dim
self.num_tables = num_tables
self.lsh = LSH(self.hash_size, self.dim, self.num_tables)
self.prediction_model = prediction_model
self.concrete_function = concrete_function
def train(self, training_files):
for id, training_file in enumerate(training_files):
# Unpack the data.
image, label = training_file
if len(image.shape) < 4:
image = image[None, ...]
# Compute embeddings and update the LSH tables.
# More on `self.concrete_function()` later.
if self.concrete_function:
features = self.prediction_model(tf.constant(image))[
"normalization"
].numpy()
else:
features = self.prediction_model.predict(image)
self.lsh.add(id, features, label)
def query(self, image, verbose=True):
# Compute the embeddings of the query image and fetch the results.
if len(image.shape) < 4:
image = image[None, ...]
if self.concrete_function:
features = self.prediction_model(tf.constant(image))[
"normalization"
].numpy()
else:
features = self.prediction_model.predict(image)
results = self.lsh.query(features)
if verbose:
print("Matches:", len(results))
# Calculate Jaccard index to quantify the similarity.
counts = {}
for r in results:
if r["id_label"] in counts:
counts[r["id_label"]] += 1
else:
counts[r["id_label"]] = 1
for k in counts:
counts[k] = float(counts[k]) / self.dim
return counts
Explanation: Now we can encapsulate the logic for building and operating with the master LSH table (a
collection of many tables) inside a class. It has two methods:
train(): Responsible for building the final LSH table.
query(): Computes the number of matches given a query image and also quantifies the
similarity score.
End of explanation
# Utility to warm up the GPU.
def warmup():
dummy_sample = tf.ones((1, IMAGE_SIZE, IMAGE_SIZE, 3))
for _ in range(100):
_ = embedding_model.predict(dummy_sample)
Explanation: Create LSH tables
With our helper utilities and classes implemented, we can now build our LSH table. Since
we will be benchmarking performance between optimized and unoptimized embedding models, we
will also warm up our GPU to avoid any unfair comparison.
End of explanation
warmup()
training_files = zip(images, labels)
lsh_builder = BuildLSHTable(embedding_model)
lsh_builder.train(training_files)
Explanation: Now we can first do the GPU warm-up and proceed to build the master LSH table with
embedding_model.
End of explanation
# First serialize the embedding model as a SavedModel.
embedding_model.save("embedding_model")
# Initialize the conversion parameters.
params = tf.experimental.tensorrt.ConversionParams(
precision_mode="FP16", maximum_cached_engines=16
)
# Run the conversion.
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="embedding_model", conversion_params=params
)
converter.convert()
converter.save("tensorrt_embedding_model")
Explanation: At the time of writing, the wall time was 54.1 seconds on a Tesla T4 GPU. This timing may
vary based on the GPU you are using.
Optimize the model with TensorRT
For NVIDIA-based GPUs, the
TensorRT framework
can be used to dramatically enhance the inference latency by using various model
optimization techniques like pruning, constant folding, layer fusion, and so on. Here we
will use the
tf.experimental.tensorrt
module to optimize our embedding model.
End of explanation
# Load the converted model.
root = tf.saved_model.load("tensorrt_embedding_model")
trt_model_function = root.signatures["serving_default"]
Explanation: Notes on the parameters inside of tf.experimental.tensorrt.ConversionParams():
precision_mode defines the numerical precision of the operations in the
to-be-converted model.
maximum_cached_engines specifies the maximum number of TRT engines that will be
cached to handle dynamic operations (operations with unknown shapes).
To learn more about the other options, refer to the
official documentation.
You can also explore the different quantization options provided by the
tf.experimental.tensorrt module.
End of explanation
warmup()
training_files = zip(images, labels)
lsh_builder_trt = BuildLSHTable(trt_model_function, concrete_function=True)
lsh_builder_trt.train(training_files)
Explanation: Build LSH tables with optimized model
End of explanation
idx = 0
for hash, entry in lsh_builder_trt.lsh.tables[0].table.items():
if idx == 5:
break
if len(entry) < 5:
print(hash, entry)
idx += 1
Explanation: Notice the difference in the wall time which is 13.1 seconds. Earlier, with the
unoptimized model it was 54.1 seconds.
We can take a closer look into one of the hash tables and get an idea of how they are
represented.
End of explanation
validation_images = []
validation_labels = []
for image, label in validation_ds.take(100):
image = tf.image.resize(image, (224, 224))
validation_images.append(image.numpy())
validation_labels.append(label.numpy())
validation_images = np.array(validation_images)
validation_labels = np.array(validation_labels)
validation_images.shape, validation_labels.shape
Explanation: Visualize results on validation images
In this section we will first write a couple of utility functions to visualize the
similar image parsing process. Then we will benchmark the query performance of the models
with and without optimization.
First, we take 100 images from the validation set for testing purposes.
End of explanation
def plot_images(images, labels):
plt.figure(figsize=(20, 10))
columns = 5
for (i, image) in enumerate(images):
ax = plt.subplot(len(images) / columns + 1, columns, i + 1)
if i == 0:
ax.set_title("Query Image\n" + "Label: {}".format(labels[i]))
else:
ax.set_title("Similar Image # " + str(i) + "\nLabel: {}".format(labels[i]))
plt.imshow(image.astype("int"))
plt.axis("off")
def visualize_lsh(lsh_class):
idx = np.random.choice(len(validation_images))
image = validation_images[idx]
label = validation_labels[idx]
results = lsh_class.query(image)
candidates = []
labels = []
overlaps = []
for idx, r in enumerate(sorted(results, key=results.get, reverse=True)):
if idx == 4:
break
image_id, label = r.split("_")[0], r.split("_")[1]
candidates.append(images[int(image_id)])
labels.append(label)
overlaps.append(results[r])
candidates.insert(0, image)
labels.insert(0, label)
plot_images(candidates, labels)
Explanation: Now we write our visualization utilities.
End of explanation
for _ in range(5):
visualize_lsh(lsh_builder)
visualize_lsh(lsh_builder)
Explanation: Non-TRT model
End of explanation
for _ in range(5):
visualize_lsh(lsh_builder_trt)
Explanation: TRT model
End of explanation
def benchmark(lsh_class):
warmup()
start_time = time.time()
for _ in range(1000):
image = np.ones((1, 224, 224, 3)).astype("float32")
_ = lsh_class.query(image, verbose=False)
end_time = time.time() - start_time
print(f"Time taken: {end_time:.3f}")
benchmark(lsh_builder)
benchmark(lsh_builder_trt)
Explanation: As you may have noticed, there are a couple of incorrect results. This can be mitigated in
a few ways:
Better models for generating the initial embeddings especially for noisy samples. We can
use techniques like ArcFace,
Supervised Contrastive Learning, etc.
that implicitly encourage better learning of representations for retrieval purposes.
The trade-off between the number of tables and the reduction dimensionality is crucial
and helps set the right recall required for your application.
Benchmarking query performance
End of explanation |
11,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overlapping Mixtures of Gaussian Processses
Valentine Svensson 2015 <br> (with small edits by James Hensman November 2015)
This illustrates use of the OMGP model described in
Overlapping Mixtures of Gaussian Processes for the data association problem
Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil D. Lawrence
Pattern Recognition 2012
The GPclust implementation makes use of the collapsed variational mixture model for GP assignment.
Step1: Diverging trend seperation
One application of the OMGP model could be to find diverging trends among populations over time. Imagine for example two species evolving from a common ancestor over time.
We load some pre-generated data which diverge over time.
Step2: We define a model assuming K = 2 trends. By default the model will be populated by K RBF kernels. The OMGP implementation is compatible with most kernels in GPy, so that you for example can encode periodicity in the model.
Step3: A simple plot function is included which illustrates the asignment probability for each data point, it also shows the posterior mean and confidence intervals for each Gaussian Process.
Step4: There is also a function for plotting the assignment probability for a given GP directly. Since we haven't optimized the mixture parameters yet the assignment probability is just a random draw from the prior.
Step5: We can first performa a quick optimization to find the rough trends.
Step6: The model identifies the branches of the time series, and in particular the non-branched region have ambigous GP assignment. In this region the two trends share information for prediction.
Like any GPy model the hyper parameters can be inspected.
Step7: We continue by letting the model optimize some more, and also allow it to optimize the hyper parameters. The hyper parameter optimization works best if the mixture parameters have converged or are close to converging.
Step8: Separating signal from noise
An interesting application of the OMGP model pointed out in the original publication is the use for robust GP regression.
Let's illustrate this by creating sinusoidal test data with background noise.
Step9: First we make a model with only one mixture component / kernel. This is equivalent to normal GP regression.
Step10: Now we in stead view this is a mixture problem, and consider two different kinds of kernels for the different GP components. One encoding white noise, and another which can encode a trend over time (an RBF kernel in this case).
Step11: The trend over time is much more noticable, and the confidence intervals are smaller.
Noisy points will have high assignment probability to the 'noise GP', while the assignment of the sinusoidal points is ambiguous. We can use this to seperate the points which are more likely to be noise from the remaining points. | Python Code:
%matplotlib inline
import GPy
from GPclust import OMGP
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
from matplotlib import pyplot as plt
Explanation: Overlapping Mixtures of Gaussian Processes
Valentine Svensson 2015 <br> (with small edits by James Hensman November 2015)
This illustrates use of the OMGP model described in
Overlapping Mixtures of Gaussian Processes for the data association problem
Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil D. Lawrence
Pattern Recognition 2012
The GPclust implementation makes use of the collapsed variational mixture model for GP assignment.
End of explanation
XY = np.loadtxt('../data/split_data_test.csv', delimiter=',', skiprows=1, usecols=[1, 2])
X = XY[:, 0, None]
Y = XY[:, 1, None]
plt.scatter(X, Y);
Explanation: Diverging trend seperation
One application of the OMGP model could be to find diverging trends among populations over time. Imagine for example two species evolving from a common ancestor over time.
We load some pre-generated data which diverge over time.
End of explanation
m = OMGP(X, Y, K=2, variance=0.01, prior_Z='DP')
m.log_likelihood()
Explanation: We define a model assuming K = 2 trends. By default the model will be populated by K RBF kernels. The OMGP implementation is compatible with most kernels in GPy, so that you for example can encode periodicity in the model.
End of explanation
m.plot()
Explanation: A simple plot function is included which illustrates the asignment probability for each data point, it also shows the posterior mean and confidence intervals for each Gaussian Process.
End of explanation
m.plot_probs(gp_num=0)
Explanation: There is also a function for plotting the assignment probability for a given GP directly. Since we haven't optimized the mixture parameters yet the assignment probability is just a random draw from the prior.
End of explanation
m.optimize(step_length=0.01, maxiter=20)
m.plot()
m.plot_probs()
Explanation: We can first perform a quick optimization to find the rough trends.
End of explanation
m
Explanation: The model identifies the branches of the time series, and in particular the non-branched region have ambigous GP assignment. In this region the two trends share information for prediction.
Like any GPy model the hyper parameters can be inspected.
End of explanation
m.optimize(step_length=0.01, maxiter=200)
m
m.plot()
m.plot_probs()
Explanation: We continue by letting the model optimize some more, and also allow it to optimize the hyper parameters. The hyper parameter optimization works best if the mixture parameters have converged or are close to converging.
End of explanation
x1 = np.random.uniform(0, 10, (100, 1))
x2 = np.random.uniform(0, 10, (100, 1))
y1 = 4 * np.random.randn(*x1.shape)
y2 = 3 * np.sin(x2) + 0.5 * np.random.randn(*x2.shape)
x = np.vstack((x1, x2))
y = np.vstack((y1, y2))
plt.scatter(x, y);
Explanation: Separating signal from noise
An interesting application of the OMGP model pointed out in the original publication is the use for robust GP regression.
Let's illustrate this by creating sinusoidal test data with background noise.
End of explanation
kernels = [GPy.kern.RBF(1)]
m = OMGP(x, y, K=1, prior_Z='DP', kernels=kernels)
m.variance = 3
m.hyperparam_interval = 100
m.rbf.lengthscale = 2
m.optimize(verbose=False)
m.plot()
Explanation: First we make a model with only one mixture component / kernel. This is equivalent to normal GP regression.
End of explanation
kernels = [GPy.kern.White(1, name='Noise'), GPy.kern.RBF(1, name='Signal')]
m = OMGP(x, y, K=2, prior_Z='DP', kernels=kernels)
m.variance = 3
m.hyperparam_interval = 250
m.Signal.lengthscale = 2
m.plot(0)
m.optimize(step_length=0.01, verbose=False)
m
m.plot()
Explanation: Now we in stead view this is a mixture problem, and consider two different kinds of kernels for the different GP components. One encoding white noise, and another which can encode a trend over time (an RBF kernel in this case).
End of explanation
m.plot_probs(0)
plt.axhline(0.75);
thr = 0.75
idx = np.where(m.phi[:,0] < thr)[0]
nidx = np.where(m.phi[:,0] >= thr)[0]
plt.figure(figsize=(12,10))
plt.subplot(211)
plt.scatter(x[idx], y[idx]);
plt.title('Signal')
plt.subplot(212, sharey=plt.gca())
plt.scatter(x[nidx], y[nidx]);
plt.title('Noise');
Explanation: The trend over time is much more noticable, and the confidence intervals are smaller.
Noisy points will have high assignment probability to the 'noise GP', while the assignment of the sinusoidal points is ambiguous. We can use this to seperate the points which are more likely to be noise from the remaining points.
End of explanation |
11,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Modern X-ray CCDs are technologically similar to the CCDs used in optical astronomy
Step2: Both files are in FITS image format, which we can read in using astropy.io.fits (here aliased to its older name, pyfits).
Step3: Let's see what we've got
Step4: imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type (remember, counts!), and size 648x648 pixels, stored in the primary "header data unit" or HDU, and accessed via the data method of the FITS object. The other HDUs hold tables containing the "good time intervals" defined in earlier data processing, which we can ignore for our purposes.
exfits contains only the exposure map, with floating point type.
Here we extract the image (that is, the array) data from each object as numpy arrays.
Step5: Note
Step6: Note that information from 7 different CCDs in the MOS2 camera have been combined here, and that X and Y in the image arrays correspond to celestial coordinates (right ascension and declination) rather than X and Y on a given detector or in the focal plane.
In the image, we can see
Step7: Let's create such an object holding the original data
Step8: To quickly illustrate what imx and imy are for, let's look at them
Step9: Keeping these arrays around is obviously a little inefficient in terms of memory, but it means we can easily do calculations involving pixel positions with numpy array arithmetic.
Next, let's make a cut-out (or "postage stamp") around roungly the center of the image, and look at it.
Step10: Note that imx and imy in stamp hold the Image coordinates of each pixel with respect to the original image, which seems like a potentially useful thing.
Something we might want to do when evaluating a model is to compute the distance of each pixel from some specified position, in units of pixels. As a quick test, complete the following to compute an array holding the distance of each pixel in the stamp image from the center of original image. (Remember that Image coordinates start from 1, not 0!)
dist=0 should fall within the stamp, in the bottom-left quadrant as displayed below.
Step11: So now we can easily compute the distance between all pixels and some reference, in units of the pixel size. Note that the Euclidean distance formula is not exact, since the sky is a sphere and not a plane, but on this scale it's easily good enough. Which prompts the question - just what is the size of a pixel in this image?
To find out, we can consult the FITS header of the image. The relevant keywords are near the bottom of the header, and begin CTYPE, CRPIX, CVAL, etc. Note that the exact keywords can vary, since FITS files can have multiple coordinate systems defined. In this case, the header tells us that axis 1 (X) is Right Ascension, axis 2 (Y) is DEClination, and both have a pixel length of 0.0011111... in units of degrees, so 4 arcseconds. (CDELT1 is negative because RA increases to the left by convention.)
Step12: If we need to, header values can be extracted easily. This extracts the pixel size in the X direction, and converts it to arcseconds (CUNIT1 tells us that it is originally in units of degrees). | Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import astropy.io.fits as pyfits
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.visualization import LogStretch
logstretch = LogStretch()
import scipy.stats as st
Explanation: Tutorial: X-ray Image Data
This notebook introduces one of the real data sets we'll use in this class, namely an image produced from X-ray CCD data. There's a fair bit of domain-specific information here, but it's useful stuff to see if you haven't worked with imaging data before (regardless of wavelength). Do note that there is also quite a bit of corner cutting, however; the problems based on these data are meant to get key statistical concepts across, and not to show you how to do a completely rigorous analysis that accounts for all instrumental and systematic effects.
End of explanation
TBC() # datadir = '../ignore/' # or whatever - path to where you put the downloaded files, including the trailing '/'
Explanation: Modern X-ray CCDs are technologically similar to the CCDs used in optical astronomy: when a photon hits a pixel, one or more electrons are promoted into the conduction band and trapped there until being read out. The main practical difference is that X-ray photons are rarer and their energies much higher.
This means that:
* Only for exceptionally bright sources will we ever have $>1$ photon hit a given pixel in an integration, if we read out the CCD every few seconds.
* We do not get 1 electron promoted per photon, as is the case in visible wavelength CCDs. Instead, the number of electrons is roughly proportional to the photon's energy, which means that these imaging devices are actually imaging spectrometers.
* When we say "counts" in this context, we mean "pixel activation events" rather than number of electrons trapped, so that (as in optical astronomy) we're referring to the number of photons detected (or other events that look like photons to the detector).
Let's look at some processed data from XMM-Newton for galaxy cluster Abell 1835.
Here the raw "event list" of pixel activations has been processed to form an image, meaning that, other than a broad selection on photon energy, the spectral information has been discarded.
XMM actually has 3 CCD cameras, but we'll just work with 1 for simplicity, and with just one of the available observations.
We'll still need 2 files:
* The image, in units of counts
* The exposure map (units of seconds), which accounts for the exposure time and the variation in effective collecting area across the field due to vignetting
There's a nice web interface that allows one to search the public archives for data, but to save time, just download the particular image and exposure map here:
* https://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/P0098010101M2U009IMAGE_3000.FTZ
* https://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/P0098010101M2U009EXPMAP3000.FTZ
This is an image produced from 1-3 keV events captured by the MOS2 camera in XMM's first observation of A1835, way back in 2001, and the corresponding exposure map.
You can put these files anywhere you want; I will assume they live in a directory called ignore, one level up from tutorials.
End of explanation
imagefile = datadir + 'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = datadir + 'P0098010101M2U009EXPMAP3000.FTZ'
imfits = pyfits.open(imagefile)
exfits = pyfits.open(expmapfile)
Explanation: Both files are in FITS image format, which we can read in using astropy.io.fits (here aliased to its older name, pyfits).
End of explanation
imfits.info()
exfits.info()
Explanation: Let's see what we've got:
End of explanation
im = imfits[0].data
ex = exfits[0].data
print(im.shape, im.dtype)
print(ex.shape, ex.dtype)
Explanation: imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type (remember, counts!), and size 648x648 pixels, stored in the primary "header data unit" or HDU, and accessed via the data method of the FITS object. The other HDUs hold tables containing the "good time intervals" defined in earlier data processing, which we can ignore for our purposes.
exfits contains only the exposure map, with floating point type.
Here we extract the image (that is, the array) data from each object as numpy arrays.
End of explanation
plt.rcParams['figure.figsize'] = (20.0, 10.0)
fig, ax = plt.subplots(1,2);
ax[0].imshow(logstretch(im), cmap='gray', origin='lower');
ax[0].set_title('image (log scale)');
ax[1].imshow(ex, cmap='gray', origin='lower');
ax[1].set_title('exposure map');
Explanation: Note: If we wanted im to be floating point for some reason, we would need to cast it, as in im = imfits[0].data.astype('np.float32').
Let's have a look the image and exposure map. It's often helpful to stretch images on a logarithmic scale because some sources can differ in brightness by orders of magnitude. The exposure map varies much less, so a linear scale works better in that case.
Some more details: FITS images (and the arrays we read from them) are indexed according to an ancient convention, whereby the first index corresponds to the vertical axis (line) and the second index corresponds to the horizontal axis (sample). This corresponds to the way matplotlib interprets arrays as images, although we need to use the origin='lower' option to display the image the right way up.
End of explanation
class Image:
def __init__(self, imdata, exdata, imx=None, imy=None):
self.im = imdata
self.ex = exdata
if imx is None or imy is None:
# add 1 to make IMAGE coordinates
self.imx, self.imy = np.meshgrid(np.arange(imdata.shape[0])+1, np.arange(imdata.shape[1])+1)
else:
self.imx = imx
self.imy = imy
def cutout(self, x0, x1, y0, y1):
# Again note that the arguments are meant to be IMAGE coordinates, indexed starting from 1
return Image(self.im[(y0-1):y1,(x0-1):x1], self.ex[(y0-1):y1,(x0-1):x1],
self.imx[(y0-1):y1,(x0-1):x1], self.imy[(y0-1):y1,(x0-1):x1])
def extent(self):
return [np.min(self.imx), np.max(self.imx), np.min(self.imy), np.max(self.imy)]
def display(self, log_image=True):
fig, ax = plt.subplots(1,2);
extent = self.extent()
if log_image:
ax[0].imshow(logstretch(self.im), cmap='gray', origin='lower', extent=extent);
ax[0].set_title('image (log scale)');
else:
ax[0].imshow(self.im, cmap='gray', origin='lower', extent=extent);
ax[0].set_title('image');
ax[1].imshow(self.ex, cmap='gray', origin='lower', extent=extent);
ax[1].set_title('exposure map');
Explanation: Note that information from 7 different CCDs in the MOS2 camera have been combined here, and that X and Y in the image arrays correspond to celestial coordinates (right ascension and declination) rather than X and Y on a given detector or in the focal plane.
In the image, we can see:
1. Galaxy cluster Abell 1835 (the big blob in the center).
2. Various other sources (smaller blobs). These are point-like sources - mostly active galactic nuclei (AGN) - that have been smeared out by the telescope's point spread function (PSF).
3. A roughly uniform background, consisting of unresolved AGN, diffuse X-rays from the Galactic halo and local hot bubble, and events due to particles (solar wind protons and cosmic rays) interacting with the CCD.
The exposure map shows:
1. Clear boundaries between the 7 CCDs that make up the MOS2 camera, and a number of "bad rows/columns" where the exposure has been set to zero.
2. An overall gradient with radius - this is the vignetting function of the telescope.
3. A vaguely circular cut-out shape along the edge. This is applied in preprocessing to eliminate pixels where the effective exposure is essentially zero. All of the CCDs are, in fact, square, and the "corner" regions of the field of view are sometimes used to get a measurement of the portion of the background that is not focussed by the optics (e.g. particle-induced events).
You probably know that, in python, array indices start from 0. We could choose to work with $x$ and $y$ coordinates that are simply these indices. However, conventionally, astronomical image coordinates are indexed from 1; for example, if we had used the tool ds9 to define a region of interest in this image, and saved it in "Image" coordinates (as opposed to celestial coordinates), the bottom-left pixel in the image would be $(1,1)$. To avoid confusion, we might want to follow this convention.
Below is a simple class that should assist in displaying these data, and potentially cutting out "sub-images" for local analysis. It holds on to the image and exposure map, and also defines arrays imx and imy which hold the Image X and Y coordinates of each pixel. (Note: we will use "pixel" to refer to an entry in these arrays, as opposed to a physical pixel in one of the CCDs.)
End of explanation
orig = Image(im, ex)
Explanation: Let's create such an object holding the original data:
End of explanation
plt.rcParams['figure.figsize'] = (10.0, 10.0)
fig, ax = plt.subplots(1,2);
ax[0].imshow(orig.imx, cmap='gray', origin='lower');
ax[0].set_title('imx');
ax[1].imshow(orig.imy, cmap='gray', origin='lower');
ax[1].set_title('imy');
Explanation: To quickly illustrate what imx and imy are for, let's look at them:
End of explanation
stamp = orig.cutout(300, 400, 300, 400)
plt.rcParams['figure.figsize'] = (10.0, 10.0)
stamp.display()
Explanation: Keeping these arrays around is obviously a little inefficient in terms of memory, but it means we can easily do calculations involving pixel positions with numpy array arithmetic.
Next, let's make a cut-out (or "postage stamp") around roughly the center of the image, and look at it.
End of explanation
TBC() # dist = something involving stamp.imx and stamp.imy
plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.imshow(dist, cmap='gray', origin='lower', extent=stamp.extent());
Explanation: Note that imx and imy in stamp hold the Image coordinates of each pixel with respect to the original image, which seems like a potentially useful thing.
Something we might want to do when evaluating a model is to compute the distance of each pixel from some specified position, in units of pixels. As a quick test, complete the following to compute an array holding the distance of each pixel in the stamp image from the center of original image. (Remember that Image coordinates start from 1, not 0!)
dist=0 should fall within the stamp, in the bottom-left quadrant as displayed below.
End of explanation
imfits[0].header
Explanation: So now we can easily compute the distance between all pixels and some reference, in units of the pixel size. Note that the Euclidean distance formula is not exact, since the sky is a sphere and not a plane, but on this scale it's easily good enough. Which prompts the question - just what is the size of a pixel in this image?
To find out, we can consult the FITS header of the image. The relevant keywords are near the bottom of the header, and begin CTYPE, CRPIX, CVAL, etc. Note that the exact keywords can vary, since FITS files can have multiple coordinate systems defined. In this case, the header tells us that axis 1 (X) is Right Ascension, axis 2 (Y) is DEClination, and both have a pixel length of 0.0011111... in units of degrees, so 4 arcseconds. (CDELT1 is negative because RA increases to the left by convention.)
End of explanation
imfits[0].header['CDELT1'] * 3600
Explanation: If we need to, header values can be extracted easily. This extracts the pixel size in the X direction, and converts it to arcseconds (CUNIT1 tells us that it is originally in units of degrees).
End of explanation |
11,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encontro 02, Parte 2
Step1: A seguir, vamos configurar as propriedades visuais
Step2: Por fim, vamos carregar e visualizar um grafo
Step3: Caminhos de comprimento mínimo
Seja $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ um caminho. Dizemos que
Step4: Visualizando algoritmos
A função generate_frame é parecida com a função show_graph mas, em vez de mostrar uma imagem imediatamente, gera um quadro que pode ser usado para montar uma animação.
Vamos então definir uma função de conveniência que cria atributos label a partir de distâncias e adiciona um quadro a uma lista.
Step5: Vamos agora escrever uma versão alternativa da busca em largura. | Python Code:
import sys
sys.path.append('..')
import socnet as sn
Explanation: Encontro 02, Parte 2: Revisão de Busca em Largura
Este guia foi escrito para ajudar você a atingir os seguintes objetivos:
implementar o algoritmo de busca em largura;
usar funcionalidades avançadas da biblioteca da disciplina.
Primeiramente, vamos importar a biblioteca:
End of explanation
sn.graph_width = 320
sn.graph_height = 180
Explanation: A seguir, vamos configurar as propriedades visuais:
End of explanation
g = sn.load_graph('2-largura.gml', has_pos=True)
sn.show_graph(g)
Explanation: Por fim, vamos carregar e visualizar um grafo:
End of explanation
from math import inf, isinf
from queue import Queue
s = 1
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
for n in g.nodes():
print('distância de {}: {}'.format(n, g.node[n]['d']))
Explanation: Caminhos de comprimento mínimo
Seja $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ um caminho. Dizemos que:
* $n_0$ é a origem desse caminho, ou seja, o nó no qual ele começa;
* $n_{k-1}$ é o destino desse caminho, ou seja, o nó no qual ele termina;
* $k-1$ é o comprimento desse caminho, ou seja, a quantidade de arestas pelas quais ele passa.
Um caminho de origem $s$ e destino $t$ tem comprimento mínimo se não existe outro caminho de origem $s$ e destino $t$ de comprimento menor. Note que podem existir múltiplos caminhos de comprimento mínimo.
A distância de $s$ a $t$ é o comprimento mínimo de um caminho de origem $s$ e destino $t$. Por completude, dizemos que a distância de $s$ a $t$ é $\infty$ se não existe caminho de origem $s$ e destino $t$.
Algoritmo de busca em largura
Dado um nó $s$, podemos eficientemente calcular as distâncias desse a todos os outros nós do grafo usando o algoritmo de busca em largura. A ideia desse algoritmo é simples: a partir dos nós de distância $0$, ou seja apenas o próprio $s$, podemos descobrir os nós de distância $1$, a partir dos nós de distância $1$ podemos descobrir os nós de distância $2$, e assim em diante.
Podemos usar uma fila para garantir que os nós são visitados nessa ordem.
End of explanation
def snapshot(g, frames):
for n in g.nodes():
if isinf(g.node[n]['d']):
g.node[n]['label'] = '∞'
else:
g.node[n]['label'] = str(g.node[n]['d'])
frame = sn.generate_frame(g, nlab=True)
frames.append(frame)
Explanation: Visualizando algoritmos
A função generate_frame é parecida com a função show_graph mas, em vez de mostrar uma imagem imediatamente, gera um quadro que pode ser usado para montar uma animação.
Vamos então definir uma função de conveniência que cria atributos label a partir de distâncias e adiciona um quadro a uma lista.
End of explanation
red = (255, 0, 0) # linha nova
blue = (0, 0, 255) # linha nova
frames = [] # linha nova
s = 1
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
sn.reset_node_colors(g) # linha nova
sn.reset_edge_colors(g) # linha nova
snapshot(g, frames) # linha nova
while not q.empty():
n = q.get()
g.node[n]['color'] = red # linha nova
snapshot(g, frames) # linha nova
for m in g.neighbors(n):
g.edge[n][m]['color'] = red # linha nova
snapshot(g, frames) # linha nova
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
g.edge[n][m]['color'] = sn.edge_color # linha nova
snapshot(g, frames) # linha nova
g.node[n]['color'] = blue # linha nova
snapshot(g, frames) # linha nova
sn.show_animation(frames)
Explanation: Vamos agora escrever uma versão alternativa da busca em largura.
End of explanation |
11,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Posterior Predictive Checks
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
Elaborating slightly one can say that - Posterior predictive checks (PPCs) analyze the degree to which data generated from the model deviate from data generated from the true distribution. So often you'll want to know if for example your posterior distribution is approximating your underlying distribution. The visualization aspect of this model evaluation method is also great for a 'sense check' or explaining your model to others and getting criticism.
PyMC3 has random number support thanks to Mark Wibrow as implemented in PR784.
Here we will implement a general routine to draw samples from the observed nodes of a model.
Step1: Lets generate a very simple model
Step2: This function will randomly draw 500 samples of parameters from the trace. Then, for each sample, it will draw 100 random numbers from a normal distribution specified by the values of mu and std in that sample.
Step3: Now, ppc contains 500 generated data sets (containing 100 samples each), each using a different parameter setting from the posterior
Step4: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean
Step5: Comparison between PPC and other model evaluation methods.
An excellent introduction to this is given on Edward and since I can't write this any better I'll just quote this
Step6: Mean predicted values plus error bars to give sense of uncertainty in prediction | Python Code:
%matplotlib inline
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
from collections import defaultdict
Explanation: Posterior Predictive Checks
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
Elaborating slightly one can say that - Posterior predictive checks (PPCs) analyze the degree to which data generated from the model deviate from data generated from the true distribution. So often you'll want to know if for example your posterior distribution is approximating your underlying distribution. The visualization aspect of this model evaluation method is also great for a 'sense check' or explaining your model to others and getting criticism.
PyMC3 has random number support thanks to Mark Wibrow as implemented in PR784.
Here we will implement a general routine to draw samples from the observed nodes of a model.
End of explanation
data = np.random.randn(100)
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sd=1, testval=0)
sd = pm.HalfNormal('sd', sd=1)
n = pm.Normal('n', mu=mu, sd=sd, observed=data)
trace = pm.sample(5000)
pm.traceplot(trace);
Explanation: Lets generate a very simple model:
End of explanation
ppc = pm.sample_ppc(trace, samples=500, model=model, size=100)
Explanation: This function will randomly draw 500 samples of parameters from the trace. Then, for each sample, it will draw 100 random numbers from a normal distribution specified by the values of mu and std in that sample.
End of explanation
np.asarray(ppc['n']).shape
Explanation: Now, ppc contains 500 generated data sets (containing 100 samples each), each using a different parameter setting from the posterior:
End of explanation
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['n']], kde=False, ax=ax)
ax.axvline(data.mean())
ax.set(title='Posterior predictive of the mean', xlabel='mean(x)', ylabel='Frequency');
Explanation: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean:
End of explanation
# Use a theano shared variable to be able to exchange the data the model runs on
from theano import shared
def invlogit(x):
return np.exp(x) / (1 + np.exp(x))
n = 4000
n_oos = 50
coeff = 1.
predictors = np.random.normal(size=n)
# Turn predictor into a shared var so that we can change it later
predictors_shared = shared(predictors)
outcomes = np.random.binomial(1, invlogit(coeff * predictors))
outcomes
predictors_oos = np.random.normal(size=50)
outcomes_oos = np.random.binomial(1, invlogit(coeff * predictors_oos))
def tinvlogit(x):
import theano.tensor as t
return t.exp(x) / (1 + t.exp(x))
with pm.Model() as model:
coeff = pm.Normal('coeff', mu=0, sd=1)
p = tinvlogit(coeff * predictors_shared)
o = pm.Bernoulli('o', p, observed=outcomes)
trace = pm.sample(5000, n_init=5000)
# Changing values here will also change values in the model
predictors_shared.set_value(predictors_oos)
# Simply running PPC will use the updated values and do prediction
ppc = pm.sample_ppc(trace, model=model, samples=500)
Explanation: Comparison between PPC and other model evaluation methods.
An excellent introduction to this is given on Edward and since I can't write this any better I'll just quote this:
"PPCs are an excellent tool for revising models, simplifying or expanding the current model as one examines how well it fits the data. They are inspired by prior checks and classical hypothesis testing, under the philosophy that models should be criticized under the frequentist perspective of large sample assessment.
PPCs can also be applied to tasks such as hypothesis testing, model comparison, model selection, and model averaging. It’s important to note that while they can be applied as a form of Bayesian hypothesis testing, hypothesis testing is generally not recommended: binary decision making from a single test is not as common a use case as one might believe. We recommend performing many PPCs to get a holistic understanding of the model fit."
An important lesson to learn as someone using Probabilistic Programming is to not overfit your understanding or your criticism of models to only one metric. Model evaluation is a skill that can be honed with practice.
Prediction
The same pattern can be used for prediction. Here we're building a logistic regression model. Note that since we're dealing the full posterior, we're also getting uncertainty in our predictions for free.
End of explanation
plt.errorbar(x=predictors_oos, y=np.asarray(ppc['o']).mean(axis=0), yerr=np.asarray(ppc['o']).std(axis=0), linestyle='', marker='o')
plt.plot(predictors_oos, outcomes_oos, 'o')
plt.ylim(-.05, 1.05)
plt.xlabel('predictor')
plt.ylabel('outcome')
Explanation: Mean predicted values plus error bars to give a sense of the uncertainty in the predictions
End of explanation |
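# Additional sketch (not in the original notebook): turn the posterior predictive draws into
# point predictions and score them against the held-out outcomes. Assumes `ppc['o']` and
# `outcomes_oos` from the cells above.
p_hat = np.asarray(ppc['o']).mean(axis=0)   # predictive probability per held-out point
point_pred = (p_hat > 0.5).astype(int)
print('out-of-sample accuracy:', (point_pred == outcomes_oos).mean())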
11,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation API
A merchant wants to offer products and maximize profits. Just like on a real online marketplace, it can look at all existing offers, add its own, restock, or reprice. First, it has to order products from the producer, which comes at a cost. All Pricewars entities are implemented as services, and their interfaces (REST) are described in detail here.
This notebook will present how to use the Pricewars APIs to do all these tasks easily. From registration to buying products, offering them and repricing them. Using this, it is possible to build a powerful merchant.
Note
Step1: Initialize Marketplace API
Step2: If the marketplace doesn't run on the default URL, you can change it with the host argument
Register as merchant
In order to act on the marketplace, we need to be a registered merchant. Usually you use the Management UI to register a merchant and remember/keep the merchant_token. However, you can also use an API call.
You will also have to provide an API endpoint for your merchant, which will be called upon sales of products. We will simply use an invalid one, since this is only an example.
Step3: If you got the following error, it was not possible to connect to the marketplace
Step4: The result is a list of Offer objects.
Step5: If you want a state-less merchant, you can set the argument include_empty_offers to True. This will cause your own but out-of-stock offers to be added to the response as well.
Initialize Producer API
To be able to call authenticated functions (like ordering products), we must provide our merchant token.
Step6: Order products
Order any amount of units of a product
Step7: The order contains 10 units of a product.
The billing_amount is the total cost that the merchant must pay for this order.
Step8: Add product to marketplace
To create a new offer, you need a product, a selling price for that offer and guaranteed shipping times.
Let's use a price of 35€ and any shipping times.
Step9: Send the offer to the marketplace. The accepted offer with its new offer ID is returned.
Step10: Let's see if we can find the new offer on the marketplace
Step11: Update product on marketplace
Updating an offer, e.g. changing its price, is a limited API request. According to your simulation/marketplace settings, we can only call it N times per second.
Step12: Unregister the merchant
You should keep your merchant and the token as long as possible, because it is the reference to all market data (sales, profit, marketshare), offers and products.
However, if you just try things out, like in this sample and don't want to pollute the database with lots of merchants, unregister it. This also removes all offers and products.
Step13: Now, it shouldn't be possible to do authenticated actions. | Python Code:
import sys
sys.path.append('../')
Explanation: Simulation API
A merchant wants to offer products and maximize profits. Just like on a real online marketplace, it can look at all existing offers, add its own, restock, or reprice. First, it has to order products from the producer, which comes at a cost. All Pricewars entities are implemented as services, and their interfaces (REST) are described in detail here.
This notebook will present how to use the Pricewars APIs to do all these tasks easily. From registration to buying products, offering them and repricing them. Using this, it is possible to build a powerful merchant.
Note: The code is type-hinted, so using an IDE (e.g. PyCharm/IntelliJ or an IPython/Jupyter notebook) provides you with auto-completion.
If you want to try the following examples, make sure that the Pricewars platform is running.
Either by deploying them individually or by using the docker setup.
The following step is specific for this notebook.
It is not necessary if your merchant is in the repository root.
End of explanation
from api import Marketplace
marketplace = Marketplace()
Explanation: Initialize Marketplace API
End of explanation
registration = marketplace.register(
'http://nobody:55000/',
merchant_name='notebook_merchant',
algorithm_name='human')
registration
Explanation: If the marketplace doesn't run on the default URL, you can change it with the host argument
Register as merchant
In order to act on the marketplace, we need to be a registered merchant. Usually you use the Management UI to register a merchant and remember/keep the merchant_token. However, you can also use an API call.
You will also have to provide an API endpoint for your merchant, which will be called upon sales of products. We will simply use an invalid one, since this is only an example.
End of explanation
offers = marketplace.get_offers()
offers
Explanation: If you got the following error, it was not possible to connect to the marketplace:
ConnectionError: HTTPConnectionPool(host='marketplace', port=8080)
In that case, make sure that the marketplace is running and host and port are correct.
If host or port are wrong, you can change it by creating a marketplace object with the host argument:
marketplace = Marketplace(host='www.another_host.com:1234')
Check offers on the market
End of explanation
type(offers[0])
Explanation: The result is a list of Offer objects.
End of explanation
from api import Producer
producer = Producer(token=registration.merchant_token)
Explanation: If you want a state-less merchant, you can set the argument include_empty_offers to True. This will cause your own but out-of-stock offers to be added to the response as well.
Initialize Producer API
To be able to call authenticated functions (like ordering products), we must provide our merchant token.
End of explanation
order = producer.order(amount=10)
order
Explanation: Order products
Order any amount of units of a product:
End of explanation
type(order)
Explanation: The order contains 10 units of a product.
The billing_amount is the total cost that the merchant must pay for this order.
End of explanation
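# Illustrative sketch (not in the original notebook): the order above was for 10 units and
# billing_amount is described as the total cost, so the per-unit purchase price, the floor
# for any profitable selling price, can be derived like this (assumes the field is numeric).
unit_cost = order.billing_amount / 10.0
print(unit_cost)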
from models import Offer
price = 35
shipping_time = {'standard': 5, 'prime': 2}
offer = Offer.from_product(order.product, price, shipping_time)
offer
Explanation: Add product to marketplace
To create a new offer, you need a product, a selling price for that offer and guaranteed shipping times.
Let's use a price of 35€ and any shipping times.
End of explanation
offer = marketplace.add_offer(offer)
offer
Explanation: Send the offer to the marketplace. The accepted offer with its new offer ID is returned.
End of explanation
[market_offer for market_offer in marketplace.get_offers() if market_offer.offer_id == offer.offer_id][0]
Explanation: Let's see if we can find the new offer on the marketplace:
End of explanation
offer.price = 28
marketplace.update_offer(offer)
[market_offer for market_offer in marketplace.get_offers() if market_offer.offer_id == offer.offer_id][0]
Explanation: Update product on marketplace
Updating an offer, e.g. changing its price, is a limited API request. According to your simulation/marketplace settings, we can only call it N times per second.
End of explanation
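# Illustrative sketch (not part of the Pricewars client): because update_offer is rate-limited,
# a merchant loop usually spaces out its reprice calls. The allowed rate below is an assumption;
# read the real limit from your marketplace settings.
import time
MAX_UPDATES_PER_SECOND = 1.0
def throttled_updates(marketplace, offers):
    min_interval = 1.0 / MAX_UPDATES_PER_SECOND
    for current_offer in offers:
        started = time.time()
        marketplace.update_offer(current_offer)
        elapsed = time.time() - started
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
# Example usage with the offer from above: throttled_updates(marketplace, [offer])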
marketplace.unregister()
Explanation: Unregister the merchant
You should keep your merchant and the token as long as possible, because it is the reference to all market data (sales, profit, marketshare), offers and products.
However, if you just try things out, like in this sample and don't want to pollute the database with lots of merchants, unregister it. This also removes all offers and products.
End of explanation
from api.ApiError import ApiError
offer.price = 35
try:
marketplace.update_offer(offer)
except ApiError as e:
print("I can't do that")
print(e)
Explanation: Now, it shouldn't be possible to do authenticated actions.
End of explanation |
11,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!-- Some HTML for ncie picture -->
<p>
<a href="https
Step1: A. The semantic connections
Step2: A. Web of Science - Recursion 1.
Search details
Date
Step3: Keyword analysis
Step4: Journal analysis
Step5: Scopus - Recursive search
Step6: Journal analysis
Step7: C. Scopus Recursive search
Step8: Journal analysis | Python Code:
maketimeseries() # Load this function from bottom of notebook to print.
Explanation: <!-- Some HTML for ncie picture -->
<p>
<a href="https://commons.wikimedia.org/wiki/File:Open_Science_-_Prinzipien.png#/media/File:Open_Science_-_Prinzipien.png"><img src="https://upload.wikimedia.org/wikipedia/commons/9/9c/Open_Science_-_Prinzipien.png" alt="Open Science - Prinzipien.png" width="400"></a>
<br>
</p>
<center>Image by Andreas E. Neuhold, CC BY 3.0</center>
Bibliometric probing of the concept 'open science' - a notebook
by Christopher Kullenberg<sup>1</sup> (2017)
E-mail: christopher#kullenberg§gu#se
Abstract. To date there is only a small number of scientific articles that have been written on the topic of "open science", at least when considering the recent trend in science policy and public discourse. The Web of Science only contains 544 records with the term "open science" in the title, keywords or abstract, and Scopus amounts to 769 records. Based on article keywords, open science is primarily connected with the concepts of "open access", "open data", "reproducible research", and "data sharing". It occurs most frequently in biomedical-, interdisciplinary- and natural science journals; however, some of the most cited articles have been published in science policy journals.
Contents
Introduction
Method
Software
Main results, a summary
The semantic connections: keywords
Most cited articles
Most common journals
Code, data and figures.
Web of Science, recursion 1.
Scopus, recursion 1.
Scopus, recursion 2.
1. Introduction
The purpose of this notebook is to produce an overview of scientific literature related to the phenomenon of open science. This is a preliminary research note, not a definite result. Feel free to use and modify this notebook in anyway you want, as long as you cite it as follows:
<sup>1</sup> Kullenberg, Christopher (2017) "Bibliometric probing of the concept 'open science' - a notebook", https://github.com/christopherkullenberg/openscienceliterature, accessed YY-MM-DD.
This notebook uses bibliometric data from the Web of Science and Scopus databases. Since these databases are proprietary and require a subscription, the source data cannot be redistributed here. However, the search strings and dates of retrieval are noted each time a dataset is used, so if you have access to these databases, it should be quite possible to replicate every step.
2. Method
Since open science is very loosely defined and can have multiple meanings, it is difficult to craft a precise search string in the bibliometric databases. Thus, in this notebook, I probe the Web of Science and Scopus databases with recursive searches (see Kullenberg & Kasperowski 2016) to generate search terms that can be relevant for getting a better picture of the open science phenomenon.
|Database|Records|
| --- | --- |
| Web of Science | 544 |
| Scopus 1st search| 769 |
| Scopus 2nd search| 14.146 |
The recursive search works as follows. First, each database is queried in the abstract, title and keywords with the search term "open science". Then the keywords of this first result set are analysed in terms of frequency and co-occurrence. This gives a first overview of which other keywords are relevant.
In the next step the keywords from the first iteration are inspected and sorted. I filter out those keywords that are too general in scope, for example "open source" and "reproducibility", in order to craft a more precise combination of search terms. This is a qualitative process, so for every keyword, I make choices on what is relevant and what is not. As this notebook is explorative in nature, this should be regarded as an interpretative practice, where one might return at a later stage and then make other decisions due to increased knowledge about the field.
A. Software
This Python 3 notebook uses the Pandas dataframe as a method for parsing the bibliometric records into a convenient data structure. For plotting, the Seaborn high-level library for interfacing with Matplotlib is used. Wielding such software for bibliometric analyses might seem a bit strange, but there are many advantages when it comes to reproducibility, since every step can be traced and the software is free. To install all the software swiftly, I warmly recommend the Anaconda distribution.
3. Main results, a summary
As the figure below shows, the notion of "open science" is practically non-existent before the turn of the millennium (in quantitative terms). During the past two or tree years, however, the concept seems to have gained a little traction. Note - the slight decrease in publications for 2016 is only a "lag effect" of the update process of the databases.
End of explanation
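# Illustrative sketch (not part of the original analysis code): how the second-iteration Scopus
# query described in the Method section is assembled from the first-iteration keywords. The
# include/exclude lists below are the ones reported in section C further down.
included = ['open science', 'data sharing', 'open data', 'open science grid']
excluded = ['open access', 'open source', 'reproducibility', 'big data', 'collaboration']
search_string = ' OR '.join('"{}"'.format(term) for term in included)
print(search_string)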
# General libraries
%matplotlib inline
import pandas as pd
import warnings
warnings.simplefilter(action = "ignore", category = FutureWarning) # Supress some meaningless warnings.
#from tabulate import tabulate
from collections import Counter
import seaborn as sns
import numpy as np
from itertools import combinations
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 7 # Make figures a little bigger
# Load up this cell to enable the keywordcooccurrence function.
def cooccurrence(column):
'''
Input: a dataframe column containing keywords that are separated by semicolons
Example: df.DE (Web of Science) or df[Author keywords] (Scopus)
Output: A list of co-occurring keywords that can be ranked with the Counter function
'''
cooccurrencelist = []
for keyword in column:
k = str(keyword)
keywordsperarticle = k.split('; ')
keywordsperarticle = [word.lower() for word in keywordsperarticle] # Lowers all keywords in each list.
cooccurrence = list(combinations(keywordsperarticle, 2))
for c in cooccurrence:
cooccurrencelist.append(c)
return(cooccurrencelist)
# This function returns a list of journals that can be easily counted
def frequentjournals(column):
'''
Input: a dataframe column containing journal names.
Example: df.SO (Web of Science) or df[Source title] (Scopus)
Output: A list of journal names that can be ranked with the Counter function
'''
journallist = []
for journal in column:
#print(len(journal))
journallist.append(journal.lower()) # Lower names. Looks bad, but computes well.
return(journallist)
Explanation: A. The semantic connections: keywords
Based on a first iterative search on "open science", the term frequently occurs in connection with:
open access
open data
reproducible research
data sharing
reproducibility
big data
collaboration
metadata
meta-analysis
research data
Here it is reasonable to conclude that "open science" is largely defined from an epistemic point of view, as these terms refer to the scientific practice of sharing research findings and data openly (open access, open data), ensuring the reproducibility of research and achieving a better collaboration between scientists. Also, the notions of "meta-data" and "meta-analysis" indicate a concern for standardizing data for the purpose of aggregation (big data).
B. Most cited articles
Finding the most cited articles can be a good way of finding obligatory points of passage in the literature. However, one must be careful not to draw too far-fetched conclusions from such numbers. After all, the practice of citation cannot be reduced to a single motivational factor.
Web of Science, 10 most cited articles.
| Author | Year | Title | Journal | Times Cited | DOI/URL |
| --- | --- | --- | --- | --- | --- |
| Nooner, KB; Colcombe, SJ; Tobe, RH; Mennes, M;... | 2012.0 | The NKI-Rockland sample: a model for accelerat... | FRONTIERS IN NEUROSCIENCE | 82.0 | https://dx.doi.org/10.3389/fnins.2012.00152
| Fabrizio, KR; Di Minin, A | 2008.0 | Commercializing the laboratory: Faculty patent... | RESEARCH POLICY | 81.0 | https://dx.doi.org/10.1016/j.respol.2008.01.010
| Castellanos, FX; Di Martino, A; Craddock, RC; ... | 2013.0 | Clinical applications of the functional connec... | NEUROIMAGE | 79.0 | https://dx.doi.org/10.1016/j.neuroimage.2013.04.083
| Markman, GD; Siegel, DS; Wright, M | 2008.0 | Research and Technology Commercialization | JOURNAL OF MANAGEMENT STUDIES | 75.0 | https://dx.doi.org/10.1111/j.1467-6486.2008.00803.x
| Newman, G; Wiggins, A; Crall, A; Graham, E; Ne... | 2012.0 | The future of citizen science: emerging techno... | FRONTIERS IN ECOLOGY AND THE ENVIRONMENT | 74.0 | https://dx.doi.org/10.1890/110294
| Mello, MM; Francer, JK; Wilenzick, M; Teden, P... | 2013.0 | Preparing for Responsible Sharing of Clinical ... | NEW ENGLAND JOURNAL OF MEDICINE | 57.0 | https://dx.doi.org/10.1056/NEJMhle1309073
| Breschi, S; Catalini, C | 2010.0 | Tracing the links between science and technolo... | RESEARCH POLICY | 48.0 | https://dx.doi.org/10.1016/j.respol.2009.11.004
| Procter, R; Williams, R; Stewart, J; Poschen, ... | 2010.0 | Adoption and use of Web 2.0 in scholarly commu... | PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIET... | 44.0 | https://dx.doi.org/10.1098/rsta.2010.0155
| Mueller, ST; Piper, BJ | 2014.0 | The Psychology Experiment Building Language (P... | JOURNAL OF NEUROSCIENCE METHODS | 43.0 | https://dx.doi.org/10.1016/j.jneumeth.2013.10.024
| Mennes, M; Biswal, BB; Castellanos, FX; Milham... | 2013.0 | Making data sharing work: The FCP/INDI experience | NEUROIMAGE | 42.0 | https://dx.doi.org/10.1016/j.neuroimage.2012.10.064
Scopus, 10 most cited articles.
| Author | Year | Title | Journal | Times Cited | DOI/URL |
| --- | --- | --- | --- | --- | --- |
| Partha D., David P.A. | 1994 | Toward a new economics of science | Research Policy | 926.0 | https://dx.doi.org/10.1016/0048-7333(94)01002-1 |
| Mix A.C., Bard E., Schneider R. | 2001 | Environmental processes of the ice age: Land, ... | Quaternary Science Reviews | 479.0 | https://dx.doi.org/10.1016/S0277-3791(00)00145-1 |
| Balconi M., Breschi S., Lissoni F. | 2004 | Networks of inventors and the role of academia... | Research Policy | 216.0 | https://dx.doi.org/10.1016/S0048-7333(03)00108-2 |
| Veugelers R., Cassiman B. | 2005 | R&D cooperation between firms and universities... | International Journal of Industrial Organization | 189.0 | https://dx.doi.org/10.1016/j.ijindorg.2005.01.008 |
| Dosi G., Llerena P., Labini M.S. | 2006 | The relationships between science, technologie... | Research Policy | 173.0 | https://dx.doi.org/10.1016/j.respol.2006.09.012 |
| Pordes R., Petravick D., Kramer B., Olson D., ... | 2007 | The open science grid | Journal of Physics: Conference Series | 112.0 | https://dx.doi.org/10.1088/1742-6596/78/1/012057 |
| Agrawal A. | 2006 | Engaging the inventor: Exploring licensing str... | Strategic Management Journal | 111.0 | https://dx.doi.org/10.1002/smj.508 |
| Markman G.D., Siegel D.S., Wright M. | 2008 | Research and technology commercialization | Journal of Management Studies | 110.0 | https://dx.doi.org/10.1111/j.1467-6486.2008.00803.x |
| Nooner K.B., Colcombe S.J., Tobe R.H., Mennes ... | 2012 | The NKI-Rockland sample: A model for accelerat... | Frontiers in Neuroscience | 105.0 | https://dx.doi.org/10.3389/fnins.2012.00152 |
| Fabrizio K.R., Di Minin A. | 2008 | Commercializing the laboratory: Faculty patent... | Research Policy | 92.0 | https://dx.doi.org/10.1016/j.respol.2008.01.010
C. Most common journals
Depending on whether you query Scopus or Web of Science, you get different results concerning the distribution of records that are conference proceedings or journal articles. Here I have selected to include only journal articles, not conference papers, reviews or opinion pieces. The most common journals are:
Elife
Research policy
Plos ONE
Journal of Technology Transfer
Peerj
Science
Conclusion: The dominant fields for "open science" appear to be biomedical-, multidisciplinary-, and natural sciences.
4. Code and Data
End of explanation
# faulty rows in the data; if parsing fails, error_bad_lines=False can be passed to read_csv
df = pd.read_csv('.data/WoS549recs20170121.tsv', sep="\t", encoding='utf-8') # Input: web of science tsv file, utf-8 encoding.
df.head(3)
# Print this for explanation of WoS columns
woskeyfile = open('woskeys.txt')
woskeys = woskeyfile.read()
#print(woskeys)
dfTC = df.sort('TC', ascending=False) # Order dataframe by times cited
dfTC[['AU', 'PY', 'TI', 'SO', 'TC', 'DI']].head(10) # Ten most cited articles.
publicationyears = sns.factorplot('PY', data=df, kind='count', size=10, aspect=2)
Explanation: A. Web of Science - Recursion 1.
Search details
Date:
20170121, General Search.
Search string:
TS="open science"
Result:
544 Records.
Important note on parsing the WoS tsv files
Delete the column BJ since it is erroneous. The separator is tab, and there is no delimiter.
End of explanation
# Read all keywords into a list.
allkeywords = []
for keyword in df.DE:
k = str(keyword)
keywordsperarticle = k.split('; ')
for word in keywordsperarticle:
allkeywords.append(word.lower()) # make all lower case for better string matching.
print("Total number of keywords: " + str(len(allkeywords)))
# Find the most common keywords
commonkeywords = Counter(allkeywords).most_common(10) # Increase if you want more results
for word in commonkeywords:
if word[0] != "nan": # Clean out empty fields.
print(word[0] + "\t" + str(word[1]))
Counter(cooccurrence(df.DE)).most_common(10)
keywordsDF = pd.DataFrame(commonkeywords, columns=["keyword", "freq"])
# Plot figure while excluding "nan" values in Dataframe
keywordsWoS = sns.factorplot(x='keyword', y='freq', kind="bar", data=keywordsDF[keywordsDF.keyword.str.contains("nan") == False], size=8, aspect=2)
keywordsWoS.set_xticklabels(rotation=45)
Explanation: Keyword analysis
End of explanation
# To get only journal articles, select df.SO[df['PT'] == 'J']
for journal in Counter(frequentjournals(df.SO[df['PT'] == 'J'] )).most_common(10):
print(journal)
Explanation: Journal analysis
End of explanation
df2 = pd.read_csv('.data/scopusRecursionOne769recs20170120.csv', encoding="utf-8")
# Print this for Scopus column names
#for header in list(df2.columns.values):
# print(header)
#df2['Document Type']
df2TC = df2.sort('Cited by', ascending=False) # Order dataframe by times cited
df2TC.tail(3)
# NOTE: there is a cryptic character in front of the Authors column:
# Sometimes plain 'Authors' works, sometimes the column name carries an invisible leading character (e.g. a UTF-8 BOM), depending on system locale settings.
df2TC[['Authors', 'Year', 'Title', 'Source title', 'Cited by', 'DOI']].head(10) # Ten most cited articles.
# Create a time series of the publications. Some data cleaning is needed:
df2TCdropna = df2TC.Year.dropna() # Drop empty values in years
df2TCyears = pd.DataFrame(df2TCdropna.astype(int)) # Convert existing years to integers, make new dataframe
publicationyearsScopus = sns.factorplot('Year', data=df2TCyears, kind='count', size=8, aspect=2)
# Read all keywords into a list.
allscopuskeywords = []
for keyword in df2['Author Keywords']:
k = str(keyword)
keywordsperarticle = k.split('; ')
for word in keywordsperarticle:
allscopuskeywords.append(word.lower()) # make all lower case for better string matching.
print("Total number of keywords: " + str(len(allkeywords)))
# Find the most common keywords
commonscopuskeywords = Counter(allscopuskeywords).most_common(20) # Increase if you want more results
for word in commonscopuskeywords:
if word[0] != "nan": # Clean out empty fields.
print(word[0] + "\t" + str(word[1]))
# Get co-occurrences
Counter(cooccurrence(df2['Author Keywords'])).most_common(10)
keywordsScopusDF = pd.DataFrame(commonscopuskeywords, columns=["keyword", "freq"])
# Plot figure while excluding "nan" values in Dataframe
keywordsScP = sns.factorplot(x='keyword', y='freq', kind="bar", data=keywordsScopusDF[keywordsScopusDF.keyword.str.contains("nan") == False], size=6, aspect=2)
keywordsScP.set_xticklabels(rotation=45)
keywordsScP.fig.text(0.65, 0.7, "Scopus - Recursion 1:\nSearchstring: \
'open science'\nMost frequent keywords\nN=769", ha ='left', fontsize = 15)
Explanation: Scopus - Recursive search: Iteration 1
Search date:
20170120 (TITLE-ABS-KEY)
Search string:
"open science"
Records:
769
End of explanation
# For journal articles only: df2['Source title'][df2['Document Type'] == 'Article']
for journal in Counter(frequentjournals(df2['Source title'][df2['Document Type'] == 'Article'])).most_common(10):
print(journal)
Explanation: Journal analysis
End of explanation
df3 = pd.read_csv('.data/scopusRecursionTwo14146recs20170120.csv')
df3.tail(3) # Verify all data is there.
# Create a time series of the publications. Some data cleaning is needed:
df3dropna = df3.Year.dropna() # Drop empty values in years
df3years = pd.DataFrame(df3dropna.astype(int)) # Convert existing years to integers, make new dataframe
publicationyearsScopus = sns.factorplot('Year', data=df3years, kind='count', size=8, aspect=2)
publicationyearsScopus.set_xticklabels(rotation=45)
Explanation: C. Scopus Recursive search: Iteration 2
Scopus Search date:
20170120 (TITLE-ABS-KEY)
Note: Scopus will not include Author Keywords when exporting this large ammount of data.
Search string:
Included:
"open science" OR "data sharing" OR "open data" OR "open science grid"
Excluded
* "open access" - Too broad, gets around 40k hits.
* "open source" - Too broad.
* "reproducibility" - Too broad.
* "big data" - Too broad.
* "collaboration" - Too broad.
* ...
Records:
14.146
End of explanation
for journal in Counter(frequentjournals(df3['Source title'][df3['Document Type'] == "Article"])).most_common(10):
print(journal)
WoSyears = []
for year in df.PY.dropna():
if year > 1990.0:
WoSyears.append(int(year))
#print(sorted(WoSyears))
Scopusyears = []
for year in df2['Year'].dropna():
if year > 1990.0:
Scopusyears.append(year)
dfWoSyears = pd.DataFrame.from_dict(Counter(WoSyears), orient='index', dtype=None)
dfsorted = pd.DataFrame.sort_index(dfWoSyears)
dfsorted.head()
dfScopusyears = pd.DataFrame.from_dict(Counter(Scopusyears), orient='index', dtype=None)
dfSsorted = pd.DataFrame.sort_index(dfScopusyears)
dfSsorted.head()
def maketimeseries():
plt.title('"Open science" - Published articles and proceedings, 1990-2016', fontsize=16)
plt.xlabel('Year \n', fontsize=16)
plt.ylabel('Records', fontsize=16)
plt.ylim([0, 150])
plt.xlim(1990,2016)
# Line styles: http://matplotlib.org/1.3.1/examples/pylab_examples/line_styles.html
plt.plot(dfsorted, linestyle='--', marker='D', label="Web of Science")
plt.plot(dfSsorted, linestyle='-', marker='o', label="Scopus")
# legend guide: http://matplotlib.org/1.3.1/users/legend_guide.html
plt.legend(loc=2, borderaxespad=0., fontsize=16)
plt.savefig(".data/fig1.png")
maketimeseries()
Explanation: Journal analysis
End of explanation |
11,993 | Given the following text description, write Python code to implement the functionality described.
Description:
Queries for rotation and Kth character of the given string in constant time
Python3 implementation of the approach ; Function to perform the required queries on the given string ; Pointer pointing to the current starting character of the string ; For every query ; If the query is to rotate the string ; Update the pointer pointing to the starting character of the string ; Index of the kth character in the current rotation of the string ; Print the kth character ; Driver code
| Python Code:
size = 2
def performQueries(string, n, queries, q):
    # ptr is the index (in the original string) of the current first character
    # of the virtually rotated string, so every query is answered in O(1).
    ptr = 0
    for i in range(q):
        if queries[i][0] == 1:
            # Rotation query: just advance the start pointer.
            ptr = (ptr + queries[i][1]) % n
        else:
            # K-th character query: map k back to an index of the original string.
            k = queries[i][1]
            index = (ptr + k - 1) % n
            print(string[index])
if __name__ == "__main__":
    string = "abcdefgh"
    n = len(string)
    queries = [[1, 2], [2, 2], [1, 4], [2, 7]]
    q = len(queries)
    performQueries(string, n, queries, q)  # prints "d" and then "e"
|
11,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FAI1 Practical Deep Learning I | 11 May 2017 | Wayne Nixalo
In this notebook I'll be building a simple linear model in Keras using Sequential()
Tutorial on Linear Model for MNIST
Step1: The above is 42,000 images by 784 pixels. So, the usual 28x28 pixel images. We can do the same to take a look at the output, which, not surprisingly, is the 10 possible digits
Step2: As I'd expect, a single perceptron layer doing a linear mapping from input to output performs... poorly. But this is one of the first times I'm hand-coding this from scratch, and it's good to be in a place where I can start experimenting without spending the bulk of mental effort on getting the machine to work.
By accidentally running more epochs without re-initializing the model, it seems it plateaus at 0.1761 accuracy.
It'll be interesting to see how RMSprop compares to SGD, and mean-squared error vs categorical cross-entropy. More interesting is adding more layers and tweaking their activations and looking at different learning rates
Step3: Unfortunately I don't yet know how to separate a .csv file into a validation set. Wait that sounds easy. Take a random permutation, or for the lazy
Step4: Okay, so.. training data is the number of images X (number of pixels + label). The label just adds 1 to the vector's length because it's just a decimal {0..9}, giving 42k X 785..
The training input vector has the label removed so it's 42k images X 784 pixels..
The training labels vector is one-hot encoded to a ten-bit vector, & of course the 42k images..
So... nothing is saying I can't just cut input & labels and separate those into new training and validation sets. Not sure why I was getting crashes the other way, but we'll see if this works (it should). Ooo, maybe I.. okay maybe I made a mistake with leaving the labels on or something, before.
Step5: Okay, so we got those separated.. let's do the same thing as before
Step6: Yeah, I'm not seeing a real difference in not/using an activation function for just a single linear layer.
A Multilayer Perceptron (MLP) can be built by simple adding on layers. The big difference between an MLP and a NN is no backpropagation is going on. A single forward pass is done through the network each epoch, and there isn't any adjustment of weights. | Python Code:
# Import relevant libraries
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
import numpy as np
import os
# Data functions ~ mostly from utils.py or vgg16.py
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4, class_mode='categorical',
target_size=(224,224)):
return gen.flow_from_directory(dirname, target_size=target_size,
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
# from keras.utils.np_utils import to_categorical
# def onehot(x): return to_categorical(labels, num_classes=10)
from keras.utils.np_utils import to_categorical
def onehot(x): return to_categorical(x)
# from sklearn.preprocessing import OneHotEncoder
# def onehot(x): return np.array(OneHotEncoder().fit_transform(x.reshape(-1,1)).todense())
# import bcolz
# def save_data(fname, array): c=bcolz.carray(array, rootdir=fname, mode='w'); c.flush()
# def load_data(path): return bcolz.open(fname)[:]
# Some setup
path = 'L2HW_data/'
if not os.path.exists(path): os.mkdir(path)
# Getting Data
# val_batches = get_batches(path+'valid/', shuffle=False, batch_size=1)
# trn_batches = get_batches(path+'train/', shuffle=False, batch_size=1)
# converting classes to OneHot for Keras
# val_classes = val_batches.classes
# trn_classes = trn_batches.classes
# val_labels = onehot(val_classes)
# trn_labels = onehot(trn_classes)
# See: https://www.kaggle.com/fchollet/simple-deep-mlp-with-keras/code/code
# for help loading
# I haven't learned how to batch-load .csv files; I'll blow that bridge
# when I get to it.
# read data
import pandas as pd
trn_data = pd.read_csv(path + 'train.csv')
trn_labels = trn_data.ix[:,0].values.astype('int32')
trn_input = (trn_data.ix[:,1:].values).astype('float32')
test_input = (pd.read_csv(path + 'test.csv').values).astype('float32')
# one-hot encode labels
trn_labels = onehot(trn_labels)
input_dim = trn_input.shape[1]
nb_classes = trn_labels.shape[1]
# To show how we'd know what the input dimensions should be without researching MNIST:
print(trn_input.shape)
Explanation: FAI1 Practical Deep Learning I | 11 May 2017 | Wayne Nixalo
In this notebook I'll be building a simple linear model in Keras using Sequential()
Tutorial on Linear Model for MNIST: linky
Keras.io doc on .fit_generator & Sequential
Some Notes:
It looks like I'll need to use Pandas to work with data in .csv files (MNIST from Kaggle comes that way). For the data sets that come in as folders of .jpegs, I'll use the way shown in class of get_batches & get_data ... but then if that's for a DLNN wouldn't I have to do that for this as well? Will see.
End of explanation
print(trn_labels.shape)
# I/O Dimensions: determined by data/categories/network
# Output_Cols = 10
# input_dim = 784 # for 1st layer only: rest do auto-shape-inference
# Hyperparameters
LR = 0.1
optz = SGD(lr=LR)
# optz = RMSprop(lr=LR)
lossFn = 'mse'
# lossFn = 'categorical_cross_entropy'
metric=['accuracy']
# metrics=None
LM = Sequential( [Dense(nb_classes, input_shape=(input_dim,))] )
# LM.compile(optmizer = optz, loss = lossFn, metrics = metric)
LM.compile(optimizer=SGD(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
# lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model on the data
LM.fit(trn_input, trn_labels, nb_epoch=5, batch_size = 4, verbose=1)
Explanation: The above is 42,000 images by 784 pixels. So, the usual 28x28 pixel images. We can do the same to take a look at the output, which, not surprisingly, is the 10 possible digits
End of explanation
# # Turns out this cell was unnecessary
# import pandas as pd
# test_data = pd.read_csv(path + 'test.csv')
# # wait there are no labels that's the point..
# # test_labels = test_data.ix[:,0].values.astype('int32')
# test_input = (test_data.ix[:,1:].values).astype('float32')
# print(test_data.shape)
# print(test_labels.shape)
Explanation: As I'd expect, a single perceptron layer doing a linear mapping from input to output performs... poorly. But this is one of the first times I'm hand-coding this from scratch, and it's good to be in a place where I can start experimenting without spending the bulk of mental effort on getting the machine to work.
By accidentally running more epochs without re-initializing the model, it seems it plateaus at 0.1761 accuracy.
It'll be interesting to see how RMSprop compares to SGD, and mean-squared error vs categorical cross-entropy. More interesting is adding more layers and tweaking their activations and looking at different learning rates: constant or graduated. Even more interesting is adding backpropagation and turning it into a neural network.
Below I'm getting the data separated into training and validation sets.
End of explanation
# # Wondering if a crash I had earlier was because One-Hotting did something to the labels..
# trn_data = pd.read_csv(path + 'train.csv')
# trn_labels = trn_data.ix[:,0].values.astype('int32')
# trn_input = (trn_data.ix[:,1:].values).astype('float32')
# test_data has 42,000 elements. I'll take 2,000 for validation.
# val_data = trn_data[:2000]
# trn_2_data = trn_data[2000:]
# val_input = val_data.ix[:,0].values.astype('int32')
# val_labels = (val_data.ix[:,1:].values).astype('float32')
# trn_2_input = trn_2_data.ix[:,0].values.astype('int32')
# trn_2_labels = (trn_2_data.ix[:,1:].values).astype('float32')
# trying to do this in a way that doesn't kill the kernel
print(trn_data.shape)
print(trn_input.shape)
print(trn_labels.shape)
Explanation: Unfortunately I don't yet know how to separate a .csv file into a validation set. Wait that sounds easy. Take a random permutation, or for the lazy: the first X amount of inputs and labels from the training set and call that validation. Oh. Okay. Maybe I should do that.
End of explanation
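# For reference (a sketch, not what the notebook does below): the random-permutation split
# mentioned above. The cells below simply slice off the first 2,000 rows instead.
perm = np.random.permutation(len(trn_input))
val_idx, trn_idx = perm[:2000], perm[2000:]
val_input, val_labels = trn_input[val_idx], trn_labels[val_idx]
newtrn_input, newtrn_labels = trn_input[trn_idx], trn_labels[trn_idx]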
# Old
# print(val_data.shape, val_input.shape, val_labels.shape)
# print(trn_2_input.shape, trn_2_input.shape, trn_2_labels.shape)
val_input = trn_input[:2000]
val_labels = trn_labels[:2000]
newtrn_input = trn_input[2000:]
newtrn_labels = trn_labels[2000:]
print(val_input.shape, val_labels.shape)
print(newtrn_input.shape, newtrn_labels.shape)
Explanation: Okay, so.. training data is the number of images X (number of pixels + label). The label just adds 1 to the vector's length because it's just a decimal {0..9}, giving 42k X 785..
The training input vector has the label removed so it's 42k images X 784 pixels..
The training labels vector is one-hot encoded to a ten-bit vector, & of course the 42k images..
So... nothing is saying I can't just cut input & labels and separate those into new training and validation sets. Not sure why I was getting crashes the other way, but we'll see if this works (it should). Ooo, maybe I.. okay maybe I made a mistake with leaving the labels on or something, before.
End of explanation
# The stuff above's One-Hotted.. so don't do it again..
# # I forgot the onehot encode the labels after loading from disk
# val_labels = onehot(val_labels)
# trn_2_labels = onehot(trn_2_labels)
# print(val_labels.shape)
# print(trn_2_labels.shape)
LM = Sequential([Dense(nb_classes, input_dim=input_dim)])
LM.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
# LM = Sequential([Dense(nb_classes, activation='sigmoid', input_dim=input_dim)])
LM.fit(newtrn_input, newtrn_labels, nb_epoch=5, batch_size=4,
validation_data=(val_input, val_labels))
LM2 = Sequential([Dense(nb_classes, activation='sigmoid', input_dim=input_dim)])
LM2.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
LM2.fit(trn_input, trn_labels, nb_epoch=5, batch_size=4, verbose=1)
Explanation: Okay, so we got those separated.. let's do the same thing as before: single Linear Model Perceptron Layer, but with a validation set to check against.
End of explanation
# this notebook will be a bit of a mess; the machine isn't the only one learning
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
??Activation
# just here so I have them in front of me
input_dim = input_dim # 784
nb_classes = nb_classes # 10
MLP = Sequential()
MLP.add(Dense(112, input_dim=input_dim)) # I'll just set internal output to 4x28=112
MLP.add(Activation('sigmoid'))
# I dont know what to set the dropout layer too, but I see it in keras.io as 0.5
MLP.add(Dropout(0.5))
# and now to add 3 more layers set to sigmoid
for layer in range(3):  # range() so this also runs under Python 3
MLP.add(Dense(112))
MLP.add(Activation('sigmoid'))
MLP.add(Dropout(0.5))
# and our final layers
MLP.add(Dense(nb_classes, activation='softmax'))
# this will be a beautiful disaster
MLP.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
MLP.fit(newtrn_input, newtrn_labels, nb_epoch=5, batch_size=4,
validation_data = (val_input, val_labels))
Explanation: Yeah, I'm not seeing a real difference between using and not using an activation function for just a single linear layer.
A Multilayer Perceptron (MLP) can be built by simply adding on layers. Keras still trains this stack with backpropagation when fit() is called; the classic single-layer perceptron learning rule, which adjusts weights without propagating errors through hidden layers, is a different and much more limited procedure.
End of explanation |
11,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font size="8">Energy Meter Examples</font>
<br>
<font size="5">BayLibre's ACME Cape and IIOCapture</font>
<br>
<hr>
Import Required Modules
Step1: Target Configuration
Step2: Workload Execution and Power Consumption Sampling
Step3: Power Measurements Data | Python Code:
import logging
reload(logging)
logging.basicConfig(
format='%(asctime)-9s %(levelname)-8s: %(message)s',
datefmt='%I:%M:%S')
# Enable logging at INFO level
logging.getLogger().setLevel(logging.INFO)
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
Explanation: <font size="8">Energy Meter Examples</font>
<br>
<font size="5">BayLibre's ACME Cape and IIOCapture</font>
<br>
<hr>
Import Required Modules
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_IIOCapture",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "acme",
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
'channels' : {
'Device0' : 0,
'Device1' : 1,
}
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
Explanation: Target Configuration
End of explanation
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
channels_nrg, nrg_file = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
Explanation: Workload Execution and Power Consumption Sampling
End of explanation
logging.info("Measured channels energy:")
logging.info("%s", channels_nrg)
logging.info("Returned energy file:")
logging.info(" %s", nrg_file)
!cat $nrg_file
stats_file = nrg_file.replace('.json', '_stats.json')
logging.info("Complete energy stats:")
logging.info(" %s", stats_file)
!cat $stats_file
logging.info("Device0 stats (head)")
samples_file = os.path.join(te.res_dir, 'samples_Device0.csv')
!head $samples_file
logging.info("Device1 stats (head)")
samples_file = os.path.join(te.res_dir, 'samples_Device1.csv')
!head $samples_file
Explanation: Power Measurements Data
End of explanation |
11,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-bootstrap for evaluating pretrained LMs
This notebook shows an example of a paired multi-bootstrap analysis. This type of analysis is applicable for any kind of intervention that is applied independently to a particular pretraining (e.g. BERT) checkpoint, including
Step1: Load the run metadata. You can also just look through the directory, but this index file is convenient if (as we do here) you only want to download some of the files.
Step2: Load the dev set labels
Step4: Load the predictions
Step5: Compute the overall score for each run
Step6: We treat the selection of fine-tuning learning rate as part of the optimiztion process, and select the best learning rate for each task. Do this independently for each pretraining configuration
Step7: Run multibootstrap (paired)
base (L) is MultiBERTs with 1M steps, expt (L') is MultiBERTs with 2M steps. We have five seeds on base, and 25 seeds on expt but only five of these correspond to base seeds, which the multibootstrap() code will select automatically. Each seed will have 5 finetuning runs, which will be averaged over inside each sample.
Note that while in the paper we focus on metrics like accuracy that can be expressed as an average point loss, in general the multibootstrap procedure is valid for most common metrics like F1 or BLEU that behave asymptotically like one. As such, our API takes an arbitrary metric function f(y_pred, y_true) which will be called on each sample. See the docstring in multibootstrap.py for more detail.
Step8: Plot result distribution
Step9: The distributions of scores from 1M and 2M checkpoints seem to overlap significantly in the above plot, but because these are derived from the same samples of (seeds, examples), they are highly correlated. If we look at deltas, we see that in nearly all cases, the intervention (pretraining to 2M steps) will outperform the base model (1M steps), confirming the p-value of close to zero we computed above.
Step10: This plots the above side-by-side, to create Figure 4 from the paper. Note that the shapes won't match exactly due to randomness, but should be qualitatively similar. | Python Code:
#@title Import libraries and multibootstrap code
import re
import os
import numpy as np
import pandas as pd
import sklearn.metrics
import scipy.stats
from tqdm.notebook import tqdm # for progress indicator
import multibootstrap
scratch_dir = "/tmp/multiberts_mnli"
if not os.path.isdir(scratch_dir):
os.mkdir(scratch_dir)
preds_root = "https://storage.googleapis.com/multiberts/public/example-predictions/GLUE"
# Fetch development set labels
!curl -O $preds_root/MNLI_dev_labels --output-dir $scratch_dir
# Fetch predictions index file
!curl -O $preds_root/index.tsv --output-dir $scratch_dir
!ls $scratch_dir
Explanation: Multi-bootstrap for evaluating pretrained LMs
This notebook shows an example of a paired multi-bootstrap analysis. This type of analysis is applicable for any kind of intervention that is applied independently to a particular pretraining (e.g. BERT) checkpoint, including:
Interventions such as intermediate task training or pruning which directly manipulate a pretraining checkpoint.
Changes to any fine-tuning or probing procedure which is applied after pretraining.
In the most general case, we'll have a set of $k$ pretraining checkpoints (seeds), to which we'll apply our intervention, perform any additional transformations (like fine-tuning), then evaluate a downstream metric $L$ on a finite evaluation set. The multiple bootstrap procedure allows us to account for three sources of variance:
Variation between pretraining checkpoints
Expected variance due to a finite evaluation set
Variation due to fine-tuning or other procedure
2M vs. 1M pretraining steps
Here, we'll compare the MultiBERTs models run for 2M steps with those run for 1M steps, as described in Appendix E.1 of the paper. We'll use the five pretraining seeds (0,1,2,3,4) for which we have a dense set of checkpoints throughout training, such that we can treat the 2M runs as an "intervention" (training for additional time) over the 1M-step models and perform a paired analysis. From each pretraining checkpoint, we'll run fine-tuning 5 times for each of 4 learning rates, select the best learning rate (treating this as part of the optimization), and then run our multibootstrap procedure.
We'll use MultiNLI for this example, but the code below can easily be modified to run on other tasks.
End of explanation
task_name = "MNLI"
run_info = pd.read_csv(os.path.join(scratch_dir, 'index.tsv'), sep='\t')
# Filter to the runs we're interested in
mask = run_info.task == task_name
mask &= run_info.release == 'multiberts'
run_info = run_info[mask].copy()
run_info
Explanation: Load the run metadata. You can also just look through the directory, but this index file is convenient if (as we do here) you only want to download some of the files.
End of explanation
ALL_TASKS = list(run_info.task.unique())
print("Tasks:", ALL_TASKS)
task_labels = {}
for task_name in ALL_TASKS:
labels_file = os.path.join(scratch_dir, task_name + "_dev_labels")
labels = np.loadtxt(labels_file).astype(float if task_name == 'STS-B' else int)
task_labels[task_name] = labels
{k:len(v) for k, v in task_labels.items()}
Explanation: Load the dev set labels:
End of explanation
# Download all prediction files
for fname in tqdm(run_info.file):
!curl $preds_root/$fname -o $scratch_dir/$fname --create-dirs --silent
!ls $scratch_dir/MNLI
# Load all predictions (slow)
def get_preds(preds_tsv_path, task_name):
  """Load predictions, as array of integers."""
# [num_examples, num_classes]
preds = np.loadtxt(preds_tsv_path, delimiter="\t")
# [num_examples]
if task_name == 'STS-B':
return preds.astype(float)
else:
return np.argmax(preds, axis=1).astype(int)
all_preds = [get_preds(os.path.join(scratch_dir, row['file']), row['task'])
for _, row in tqdm(run_info.iterrows(), total=len(run_info))]
run_info['preds'] = all_preds
Explanation: Load the predictions:
End of explanation
def score_row(task, preds, **kw):
labels = task_labels[task]
if task == "STS-B":
metric = lambda x, y: scipy.stats.pearsonr(x, y)[0]
else:
metric = sklearn.metrics.accuracy_score
return metric(preds, labels)
all_scores = [score_row(**row) for _, row in tqdm(run_info.iterrows(), total=len(run_info))]
run_info['score'] = all_scores
Explanation: Compute the overall score for each run:
End of explanation
# Find the best finetuning LR for each task
def find_best_lr(sub_df):
return sub_df.groupby('lr').agg({'score': np.mean})['score'].idxmax()
gb = run_info.groupby(['release', 'n_steps', 'task'])
best_lr = gb.apply(find_best_lr)
best_lr
# Select only runs with the best LR (should be 1/4 of total)
gb = run_info.groupby(['release', 'n_steps', 'task'])
def filter_to_best_lr(sub_df):
lr = find_best_lr(sub_df)
return sub_df[sub_df.lr == lr]
best_lr_runs = gb.apply(filter_to_best_lr).reset_index(drop=True)
best_lr_runs
Explanation: We treat the selection of the fine-tuning learning rate as part of the optimization process, and select the best learning rate for each task. Do this independently for each pretraining configuration: original BERT, MultiBERTs 1M, and MultiBERTs 2M.
End of explanation
num_bootstrap_samples = 1000 #@param {type: "integer"}
mask = (best_lr_runs.release == 'multiberts')
run_df = best_lr_runs[mask]
stats = {}
for task_name in ALL_TASKS:
print("Task: ", task_name)
selected_runs = run_df[run_df.task == task_name].copy()
# Set intervention and seed columns
selected_runs['intervention'] = selected_runs.n_steps == '2M'
selected_runs['seed'] = selected_runs.pretrain_id
print("Available runs:", len(selected_runs))
labels = task_labels[task_name]
print("Labels:", labels.dtype, labels.shape)
preds = np.stack(selected_runs.preds)
print("Preds:", preds.dtype, preds.shape)
if task_name == "STS-B":
metric = lambda x, y: scipy.stats.pearsonr(x, y)[0]
else:
metric = sklearn.metrics.accuracy_score
samples = multibootstrap.multibootstrap(selected_runs, preds, labels,
metric, nboot=num_bootstrap_samples,
paired_seeds=True,
progress_indicator=tqdm)
stats[task_name] = multibootstrap.report_ci(samples, c=0.95)
print("") # newline
pd.concat({k: pd.DataFrame(v) for k,v in stats.items()}).transpose()
Explanation: Run multibootstrap (paired)
base (L) is MultiBERTs with 1M steps, expt (L') is MultiBERTs with 2M steps. We have five seeds on base, and 25 seeds on expt but only five of these correspond to base seeds, which the multibootstrap() code will select automatically. Each seed will have 5 finetuning runs, which will be averaged over inside each sample.
Note that while in the paper we focus on metrics like accuracy that can be expressed as an average point loss, in general the multibootstrap procedure is valid for most common metrics like F1 or BLEU that behave asymptotically like one. As such, our API takes an arbitrary metric function f(y_pred, y_true) which will be called on each sample. See the docstring in multibootstrap.py for more detail.
End of explanation
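# Aside (a sketch, not run in this notebook): because `metric` is just a function of
# (y_pred, y_true), swapping in another per-sample metric such as macro-F1 is a one-line change.
def macro_f1(y_pred, y_true):
    return sklearn.metrics.f1_score(y_true, y_pred, average='macro')
# ...then pass macro_f1 in place of `metric` to multibootstrap.multibootstrap(...).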
from matplotlib import pyplot
import seaborn as sns
sns.set_style('white')
%config InlineBackend.figure_format = 'retina' # make matplotlib plots look better
# Plot distribution of scores
var_name = 'Pretraining Steps'
val_name = "MNLI Accuracy"
bdf = pd.DataFrame(samples, columns=['1M', '2M']).melt(var_name=var_name, value_name=val_name)
bdf['x'] = 0
fig = pyplot.figure(figsize=(10, 7))
ax = fig.gca()
sns.violinplot(ax=ax, x=var_name, y=val_name, data=bdf, inner='quartile')
ax.set_title("MultiBERTs 1M vs 2M")
ax
Explanation: Plot result distribution
End of explanation
# Plot distribution of deltas L' - L
var_name = 'Pretraining Steps'
val_name = "MNLI Accuracy delta"
bdf = pd.DataFrame(samples, columns=['1M', '2M'])
bdf['deltas'] = bdf['2M'] - bdf['1M']
bdf = bdf.drop(axis=1, labels=['1M', '2M']).melt(var_name=var_name, value_name=val_name)
bdf['x'] = 0
fig = pyplot.figure(figsize=(5, 7))
ax = fig.gca()
sns.violinplot(ax=ax, x=var_name, y=val_name, data=bdf, inner='quartile',
palette='gray')
ax.set_title("MultiBERTs 1M vs 2M")
# ax.set_ylim(bottom=0)
ax
Explanation: The distributions of scores from 1M and 2M checkpoints seem to overlap significantly in the above plot, but because these are derived from the same samples of (seeds, examples), they are highly correlated. If we look at deltas, we see that in nearly all cases, the intervention (pretraining to 2M steps) will outperform the base model (1M steps), confirming the p-value of close to zero we computed above.
End of explanation
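# Sketch: an empirical one-sided p-value read straight off the bootstrap samples
# (column 0 = base/1M, column 1 = expt/2M, matching the DataFrame built above).
s = np.asarray(samples)
print('fraction of samples with delta <= 0:', (s[:, 1] - s[:, 0] <= 0).mean())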
# Plot distribution of results and deltas
fig, (a1, a2) = pyplot.subplots(nrows=1, ncols=2, figsize=(7,4),
gridspec_kw=dict(width_ratios=[2,1], wspace=0.33))
var_name = 'Pretraining Steps'
val_name = "MNLI Accuracy"
bdf = pd.DataFrame(samples, columns=['1M', '2M']).melt(var_name=var_name, value_name=val_name)
bdf['x'] = 0
sns.violinplot(ax=a1, x=var_name, y=val_name, data=bdf, inner='quartile')
var_name = 'Pretraining Steps'
val_name = "MNLI Accuracy delta"
bdf = pd.DataFrame(samples, columns=['1M', '2M'])
bdf['deltas'] = bdf['2M'] - bdf['1M']
bdf = bdf.drop(axis=1, labels=['1M', '2M']).melt(var_name=var_name, value_name=val_name)
bdf['x'] = 0
sns.violinplot(ax=a2, x=var_name, y=val_name, data=bdf, inner='quartile',
palette='gray')
Explanation: This plots the above side-by-side, to create Figure 4 from the paper. Note that the shapes won't match exactly due to randomness, but should be qualitatively similar.
End of explanation |
11,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 6.1.2 - Using word embeddings
Embedding layer with Keras
Step1: The layer transforms a 2D input tensor of integers of shape (number_of_samples, sequence_length) into a 3D floating-point tensor of shape (number_of_samples, sequence_length, embedding_dimensionality).
Such a tensor can be processed by an RNN layer or a 1D convolutional layer.
IMDB example
Step2: Model
Step3: Using pre-trained word embeddings
The data can be downloaded from
Step4: Tokenizing the data
Step5: GloVe Embedding
Download from
Step6: Model
Step7: Performance
Step8: The model overfits very quickly.
Model without pre-trained embeddings | Python Code:
import keras
keras.__version__
from keras.layers import Embedding
# The maximum number of tokens is equal to the maximum word index + 1
max_number_of_tokens = 1000
embedding_dimensionality = 64
embedding_layer = Embedding(max_number_of_tokens, embedding_dimensionality)
Explanation: Chapter 6.1.2 - Using word embeddings
Embedding layer with Keras
End of explanation
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
# Number of words considered as features
max_features = 10000
# Cutting reviews after only 20 words
sequence_max_length = 20
# Loading data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = max_features)
x_train.shape
x_train_sequence = pad_sequences(x_train, maxlen = sequence_max_length)
x_train_sequence.shape
x_train[0:2]
x_train[0][-20]  # the 20th-from-last word index of the first review
x_train_sequence[0, :]
x_train_sequence[0]
x_train_sequence[1]
Explanation: The layer transforms a 2D input tensor of integers, of shape (number_of_samples, sequence_length), into a 3D floating-point tensor of shape (number_of_samples, sequence_length, embedding_dimensionality).
Such a tensor can be processed by an RNN layer or a 1D convolutional layer.
IMDB example
End of explanation
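# Quick illustration of that 2D -> 3D shape change (a minimal sketch in the spirit of the
# Keras docs, not part of the original notebook).
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding
m = Sequential()
m.add(Embedding(1000, 64, input_length=20))
m.compile('rmsprop', 'mse')           # compiled only so that predict() can be called
dummy_ids = np.random.randint(1000, size=(32, 20))
print(m.predict(dummy_ids).shape)     # -> (32, 20, 64)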
from keras.models import Sequential
from keras.layers import Flatten, Dense
model = Sequential()
model.add(Embedding(input_dim = max_features, output_dim = 8, input_length = sequence_max_length))
model.add(Flatten())
model.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the model
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
model.summary()
# Training
history = model.fit(x = x_train_sequence,
y = y_train,
epochs = 10,
batch_size = 32,
validation_split = 0.2)
Explanation: Model
End of explanation
import os
imdb_dir = './data/Chapter 6.1.2 - Using word embeddings/aclImdb/'
train_dir = os.path.join(imdb_dir, 'train')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(train_dir, label_type)
for fname in os.listdir(dir_name):
# Taking into consideration files which are only .txt
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname), encoding="utf8")
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)
len(labels)
len(texts)
texts[0]
labels[0]
Explanation: Using pre-trained word embeddings
The data can be downloaded from: http://mng.bz/0tIo
End of explanation
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
# Using only first 100 words of each review
maxlen = 100
# Number of training samples
training_samples = 200
# Number of validation samples
validation_samples = 10000
# Tokenizing only top 10 000 words in the dataset.
max_words = 10000
# Initializing Tokenizer
tokenizer = Tokenizer(num_words = max_words)
# Fitting the Tokenizer on the text
tokenizer.fit_on_texts(texts)
# Text to sequence
sequences = tokenizer.texts_to_sequences(texts)
sequences[0:2]
# Word index
word_index = tokenizer.word_index
type(word_index)
first10pairs = {k: word_index[k] for k in list(word_index)[:10]}
first10pairs
# Padding the sequence
data = pad_sequences(sequences, maxlen = maxlen)
data.shape
labels = np.asarray(labels)
labels.shape
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
# Splitting the data into train and validation datasets
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
x_train.shape
x_val.shape
Explanation: Tokenizing the data
End of explanation
# Importing tqdm to show a progress bar
from tqdm import tqdm
glove_dir = './data/Chapter 6.1.2 - Using word embeddings/glove.6B/'
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'),
encoding = 'utf-8')
for line in tqdm(f):
values = line.split()
word = values[0]
coefs = np.asarray(values[1:],
dtype = 'float32')
embeddings_index[word] = coefs
f.close()
len(embeddings_index)
embedding_dim = 100
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
if i < max_words:
embedding_vector = embeddings_index.get(word)
# Words not found in the embedding index will be represented as zeros
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
embedding_matrix
Explanation: GloVe Embedding
Download from: http://nlp.stanford.edu/data/glove.6B.zip
End of explanation
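# Illustrative check (a sketch, not in the original notebook): look up two GloVe vectors from
# embeddings_index and compare them with cosine similarity. The example words are arbitrary
# choices assumed to be present in the glove.6B vocabulary.
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings_index['good'], embeddings_index['great']))  # related words -> higher value
print(cosine_similarity(embeddings_index['good'], embeddings_index['car']))    # unrelated words -> lower value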
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(input_dim = max_words,
output_dim = embedding_dim,
input_length = maxlen))
model.add(Flatten())
model.add(Dense(units = 32,
activation = 'relu'))
model.add(Dense(units = 1,
activation = 'sigmoid'))
model.summary()
# Loading pretrained word embeddings
model.layers[0].set_weights([embedding_matrix])
# Freezing the layer
model.layers[0].trainable = False
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics = ['acc'])
history = model.fit(x = x_train,
y = y_train,
epochs = 10,
batch_size = 32,
validation_data = (x_val, y_val))
model.save_weights('./saved_checkpoints/Chapter 6.1.2 - Using word embeddings/pre_trained_glove_model.h5')
Explanation: Model
End of explanation
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Performance
End of explanation
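# Small follow-up (a sketch, not in the original notebook): report the best validation accuracy
# from the history plotted above and the epoch at which it occurred.
best_epoch = val_acc.index(max(val_acc)) + 1
print("Best validation accuracy: {:.3f} (epoch {})".format(max(val_acc), best_epoch))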
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: The model overfits very quickly.
Model without pre-trained embeddings
End of explanation |
11,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Starting the Analysis Cluster
NEXUS utilizes Apache Spark running on Apache Mesos for its analytical functions. Now that the infrastructure has been started, we can start up the analysis cluster.
The analysis cluster consists of an Apache Mesos cluster and the NEXUS webapp Tornado server. The Mesos cluster we will be bringing up has one master node and three agent nodes. Apache Spark is already installed and configured on the three agent nodes, which will act as Spark executors for the NEXUS analytic functions.
Step 1
Step1: Step 3
Step2: Step 4
Step3: Step 5 | Python Code:
# TODO Run this cell to see the status of the Mesos slaves. You should see 3 slaves connected.
import requests
import json
response = requests.get('http://mesos-master:5050/state.json')
print(json.dumps(response.json()['slaves'], indent=2))
Explanation: Starting the Analysis Cluster
NEXUS utilizes Apache Spark running on Apache Mesos for its analytical functions. Now that the infrastructure has been started, we can start up the analysis cluster.
The analysis cluster consists of an Apache Mesos cluster and the NEXUS webapp Tornado server. The Mesos cluster we will be bringing up has one master node and three agent nodes. Apache Spark is already installed and configured on the three agent nodes, which will act as Spark executors for the NEXUS analytic functions.
Step 1: Start the Containers
We can use docker-compose again to start our containers.
TODO
Navigate to the directory containing the docker-compose.yml file for the analysis cluster
bash
$ cd ~/nexus/esip-workshop/docker/analysis
Use docker-compose to bring up the containers in the analysis cluster
bash
$ docker-compose up -d
Step 2: Verify the Cluster is Working
Now that the cluster has started, we can use various commands to ensure that it is operational and monitor its status.
TODO
List all running docker containers.
bash
$ docker ps
The output should look similar to this:
<pre style="white-space: pre;">
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e5589456a78a nexusjpl/nexus-webapp "/tmp/docker-entry..." 5 seconds ago Up 5 seconds 0.0.0.0:4040->4040/tcp, 0.0.0.0:8083->8083/tcp nexus-webapp
18e682b9af0e nexusjpl/spark-mesos-agent "/tmp/docker-entry..." 7 seconds ago Up 5 seconds mesos-agent1
8951841d1da6 nexusjpl/spark-mesos-agent "/tmp/docker-entry..." 7 seconds ago Up 6 seconds mesos-agent3
c0240926a4a2 nexusjpl/spark-mesos-agent "/tmp/docker-entry..." 7 seconds ago Up 6 seconds mesos-agent2
c97ad268833f nexusjpl/spark-mesos-master "/bin/bash -c './b..." 7 seconds ago Up 7 seconds 0.0.0.0:5050->5050/tcp mesos-master
90d370eb3a4e nexusjpl/jupyter "tini -- start-not..." 2 days ago Up 2 days 0.0.0.0:8000->8888/tcp jupyter
cd0f47fe303d nexusjpl/nexus-solr "docker-entrypoint..." 2 days ago Up 2 days 8983/tcp solr2
8c0f5c8eeb45 nexusjpl/nexus-solr "docker-entrypoint..." 2 days ago Up 2 days 8983/tcp solr3
27e34d14c16e nexusjpl/nexus-solr "docker-entrypoint..." 2 days ago Up 2 days 8983/tcp solr1
247f807cb5ec cassandra:2.2.8 "/docker-entrypoin..." 2 days ago Up 2 days 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp cassandra3
09cc86a27321 zookeeper "/docker-entrypoin..." 2 days ago Up 2 days 2181/tcp, 2888/tcp, 3888/tcp zk1
33e9d9b1b745 zookeeper "/docker-entrypoin..." 2 days ago Up 2 days 2181/tcp, 2888/tcp, 3888/tcp zk3
dd29e4d09124 cassandra:2.2.8 "/docker-entrypoin..." 2 days ago Up 2 days 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp cassandra2
11e57e0c972f zookeeper "/docker-entrypoin..." 2 days ago Up 2 days 2181/tcp, 2888/tcp, 3888/tcp zk2
2292803d942d cassandra:2.2.8 "/docker-entrypoin..." 2 days ago Up 2 days 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp cassandra1
</pre>
List the available Mesos slaves by running the cell below.
End of explanation
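# Optional helper (a sketch, not in the original notebook): poll the Mesos master until the
# expected number of agents has registered, using the same endpoint and field names as the
# status cell above.
import time

def wait_for_agents(expected=3, timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        slaves = requests.get('http://mesos-master:5050/state.json').json()['slaves']
        if len(slaves) >= expected:
            return slaves
        time.sleep(5)
    raise RuntimeError("Mesos agents did not register within {} seconds".format(timeout))

print(len(wait_for_agents()), "agents registered")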
import nexuscli
nexuscli.set_target("http://nexus-webapp:8083")
nexuscli.dataset_list()
Explanation: Step 3: List available Datasets
Now that the cluster is up, we can investigate the datasets available. Use the nexuscli module to list available datatsets.
TODO
Get a list of datasets by using the nexuscli module to issue a request to the nexus-webapp container that was just started.
End of explanation
# TODO Run this cell to produce a Time Series plot using AVHRR data.
%matplotlib inline
import matplotlib.pyplot as plt
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box
bbox = box(-150, 40, -120, 55)
datasets = ["AVHRR_OI_L4_GHRSST_NCEI"]
start_time = datetime(2013, 1, 1)
end_time = datetime(2013, 12, 31)
start = time.perf_counter()
ts, = nexuscli.time_series(datasets, bbox, start_time, end_time, spark=True)
print("Time Series took {} seconds to generate".format(time.perf_counter() - start))
plt.figure(figsize=(10,5), dpi=100)
plt.plot(ts.time, ts.mean, 'b-', marker='|', markersize=2.0, mfc='b')
plt.grid(b=True, which='major', color='k', linestyle='-')
plt.xlabel("Time")
plt.ylabel ("Sea Surface Temperature (C)")
plt.show()
Explanation: Step 4: Run a Time Series
Verify the analysis functions are working by running a simple Time Series.
TODO
Run the cell below to produce a time series plot using the analysis cluster you just started.
End of explanation
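# Small follow-up (a sketch, not in the original notebook): report the warmest and coolest points
# of the averaged series, assuming ts.time and ts.mean are array-like as the plotting cell above suggests.
import numpy as np
i_max, i_min = int(np.argmax(ts.mean)), int(np.argmin(ts.mean))
print("Warmest:", ts.time[i_max], ts.mean[i_max])
print("Coolest:", ts.time[i_min], ts.mean[i_min])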
# TODO Run this cell. You should see at least one successful Time Series Spark job.
import requests
response = requests.get('http://nexus-webapp:4040/api/v1/applications')
appId = response.json()[0]['id']
response = requests.get("http://nexus-webapp:4040/api/v1/applications/%s/jobs" % appId)
for job in response.json():
print(job['name'])
print('\t' + job['status'])
Explanation: Step 5: Check the Results of the Spark Job
The time series function in the previous cell will run on the Spark cluster. It is possible to use the Spark RESTful interface to determine the status of the Spark job.
TODO
Run the cell below to see the status of the Spark Job.
End of explanation |
11,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Model10
Step2: Feature functions(private)
Step3: Feature function(public)
Step4: Utility functions
Step5: GMM
Classifying questions
features
Step7: B. Modeling
Select model
Step8: Training and testing model
Step9: Writing result | Python Code:
import gzip
import pickle
from os import path
from collections import defaultdict
from numpy import sign
def load_buzz(root='../data', data=['train', 'test', 'questions'], format='pklz'):
    """Load buzz data as a dictionary.

    Pass a list of names via the data parameter to load only what you need.
    """
buzz_data = {}
for ii in data:
file_path = path.join(root, ii + "." + format)
with gzip.open(file_path, "rb") as fp:
buzz_data[ii] = pickle.load(fp)
return buzz_data
Explanation: Model10: GMM
A. Functions
There are four different functions.
Data reader: Reads data from file.
Feature functions(private): Functions that extract features are placed here; if you write a new feature function, add it to this section.
Feature function(public): Only this function is used directly for feature extraction.
Utility functions: All functions other than those mentioned above are placed here.
Data reader
End of explanation
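# Usage sketch (not in the original notebook): the data argument lets you load only a subset,
# e.g. just the questions table, using the same ../data paths and pklz format as above.
questions_only = load_buzz(data=['questions'])
print(list(questions_only.keys()))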
from numpy import sign, abs
def _feat_basic(bd, group):
X = []
for item in bd[group].items():
qid = item[1]['qid']
q = bd['questions'][qid]
#item[1]['q_length'] = max(q['pos_token'].keys())
item[1]['q_length'] = len(q['question'].split())
item[1]['category'] = q['category'].lower()
item[1]['answer'] = q['answer'].lower()
X.append(item[1])
return X
def _feat_sign_val(data):
for item in data:
item['sign_val'] = sign(item['position'])
def _get_pos(bd, sign_val=None):
    # Note: the bd argument here is actually bd['train'], not the full buzz dictionary
unwanted_index = []
pos_uid = defaultdict(list)
pos_qid = defaultdict(list)
for index, key in enumerate(bd):
if sign_val and sign(bd[key]['position']) != sign_val:
unwanted_index.append(index)
else:
pos_uid[bd[key]['uid']].append(bd[key]['position'])
pos_qid[bd[key]['qid']].append(bd[key]['position'])
return pos_uid, pos_qid, unwanted_index
def _get_avg_pos(bd, sign_val=None):
pos_uid, pos_qid, unwanted_index = _get_pos(bd, sign_val)
avg_pos_uid = {}
avg_pos_qid = {}
if not sign_val:
sign_val = 1
for key in pos_uid:
pos = pos_uid[key]
avg_pos_uid[key] = sign_val * (sum(pos) / len(pos))
for key in pos_qid:
pos = pos_qid[key]
avg_pos_qid[key] = sign_val * (sum(pos) / len(pos))
return avg_pos_uid, avg_pos_qid, unwanted_index
def _feat_avg_pos(data, bd, group, sign_val):
avg_pos_uid, avg_pos_qid, unwanted_index = _get_avg_pos(bd['train'], sign_val=sign_val)
if group == 'train':
for index in sorted(unwanted_index, reverse=True):
del data[index]
for item in data:
if item['uid'] in avg_pos_uid:
item['avg_pos_uid'] = avg_pos_uid[item['uid']]
else:
vals = avg_pos_uid.values()
item['avg_pos_uid'] = sum(vals) / float(len(vals))
if item['qid'] in avg_pos_qid:
item['avg_pos_qid'] = avg_pos_qid[item['qid']]
else:
vals = avg_pos_qid.values()
item['avg_pos_qid'] = sum(vals) / float(len(vals))
# Response position can be longer than length of question
if item['avg_pos_uid'] > item['q_length']:
item['avg_pos_uid'] = item['q_length']
if item['avg_pos_qid'] > item['q_length']:
item['avg_pos_qid'] = item['q_length']
Explanation: Feature functions(private)
End of explanation
def featurize(bd, group, sign_val=None, extra=None):
# Basic features
# qid(string), uid(string), position(float)
# answer'(string), 'potistion'(float), 'qid'(string), 'uid'(string)
X = _feat_basic(bd, group=group)
# Some extra features
if extra:
for func_name in extra:
func_name = '_feat_' + func_name
if func_name in ['_feat_avg_pos']:
globals()[func_name](X, bd, group=group, sign_val=sign_val)
else:
globals()[func_name](X)
if group == 'train':
y = []
for item in X:
y.append(item['position'])
del item['position']
return X, y
elif group == 'test':
return X
else:
raise ValueError(group, 'is not the proper type')
Explanation: Feature function(public)
End of explanation
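# Usage sketch (not in the original notebook): this mirrors how featurize is invoked later for
# the training split, with the extra sign and average-position features switched on.
X_demo, y_demo = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
print(len(X_demo), len(y_demo))
print(sorted(X_demo[0].keys()))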
import csv
def select(data, keys):
unwanted = data[0].keys() - keys
for item in data:
for unwanted_key in unwanted:
del item[unwanted_key]
return data
def write_result(test_set, predictions, file_name='guess.csv'):
predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_set.keys())])
predictions.insert(0,["id", "position"])
with open(file_name, "w") as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
Explanation: Utility functions
End of explanation
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag',
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type)
        mm.fit(X)
        Y = mm.predict(X)
        plt.subplot(len(models), 1, 1 + i)
        for k, color in enumerate(color_iter):
            plt.scatter(X[Y == k, 0], X[Y == k, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
from collections import UserDict
import numpy as np
class DictDict(UserDict):
def __init__(self, bd):
UserDict.__init__(self)
self._set_bd(bd)
def sub_keys(self):
return self[list(self.keys())[0]].keys()
def select(self, sub_keys):
vals = []
for key in self:
vals.append([self[key][sub_key] for sub_key in sub_keys])
return np.array(vals)
def sub_append(self, sub_key, values):
for index, key in enumerate(self):
self[key][sub_key] = values[index]
class Users(DictDict):
def _set_bd(self, bd):
pos_uid, _, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_uid:
u = np.array(pos_uid[key])
ave_pos_uid = sum(abs(u)) / float(len(u))
acc_ratio_uid = len(u[u > 0]) / float(len(u))
self[key] = {'ave_pos_uid': ave_pos_uid,
'acc_ratio_uid': acc_ratio_uid}
class Questions(DictDict):
def _set_bd(self, bd):
_, pos_qid, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_qid:
u = np.array(pos_qid[key])
ave_pos_qid = sum(abs(u)) / float(len(u))
acc_ratio_qid = len(u[u > 0]) / float(len(u))
self[key] = bd['questions'][key]
self[key]['ave_pos_qid'] = ave_pos_qid
self[key]['acc_ratio_qid'] = acc_ratio_qid
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', [str(x) for x in pred_cat_uid])
questions.sub_append('cat_qid', [str(x) for x in pred_cat_qid])
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
print(users[1])
print(questions[1])
Explanation: GMM
Classifying questions
features: avg_pos, accuracy rate
End of explanation
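# A hedged sketch (not in the original notebook): n_components=8 above was fixed by hand.
# One common way to choose it is to compare BIC scores across candidate values, using the
# same (deprecated) sklearn mixture.GMM API as the rest of this notebook.
bics = []
for n in range(2, 13):
    candidate = mixture.GMM(n_components=n, covariance_type='diag')
    candidate.fit(X_pos_qid)
    bics.append((n, candidate.bic(X_pos_qid)))
print(min(bics, key=lambda t: t[1]))  # (n_components, BIC) pair with the lowest BIC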
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
def transform(X):
for index, item in enumerate(X):
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['cat_uid'] = users[uid]['cat_uid']
else:
print('Not found uid:', uid)
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['cat_qid'] = questions[qid]['cat_qid']
else:
print('Not found qid:', qid)
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
item['cat_qid'] = most_pred_cat_qid
item['uid'] = str(uid)
item['qid'] = str(qid)
transform(X_train)
X_train[1]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
import math
from numpy import abs, sqrt
regressor_names = """
LinearRegression
LassoCV
ElasticNetCV
"""
print ("=== Linear Cross validation RMSE scores:")
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(normalize=True, n_jobs=multiprocessing.cpu_count()-1),
X_train_dict_vec, y_train,
cv=2,
scoring='mean_squared_error'
)
print (regressor, sqrt(abs(scores)).mean())
Explanation: B. Modeling
Select model
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
Explanation: Training and testing model
End of explanation
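# Optional sanity check (a sketch, not in the original notebook): hold out part of the training
# data to estimate RMSE before writing predictions for the real test set. Reuses the
# train_test_split and numpy sqrt imported in the model-selection cell above.
X_tr, X_ho, y_tr, y_ho = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
holdout_regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True)
holdout_regressor.fit(X_tr, y_tr)
holdout_rmse = sqrt(((holdout_regressor.predict(X_ho) - y_ho) ** 2).mean())
print("Holdout RMSE:", holdout_rmse)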
write_result(load_buzz()['test'], predictions)
Explanation: Writing result
End of explanation |