Unnamed: 0 (int64, 0-16k) | text_prompt (string, lengths 110-62.1k) | code_prompt (string, lengths 37-152k)
---|---|---|
11,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical 3 - Manipulator dynamics
In this practical our goal is to simulate the behaviour of a PUMA-type manipulator; let us start by importing the necessary libraries
Step1: And copying the function that generates homogeneous transformation matrices from the DH parameters
Step2: I have stored all the homogeneous transformation matrices in a single array, so that I can write a function that takes the transformations of every link and returns the transformations to each joint
Step3: Once we have this, I can obtain the position of each joint with a list comprehension
Step4: Exercise
Generate a list containing the rotation matrices of every joint, using list comprehensions
Step5: If we now declare a vector with all the degrees of freedom
Step6: we can obtain the translational Jacobian of each joint with
Step7: Exercise
Generate a list with the translational Jacobians
Step8: One step we have to do manually is defining the orientation vectors (made up of $\phi$, $\theta$ and $\psi$), since the system is over-constrained, but they are easy enough to obtain
Step9: and if we store a list with each of these vectors, the rotational Jacobian can be obtained in the same way as the translational one
Step10: Exercise
Generate a list with the rotational Jacobians
Step11: Another thing we can do automatically is define the inertia tensors needed for the manipulator; since this only depends on the number of degrees of freedom, I define a function that takes the vector with the state of the system, $q$, and computes a list with the tensors
Step12: I will define a list with all the link masses
Step13: So that we can write a function that takes these, the Jacobians and the inertia tensors, to compute the mass matrix
Step14: hmm... a bit large, let us try to simplify it a little
Step15: hmm... a bit better, but still not workable; the terms of the second and third degrees of freedom are simple, the problem is the first one, so let us simplify only that term: we will try to factor out $l_2^2$ and $l_3^2$ and then simplify
Step16: this looks acceptable, let us apply it to the whole matrix
Step17: Exercise
Write the code of a function that, given the mass matrix, a list with the degrees of freedom and the position indices, computes the requested Christoffel symbol; remember that the formula is
Step18: With this function we can compute any Christoffel symbol (remembering that indices in Python start at $0$)
Step19: and create a function that computes all the Christoffel symbols from this one
Step20: And with the Christoffel symbols, compute the Coriolis matrix
Step21: At this point we have a result compact enough to copy into the numeric definitions, so we move on to the gravity vector
Step22: And computing the potential energies | Python Code:
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
from sympy import var, Function, pi
var("l1:4")
var("m1:4")
var("g t")
q1 = Function("q1")(t)
q2 = Function("q2")(t)
q3 = Function("q3")(t)
Explanation: Practical 3 - Manipulator dynamics
In this practical our goal is to simulate the behaviour of a PUMA-type manipulator; let us start by importing the necessary libraries:
End of explanation
def DH(params):
from sympy import Matrix, sin, cos
a, d, α, θ = params
A = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)],
[sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)],
[0, sin(α), cos(α), d],
[0, 0, 0, 1]])
return A
A1 = DH([0, l1, pi/2, q1])
A2 = DH([l2, 0, 0, q2])
A3 = DH([l3, 0, 0, q3])
As = [A1, A2, A3]
As
Explanation: And copying the function that generates homogeneous transformation matrices from the DH parameters:
End of explanation
def transf_art(transformaciones):
from sympy import eye, simplify
Hs = [eye(4)]
for trans in transformaciones:
Hs.append(simplify(Hs[-1]*trans))
return Hs[1:]
Hs = transf_art(As)
Hs
Explanation: I have stored all the homogeneous transformation matrices in a single array, so that I can write a function that takes the transformations of every link and returns the transformations to each joint:
End of explanation
ps = [H[0:3, 3:4] for H in Hs]
ps
Explanation: Once we have this, I can obtain the position of each joint with a list comprehension:
End of explanation
# WRITE YOUR CODE HERE
raise NotImplementedError
Rs
from nose.tools import assert_equal
from sympy import Matrix, sin, cos, var
R1 = Matrix([[cos(q1), 0, sin(q1)],
[sin(q1), 0, -cos(q1)],
[0, 1, 0]])
R2 = Matrix([[cos(q1)*cos(q2), -sin(q2)*cos(q1), sin(q1)],
[sin(q1)*cos(q2), -sin(q2)*sin(q1), -cos(q1)],
[sin(q2), cos(q2), 0]])
R3 = Matrix([[cos(q1)*cos(q2+q3), -sin(q2+q3)*cos(q1), sin(q1)],
[sin(q1)*cos(q2+q3), -sin(q2+q3)*sin(q1), -cos(q1)],
[sin(q2+q3), cos(q2+q3), 0]])
assert_equal(Rs[0], R1)
assert_equal(Rs[1], R2)
assert_equal(Rs[2], R3)
Explanation: Exercise
Generate a list containing the rotation matrices of every joint, using list comprehensions
End of explanation
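# One possible sketch for the exercise above (an illustration, not necessarily the intended
# solution): the rotation matrix of each joint is the upper-left 3x3 block of the
# corresponding homogeneous transformation stored in Hs.
Rs = [H[0:3, 0:3] for H in Hs]
Rs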
q = [q1, q2, q3]
Explanation: If we now declare a vector with all the degrees of freedom:
End of explanation
ps[1].jacobian(q)
Explanation: we can obtain the translational Jacobian of each joint with:
End of explanation
# WRITE YOUR CODE HERE
raise NotImplementedError
Jvs
from nose.tools import assert_equal
assert_equal(Jvs[0], ps[0].jacobian(q))
assert_equal(Jvs[1], ps[1].jacobian(q))
assert_equal(Jvs[2], ps[2].jacobian(q))
Explanation: Exercise
Generate a list with the translational Jacobians
End of explanation
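# A possible sketch for the translational Jacobians exercise (illustration only): take the
# Jacobian of each joint position with respect to the degrees of freedom q.
Jvs = [p.jacobian(q) for p in ps]
Jvs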
o1 = Matrix([[0], [0], [q1]])
o1
o2 = Matrix([[0], [q2], [q1]])
o2
o3 = Matrix([[0], [q2 + q3], [q1]])
o3
Explanation: One step we have to do manually is defining the orientation vectors (made up of $\phi$, $\theta$ and $\psi$), since the system is over-constrained, but they are easy enough to obtain:
End of explanation
os = [o1, o2, o3]
Explanation: and if we store a list with each of these vectors, the rotational Jacobian can be obtained in the same way as the translational one:
End of explanation
# WRITE YOUR CODE HERE
raise NotImplementedError
Jωs
from nose.tools import assert_equal
assert_equal(Jωs[0], os[0].jacobian(q))
assert_equal(Jωs[1], os[1].jacobian(q))
assert_equal(Jωs[2], os[2].jacobian(q))
Explanation: Exercise
Generate a list with the rotational Jacobians
End of explanation
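# A possible sketch for the rotational Jacobians exercise (illustration only), built exactly
# like the translational case but from the orientation vectors.
Jωs = [o.jacobian(q) for o in os]
Jωs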
def tens_iner(q):
from sympy import Matrix
Is = []
for i in range(len(q)):
Js = [var("J_{" + str(i+1) + "_" + eje + "}") for eje in "xyz"]
I = Matrix([[Js[0], 0, 0], [0, Js[1], 0], [0, 0, Js[2]]])
Is.append(I)
return Is
Is = tens_iner(q)
Is
Explanation: Another thing we can do automatically is define the inertia tensors needed for the manipulator; since this only depends on the number of degrees of freedom, I define a function that takes the vector with the state of the system, $q$, and computes a list with the tensors:
End of explanation
ms = [m1, m2, m3]
Explanation: I will define a list with all the link masses:
End of explanation
def matriz_masas(ms, Jvs, Is, Jωs):
from sympy import zeros, expand, simplify
M = zeros(len(ms))
for m, Jv, I, Jω in zip(ms, Jvs, Is, Jωs):
M += simplify(expand(m*Jv.T*Jv + Jω.T*I*Jω))
return M
M = matriz_masas(ms, Jvs, Is, Jωs)
M
Explanation: So that we can write a function that takes these, the Jacobians and the inertia tensors, to compute the mass matrix:
End of explanation
from sympy import simplify
simplify(M)
Explanation: hmm... a bit large, let us try to simplify it a little:
End of explanation
M[0].collect(l2**2).collect(l3**2).collect(m3).simplify()
Explanation: hmm... a bit better, but still not workable; the terms of the second and third degrees of freedom are simple, the problem is the first one, so let us simplify only that term: we will try to factor out $l_2^2$ and $l_3^2$ and then simplify:
End of explanation
from sympy import collect  # needed for the collect() calls below
M = simplify(M.applyfunc(lambda M: collect(M, l2**2)).applyfunc(lambda M: collect(M, l3**2)).applyfunc(lambda M: collect(M, m3)))
M
Explanation: this looks acceptable, let us apply it to the whole matrix:
End of explanation
def christoffel(M, q, i, j, k):
from sympy import Rational, simplify
# WRITE YOUR CODE HERE
raise NotImplementedError
return simplify(simbolo)
from nose.tools import assert_equal
from sympy import Rational, expand
assert_equal(christoffel(M, q, 0,0,1), expand(Rational(1,2)*((m2+m3)*l2**2*sin(2*q2) + m3*l3**2*sin(2*(q2+q3))) + m3*l2*l3*sin(2*q2+q3)))
assert_equal(christoffel(M, q, 0,0,0), 0)
Explanation: Exercise
Write the code of a function that, given the mass matrix, a list with the degrees of freedom and the position indices, computes the requested Christoffel symbol; remember that the formula is:
$$
c_{ijk} = \frac{1}{2}\left\{\frac{\partial M_{kj}}{\partial q_i} + \frac{\partial M_{ki}}{\partial q_j} - \frac{\partial M_{ij}}{\partial q_k}\right\}
$$
End of explanation
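# A possible sketch of the christoffel function described above (one way to implement the
# formula; treat it as an illustration rather than the reference solution).
def christoffel(M, q, i, j, k):
    from sympy import Rational, simplify
    simbolo = Rational(1, 2)*(M[k, j].diff(q[i]) + M[k, i].diff(q[j]) - M[i, j].diff(q[k]))
    return simplify(simbolo)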
c113 = christoffel(M, q, 0,0,2)
c113
Explanation: With this function we can compute any Christoffel symbol (remembering that indices in Python start at $0$):
End of explanation
def simbolos_chris(M, q):
simbolos = []
for i in range(len(q)):
sim = []
for j in range(len(q)):
s = [christoffel(M, q, i, j, k) for k in range(len(q))]
sim.append(s)
simbolos.append(sim)
return simbolos
simbolos_christoffel = simbolos_chris(M, q)
simbolos_christoffel[0][0][2]
Explanation: and create a function that computes all the Christoffel symbols from this one:
End of explanation
def matriz_coriolis(simbolos, q̇):
from sympy import Matrix
coriolis = []
for k in range(len(simbolos)):
cor = []
for j in range(len(simbolos)):
c=0
for i in range(len(simbolos)):
c+= simbolos[i][j][k]*q̇[i]
cor.append(c)
coriolis.append(cor)
return Matrix(coriolis)
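# Assumption: q̇ (the vector of joint velocities) is not defined in the code shown above,
# so here it is taken to be the time derivatives of the degrees of freedom in q.
q̇ = [qi.diff(t) for qi in q]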
C = simplify(matriz_coriolis(simbolos_christoffel, q̇))
C
Explanation: And with the Christoffel symbols, compute the Coriolis matrix:
End of explanation
def ener_pot(params):
m, h = params
U = m*g*h
return U
Explanation: At this point we have a result compact enough to copy into the numeric definitions, so we move on to the gravity vector:
End of explanation
h1, h2, h3 = ps[0][2], ps[1][2], ps[2][2]
U1 = ener_pot([m1, h1])
U2 = ener_pot([m2, h2])
U3 = ener_pot([m3, h3])
U = U1 + U2 + U3
def vector_grav(U, q):
from sympy import Matrix
return Matrix([[U]]).jacobian(q).T
G = vector_grav(U, q)
G
Explanation: And computing the potential energies:
End of explanation |
11,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
these notes will not display in the slideshow
interactive dashboard application rendered as a slideshow
using
ipywidgets
plotly (express)
voila
reveal
Step1: loading the iris dataset
Step2: inspiration
https | Python Code:
import ipywidgets as widgets
import plotly.graph_objs as go
import plotly.express as px
Explanation: these notes will not display in the slideshow
interactive dashboard application rendered as a slideshow
using
ipywidgets
plotly (express)
voila
reveal
End of explanation
iris = px.data.iris()
iris.head()
fig = go.FigureWidget()
keys = list(iris.keys()[:4])
@widgets.interact(x=keys, y=keys[::-1])
def update_px(x, y):
p = px.scatter(iris, x, y, color='species', width=800, height=600)
for i in range(len(p.data)):
fig.data = []
fig.update(data = [d.to_plotly_json() for d in p.data])
fig.plotly_relayout(p.layout.to_plotly_json())
fig
Explanation: loading the iris dataset
End of explanation
fig2 = go.FigureWidget()
cmaps = {'Plotly': px.colors.qualitative.Plotly, 'D3': px.colors.qualitative.D3,
'Pastel': px.colors.qualitative.Pastel, 'Vivid': px.colors.qualitative.Vivid}
@widgets.interact(color_discrete_sequence=cmaps)
def update_cmap(color_discrete_sequence):
p = px.scatter_matrix(iris, dimensions=keys, color='species', color_discrete_sequence=color_discrete_sequence,
width=800, height=600)
for i in range(len(p.data)):
fig2.data = []
fig2.update(data = [d.to_plotly_json() for d in p.data])
fig2.plotly_relayout(p.layout.to_plotly_json())
fig2
Explanation: inspiration
https://plot.ly/python/plotly-express/
https://community.plot.ly/t/plotly-express-hover-selecting-event-only-partially-working/22136
alternatively, view pairwise relationships at a glance, with a qualitative colormap of your choice
End of explanation |
11,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data
Both datasets are text collections from this site.
TCP-ECCO (170mb uncompressed) can be downloaded here
Lincoln (700kb uncompressed) can be downloaded here
Step1: Initialize swhoosh index
Step2: Add documents to swhoosh index
Step3: Save and close, then reload swhoosh index
Note that the index MUST be saved -- it does not autosave!
Step4: Wrap Index in an IndexReader and get statistics needed for BM25
Step5: Get term info and postings for 'lincoln'
Step6: Run a BM25 search with Whoosh API
Step7: Compare results to Whoosh
Step8: Note
Step9: And repeating with multiprocessing enabled
Step10: Comparing to whoosh default
Step11: And to whoosh with multiprocessing enabled
Step12: Again, the matcher results are the same
Step14: Query Performance (BM25)
To benchmark this, we extract tokens from one of Lincoln's speeches (not in TCP-ECCO), and select queries at random from the resulting ~5000 tokens.
Step15: 3 word queries
Step16: 6 word queries
Step17: 30 word queries | Python Code:
import glob  # needed for the directory globbing below

def get_lincoln():
for filepath in sorted(glob.glob('Lincoln/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
def get_TCP():
for filepath in sorted(glob.glob('TCP-ECCO/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
Explanation: Data
Both datasets are text collections from this site.
TCP-ECCO (170mb uncompressed) can be downloaded here
Lincoln (700kb uncompressed) can be downloaded here
End of explanation
s = swhoosh.Index('randomIdx/randomIdx', simple_schema(), reset=True)
s.load()
Explanation: Initialize swhoosh index
End of explanation
t = time.time()
s.add_documents(get_lincoln())
print("TIME:", time.time() - t)
Explanation: Add documents to swhoosh index
End of explanation
s.save_and_close()
with open('randomIdx/randomIdx.manager', 'rb') as f:
s = pickle.load(f)
s.load()
Explanation: Save and close, then reload swhoosh index
Note that the index MUST be saved -- it does not autosave!
End of explanation
r = s.reader()
print(r.doc_count())
print(r.doc_frequency('doc',b'lincoln'))
print(r.doc_field_length(21, 'doc'))
print(r.avg_field_length('doc'))
Explanation: Wrap Index in an IndexReader and get statistics needed for BM25
End of explanation
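# Illustrative sketch (not from the original notebook): how the reader statistics above
# feed into BM25 scoring. k1 and b are the usual Okapi BM25 defaults, assumed here purely
# for illustration.
import math

def bm25_weight(tf, df, N, doc_len, avg_doc_len, k1=1.2, b=0.75):
    # inverse document frequency from document frequency df and collection size N
    idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
    # term-frequency component, normalised by document length
    norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm_tf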
# returns (overall frequency, num docs, start loc in postings file, postings size)
s._idx['doc'].terminfo(b'lincoln')
[swhoosh.postings.load2(a[2], a[1]) for a in s._idx['doc']._postings(b'lincoln')]
# what the postings look like: (docId, frequency, positions)
s._idx['doc']._postings(b'lincoln')
Explanation: Get term info and postings for 'lincoln'
End of explanation
qp = QueryParser("doc", schema=s._schema)
q = qp.parse("lincoln")
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Run a BM25 search with Whoosh API
End of explanation
def make_clean_index(ix_dirname, paths, procs=1):
ix = whoosh.index.create_in(ix_dirname, schema=simple_schema())
writer = ix.writer(procs=procs)
for filepath in paths:
add_doc(writer, filepath)
writer.commit()
return ix
def add_doc(writer, filepath):
with open(filepath, 'rb') as f:
text = f.read().decode('latin')
writer.add_document(doc=text, filepath=filepath)
t = time.time()
ix = make_clean_index('wind', sorted(glob.glob('Lincoln/*.txt')))
print("TIME:", time.time() - t)
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Compare results to Whoosh
End of explanation
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents(get_TCP())
print("TIME:", time.time() - t)
Explanation: Note: the BM25 scores returned by whoosh's default settings are a tiny bit smaller because the default whoosh reader adds 1 to the current document length for some reason (I don't think this is correct).
Indexing a bigger collection
End of explanation
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents_multiprocessing(get_TCP(), num_procs=4)
print("TIME:", time.time() - t)
s.save()
Explanation: And repeating with multiprocessing enabled:
End of explanation
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')))
print("TIME:", time.time() - t)
Explanation: Comparing to whoosh default:
End of explanation
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')), procs=4)
print("TIME:", time.time() - t)
Explanation: And to whoosh with multiprocessing enabled:
End of explanation
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
print('')
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
Explanation: Again, the matcher results are the same:
End of explanation
with open('randomIdx2/randomIdx2.manager', 'rb') as f:
s = pickle.load(f)
s.load()
ix = whoosh.index.open_dir('wind2')
import numpy as np
s1 = s.searcher()
s2 = ix.searcher()
qp = QueryParser("doc", schema=s._schema)
with open('Lincoln/24-speech-1856.txt', 'r', encoding='latin') as f:
data = f.read()
query_vocab = [t.text for t in s._schema['doc'].analyzer(data)]
print('Length of query vocab:',len(query_vocab))
def random_n_query(n):
    """Generates a random query of length n"""
return ' '.join(np.random.choice(query_vocab, size=n))
def benchmark_n_query(n, trials):
t_swhoosh, t_whoosh = 0, 0
for i in range(trials):
q = qp.parse(random_n_query(n))
t = time.time()
results = s1.search(q)
t_swhoosh += time.time() - t
t = time.time()
results = s2.search(q)
t_whoosh += time.time() - t
print('- Swhoosh time per query:', "{:.2f}".format(t_swhoosh / trials * 1000), "ms")
print('- Whoosh time per query:', "{:.2f}".format(t_whoosh / trials * 1000), "ms")
return t_swhoosh/trials, t_whoosh/trials
Explanation: Query Performance (BM25)
To benchmark this, we extract tokens from one of Lincoln's speeches (not in TCP-ECCO), and select queries at random from the resulting ~5000 tokens.
End of explanation
x, y = benchmark_n_query(3, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 3 word queries
End of explanation
x, y = benchmark_n_query(6, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 6 word queries
End of explanation
x, y = benchmark_n_query(30, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
Explanation: 30 word queries
End of explanation |
11,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https
Step1: Loading a model
HanLP's workflow is to load a model first. Model identifiers are stored in the hanlp.pretrained package, organised by NLP task.
Step2: Call hanlp.load to load it; the model is downloaded to the local cache automatically. Natural language processing is split into many tasks, and tokenization is only the most basic one. Rather than creating one model per task, it is better to use HanLP's joint model to perform several tasks at once:
Step3: Tokenization
The fewer tasks, the faster. For example, to run tokenization only, fine-grained by default:
Step4: Run coarse-grained tokenization:
Step5: Run fine-grained and coarse-grained tokenization at the same time:
Step6: coarse is the coarse-grained result, fine is the fine-grained one.
Note
The native API only accepts sentences as input, so split text into sentences first with the multilingual sentence-splitting model or a rule-based splitter. The RESTful API accepts full documents, sentences and pre-tokenized sentences. Apart from that, the RESTful and native APIs share exactly the same semantics, so they can be swapped seamlessly.
Custom dictionaries
A custom dictionary is a member variable of the tokenization task. To work with it, first get the tokenization task, taking the fine-grained standard as an example:
Step7: A custom dictionary is a member variable of the tokenization task:
Step8: HanLP supports custom dictionaries with two priorities, merging and forcing, to cover different scenarios.
Without any dictionary:
Step9: Forced mode
Forced mode prefers the longest forward match of a custom entry (use with care; see Chapter 2 of the book Introduction to Natural Language Processing):
Step10: Contrary to popular belief, giving the dictionary the highest priority is not necessarily a good thing: it can easily match custom words that should not be split out, creating ambiguity. The longer a custom word, the less likely ambiguity becomes. This motivates extending forced mode into a forced-correction feature.
Forced correction works similarly, but replaces a matched custom entry with the corresponding tokenization
Step11: Merging mode
Merging mode has lower priority than the statistical model: dict_combine runs longest matching on top of the model's output and merges the matched entries. It is the recommended mode in most cases.
Step12: Understanding this requires some algorithmic background; beginners can refer to the book Introduction to Natural Language Processing.
Words with spaces
Words containing spaces, tabs and other characters removed by the Transformer tokenizer must be provided as tuples:
Step13: Clever users should keep reading: a plain string in the tuple dictionary is in fact equivalent to every possible split of that string:
Step14: Word offsets
HanLP can output the original position of each word in the text, which is useful for search engines and similar scenarios. During lexical analysis, non-morpheme characters (spaces, newlines, tabs, etc.) are removed, so extra position information is needed to locate each word:
Step15: The return format is a triple (word, start offset, end offset), with offsets measured in characters. | Python Code:
!pip install hanlp -U
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Ftok_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
On Windows, Linux or macOS alike, installing HanLP takes a single line:
End of explanation
import hanlp
hanlp.pretrained.mtl.ALL # MTL multi-task models; see the model name for the specific tasks, and the last field of the name or the corresponding corpus for the language
Explanation: Loading a model
HanLP's workflow is to load a model first. Model identifiers are stored in the hanlp.pretrained package, organised by NLP task.
End of explanation
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)
Explanation: Call hanlp.load to load it; the model is downloaded to the local cache automatically. Natural language processing is split into many tasks, and tokenization is only the most basic one. Rather than creating one model per task, it is better to use HanLP's joint model to perform several tasks at once:
End of explanation
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
Explanation: Tokenization
The fewer tasks, the faster. For example, to run tokenization only, fine-grained by default:
End of explanation
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
Explanation: Run coarse-grained tokenization:
End of explanation
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok*')
Explanation: Run fine-grained and coarse-grained tokenization at the same time:
End of explanation
tok = HanLP['tok/fine']
tok
Explanation: coarse is the coarse-grained result, fine is the fine-grained one.
Note
The native API only accepts sentences as input, so split text into sentences first with the multilingual sentence-splitting model or a rule-based splitter. The RESTful API accepts full documents, sentences and pre-tokenized sentences. Apart from that, the RESTful and native APIs share exactly the same semantics, so they can be swapped seamlessly.
Custom dictionaries
A custom dictionary is a member variable of the tokenization task. To work with it, first get the tokenization task, taking the fine-grained standard as an example:
End of explanation
tok.dict_combine, tok.dict_force
Explanation: A custom dictionary is a member variable of the tokenization task:
End of explanation
tok.dict_force = tok.dict_combine = None
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
Explanation: HanLP supports custom dictionaries with two priorities, merging and forcing, to cover different scenarios.
Without any dictionary:
End of explanation
tok.dict_force = {'和服', '服务项目'}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
Explanation: Forced mode
Forced mode prefers the longest forward match of a custom entry (use with care; see Chapter 2 of the book Introduction to Natural Language Processing):
End of explanation
tok.dict_force = {'和服务': ['和', '服务']}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
Explanation: Contrary to popular belief, giving the dictionary the highest priority is not necessarily a good thing: it can easily match custom words that should not be split out, creating ambiguity. The longer a custom word, the less likely ambiguity becomes. This motivates extending forced mode into a forced-correction feature.
Forced correction works similarly, but replaces a matched custom entry with the corresponding tokenization:
End of explanation
tok.dict_force = None
tok.dict_combine = {'和服', '服务项目'}
HanLP("商品和服务项目", tasks='tok/fine').pretty_print()
Explanation: Merging mode
Merging mode has lower priority than the statistical model: dict_combine runs longest matching on top of the model's output and merges the matched entries. It is the recommended mode in most cases.
End of explanation
tok.dict_combine = {('iPad', 'Pro'), '2个空格'}
HanLP("如何评价iPad Pro ?iPad Pro有2个空格", tasks='tok/fine')['tok/fine']
Explanation: Understanding this requires some algorithmic background; beginners can refer to the book Introduction to Natural Language Processing.
Words with spaces
Words containing spaces, tabs and other characters removed by the Transformer tokenizer must be provided as tuples:
End of explanation
dict(tok.dict_combine.config["dictionary"]).keys()
Explanation: Clever users should keep reading: a plain string in the tuple dictionary is in fact equivalent to every possible split of that string:
End of explanation
tok.config.output_spans = True
sent = '2021 年\nHanLPv2.1 为生产环境带来次世代最先进的多语种NLP技术。'
word_offsets = HanLP(sent, tasks='tok/fine')['tok/fine']
print(word_offsets)
Explanation: Word offsets
HanLP can output the original position of each word in the text, which is useful for search engines and similar scenarios. During lexical analysis, non-morpheme characters (spaces, newlines, tabs, etc.) are removed, so extra position information is needed to locate each word:
End of explanation
for word, begin, end in word_offsets:
assert word == sent[begin:end]
Explanation: The return format is a triple (word, start offset, end offset), with offsets measured in characters.
End of explanation |
11,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel hypothesis testing in Shogun
Heiko Strathmann - [email protected] - http
Step1: Some Formal Basics (skip if you just want code examples)
To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypothesis $H_0
Step2: Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.
Step3: Quadratic Time MMD
We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http
Step4: Any sub-class of <a href="http
Step5: Now let us visualise distribution of MMD statistic under $H_0
Step6: Null and Alternative Distribution Illustrated
Visualise both distributions, $H_0
Step7: Different Ways to Approximate the Null Distribution for the Quadratic Time MMD
As already mentioned, permuting the data to access the null distribution is probably the method of choice, due to the efficient implementation in Shogun. There exist a couple of methods that are more sophisticated (and slower) and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.
The first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. The converges as
$$
m\mmd^2_b \rightarrow \sum_{l=1}^\infty \lambda_l z_l^2
$$
where $z_l\sim \mathcal{N}(0,2)$ are i.i.d. normal samples and $\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\hat\lambda_l=\frac{1}{m}\nu_l$ where $\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters
Step8: The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not loose a lot of accuracy while gaining a significant speedup. For slower decaying Eigenspectrums, this approximation might be more expensive.
Step9: The Gamma Moment Matching Approximation and Type I errors
$\DeclareMathOperator{\var}{var}$
Another method for approximating the null-distribution is by matching the first two moments of a <a href="http
Step10: As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similar to sampling the alternative distribution
Step11: We see that Gamma basically never rejects, which is inline with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.
Linear Time MMD on Gaussian Blobs
So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in <a href="http
Step12: We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute
$$
\mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-
k(x_{2i+1},y_{2i})
$$
where $ m_2=\lfloor\frac{m}{2} \rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only pairs of four points are considered at once, this allows to compute it on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.
We begin by illustrating how to pass data to <a href="http
Step13: Sometimes, one might want to use <a href="http
Step14: The Gaussian Approximation to the Null Distribution
As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http
Step15: Kernel Selection for the MMD -- Overview
$\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
Now which kernel do we actually use for our tests? So far, we just plugged in arbitrary ones. However, for kernel two-sample testing, it is possible to do something more clever.
Shogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the <a href="http
Step16: Now perform two-sample test with that kernel
Step17: For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (takes longer, reduce number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as samples are not in memory
Step18: And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection | Python Code:
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import shogun as sg
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Kernel hypothesis testing in Shogun
Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de
Soumyajit De - [email protected] - http://github.com/lambday
This notebook describes Shogun's framework for <a href="http://en.wikipedia.org/wiki/Statistical_hypothesis_testing">statistical hypothesis testing</a>. We begin by giving a brief outline of the problem setting and then describe various implemented algorithms.
All algorithms discussed here are instances of <a href="http://en.wikipedia.org/wiki/Kernel_embedding_of_distributions#Kernel_two_sample_test">kernel two-sample testing</a> with the maximum mean discrepancy, and are based on embedding probability distributions into <a href="http://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space">Reproducing Kernel Hilbert Spaces</a> (RKHS).
There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours.
End of explanation
# use scipy for generating samples
from scipy.stats import laplace, norm
def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=np.sqrt(0.5)):
# sample from both distributions
X=norm.rvs(size=n)*np.sqrt(sigma2)+mu
Y=laplace.rvs(size=n, loc=mu, scale=b)
return X,Y
mu=0.0
sigma2=1
b=np.sqrt(0.5)
n=220
X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)
# plot both densities and histograms
plt.figure(figsize=(18,5))
plt.suptitle("Gaussian vs. Laplace")
plt.subplot(121)
Xs=np.linspace(-2, 2, 500)
plt.plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))
plt.plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))
plt.title("Densities")
plt.xlabel("$x$")
plt.ylabel("$p(x)$")
plt.subplot(122)
plt.hist(X, alpha=0.5)
plt.xlim([-5,5])
plt.ylim([0,100])
plt.hist(Y,alpha=0.5)
plt.xlim([-5,5])
plt.ylim([0,100])
plt.legend(["Gaussian", "Laplace"])
plt.title('Samples');
Explanation: Some Formal Basics (skip if you just want code examples)
To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypothesis $H_0:p=q$, which is the complement of an alternative-hypothesis $H_A$.
To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The null-distribution corresponds to test statistic samples under the model that $H_0$ holds; the alternative-distribution corresponds to test statistic samples under the model that $H_A$ holds.
In practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.
There are two different kinds of errors in hypothesis testing:
A type I error is made when $H_0: p=q$ is wrongly rejected. That is, the test says that the samples are from different distributions when they are not.
A type II error is made when $H_A: p\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.
A so-called consistent test achieves zero type II error for a fixed type I error, as it sees more data.
To decide whether to reject $H_0$, one could set a threshold, say at the $95\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies below that threshold. This means that the chance that the samples were generated under $H_0$ are $5\%$. We call this number the test power $\alpha$ (in this case $\alpha=0.05$). It is an upper bound on the probability for a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called p-value, and to compare the p-value against a desired test power, say $\alpha=0.05$, by hand. The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.
In order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this is called the permutation test, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for every of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. Shogun comes with an extremely optimized implementation though. For completeness, Shogun also includes a number of more sohpisticated ways of approximating the null distribution.
Base class for Hypothesis Testing
Shogun implements statistical testing in the abstract class <a href="http://shogun.ml/HypothesisTest">HypothesisTest</a>. All implemented methods will work with this interface at their most basic level. We here focus on <a href="http://shogun.ml/TwoSampleTest">TwoSampleTest</a>. This class offers methods to
compute the implemented test statistic,
compute p-values for a given value of the test statistic,
compute a test threshold for a given p-value,
approximate the null distribution, e.g. perform the permutation test and
performing a full two-sample test, and either returning a p-value or a binary rejection decision. This method is most useful in practice. Note that the behaviour depends on the used test statistic.
Kernel Two-Sample Testing with the Maximum Mean Discrepancy
$\DeclareMathOperator{\mmd}{MMD}$
An important class of hypothesis tests are the two-sample tests.
In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some arbitrary domains $\mathcal{X}, \mathcal{Y}$ respectively, and i.i.d. samples $X=\{x_i\}_{i=1}^m\subseteq \mathcal{X}\sim p$ and $Y=\{y_i\}_{i=1}^n\subseteq \mathcal{Y}\sim q$, the two-sample test distinguishes the hypotheses
\begin{align}
H_0: p=q\\
H_A: p\neq q
\end{align}
In order to solve this problem, it is desirable to have a criterion than takes a positive unique value if $p\neq q$, and zero if and only if $p=q$. The so called Maximum Mean Discrepancy (MMD), has this property and allows to distinguish any two probability distributions, if used in a reproducing kernel Hilbert space (RKHS). It is the distance of the mean embeddings $\mu_p, \mu_q$ of the distributions $p,q$ in such a RKHS $\mathcal{F}$ -- which can also be expressed in terms of expectation of kernel functions, i.e.
\begin{align}
\mmd[\mathcal{F},p,q]&=||\mu_p-\mu_q||_\mathcal{F}^2\\
&=\textbf{E}_{x,x'}\left[ k(x,x')\right]-
2\textbf{E}_{x,y}\left[ k(x,y)\right]
+\textbf{E}_{y,y'}\left[ k(y,y')\right]
\end{align}
Note that this formulation does not assume any form of the input data, we just need a kernel function whose feature space is a RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (<a href="http://shogun.ml/DenseFeatures">DenseFeatures</a>, <a href="http://shogun.ml/SparseFeatures">SparseFeatures</a>, <a href="http://shogun.ml/CStringFeatures">CStringFeatures</a>, etc), as long as we or you provide a positive definite kernel function under the interface of <a href="http://shogun.ml/Kernel">Kernel</a>.
We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods how to approximate the null-distribution in order to construct a two-sample test.
Running Example Data. Gaussian vs. Laplace
In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian
$p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
with mean $\mu$ and variance $\sigma^2$, which is compared against the 1D Laplace distribution
$p(x)=\frac{1}{2b}\exp\left(-\frac{|x-\mu|}{b}\right)$
with the same mean $\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\sqrt{\frac{1}{2}}$, which means that $2b^2=\sigma^2=1$.
End of explanation
print("Gaussian vs. Laplace")
print("Sample means: %.2f vs %.2f" % (np.mean(X), np.mean(Y)))
print("Samples variances: %.2f vs %.2f" % (np.var(X), np.var(Y)))
Explanation: Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.
End of explanation
# turn data into Shogun representation (columns vectors)
feat_p=sg.create_features(X.reshape(1,len(X)))
feat_q=sg.create_features(Y.reshape(1,len(Y)))
# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=sg.create_kernel("GaussianKernel", width=kernel_width)
# create mmd instance of test-statistic
mmd=sg.QuadraticTimeMMD()
mmd.set_kernel(kernel)
mmd.set_p(feat_p)
mmd.set_q(feat_q)
# compute biased and unbiased test statistic (default is unbiased)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
biased_statistic=mmd.compute_statistic()
mmd.set_statistic_type(sg.ST_UNBIASED_FULL)
statistic=unbiased_statistic=mmd.compute_statistic()
print("%d x MMD_b[X,Y]^2=%.2f" % (len(X), biased_statistic))
print("%d x MMD_u[X,Y]^2=%.2f" % (len(X), unbiased_statistic))
Explanation: Quadratic Time MMD
We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http://shogun.ml/QuadraticTimeMMD">QuadraticTimeMMD</a>, which accepts any type of features in Shogun; we use it on the above toy problem.
An unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples
$$
\mmd_u[\mathcal{F},X,Y]^2=\frac{1}{m(m-1)}\sum_{i=1}^m\sum_{j\neq i}^mk(x_i,x_j) + \frac{1}{n(n-1)}\sum_{i=1}^n\sum_{j\neq i}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
$$
A biased estimate would be
$$
\mmd_b[\mathcal{F},X,Y]^2=\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^mk(x_i,x_j) + \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
$$
Computing the test statistic using <a href="http://shogun.ml/QuadraticTimeMMD">QuadraticTimeMMD</a> does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of both types. Both statistics' computational costs are quadratic both in time and space. Note that the method returns $m\mmd_b[\mathcal{F},X,Y]^2$ since null distribution approximations work on $m$ times null distribution. Here is how the test statistic itself is computed.
End of explanation
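# Illustrative sketch (added for clarity, not part of the original notebook): the biased and
# unbiased MMD^2 estimates written out directly in NumPy for 1D data with a Gaussian kernel,
# so the formulas above can be compared against Shogun's output. The kernel width handling
# here is a simplifying assumption and need not match Shogun's GaussianKernel parametrisation.
def mmd2_numpy(X, Y, sigma=1.0):
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * sigma**2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    mmd2_biased = Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
    mmd2_unbiased = ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                     + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
                     - 2 * Kxy.mean())
    return mmd2_biased, mmd2_unbiased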
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(200)
# now show a couple of ways to compute the test
# compute p-value for computed test statistic
p_value=mmd.compute_p_value(statistic)
print("P-value of MMD value %.2f is %.2f" % (statistic, p_value))
# compute threshold for rejecting H_0 for a given test power
alpha=0.05
threshold=mmd.compute_threshold(alpha)
print("Threshold for rejecting H0 with a test power of %.2f is %.2f" % (alpha, threshold))
# performing the test by hand given the above results, note that those two are equivalent
if statistic>threshold:
print("H0 is rejected with confidence %.2f" % alpha)
if p_value<alpha:
print("H0 is rejected with confidence %.2f" % alpha)
# or, compute the full two-sample test directly
# fixed test power, binary decision
binary_test_result=mmd.perform_test(alpha)
if binary_test_result:
print("H0 is rejected with confidence %.2f" % alpha)
Explanation: Any sub-class of <a href="http://www.shogun.ml/HypothesisTest">HypothesisTest</a> can approximate the null distribution using permutation/bootstrapping. This approach is guaranteed to produce consistent results; however, it might take a long time as, for each sample of the null distribution, the test statistic has to be computed for a different permutation of the data. Shogun's implementation is highly optimized, exploiting low-level CPU caching and multiple available cores.
End of explanation
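# Conceptual sketch of what permutation sampling does (illustration only; Shogun's internal
# implementation is far more efficient). 'statistic' is any two-sample statistic, e.g. an
# MMD^2 estimate.
def permutation_null_samples(statistic, X, Y, num_null=200):
    Z = np.concatenate([X, Y])
    samples = np.zeros(num_null)
    for i in range(num_null):
        perm = np.random.permutation(len(Z))
        samples[i] = statistic(Z[perm[:len(X)]], Z[perm[len(X):]])
    return samples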
num_samples=500
# sample null distribution
null_samples=mmd.sample_null()
# sample alternative distribution, generate new data for that
alt_samples=np.zeros(num_samples)
for i in range(num_samples):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
feat_p=sg.create_features(np.reshape(X, (1,len(X))))
feat_q=sg.create_features(np.reshape(Y, (1,len(Y))))
# TODO: reset pre-computed kernel here
mmd.set_p(feat_p)
mmd.set_q(feat_q)
alt_samples[i]=mmd.compute_statistic()
np.std(alt_samples)
Explanation: Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\neq q$. Sample both null and alternative distribution for that. Use the interface of <a href="http://www.shogun.ml/HypothesisTest">HypothesisTest</a> to sample from the null distribution (permutations, re-computing of test statistic is done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed and infinite data is needed. This is not needed in practice; it is done here only for illustration purposes.
End of explanation
def plot_alt_vs_null(alt_samples, null_samples, alpha):
plt.figure(figsize=(18,5))
plt.subplot(131)
plt.hist(null_samples, 50, color='blue')
plt.title('Null distribution')
plt.subplot(132)
plt.title('Alternative distribution')
plt.hist(alt_samples, 50, color='green')
plt.subplot(133)
plt.hist(null_samples, 50, color='blue')
plt.hist(alt_samples, 50, color='green', alpha=0.5)
plt.title('Null and alternative distriution')
# find (1-alpha) element of null distribution
null_samples_sorted=np.sort(null_samples)
quantile_idx=int(len(null_samples)*(1-alpha))
quantile=null_samples_sorted[quantile_idx]
plt.axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')
plt.legend();
plot_alt_vs_null(alt_samples, null_samples, alpha)
Explanation: Null and Alternative Distribution Illustrated
Visualise both distributions; $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\alpha)$-quantile of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II error:
type I error is the area of the null distribution being right of the threshold
type II error is the area of the alternative distribution being left from the threshold
End of explanation
# optional: plot spectrum of joint kernel matrix
# TODO: it would be good if there was a way to extract the joint kernel matrix for all kernel tests
# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
sg.as_kernel(mmd.get("kernel")).init(feats_p_q, feats_p_q)
K=sg.as_kernel(mmd.get("kernel")).get_kernel_matrix()
w,_=np.linalg.eig(K)
# visualise K and its spectrum (only up to threshold)
plt.figure(figsize=(18,5))
plt.subplot(121)
plt.imshow(K, interpolation="nearest")
plt.title("Kernel matrix K of joint data $X$ and $Y$")
plt.subplot(122)
thresh=0.1
plt.plot(w[:len(w[w>thresh])])
plt.title("Eigenspectrum of K until component %d" % len(w[w>thresh]));
Explanation: Different Ways to Approximate the Null Distribution for the Quadratic Time MMD
As already mentioned, permuting the data to access the null distribution is probably the method of choice, due to the efficient implementation in Shogun. There exist a couple of methods that are more sophisticated (and slower) and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.
The first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. It converges as
$$
m\mmd^2_b \rightarrow \sum_{l=1}^\infty \lambda_l z_l^2
$$
where $z_l\sim \mathcal{N}(0,2)$ are i.i.d. normal samples and $\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\hat\lambda_l=\frac{1}{m}\nu_l$ where $\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:
Number of samples from null-distribution. The more, the more accurate.
Number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use. The more, the better the results get. However, the Eigen-spectrum of the joint gram matrix usually decreases very fast. Plotting the Spectrum can help. See [2] for details.
If the kernel matrices are diagonal dominant, this method is likely to fail. For that and more details, see the original paper. Computational costs are likely to be larger than permutation testing, due to the efficient implementation of the latter: Eigenvalues of the gram matrix cost $\mathcal{O}(m^3)$.
Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class <a href="http://shogun.ml/QuadraticTimeMMD">QuadraticTimeMMD</a>. This method only works with the biased statistic.
End of explanation
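# Conceptual sketch of the spectrum approximation described above (illustration only;
# NAM_MMD2_SPECTRUM does this internally): estimate eigenvalues of the centred joint kernel
# matrix and sample sum_l lambda_l * z_l^2 with z_l ~ N(0, 2).
def spectrum_null_samples(K, m, num_eigen, num_samples=500):
    H = np.eye(len(K)) - np.ones_like(K) / len(K)
    lambdas = np.linalg.eigvalsh(H.dot(K).dot(H))[::-1][:num_eigen] / m
    z2 = 2 * np.random.randn(num_samples, num_eigen)**2
    return z2.dot(lambdas)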
# threshold for eigenspectrum
thresh=0.1
# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])
# finally, do the test, use biased statistic
mmd.set_statistic_type(sg.ST_BIASED_FULL)
#tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)
mmd.spectrum_set_num_eigenvalues(num_eigen)
mmd.set_num_null_samples(num_samples)
# the usual test interface
statistic=mmd.compute_statistic()
p_value_spectrum=mmd.compute_p_value(statistic)
print("Spectrum: P-value of MMD test is %.2f" % p_value_spectrum)
# compare with ground truth from permutation test
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
p_value_permutation=mmd.compute_p_value(statistic)
print("Bootstrapping: P-value of MMD test is %.2f" % p_value_permutation)
Explanation: The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For slower decaying Eigenspectrums, this approximation might be more expensive.
End of explanation
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)
# the usual test interface
statistic=mmd.compute_statistic()
p_value_gamma=mmd.compute_p_value(statistic)
print("Gamma: P-value of MMD test is %.2f" % p_value_gamma)
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
p_value_spectrum=mmd.compute_p_value(statistic)
print("Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum)
Explanation: The Gamma Moment Matching Approximation and Type I errors
$\DeclareMathOperator{\var}{var}$
Another method for approximating the null-distribution is by matching the first two moments of a <a href="http://en.wikipedia.org/wiki/Gamma_distribution">Gamma distribution</a> and then computing the quantiles of that. This does not result in a consistent test, but usually also gives good results while being very fast. However, there are distributions where the method fails. Therefore, the type I error should always be monitored. Described in [2]. It uses
$$
m\mmd_b(Z) \sim \frac{x^{\alpha-1}\exp(-\frac{x}{\beta})}{\beta^\alpha \Gamma(\alpha)}
$$
where
$$
\alpha=\frac{(\textbf{E}(\text{MMD}_b(Z)))^2}{\var(\text{MMD}_b(Z))} \qquad \text{and} \qquad
\beta=\frac{m \var(\text{MMD}_b(Z))}{(\textbf{E}(\text{MMD}_b(Z)))^2}
$$
Then, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\mathcal{O}(m^2)$. Note that the test is parameter free. It only works with the biased statistic.
End of explanation
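# Sketch of the gamma moment matching idea above (illustration; NAM_MMD2_GAMMA computes the
# required moments internally in linear time). Given estimates of the mean and variance of
# MMD_b(Z), the null threshold comes from the fitted gamma distribution:
from scipy.stats import gamma
def gamma_threshold(mean_mmd, var_mmd, m, alpha=0.05):
    a = mean_mmd**2 / var_mmd            # shape parameter alpha
    scale = m * var_mmd / mean_mmd**2    # scale parameter beta
    return gamma.ppf(1 - alpha, a, scale=scale)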
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=np.zeros(num_trials)
rejections_spectrum=np.zeros(num_trials)
rejections_bootstrap=np.zeros(num_trials)
num_samples=50
alpha=0.05
for i in range(num_trials):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
# simulate H0 via merging samples before computing the
Z=np.hstack((X,Y))
X=Z[:len(X)]
Y=Z[len(X):]
feat_p=sg.create_features(np.reshape(X, (1,len(X))))
feat_q=sg.create_features(np.reshape(Y, (1,len(Y))))
# gamma
mmd=sg.QuadraticTimeMMD()
mmd.set_p(feat_p)
mmd.set_q(feat_q)
mmd.set_kernel(kernel)
mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_gamma[i]=mmd.perform_test(alpha)
# spectrum
mmd=sg.QuadraticTimeMMD()
mmd.set_p(feat_p)
mmd.set_q(feat_q)
mmd.set_kernel(kernel)
mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)
mmd.spectrum_set_num_eigenvalues(num_eigen)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_spectrum[i]=mmd.perform_test(alpha)
# bootstrap (precompute kernel)
mmd=sg.QuadraticTimeMMD()
mmd.set_p(feat_p)
mmd.set_q(feat_q)
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=sg.CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(sg.ST_BIASED_FULL)
rejections_bootstrap[i]=mmd.perform_test(alpha)
convergence_gamma=np.cumsum(rejections_gamma)/(np.arange(num_trials)+1)
convergence_spectrum=np.cumsum(rejections_spectrum)/(np.arange(num_trials)+1)
convergence_bootstrap=np.cumsum(rejections_bootstrap)/(np.arange(num_trials)+1)
print("Average rejection rate of H0 for Gamma is %.2f" % np.mean(convergence_gamma))
print("Average rejection rate of H0 for Spectrum is %.2f" % np.mean(convergence_spectrum))
print("Average rejection rate of H0 for Bootstrapping is %.2f" % np.mean(rejections_bootstrap))
Explanation: As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average results. Below we compare type I errors of all methods for approximating the null distribution. This will take a while.
End of explanation
# parameters of dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=np.pi/4
# these are streaming features
gen_p=sg.GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=sg.GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(num_plot)
features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))
data=features.get("feature_matrix")
plt.figure(figsize=(18,5))
plt.subplot(121)
plt.grid(True)
plt.plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')
plt.title('$X\sim p$')
plt.subplot(122)
plt.grid(True)
plt.plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)
plt.title('$Y\sim q$')
plt.show()
Explanation: We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.
Linear Time MMD on Gaussian Blobs
So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a> can help here, as it accepts data under the streaming interface <a href="http://shogun.ml/StreamingFeatures">StreamingFeatures</a>, which deliver data one-by-one.
And it can do more cool things, for example choose the best single (or combined) kernel for you. But we need a more fancy dataset for that to show its power. We will use one of Shogun's streaming based data generator, <a href="http://shogun.ml/GaussianBlobsDataGenerator">GaussianBlobsDataGenerator</a> for that. This dataset consists of two distributions which are a grid of Gaussians where in one of them, the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing.
End of explanation
block_size=100
# if features are already under the streaming interface, just pass them
mmd=sg.LinearTimeMMD()
mmd.set_p(gen_p)
mmd.set_q(gen_q)
mmd.set_kernel(kernel)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print("MMD_l[X,Y]^2=%.2f" % statistic)
# note: due to the streaming nature, successive calls of compute statistic use different data
# and produce different results. Data cannot be stored in memory
for _ in range(5):
print("MMD_l[X,Y]^2=%.2f" % mmd.compute_statistic())
Explanation: We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute
$$
\mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-
k(x_{2i+1},y_{2i})
$$
where $ m_2=\lfloor\frac{m}{2} \rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only groups of four points are considered at once, this allows it to be computed on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.
We begin by illustrating how to pass data to <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a>. In order not to lose performance due to overhead, it is possible to specify a block size for the data stream.
End of explanation
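To make the formula above concrete, here is a small NumPy-only sketch of the estimator (a hypothetical helper, not the Shogun implementation; it assumes a Gaussian kernel with a hand-picked bandwidth sigma):
import numpy as np
def linear_time_mmd_sketch(X, Y, sigma=1.0):
    # Gaussian kernel evaluated row-by-row on equally sized arrays
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=1) / (2 * sigma ** 2))
    m2 = min(len(X), len(Y)) // 2
    x_even, x_odd = X[0:2 * m2:2], X[1:2 * m2:2]   # x_{2i}, x_{2i+1}
    y_even, y_odd = Y[0:2 * m2:2], Y[1:2 * m2:2]   # y_{2i}, y_{2i+1}
    h = k(x_even, x_odd) + k(y_even, y_odd) - k(x_even, y_odd) - k(x_odd, y_even)
    return h.mean(), h                              # the statistic and the per-block terms
The vector of per-block terms h is also what the variance estimate discussed further below is computed from.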
# data source
gen_p=sg.GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=sg.GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
num_samples=100
print("Number of data is %d" % num_samples)
# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(num_samples)
data_q=gen_q.get_streamed_features(num_samples)
# example to create mmd (note that num_samples can be at most the number of data points in memory)
mmd=sg.LinearTimeMMD()
mmd.set_p(data_p)
mmd.set_q(data_q)
mmd.set_kernel(sg.create_kernel("GaussianKernel", width=1))
mmd.set_num_blocks_per_burst(100)
print("Linear time MMD statistic: %.2f" % mmd.compute_statistic())
Explanation: Sometimes, one might want to use <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a> with data that is stored in memory. In that case, it is easy to convert data in the form of, for example, <a href="http://shogun.ml/StreamingDenseFeatures">StreamingDenseFeatures</a> into <a href="http://shogun.ml/DenseFeatures">DenseFeatures</a>.
End of explanation
mmd=sg.LinearTimeMMD()
mmd.set_p(gen_p)
mmd.set_q(gen_q)
mmd.set_kernel(kernel)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
print("m=%d samples from p and q" % m)
print("Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection"))
print("P-value test result is %.2f" % mmd.compute_p_value(mmd.compute_statistic()))
Explanation: The Gaussian Approximation to the Null Distribution
As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a> operates on streaming features, new data is taken from the stream in every iteration.
Bootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.
Since both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for
$$
\var[\mmd_l^2[\mathcal{F},X,Y]]
$$
can simply be computed by computing the empirical variance of
$$
k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \qquad (1\leq i\leq m_2)
$$
A normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.
<a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a> allows approximating the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice, since separate calls for computing the statistic and the p-value will operate on different data from the stream. Below, we compute the test on a large amount of data (it is impossible to run the quadratic time MMD here as the kernel matrices cannot be stored in memory).
End of explanation
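For intuition, the p-value under this Gaussian approximation can be sketched in a few lines of NumPy/SciPy (again a hypothetical helper operating on the per-block terms h from the earlier sketch, not the Shogun code path):
import numpy as np
from scipy.stats import norm
def gaussian_null_p_value_sketch(statistic, h):
    # the null is approximated by N(0, var(h)/m_2), i.e. the variance of the mean of the h terms
    null_std = h.std(ddof=1) / np.sqrt(len(h))
    return 1 - norm.cdf(statistic, loc=0.0, scale=null_std)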
# mmd instance using streaming features
mmd=sg.LinearTimeMMD()
mmd.set_p(gen_p)
mmd.set_q(gen_q)
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_num_blocks_per_burst(block_size)
sigmas=[2**x for x in np.linspace(-5, 5, 11)]
print("Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas])
for i in range(len(sigmas)):
mmd.add_kernel(sg.create_kernel("GaussianKernel", width=sigmas[i]))
# optimal kernel choice is possible for the linear time MMD
mmd.set_kernel_selection_strategy(sg.KSM_MAXIMIZE_POWER)
# must be set true for kernel selection
mmd.set_train_test_mode(True)
# select best kernel
mmd.select_kernel()
best_kernel=mmd.get("kernel")
print("Best single kernel has bandwidth %.2f" % np.exp(best_kernel.get("width")))
Explanation: Kernel Selection for the MMD -- Overview
$\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
Now which kernel do we actually use for our tests? So far, we just plugged in arbitrary ones. However, for kernel two-sample testing, it is possible to do something more clever.
Shogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a>, [3] describes a way of selecting the optimal kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation, and it is possible for single kernels and also for convex combinations of them. For the <a href="http://shogun.ml/QuadraticTimeMMD">QuadraticTimeMMD</a>, the best method in the literature is choosing the kernel that maximises the MMD statistic [4]. For convex combinations of kernels, this can be achieved via an $L_2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].
MMD kernel selection in Shogun always involves choosing one of the methods of <a href="http://shogun.ml/KernelSelectionStrategy">KernelSelectionStrategy</a>. All methods compute their results for a fixed set of baseline kernels. We later give an example of how to use these classes after providing a list of available methods.
KSM_MEDIAN_HEURISTIC: Selects from a set of <a href="http://shogun.ml/GaussianKernel">GaussianKernel</a> instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution, which can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousand. This method is a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems.
KSM_MAXIMIZE_MMD: Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specifically, its estimate.
$$
k^*=\argmax_{k\in\mathcal{K}} \hat \eta_k,
$$
where $\eta_k$ is an empirical MMD estimate for using a kernel $k$.
This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this method works for both statistics. However, for the linear time statistic, there exists a better method.
KSM_MAXIMIZE_POWER: Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.
$$
k^*=\argmax_{k\in\mathcal{K}} \frac{\hat \eta_k}{\hat\sigma_k+\lambda},
$$
where $\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\hat\sigma_k$ is a linear time variance estimate of $\eta_k$ to which a small number $\lambda$ is added to prevent division by zero.
These are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for <a href="http://shogun.ml/LinearTimeMMD">LinearTimeMMD</a>. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. Important: For this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.
<a href="http://shogun.ml/MMDKernelSelectionCombMaxL2">MMDKernelSelectionCombMaxL2</a>: Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogue of <a href="http://shogun.ml/MMDKernelSelectionMax">MMDKernelSelectionMax</a>. This is done via solving the convex program
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.
Again, when using the linear time MMD, there are better methods available.
<a href="http://shogun.ml/MMDKernelSelectionCombOpt">MMDKernelSelectionCombOpt</a>: Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to *optimal* kernel selection in the same sense as in class <a href="http://shogun.ml/MMDKernelSelectionOpt">MMDKernelSelectionOpt</a> and is its multiple kernel analogue. The convex program to solve is
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T(\hat Q+\lambda I)\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where again $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\hat Q$ is a linear time estimate of the covariance matrix of the vector $\boldsymbol{\eta}$ to whose diagonal a small number $\lambda$ is added to prevent division by zero. See [3] for details. In contrast to <a href="http://shogun.ml/MMDKernelSelectionCombMaxL2">MMDKernelSelectionCombMaxL2</a>, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains <a href="http://shogun.ml/MMDKernelSelectionOpt">MMDKernelSelectionOpt</a>.
MMD Kernel Selection in Shogun
In order to use one of the above methods for kernel selection, one has to create a new instance of <a href="http://shogun.ml/CombinedKernel">CombinedKernel</a> and append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call:
compute_measures to compute a vector of kernel selection criteria if a single kernel selection method is used. It will return a vector of selected kernel weights if a combined kernel selection method is used. For <a href="http://shogun.ml/MMDKernelSelectionMedian">MMDKernelSelectionMedian</a>, the method throws an error.
select_kernel returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying <a href="http://shogun.ml/CombinedKernel">CombinedKernel</a> instance where the subkernel weights are set to the weights that were selected by the method.
In order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example how to select the optimal single and combined kernel for the Gaussian Blobs dataset.
What is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from
End of explanation
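As a side note, the median heuristic mentioned above is easy to sketch outside of Shogun; the helper below (a made-up name, with an arbitrary subsample size) returns a bandwidth equal to the median pairwise distance of a subsample:
import numpy as np
def median_heuristic_bandwidth(Z, max_points=1000, seed=0):
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(Z), size=min(max_points, len(Z)), replace=False)
    sub = Z[idx]
    sq_dists = np.sum((sub[:, None, :] - sub[None, :, :]) ** 2, axis=-1)
    return np.sqrt(np.median(sq_dists[sq_dists > 0]))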
mmd.set_null_approximation_method(sg.NAM_MMD1_GAUSSIAN)  # Gaussian approximation of the null
p_value_best=mmd.compute_p_value(mmd.compute_statistic())
print("Gaussian approximation: P-value of MMD test with optimal kernel is %.2f" % p_value_best)
Explanation: Now perform a two-sample test with that kernel.
End of explanation
m=5000
mmd.set_num_samples_p(m)
mmd.set_num_samples_q(m)
mmd.set_train_test_mode(False)
num_samples=500
# sample null and alternative distribution, implicitly generate new data for that
mmd.set_null_approximation_method(sg.NAM_PERMUTATION)
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()
alt_samples=np.zeros(num_samples)
for i in range(num_samples):
alt_samples[i]=mmd.compute_statistic()
Explanation: For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (takes longer, so reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as the samples are not in memory.
End of explanation
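Since both distributions are now sampled, a quick follow-up one can do here is to estimate the test's power as the fraction of alternative samples that land above the null's rejection threshold (this reuses the null_samples, alt_samples and alpha variables from the cells above):
threshold = np.percentile(null_samples, 100 * (1 - alpha))
estimated_power = np.mean(alt_samples > threshold)
print("Estimated power at alpha=%.2f is %.2f" % (alpha, estimated_power))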
plot_alt_vs_null(alt_samples, null_samples, alpha)
Explanation: And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection
End of explanation |
11,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Db2 Macros
The %sql command also allows the use of macros. Macros are used to substitute text into SQL commands that you execute. Macros substitution is done before any SQL is executed. This allows you to create macros that include commonly used SQL commands or parameters rather than having to type them in. Before using any macros, we must make sure we have loaded the Db2 extensions.
Step1: Macro Basics
A Macro command begins with a percent sign (% similar to the %sql magic command) and can be found anywhere within a %sql line or %%sql block. Macros must be separated from other text in the SQL with a space.
To define a macro, the %%sql macro <name> command is used. The body of the macro is found in the cell below the definition of the macro. This simple macro called EMPTABLE will substitute a SELECT statement into a SQL block.
Step2: The name of the macro follows the %%sql macro command and is case sensitive. To use the macro, we can place it anywhere in the %sql block. This first example uses it by itself.
Step3: The actual SQL that is generated is not shown by default. If you do want to see the SQL that gets generated, you can use the -e (echo) option to display the final SQL statement. The following example will display the generated SQL. Note that the echo setting is only used to display results for the current cell that is executing.
Step4: Since we can use the %emptable anywhere in our SQL, we can add additional commands around it. In this example we add some logic to the select statement.
Step5: Macros can also have parameters supplied to them. The parameters are included after the name of the macro. Here is a simple macro which will use the first parameter as the name of the column we want returned from the EMPLOYEE table.
Step6: This example illustrates two concepts. The MACRO command will replace any existing macro with the same name. Since we already have an emptable macro, the macro body will be replaced with this code. In addition, macros only exist for the duration of your notebook. If you create another Jupyter notebook, it will not contain any macros that you may have created. If there are macros that you want to share across notebooks, you should create a separate notebook and place all of the macro definitions in there. Then you can include these macros by executing the %run command using the name of the notebook that contains the macros.
The following SQL shows the use of the macro with parameters.
Step7: The remainder of this notebook will explore the advanced features of macros.
Macro Parameters
Macros can have up to 9 parameters supplied to them. The parameters are numbered from 1 to 9, left to right in the argument list for the macro. For instance, the following macro has 5 paramters
Step8: Note that the EMPNO field is a character field in the EMPLOYEE table. Even though the employee number was supplied as a string, the quotes are not included in the parameter. The macro places quotes around the parameter {5} so that it is properly used in the SQL statement. The other feature of this macro is that the display (on) command is part of the macro body so the generated SQL will always be displayed.
Step9: We can modify the macro to assume that the parameters will include the quotes in the string.
Step10: We just have to make sure that the quotes are part of the parameter now.
Step11: We could use the square brackets as an alternative way of passing the parameter.
Step12: Parameters can also be named in a macro. To name an input value, the macro needs to use the format
Step13: Named parameters are useful when there are many options within the macro and you don't want to keep track of which position it is in. In addition, if you have a variable number of parameters, you should use named parameters for the fixed (required) parameters and numbered parameters for the optional ones.
Macro Coding Overview
Macros can contain any type of text, including SQL commands. In addition to the text, macros can also contain the following keywords
Step14: Using the -e flag will display the final SQL that is run.
Step15: If we remove the -e option, the final SQL will not be shown.
Step16: Exit Command
The exit command will terminate the processing within a macro and not run the generated SQL. You would use this when a condition is not met within the macro (like a missing parameter).
Step17: The macro that was defined will not show the second statement, nor will it execute the SQL that was defined in the macro body.
Step18: Echo Command
As you already noticed in the previous example, the echo command will display information on the screen. Any text following the command will have variables substituted and then displayed with a green box surrounding it. The following code illustates the use of the command.
Step19: The echo command will show each line as a separate box.
Step20: If you want to have a message go across multiple lines use the <br> to start a new line.
Step21: Var Command
The var (variable) command sets a macro variable to a value. A variable is referred to in the macro script using curly braces {name}. By default the arguments that are used in the macro call are assigned the variable names {1} to {9}. If you use a named argument (option="value") in the macro call, a variable called {option} will contain the value within the macro.
To set a variable within a macro you would use the var command
Step22: Calling runit will display the variable that was set in the first macro.
Step23: A variable can be converted to uppercase by placing the ^ beside the variable name or number.
Step24: The string following the variable name can include quotes and these will not be removed. Only quotes that are supplied in a parameter to a macro will have the quotes removed.
Step25: When passing parameters to a macro, the program will automatically create variables based on whether they are positional parameters (1, 2, ..., n) or named parameters. The following macro will be used to show how parameters are passed to the routine.
Step26: Calling the macro will show how the variable names get assigned and used.
Step27: If you pass an empty value (or if a variable does not exist), a "null" value will be shown.
Step28: An empty string also returns a null value.
Step29: Finally, any string that is supplied to the macro will not include the quotes in the variable. The Hello World string will not have quotes when it is displayed
Step30: You need to supply the quotes in the script or macro when using variables since quotes are stripped from any strings that are supplied.
Step31: The count of the total number of parameters passed is found in the {argc} variable. You can use this variable to decide whether or not the user has supplied the proper number of arguments or change which code should be executed.
Step32: Unnamed parameters are included in the count of arguments while named parameters are ignored.
Step33: If/Else/Endif Command
If you need to add conditional logic to your macro then you should use the if/else/endif commands. The format of the if statement is
Step34: Running the previous macro with no parameters will check to see if the option keyword was used.
Step35: Now include the optional parameter.
Step36: Finally, issue the macro with multiple parameters.
Step37: One additional option is available for variable substitution. If the first character of the variable name or parameter number is the ^ symbol, it will uppercase the entire string. | Python Code:
%run db2.ipynb
Explanation: Db2 Macros
The %sql command also allows the use of macros. Macros are used to substitute text into SQL commands that you execute. Macros substitution is done before any SQL is executed. This allows you to create macros that include commonly used SQL commands or parameters rather than having to type them in. Before using any macros, we must make sure we have loaded the Db2 extensions.
End of explanation
%%sql macro emptable
select * from employee
Explanation: Macro Basics
A Macro command begins with a percent sign (% similar to the %sql magic command) and can be found anywhere within a %sql line or %%sql block. Macros must be separated from other text in the SQL with a space.
To define a macro, the %%sql macro <name> command is used. The body of the macro is found in the cell below the definition of the macro. This simple macro called EMPTABLE will substitute a SELECT statement into a SQL block.
End of explanation
%sql %emptable
Explanation: The name of the macro follows the %%sql macro command and is case sensitive. To use the macro, we can place it anywhere in the %sql block. This first example uses it by itself.
End of explanation
%%sql -e
%emptable
Explanation: The actual SQL that is generated is not shown by default. If you do want to see the SQL that gets generated, you can use the -e (echo) option to display the final SQL statement. The following example will display the generated SQL. Note that the echo setting is only used to display results for the current cell that is executing.
End of explanation
%%sql
%emptable
where empno = '000010'
Explanation: Since we can use the %emptable anywhere in our SQL, we can add additional commands around it. In this example we add some logic to the select statement.
End of explanation
%%sql macro emptable
SELECT {1} FROM EMPLOYEE
Explanation: Macros can also have parameters supplied to them. The parameters are included after the name of the macro. Here is a simple macro which will use the first parameter as the name of the column we want returned from the EMPLOYEE table.
End of explanation
%%sql
%emptable(lastname)
Explanation: This example illustrates two concepts. The MACRO command will replace any existing macro with the same name. Since we already have an emptable macro, the macro body will be replaced with this code. In addition, macros only exist for the duration of your notebook. If you create another Jupyter notebook, it will not contain any macros that you may have created. If there are macros that you want to share across notebooks, you should create a separate notebook and place all of the macro definitions in there. Then you can include these macros by executing the %run command using the name of the notebook that contains the macros.
The following SQL shows the use of the macro with parameters.
End of explanation
%%sql macro emptable
display on
SELECT {1},{2},{3},{4}
FROM EMPLOYEE
WHERE EMPNO = '{5}'
Explanation: The remainder of this notebook will explore the advanced features of macros.
Macro Parameters
Macros can have up to 9 parameters supplied to them. The parameters are numbered from 1 to 9, left to right in the argument list for the macro. For instance, the following macro has 5 paramters:
%emptable(lastname,firstnme,salary,bonus,'000010')
Parameters are separated by commas, and can contain strings as shown using single or double quotes. When the parameters are used within a macro, the quotes are not included as part of the string. If you do want to pass the quotes as part of the parameter, use square brackets [] around the string. For instance, the following parameter will not have quotes passed to the macro:
python
%sql %abc('no quotes')
To send the string with quotes, you could surround the parameter with other quotes "'hello'" or use the following technique if you use multiple quotes in your string:
python
%sql %abc (['quotes'])
To use a parameter within your macro, you enclose the parameter number with braces {}. The next command will illustrate the use of the five parameters.
End of explanation
%sql %emptable(lastname,firstnme,salary,bonus,'000010')
Explanation: Note that the EMPNO field is a character field in the EMPLOYEE table. Even though the employee number was supplied as a string, the quotes are not included in the parameter. The macro places quotes around the parameter {5} so that it is properly used in the SQL statement. The other feature of this macro is that the display (on) command is part of the macro body so the generated SQL will always be displayed.
End of explanation
%%sql macro emptable
SELECT {1},{2},{3},{4}
FROM EMPLOYEE
WHERE EMPNO = {5}
Explanation: We can modify the macro to assume that the parameters will include the quotes in the string.
End of explanation
%sql -e %emptable(lastname,firstnme,salary,bonus,"'000010'")
Explanation: We just have to make sure that the quotes are part of the parameter now.
End of explanation
%sql -e %emptable(lastname,firstnme,salary,bonus,['000010'])
Explanation: We could use the square brackets as an alternative way of passing the parameter.
End of explanation
%%sql macro showemp
SELECT {1},{2} FROM EMPLOYEE
{logic}
%sql %showemp(firstnme,lastname,logic="WHERE EMPNO='000010'")
%sql %showemp(firstnme,logic="WHERE EMPNO='000010'",lastname)
Explanation: Parameters can also be named in a macro. To name an input value, the macro needs to use the format:
field=value
For instance, the following macro call will have 2 numbered parameters and one named parameter:
%showemp(firstnme,lastname,logic="WHERE EMPNO='000010'")
From within the macro the parameter count would be 2 and the value for parameter 1 is firstnme, and the value for parameter 2 is lastname. Since we have a named parameter, it is not included in the list of numbered parameters. In fact, the following statement is equivalent since unnamed parameters are numbered in the order that they are found in the macro, ignoring any named parameters that are found:
%showemp(firstnme,logic="WHERE EMPNO='000010'",lastname)
The following macro illustrates this feature.
End of explanation
%%sql macro showdisplay
SELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY
Explanation: Named parameters are useful when there are many options within the macro and you don't want to keep track of which position it is in. In addition, if you have a variable number of parameters, you should use named parameters for the fixed (required) parameters and numbered parameters for the optional ones.
Macro Coding Overview
Macros can contain any type of text, including SQL commands. In addition to the text, macros can also contain the following keywords:
echo - Display a message
exit - Exit the macro immediately
if/else/endif - Conditional logic
var - Set a variable
display - Turn the display of the final text on
The only restriction with macros is that macros cannot be nested. This means I can't call a macro from within a macro. The sections below explain the use of each of these statement types.
Echo Option
The -e option will result in the final SQL being display after the macro substitution is done.
%%sql -e
%showemp(...)
End of explanation
%sql -e %showdisplay
Explanation: Using the -e flag will display the final SQL that is run.
End of explanation
%sql %showdisplay
Explanation: If we remove the -e option, the final SQL will not be shown.
End of explanation
%%sql macro showexit
echo This message gets shown
SELECT * FROM EMPLOYEE FETCH FIRST ROW ONLY
exit
echo This message does not get shown
Explanation: Exit Command
The exit command will terminate the processing within a macro and not run the generated SQL. You would use this when a condition is not met within the macro (like a missing parameter).
End of explanation
%sql %showexit
Explanation: The macro that was defined will not show the second statement, nor will it execute the SQL that was defined in the macro body.
End of explanation
%%sql macro showecho
echo Here is a message
echo Two lines are shown
Explanation: Echo Command
As you already noticed in the previous example, the echo command will display information on the screen. Any text following the command will have variables substituted and then displayed with a green box surrounding it. The following code illustates the use of the command.
End of explanation
%sql %showecho
Explanation: The echo command will show each line as a separate box.
End of explanation
%%sql macro showecho
echo Here is a paragraph. <br> And a final paragraph.
%sql %showecho
Explanation: If you want to have a message go across multiple lines use the <br> to start a new line.
End of explanation
%%sql macro initialize
var $hello Hello There
var hello You won't see this
%%sql macro runit
echo The value of hello is *{hello}*
echo {$hello}
Explanation: Var Command
The var (variable) command sets a macro variable to a value. A variable is referred to in the macro script using curly braces {name}. By default the arguments that are used in the macro call are assigned the variable names {1} to {9}. If you use a named argument (option="value") in the macro call, a variable called {option} will contain the value within the macro.
To set a variable within a macro you would use the var command:
var name value
The variable name can be any name as long as it only includes letters, numbers, underscore _ and $. Variable names are case sensitive so {a} and {A} are different. When the macro finishes executing, the contents of the variables will be lost. If you do want to keep a variable between macros, you should start the name of the variable with a $ sign:
var $name value
This variable will persist between macro calls.
End of explanation
%sql %initialize
%sql %runit
Explanation: Calling runit will display the variable that was set in the first macro.
End of explanation
%%sql macro runit
echo The first parameter is {^1}
%sql %runit(Hello There)
Explanation: A variable can be converted to uppercase by placing the ^ beside the variable name or number.
End of explanation
%%sql macro runit
var hello This is a long string without quotes
var hello2 'This is a long string with quotes'
echo {hello} <br> {hello2}
%sql %runit
Explanation: The string following the variable name can include quotes and these will not be removed. Only quotes that are supplied in a parameter to a macro will have the quotes removed.
End of explanation
%%sql macro showvar
echo parm1={1} <br>parm2={2} <br>message={message}
Explanation: When passing parameters to a macro, the program will automatically create variables based on whether they are positional parameters (1, 2, ..., n) or named parameters. The following macro will be used to show how parameters are passed to the routine.
End of explanation
%sql %showvar(parameter 1, another parameter,message="Hello World")
Explanation: Calling the macro will show how the variable names get assigned and used.
End of explanation
%sql %showvar(1,,message="Hello World")
Explanation: If you pass an empty value (or if a variable does not exist), a "null" value will be shown.
End of explanation
%sql %showvar(1,2,message="")
Explanation: An empty string also returns a null value.
End of explanation
%sql %showvar(1,2,message="Hello World")
Explanation: Finally, any string that is supplied to the macro will not include the quotes in the variable. The Hello World string will not have quotes when it is displayed:
End of explanation
%%sql macro showvar
echo parm1={1} <br>parm2={2} <br>message='{message}'
%sql %showvar(1,2,message="Hello World")
Explanation: You need to supply the quotes in the script or macro when using variables since quotes are stripped from any strings that are supplied.
End of explanation
%%sql macro showvar
echo The number of unnamed parameters is {argc}. The where clause is *{where}*.
Explanation: The count of the total number of parameters passed is found in the {argc} variable. You can use this variable to decide whether or not the user has supplied the proper number of arguments or change which code should be executed.
End of explanation
%sql %showvar(1,2,option=nothing,3,4,where=)
Explanation: Unnamed parameters are included in the count of arguments while named parameters are ignored.
End of explanation
%%sql macro showif
if {argc} = 0
echo No parameters supplied
if {option} <> null
echo The optional parameter option was set: {option}
endif
else
if {argc} = "1"
echo One parameter was supplied
else
echo More than one parameter was supplied: {argc}
endif
endif
Explanation: If/Else/Endif Command
If you need to add conditional logic to your macro then you should use the if/else/endif commands. The format of the if statement is:
if variable condition value
statements
else
statements
endif
The else portion is optional, but the block must be closed with the endif command. If statements can be nested up to 9 levels deep:
if condition 1
if condition 2
statements
else
if condition 3
statements
end if
endif
endif
If the condition in the if clause is true, then anything following the if statement will be executed and included in the final SQL statement. For instance, the following code will create a SQL statement based on the value of parameter 1:
if {1} = null
SELECT * FROM EMPLOYEE
else
SELECT {1} FROM EMPLOYEE
endif
Conditions
The if statement requires a condition to determine whether or not the block should be executed. The condition uses the following format:
if {variable} condition {variable} | constant | null
Variable can be a number from 1 to 9 which represents the argument in the macro list. So {1} refers to the first argument. The variable can also be the name of a named parameter or global variable.
The condition is one of the following comparison operators:
- =, ==: Equal to
- <: Less than
- >: Greater than
- <=,=<: Less than or equal to
- >=, =>: Greater than or equal to
- !=, <> : Not equal to
The variable or constant will have quotes stripped away before doing the comparison. If you are testing for the existence of a variable, or to check if a variable is empty, use the keyword null.
End of explanation
%sql %showif
Explanation: Running the previous macro with no parameters will check to see if the option keyword was used.
End of explanation
%sql %showif(option="Yes there is an option")
Explanation: Now include the optional parameter.
End of explanation
%sql %showif(Here,are,a,number,of,parameters)
Explanation: Finally, issue the macro with multiple parameters.
End of explanation
%%sql macro showif
if {option} <> null
echo The optional parameter option was set: {^option}
endif
%sql %showif(option="Yes there is an option")
Explanation: One additional option is available for variable substitution. If the first character of the variable name or parameter number is the ^ symbol, it will uppercase the entire string.
End of explanation |
11,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boolean and None Objects
Note
Step1: To stay practical, it is important to understand that you won't be assigning True and False values to variables as much as you will be receiving them. We talked in the "Comparison Operators" series about said operators, and what they return. How do we capture True and False from an expression?
Step2: The general format for if-else control flow in Python is the following | Python Code:
# Declaring both Boolean values
a = True
b = False
Explanation: Boolean and None Objects
Note: Complete this lecture after finishing the "Comparison Operators" Section
Booleans are objects that evaluate to either True or False. In turn, they represent the 1 and 0 "on and off" concept. Whether you're building a web form or working with backend authentication, working with boolean objects is something that is inevitable. You use Booleans to check if things are either True or False, and use control flow to handle the situation accordingly.
A boolean object is either True, or False.
End of explanation
# Capturing True from an expression
x = 2 < 3
# Capturing False from an expression
y = 5 > 9
Explanation: To stay practical, it is important to understand that you won't be assigning True and False values to variables as much as you will be receiving them. We talked in the "Comparison Operators" series about said operators, and what they return. How do we capture True and False from an expression?
End of explanation
# Example of assigning None, and changing it.
some_obj = None
if 2 < 3:
some_obj = True
Explanation: The general format for if-else control flow in Python is the following:
if something_that_evals_to_True:
foo()
elif something_else_:
woo()
.
.
else:
falsework()
None
Often, you will not always be sure as to what an object should hold yet. You may have a variable in mind to store an incoming object, but you need a placeholder. Or, perhaps, you want to check if the object you received is nothing. In this case, we use the "None" object.
End of explanation |
11,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly.
Step1: Also, let's import a number of other dependencies we'll use later.
Step2: Now let's load the data for analysis.
Step3: This variable is for the range of days used in computing rolling averages.
Now, let's see
Step4: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to.
Many mailing lists will have some duplicate senders
Step5: For this still naive measure (edit distance on a normalized string), it appears that there are many duplicates in the <10 range, but that above that the edit distance of short email addresses at common domain names can take over.
Step6: We can create the same color plot with the consolidated dataframe to see how the distribution has changed.
Step7: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name.
How does our consolidation affect the graph of distribution of senders?
Step8: Okay, not dramatically different, but the consolidation makes the head heavier. There are more people close to that high end, a stronger core group and less a power distribution smoothly from one or two people.
We could also use sender email addresses as a naive inference for affiliation, especially for mailing lists where corporate/organizational email addresses are typically used.
Pandas lets us group by the results of a keying function, which we can use to group participants sending from email addresses with the same domain.
Step9: We can also aggregate the number of messages that come from addresses at each domain. | Python Code:
import bigbang.ingress.mailman as mailman
import bigbang.analysis.graph as graph
import bigbang.analysis.process as process
from bigbang.parse import get_date
from bigbang.archive import Archive
import imp
imp.reload(process)
Explanation: Import the BigBang modules as needed. These should be in your Python environment if you've installed BigBang correctly.
End of explanation
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
Explanation: Also, let's import a number of other dependencies we'll use later.
End of explanation
urls = ["http://www.ietf.org/mail-archive/text/ietf-privacy/",
"http://lists.w3.org/Archives/Public/public-privacy/"]
mlists = [mailman.open_list_archives(url) for url in urls]
activities = [Archive.get_activity(Archive(ml)) for ml in mlists]
Explanation: Now let's load the data for analysis.
End of explanation
a = activities[1] # for the first mailing list
ta = a.sum(0) # sum along the first axis
ta.sort_values()[-10:].plot(kind='barh', width=1)
Explanation: This variable is for the range of days used in computing rolling averages.
Now, let's see: who are the authors of the most messages to one particular list?
End of explanation
levdf = process.sorted_matrix(a) # creates a slightly more nuanced edit distance matrix
# and sorts by rows/columns that have the best candidates
levdf_corner = levdf.iloc[:25,:25] # just take the top 25
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levdf_corner)
plt.yticks(np.arange(0.5, len(levdf_corner.index), 1), levdf_corner.index)
plt.xticks(np.arange(0.5, len(levdf_corner.columns), 1), levdf_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
Explanation: This might be useful for seeing the distribution (does the top message sender dominate?) or for identifying key participants to talk to.
Many mailing lists will have some duplicate senders: individuals who use multiple email addresses or are recorded as different senders when using the same email address. We want to identify those potential duplicates in order to get a more accurate representation of the distribution of senders.
To begin with, let's calculate the similarity of the From strings, based on the Levenshtein distance.
End of explanation
consolidates = []
# gather pairs of names which have a distance of less than 10
for col in levdf.columns:
for index, value in levdf.loc[levdf[col] < 10, col].items():
if index != col: # the name shouldn't be a pair for itself
consolidates.append((col, index))
print(str(len(consolidates)) + ' candidates for consolidation.')
c = process.consolidate_senders_activity(a, consolidates)
print('We removed: ' + str(len(a.columns) - len(c.columns)) + ' columns.')
Explanation: For this still naive measure (edit distance on a normalized string), it appears that there are many duplicates in the <10 range, but that above that the edit distance of short email addresses at common domain names can take over.
End of explanation
lev_c = process.sorted_matrix(c)
levc_corner = lev_c.iloc[:25,:25]
fig = plt.figure(figsize=(15, 12))
plt.pcolor(levc_corner)
plt.yticks(np.arange(0.5, len(levc_corner.index), 1), levc_corner.index)
plt.xticks(np.arange(0.5, len(levc_corner.columns), 1), levc_corner.columns, rotation='vertical')
plt.colorbar()
plt.show()
Explanation: We can create the same color plot with the consolidated dataframe to see how the distribution has changed.
End of explanation
fig, axes = plt.subplots(nrows=2, figsize=(15, 12))
ta = a.sum(0) # sum along the first axis
ta.sort_values()[-20:].plot(kind='barh',ax=axes[0], width=1, title='Before consolidation')
tc = c.sum(0)
tc.sort_values()[-20:].plot(kind='barh',ax=axes[1], width=1, title='After consolidation')
plt.show()
Explanation: Of course, there are still some duplicates, mostly people who are using the same name, but with a different email address at an unrelated domain name.
How does our consolidation affect the graph of distribution of senders?
End of explanation
grouped = tc.groupby(process.domain_name_from_email)
domain_groups = grouped.size()
domain_groups.sort_values(ascending=True)[-20:].plot(kind='barh', width=1, title="Number of participants at domain")
Explanation: Okay, not dramatically different, but the consolidation makes the head heavier. There are more people close to that high end, a stronger core group and less a power distribution smoothly from one or two people.
We could also use sender email addresses as a naive inference for affiliation, especially for mailing lists where corporate/organizational email addresses are typically used.
Pandas lets us group by the results of a keying function, which we can use to group participants sending from email addresses with the same domain.
End of explanation
domain_messages_sum = grouped.sum()
domain_messages_sum.sort_values(ascending=True)[-20:].plot(kind='barh', width=1, title="Number of messages from domain")
Explanation: We can also aggregate the number of messages that come from addresses at each domain.
End of explanation |
11,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4.0 Numpy Advanced
4.1 Verifying the python version you are using
Step1: At this point anything above python 3.5 should be ok.
4.2 Import numpy
Step2: Notes
Step3: Notes
Step4: Notes
Step5: Notes
Step6: Notes
Step7: Notes
Step8: Notes
Step9: Notes
Step10: Notes | Python Code:
import sys
print(sys.version)
Explanation: 4.0 Numpy Advanced
4.1 Verifying the python version you are using
End of explanation
import numpy as np
np.__version__
import matplotlib as mpl
from matplotlib import pyplot as plt
mpl.__version__
Explanation: At this point anything above python 3.5 should be ok.
4.2 Import numpy
End of explanation
values = np.zeros((2,50))
size = values.shape
print(size)
for i in range(size[1]):
values[0,i] = i * 2
values[1,i] = np.sin(i / 2)
print(values)
Explanation: Notes:
4.3 Set a dummy numpy array elements
Create a 2D numpy array and set the elements in it as you iterate.
End of explanation
np.save('np_file.npy', values)
np.savetxt('txt_file.txt', np.transpose(values))
Explanation: Notes:
4.4 Save the numpy array to file
It is practical to save the processed arrays and numpy has built in functions for this
End of explanation
values_from_text = np.loadtxt("txt_file.txt")
values_from_np = np.load("np_file.npy")
print(values_from_text[0,0] == values_from_np[0,0])
print(values_from_text)
Explanation: Notes:
4.5 Load numpy array from file
End of explanation
values = values_from_text
print(values.shape)
x_0 = values[0]
print(x_0)
x_1 = values[:,0]
print(x_1)
y_1 = values[:,1]
print(y_1)
fig = plt.figure()
plt.plot(x_1, y_1)
plt.show()
Explanation: Notes:
4.6 Exercice:
Create a numpy array from -pi to pi and save the sin of it to file. The read this file again and print the result in the notebook.
Notes:
5. Numpy array slicing
5.1 General slicing
End of explanation
indices = [5,10, 15 ,20]
x = values[indices,0]
print(x)
y = values[indices,1]
print(y)
fig = plt.figure()
plt.plot(x, y)
plt.show()
Explanation: Notes:
5.2 Precise slicing
End of explanation
indices = np.where(values[:,1] > -0.5)[0]
print("indices: ",indices)
x = values[indices,0]
print("x: ",x)
y = values[indices,1]
print("y: ",y)
fig = plt.figure()
plt.plot(x, y)
plt.show()
Explanation: Notes:
5.3 Conditional slicing
End of explanation
to_sort = np.random.rand(10)
print(to_sort)
to_sort.sort()
print(to_sort)
Explanation: Notes:
6. Numpy array Sorting
6.1 One dimension
End of explanation
to_sort = np.random.rand(2,10)
print(to_sort)
to_sort.sort(axis=1)
print(to_sort)
Explanation: Notes:
6.2 Two dimension
End of explanation
to_sort = np.random.rand(3,10)
print(to_sort)
#investigate the axis we want to sort after
print("The axis to sort: \n",to_sort[1])
sort_indices = to_sort[1].argsort()
print("The indexes after the sort: \n",sort_indices)
#proceed the sort using the slicing method we just introduced
to_sort = to_sort[:,sort_indices]
print("The sorted full array:\n ",to_sort)
Explanation: Notes:
6.3 Sort whole array by one column
End of explanation |
11,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Youth
refer to last week's robotic session
Under the hood, languages like Python program translate human language to something the machine can understand (instructions)
Python is an open source general purpose language, used for many things from data science, to web development to app development, etc.
Python emphasizes readability, compared to older langauges many default functions and commands are read like plain English.
ask who has done programming, who has done Python
Introduce repl.it, not the only way, easy for first introduction without installation of anything. FORK the repl.it
and the online python interpreter that we useis here
Step1: summary
What we learned today
Step2: Numbers
You can use Python as a calculator
Step3: Python's order of operations works just like math's
Step4: Logic
We've talked about how Python handles numbers already. Let's now turn to how it handles inequalities – things that are either true or false.
Step5: you can continue here
https
Step6: The second is called a "tuple", which is an immutable list (nothing can be added or subtracted) whose elements also can't be reassigned.
Step7: Indexing and Slicing
Step8: For Loops
Step11: Functions
Step12: Useful Packages
Step13: Guessing game
if we have extra time, we can play another game | Python Code:
# First, let the player choose Rock, Paper or Scissors by typing the letter ‘r’, ‘p’ or ‘s’
# first create a prompt and explain
input('what is your name?')
# for python to do anything with the result we need to save it in a variable which we can name anything but this is informative
player = input('rock (r), paper (p) or scissors (s)?')
# what did we just do? we used a built-in function in Python to prompt the user to input a letter in the console
# and we assigned the input to a variable called 'player'. the = symbol indicates that what is on the right is assigned to the variable name on the left
# what is a function like input or print? They let us do things with objects that we've made, like the player variable
# Now print out what the player chose:
#print(player)
#print('you chose', player)
# mention object type as string or integer
#print(player, 'vs')
print(player, 'vs', end=' ')
# Second: Computer's Turn
# Use 'randint' function to generate a random number to decide whether the computer has chosen rock, paper or scissors.
# we need to import it from 'random' library
from random import randint
chosen = randint(1,3) # search google for how to use via documentation
#print(chosen)
#print('computer chose',chosen)
# we like to print the letters not numbers. let's say 1 = rock, 2 = paper, 3=scissors
# we can use if statement to assign letters to whatever number that is selected
if chosen == 1:
computer = 'r' # 'O'
elif chosen == 2:
computer = 'p' #'__'
else:
computer = 's' #'>8'
print(computer)
# run a few times to show the random nature of the function
# notice that we are not inside the if statement because we don't use indentation
# a nicer output:
#print(player, 'vs', computer)
# let's add a code for determining the winner
# we need to compare the 'player' and 'computer' variables
# explain briefly booleans to compare
if player == computer:
print('Draw!')
elif player == 'r' and computer == 's':
print('player wins!')
elif player == 'r' and computer =='p':
print('Computer wins!')
elif player == 'p' and computer == 'r':
print('Player wins!')
elif player == 'p' and computer == 's':
print('Computer wins!')
elif player == 's' and computer == 'p':
print('Player wins!')
elif player == 's' and computer == 'r':
print('Computer wins!')
# Challenge: Instead of using the letters r, p and s to represent rock, paper and scissors, can you use ASCII art?
# O for rock, __ for paper, and >8 for scissors
# so now change the lines you print the choices of the player and the computer in ASCII art
#if player == 'r':
# print('O', 'vs', end=' ')
#elif player == 'p':
# print('__', 'vs', end=' ')
#else:
# print('>8', 'vs', end =' ')
if player == 'r':
print('O', 'vs', computer)
elif player == 'p':
print('__', 'vs', computer )
else:
print('>8', 'vs', computer)
Explanation: Python for Youth
refer to last week's robotic session
Under the hood, languages like Python program translate human language to something the machine can understand (instructions)
Python is an open source general purpose language, used for many things from data science, to web development to app development, etc.
Python emphasizes readability, compared to older langauges many default functions and commands are read like plain English.
ask who has done programming, who has done Python
Introduce repl.it, not the only way, easy for first introduction without installation of anything. FORK the repl.it
and the online python interpreter that we useis here: https://repl.it/@UofTCoders/saturdayprogram
explain stickies -> sometimes things go wrong, not a problem! part of the learning experience.
we're all human!
Lesson outline:
We learn python through a game, taken from Code Club Projects
A brief intro to general python commands, syntax = the rules/grammar used that Python can always know what you're trying to say.
going more in depth with what we did in the game and explaining
you'll have an idea of python commands and syntax, and enough for you to go home and find tutorials for what you want to do, (almost anything!)
End of explanation
name = 'Sara'
year = 2017
# we can check the type of our variable using the type(variable_name) function
print(type(name))
#str is a string: a sequence of characters.
Explanation: summary
What we learned today:
- built-in functions like input prompt and print
- variables
- numbers and strings
- libraries and functions like randint
- if statements
review:
Variables
End of explanation
7 + 8
7*8
8%7
2**4
Explanation: Numbers
You can use Python as a calculator:
End of explanation
16 ** 0.5
16 ** (1/2)
16 ** 1/2
Explanation: Python's order of operations works just like math's:
Parentheses
Exponents
Multiplication and division
Addition and subtraction
End of explanation
6 > 0
4 == 6
4 <= 6
4 != 6
Explanation: Logic
We've talked about how Python handles numbers already. Let's now turn to how it handles inequalities – things that are either true or false.
End of explanation
fruits = ['apple', 'banana', 'mango', 'lychee']
print(fruits)
fruits.append('orange')
print(fruits)
# lists don't need to comprise of all the same type
misc = [29, 'dog', fruits]
print(misc)
print(fruits + fruits)
Explanation: you can continue here
https://docs.trinket.io/getting-started-with-python#/logic/combining-boolean-expressions
Lists
Python has two array-like things. The first is called a "list", which can hold any data types.
After the .append() method, explain OOP briefly and why it's useful.
End of explanation
tup1 = (1,2)
print(tup1)
Explanation: The second is called a "tuple", which is an immutable list (nothing can be added or subtracted) whose elements also can't be reassigned.
End of explanation
#indexing in Python starts at 0, not 1 (like in Matlab or Oracle)
print(fruits[0])
print(fruits[1])
# strings are just a particular kind of list
s = 'This is a string.'
print(s[0])
# use -1 to get the last element
print(fruits[-1])
print(fruits[-2])
# to get a slice of the string use the : symbol
print(s[0:4])
print(s[:4])
print(s[4:7])
print(s[7:])
print(s[7:len(s)])
Explanation: Indexing and Slicing
End of explanation
nums = [23, 56, 1, 10, 15, 0]
# in this case, 'n' is a dummy variable that will be used by the for loop
# you do not need to assign it ahead of time
for n in nums:
if n%2 == 0:
print('even')
else:
print('odd')
# for loops can iterate over strings as well
vowels = 'aeiou'
for vowel in vowels:
print(vowel)
Explanation: For Loops
End of explanation
# always use descriptive naming for functions, variables, arguments etc.
def sum_of_squares(num1, num2):
Input: two numbers
Output: the sum of the squares of the two numbers
ss = num1**2 + num2**2
return(ss)
# The stuff inside is called the "docstring". It can be accessed by typing help(sum_of_squares)
print(sum_of_squares(4,2))
# the return statement in a function allows us to store the output of a function call in a variable for later use
ss1 = sum_of_squares(5,5)
print(ss1)
Explanation: Functions
End of explanation
# use a package by importing it, you can also give it a shorter alias, in this case 'np'
import numpy as np
array = np.arange(15)
lst = list(range(15))
print(array)
print(lst)
print(type(array))
print(type(lst))
# numpy arrays allow for vectorized calculations
print(array*2)
print(lst*2)
array = array.reshape([5,3])
print(array)
# we can get the mean over all rows (using axis=1)
array.mean(axis=1)
# max value in each column
array.max(axis=0)
import pandas as pd
# this will read in a csv file into a pandas DataFrame
# this csv has data of country spending on healthcare
data = pd.read_csv('health.csv', header=0, index_col=0, encoding="ISO-8859-1")
# the .head() function will allow us to look at first few lines of the dataframe
data.head()
# by default, rows are indicated first, followed by the column: [row, column]
data.loc['Canada', '2008']
# you can also slice a dataframe
data.loc['Canada':'Denmark', '1999':'2001']
%matplotlib inline
import matplotlib.pyplot as plt
# the .plot() function will create a simple graph for you to quickly visualize your data
data.loc['Denmark'].plot()
data.loc['Canada'].plot()
data.loc['India'].plot()
plt.legend(loc='best')
plt.savefig("countries_healthexpenditure.png")
Explanation: Useful Packages
End of explanation
import random
number = random.randint(1, 10)
tries = 0
win = False # setting a win flag to false
name = input("Hello, What is your username?")
print("Hello " + name + "." )
question = input("Would you like to play a game? [Y/N] ")
if question.lower() == "n": #in case of capital letters is entered
print("oh..okay")
exit()
elif question.lower() == "y":
print("I'm thinking of a number between 1 & 10")
while not win: # while the win is not true, run the while loop. We set win to false at the start therefore this will always run
guess = int(input("Have a guess: "))
tries = tries + 1
if guess == number:
win = True # set win to true when the user guesses correctly.
elif guess < number:
print("Guess Higher")
elif guess > number:
print("Guess Lower")
# if win is true then output message
print("Congrats, you guessed correctly. The number was indeed {}".format(number))
print("it had taken you {} tries".format(tries))
Explanation: Guessing game
if we have extra time, we can play another game:
End of explanation |
11,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-driving car Nanodegree - Term 1
Project 1
Step1: Loading data
Step3: Include an exploratory visualization of the dataset
I did not spend so much time on this. I first print out the distribution of the samples in 43 classes of labels which 'Speed limit (50km/h)' sign has most samples (2010 samples) following by 'Speed limit (30km/h)' sign (1980 samples) and 'Yield' sign (1920 samples).
I have also plotted out 10 random images which can be seen in notebook.
Step5: Design and Train Classifier Model
Image Data Pre-Processing
As a first step, I decided to convert the images to grayscale to convert to 1 channel image and remove the effect of color.
Next, I normalized normalized the data so that the data has mean zero and equal variance, i.e (pixel - 128.0)/ 128.0
Defining Helper Functions
Step9: First Model Architecture
my First attempt was to try the famous Lenet-5 model as recommended by Udacity because convolutional model is considered to performed best on object recognition
Step11: Second Model Training
Step12: Model evaluation
Step13: Test a Model on New Images
Testing on new five German Traffic Signs
To give more insight into how the model is working, we will test pictures of German traffic signs taken from the web and use the model to predict the traffic sign type. The file ../signnames.csv contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
Step14: Analyze Performance of 5 test images
Step15: Model's predictions on new traffic signs
Here are the results of the prediction
Step16: Conclusion
For the first image, the model's first choice was no vehicle sign (0.93) while the correct sign was third rank (0.026)
| Prediction | Probability |
| | Python Code:
# Load pickled data
import pickle
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
import cv2
import glob
import tensorflow as tf
from tensorflow.contrib.layers import flatten
from tensorflow.contrib.layers import flatten
Explanation: Self-driving car Nanodegree - Term 1
Project 1: Build a Traffic Sign Recognition Classifier
In this project, we will use deep neural networks and convolutional neural networks to classify traffic signs. We will train and validate a model so it can classify traffic sign images using the German Traffic Sign Dataset as sample dataset.
The goals / steps of this project are the following:
* Load the data set (see below for links to the project data set)
* Explore, summarize and visualize the data set
* Design, train and test a model architecture
* Use the model to make predictions on new images
* Analyze the softmax probabilities of the new images
* Summarize the results with a written report
Author : Tran Ly Vu
Github repo
Notebook
Python code
Load and Visualize The DataSet
Sample Dataset Information
The German Traffic Sign Dataset consists of 43 different traffic signs, with each image having 32×32 px size. This dataset has 39,209 images as training data (this is the number of images we have to train the neural network with) and 12,630 images as test data. Each image is a photo of one of the 43 classes of traffic signs
Specifically , we will use the following pickled dataset provided by Udacity in which the images were resized to 32x32. The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
The provided dataset has 3 separate sets: training, validation and test sets, hence I do not have to split data for validation purpose. Here are some information:
The size of training set is 34799
The size of the validation set is 4410
The size of test set is 12630
The shape of a traffic sign image is (32, 32, 3)
The number of unique classes/labels in the data set is 43
The shape of a traffic sign image implies a 32x32-pixel image with 3 channels; this is because Udacity resized the images before providing them to students.
Importing packages
End of explanation
def loaded_pickled_data(file):
with open(file, mode='rb') as f:
output = pickle.load(f)
return output
training_file = '../../../train.p'
validation_file= '../../../valid.p'
testing_file = '../../../test.p'
train = loaded_pickled_data(training_file)
valid = loaded_pickled_data(validation_file)
test = loaded_pickled_data(testing_file)
X_train_original, y_train_original = train['features'], train['labels']
X_valid_original, y_valid_original = valid['features'], valid['labels']
X_test_original, y_test_original = test['features'], test['labels']
assert(len(X_train_original) == len(y_train_original))
assert(len(X_valid_original) == len(y_valid_original))
assert(len(X_test_original) == len(y_test_original))
# number of training examples
n_train = len(X_train_original)
# Number of validation examples
n_validation = len(X_valid_original)
# Number of testing examples.
n_test = len(X_test_original)
# What's the shape of an traffic sign image?
image_shape = X_train_original.shape[1:]
# there are few ways to print tuple
print('Original training dataset shape is: {}'.format(X_train_original.shape))
print('Original validation dataset shape is: ', X_valid_original.shape)
print('Original test dataset shape is: ', X_test_original.shape)
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of test examples =", n_test)
print("Image data shape =", image_shape)
Explanation: Loading data
End of explanation
# Plotting 10 random traffic sign images
def plot_10_random_images(features, labels):
fig, axes = plt.subplots(2, 5, figsize=(13, 6))
fig.subplots_adjust(left=None, right=None, hspace = .02, wspace=0.1)
for i in range(2):
for j in range(5):
randomindex = random.randint(0, len(features) - 1)
axes[i,j].axis('off')
axes[i,j].imshow(features[randomindex])
axes[i,j].set_title(labels[randomindex])
# How many unique classes/labels there are in the dataset.
classes = pd.read_csv('../signnames.csv')
print("Number of classes =", len(classes))
sign_names = classes.values[:,1]
# class_indices: position where class appear, class_counts: number of count of class
sign_classes, class_indices, class_counts = np.unique(y_train_original, return_index = True, return_counts = True)
# longest name of sign names
longest_sign_name = max(len(name) for name in sign_names)
for c, c_index, c_count in zip(sign_classes, class_indices, class_counts):
print ("Class %i: %-*s %s samples" % (c, longest_sign_name, sign_names[c], str(c_count)))
plot_10_random_images(X_train_original, y_train_original)
Explanation: Include an exploratory visualization of the dataset
I did not spend much time on this. I first printed out the distribution of the samples over the 43 label classes, in which the 'Speed limit (50km/h)' sign has the most samples (2010), followed by the 'Speed limit (30km/h)' sign (1980) and the 'Yield' sign (1920).
I have also plotted 10 random images, which can be seen in the notebook.
End of explanation
def grayscale(input_image):
output = []
for i in range(len(input_image)):
img = cv2.cvtColor(input_image[i], cv2.COLOR_RGB2GRAY)
output.append(img)
return output
def normalization(input_image):
    """
    Normalization to the pre-defined interval [-1, 1].
    From the forum: https://discussions.udacity.com/t/accuracy-is-not-going-over-75-80/314938/22
    some said that using the decimal 128.0 makes a huge difference
    """
output = []
for i in range(len(input_image)):
img = np.array((input_image[i] - 128.0) / (128.0), dtype=np.float32)
output.append(img)
return output
def get_weights(input_shape):
return tf.Variable(tf.truncated_normal(shape = input_shape, mean = 0.0, stddev = 0.1))
def get_biases(length):
return tf.Variable(tf.zeros(length))
#NOTE: number of filter is output channel
def convolution_layer(input_image,
filter_size,
input_channel,
number_of_filters,
padding_choice = 'VALID'):
shape = [filter_size, filter_size, input_channel, number_of_filters]
weights = get_weights(input_shape = shape)
biases = get_biases(length = number_of_filters)
layer = tf.nn.conv2d(input = input_image,
filter = weights,
strides = [1, 1, 1, 1],
padding = padding_choice) + biases
return layer
def activation_relu(input_layer):
return tf.nn.relu(input_layer)
def max_spooling(input_layer, padding_choice):
return tf.nn.max_pool(value = input_layer,
ksize = [1, 2, 2, 1],
strides = [1, 2, 2, 1],
padding= padding_choice)
def flatten_layer(input_layer):
return flatten(input_layer)
def fully_connected_layer(input_layer,
number_of_inputs,
number_of_outputs):
weights = get_weights(input_shape = [number_of_inputs, number_of_outputs])
biases = get_biases(length = number_of_outputs)
layer = tf.matmul(input_layer, weights) + biases
return layer
def dropout_layer(layer, keep_prob):
layer = tf.nn.dropout(layer, keep_prob)
return layer
Explanation: Design and Train Classifier Model
Image Data Pre-Processing
As a first step, I decided to convert the images to grayscale, to reduce them to a 1-channel image and remove the effect of color.
Next, I normalized the data so that it has mean zero and equal variance, i.e. (pixel - 128.0) / 128.0
Defining Helper Functions
End of explanation
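As a quick numeric check of the normalization formula (not shown in the original write-up), a pixel of 0 maps to -1.0, 128 maps to 0.0 and 255 maps to roughly 0.99, so the values end up centred around zero in [-1, 1]:
```
for pixel in (0, 128, 255):
    print(pixel, (pixel - 128.0) / 128.0)
```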
# Pre-processing data
def preprocess_data(input_image):
gray_image = grayscale(input_image)
output = normalization(gray_image)
output = np.expand_dims(output, 3)
return output
X_train_final = preprocess_data(X_train_original)
X_valid_final = preprocess_data(X_valid_original)
print(X_train_final[0].shape)
# Model design
def Lenet_5_model(input_image):
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x10.
conv1 = convolution_layer(input_image, 5, 1, 10, 'VALID')
conv1 = activation_relu(conv1)
# Layer 2: Convolutional. Input = 28x28x10. Output = 24x24x20.
conv2 = convolution_layer(conv1, 5, 10, 20, 'VALID')
conv2 = activation_relu(conv2)
# drop-out
conv2 = dropout_layer(conv2, keep_prob)
# Layer 3: Convolutional. Input = 24x24x20. Output = 20x20x30.
conv3 = convolution_layer(conv2, 5, 20, 30, 'VALID')
conv3 = activation_relu(conv3)
# drop-out
conv3 = dropout_layer(conv3, keep_prob)
# Layer 4: Convolutional. Input = 20x20x30. Output = 16x16x40.
conv4 = convolution_layer(conv3, 5, 30, 40, 'VALID')
conv4 = tf.nn.relu(conv4)
# max_pool: output = 8x8x40
conv4 = max_spooling(conv4, 'VALID')
# drop-out
conv4 = dropout_layer(conv4, keep_prob)
# Flatten. Input = 8x8x40. Output = 2560.
fc0 = flatten_layer(conv4)
# Layer 5: Fully Connected. Input = 2560. Output = 1280.
fc1 = fully_connected_layer(fc0, 2560, 1280)
fc1 = tf.nn.relu(fc1)
# Layer 6: Fully Connected. Input = 1280. Output = 640.
fc2 = fully_connected_layer(fc1, 1280, 640)
fc2 = tf.nn.relu(fc2)
# Layer 7: Fully Connected. Input = 640. Output = 320
fc3 = fully_connected_layer(fc2, 640, 320)
fc3 = tf.nn.relu(fc3)
# Layer 8: Fully Connected. Input = 320. Output = 160
fc4 = fully_connected_layer(fc3, 320, 160)
fc4 = tf.nn.relu(fc4)
# Layer 9: Fully Connected. Input = 160. Output = 80
fc5 = fully_connected_layer(fc4, 160, 80)
fc5 = tf.nn.relu(fc5)
# Layer 10: Fully Connected. Input = 80. Output = 43
logits = fully_connected_layer(fc5, 80, 43)
return logits
# Evaluation function
def evaluate(X_data, y_data, my_keep_prob):
num_examples = len(X_data)
total_accuracy = 0
total_loss = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset : offset + BATCH_SIZE], y_data[offset : offset + BATCH_SIZE]
loss, accuracy = sess.run([loss_operation, accuracy_operation], feed_dict={x: batch_x,
y: batch_y,
keep_prob: my_keep_prob})
total_accuracy += (accuracy * len(batch_x))
total_loss += (loss * len(batch_x))
return total_loss / num_examples, total_accuracy / num_examples
Explanation: First Model Architecture
My first attempt was to try the famous LeNet-5 model, as recommended by Udacity, because convolutional models are considered to perform best on object recognition:
pre-processing pipeline
Grayscale
Normalization
Model design:
The original Lenet-5 model
|Layer |type |Input |output |
|--------|--------|--------|--------|
|1 |conv |32x32x1 |28x28x6 |
| |relu | | |
| |max_pool|28x28x6 |14x14x6 |
|2 |conv |14x14x6 |10x10x16|
| |relu | | |
| |max_pool|10x10x16|5x5x16 |
| |flatten |5x5x16 |400 |
|3 |linear |400 |120 |
| |relu | | |
|4 |linear |120 |84 |
| |relu | | |
|5 |linear |84 |43 |
Second Model Architecture
The first attempt only gave me 86% validation accuracy after 28 epochs. Validation loss is way higher than training loss and they converge at different values. This is a strong signal of overfitting.
There are a few techniques to battle overfitting:
- Increase the training dataset
- Regularization, i.e. dropout
- Reduce the complexity of the training model
The complexity of the original LeNet-5 is already low, so I chose to apply dropout to every layer of the model
Second attempt summary:
```
1. Pre-processing pipeline:
- Grayscale
- Normalization
2a. Model design:
- Original LeNet-5 model with dropout of 0.5 applied to every layer
After running for 300 epochs, my validation accuracy reached 89% and there is no signal of overfitting. I decided to increase the complexity of the model to improve the accuracy.
2b. Model re-design
|Layer |type |Input |output |
|--------|--------|--------|--------|
|1 |conv |32x32x1 |28x28x10|
| |relu | | |
| |dropout | | |
|2 |conv |28x28x10|24x24x20|
| |relu | | |
| |dropout | | |
|3 |conv |24x24x10|20x20x30|
| |relu | | |
| |dropout | | |
|4 |conv |20x20x30|16x16x40|
| |relu | | |
| |max_pool|16x16x40|8x8x40 |
| |dropout | | |
| |flatten |8x8x40 |2560 |
|5 |linear |2560 |1280 |
| |relu | | |
|6 |linear |1280 |640 |
| |relu | | |
|7 |linear |640 |320 |
| |relu | | |
|8 |linear |320 |160 |
| |relu | | |
|9 |linear |160 |80 |
| |relu | | |
|10 |linear |80 |43 |
```
End of explanation
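The first remedy listed above — increasing the training dataset — is not implemented in this notebook. A minimal sketch of what simple augmentation could look like with OpenCV (the jitter ranges here are arbitrary choices, not values used by the author):
```
def random_jitter(img, max_angle=10, max_shift=2):
    # small random rotation and translation around the image centre
    rows, cols = img.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, 1.0)
    M[0, 2] += dx
    M[1, 2] += dy
    return cv2.warpAffine(img, M, (cols, rows))
```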
# Parameters setting
EPOCHS = 40
BATCH_SIZE = 128
LEARNING_RATE = 0.0001
'''Training and save'''
keep_prob = tf.placeholder(tf.float32)
# x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels.
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
# convert to 1 hot-coded data
one_hot_y = tf.one_hot(y, 43)
logits = Lenet_5_model(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = LEARNING_RATE)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
train_loss_history = []
valid_loss_history = []
#Start running tensor flow
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_final)
print("Training...")
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train_final, y_train_original)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
valid_loss, valid_accuracy = evaluate(X_valid_final, y_valid_original, 1.0)
valid_loss_history.append(valid_loss)
train_loss, train_accuracy = evaluate(X_train_final, y_train_original, 1.0)
train_loss_history.append(train_loss)
print("EPOCH {} ...".format(i + 1))
print("Training Accuracy = {:.3f}".format(train_accuracy))
print("Validation Accuracy = {:.3f}".format(valid_accuracy))
print("Training Loss = {:.3f}".format(train_loss))
print("Validation Loss = {:.3f}".format(valid_loss))
saver.save(sess, '../../../lenet')
print("Model saved")
loss_plot = plt.subplot(2,1,1)
loss_plot.set_title('Loss')
loss_plot.plot(train_loss_history, 'r', label='Training Loss')
loss_plot.plot(valid_loss_history, 'b', label='Validation Loss')
loss_plot.set_xlim([0, EPOCHS])
loss_plot.legend(loc=4)
Explanation: Second Model Training
End of explanation
X_test_final = preprocess_data(X_test_original)
with tf.Session() as sess:
saver.restore(sess, '../../../lenet')
test_loss, test_accuracy = evaluate(X_test_final, y_test_original, 1.0)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Model evaluation
End of explanation
from numpy import newaxis
import os
TEST_IMAGES = os.listdir('../new_images')
fig, axes = plt.subplots(1, 5, figsize=(13, 6))
fig.subplots_adjust(left=None, right=None, hspace = .02, wspace=0.1)
sample_list = []
i = 0
for img in TEST_IMAGES:
img = '../new_images/' + img
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
sample_list.append(img)
axes[i].axis('off')
axes[i].imshow(img)
i += 1
print(np.shape(sample_list))
sample_list = preprocess_data(sample_list)
print(sample_list.shape)
Explanation: Test a Model on New Images
Testing on five new German Traffic Signs
To give more insight into how the model is working, we will test pictures of German traffic signs taken from the web and use the model to predict the traffic sign type. The file ../signnames.csv contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
# img1: a stop sign
# img2: a yield sign
# img3: a road work sign
# img4: a left turn ahead sign
# img5: a 60 km/h sign
image_labels = [14, 13, 25, 34, 3]
with tf.Session() as sess:
saver.restore(sess, '../../../lenet')
test_loss, test_accuracy = evaluate(sample_list, image_labels, 1.0)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Analyze Performance of 5 test images
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=5)
with tf.Session() as sess:
saver.restore(sess, '../../../lenet')
my_softmax_logits = sess.run(softmax_logits, feed_dict={x: sample_list, keep_prob: 1.0})
predicts = sess.run(top_k, feed_dict={x: sample_list, keep_prob: 1.0})
for i in range(len(predicts[0])):
print('Image', i, 'probabilities:', predicts[0][i], '\n and predicted classes:', predicts[1][i])
Explanation: Model's predictions on new traffic signs
Here are the results of the prediction:
| Image | Prediction |
|:---------------------------:|:-----------------------------:|
| Stop Sign | No vehicle |
| Yield sign | Yield sign |
| Road work sign | General caution |
| Left turn sign | Keep right |
| Speed limit (60km/h)          | No passing for vehicles over 3.5 metric tons |
The model was able to correctly guess 1 of the 5 traffic signs, which gives an accuracy of 20%. This does not correspond to the accuracy on the test set.
Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Conclusion
For the first image, the model's first choice was no vehicle sign (0.93) while the correct sign was third rank (0.026)
| Prediction | Probability |
|:---------------------------:|:-----------------------------:|
| Road work | 0.93 |
| Traffic signals | 0.038 |
| Stop sign | 0.026 |
| Keep right | 0.00165 |
| Bumpy road | 0.0013 |
The model predicted correctly the second image - the Yield sign (almost 1)
| Prediction | Probability |
|:---------------------------:|:-----------------------------:|
| Yield Sign | ~1 |
| Children crossing | ~0 |
| End of all speed and passing limits | ~0 |
| Speed limit (100km/h) | ~0 |
| Priority road | ~0 |
Other images can be seen from the notebook
Overall, the current model is uncertain, as it does not predict well on new images. I'm still not sure of the reason.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
End of explanation |
11,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sequana_coverage test case example (Virus)
This notebook creates the BED file provided in
- https
Step1: Download the genbank and genome reference
Method1
Step2: Download the FastQ
Step3: Map the reads
Step4: Convert the BAM to BED
Step5: Using Sequana library to detect ROI in the coverage data | Python Code:
%pylab inline
matplotlib.rcParams['figure.figsize'] = [10,7]
Explanation: sequana_coverage test case example (Virus)
This notebook creates the BED file provided in
- https://github.com/sequana/resources/tree/master/coverage and
- https://www.synapse.org/#!Synapse:syn10638358/wiki/465309
WARNING: you need an account on synapse to get the FastQ files.
If you just want to test the BED file, download it directly:
wget https://github.com/sequana/resources/raw/master/coverage/JB409847.bed.bz2
and jump to the section Using-Sequana-library-to-detect-ROI-in-the-coverage-data
otherwise, first download the FastQ from Synapse, its reference genome and its genbank annotation. Then, map the reads using BWA to get a BAM file. The BAM file is converted to a BED, which is going to be one input file to our analysis. Finally, we use the coverage tool from Sequana project (i) with the standalone (sequana_coverage) and (ii) the Python library to analyse the BED file.
Versions used:
- sequana 0.7.0
- bwa mem 0.7.15
- bedtools 2.26.0
- samtools 1.5
- synapseclient 1.7.2
End of explanation
!sequana_coverage --download-reference JB409847 --download-genbank JB409847
Explanation: Download the genbank and genome reference
Method1: use sequana_coverage to download from ENA website
http://www.ebi.ac.uk/ena/data/view/JB409847
End of explanation
# to install synapseclient, use
# pip install synapseclient
import synapseclient
l = synapseclient.login()
_ = l.get("syn10638367", downloadLocation=".", ifcollision="overwrite.local")
Explanation: Download the FastQ
End of explanation
!sequana_mapping --file1 JB409847_R1_clean.fastq.gz --reference JB409847.fa
Explanation: Map the reads
End of explanation
!bedtools genomecov -d -ibam JB409847.fa.sorted.bam> JB409847.bed
Explanation: Convert the BAM to BED
End of explanation
from sequana import GenomeCov
b = GenomeCov("JB409847.bed", "JB409847.gbk")
chromosome = b.chr_list[0]
chromosome.running_median(4001, circular=True)
chromosome.compute_zscore(k=2)
# since version 0.6.4 you can replace the 2 previous lines by
# chromosome.run(4001, k=2, circular=True)
chromosome.plot_coverage()
Explanation: Using Sequana library to detect ROI in the coverage data
End of explanation |
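The cell above stops at the coverage plot; extracting the flagged regions themselves depends on sequana's API. As an assumption about that API (the attribute and method names below are not confirmed by this notebook — check the installed sequana version), something along these lines should list the regions of interest:
```
# assumption: ChromosomeCov exposes per-base results as a DataFrame and a get_rois() helper
print(chromosome.df.head())
rois = chromosome.get_rois()
print(rois)
```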
11,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Variables,
arithmetic operators
~~~
+ - * / ** % //
~~~
assignment
~~~
=
~~~
assign and increment
~~~
+=
~~~
Step1: Flow Control, Loops
Step2: Multiplication Table
2017-09-19
Step3: Print the Fibionacci Sequence
Step4: 2017-09-26
Simulate gambling using a coin flip
Model
Step5: Estimate the probability of ruin in betting using a fair coin flip
using a Monte Carlo Simulation
Step6: Start with an asset of price $S$ at $t=0$.
Assume that the price is evolving according to the brownian motion with volatility $\sigma^2$
Estimate the probability that at time $T$, the price is higher than $K$.
Step7: Lists, Tuples and Dictionaries
List
Step8: list comprehension
Step9: Tuples
Step10: Option Pricing
Vanilla
\begin{eqnarray}
W_T & \sim & \mathcal{N}(0, T) \
S_T & = & S_0 \exp\left((r-\frac{1}2\sigma^2)T + \sigma W_T \right) \
\end{eqnarray}
European Call
\begin{eqnarray}
U_T & = & \exp(-rT) \left\langle (S_T - K)^+ \right\rangle
\end{eqnarray}
European Put
\begin{eqnarray}
Y_T & = & \exp(-rT) \left\langle (K - S_T)^+ \right\rangle
\end{eqnarray}
Asian
\begin{eqnarray}
W_i & \sim & \mathcal{N}(0, T/N) \
S_i & = & S_{i-1} \exp\left((r-\frac{1}2\sigma^2)(T/N) + \sigma W_i \right) \
\bar{S} & = & \frac{1}{N}\sum_i S_i
\end{eqnarray}
Step11: Importance sampling
Step12: Target Density
Step13: Dictionaries
json | Python Code:
a = [1,2,5]
b = [4,7,6]
print('a =',a)
print('b =',b)
a = b
print('a =',a)
print('b =',b)
b = [1,1,1]
print('a =',a)
print('b =',b)
a = 2
b = 7.1
c = 4
d = a + b * c
print(a**b)
a = 7
b = 2
print(a//b)
a = 5
# a = a + 2
a += 2
a -= 3
a *=4
a //= 3
print(a)
x = -3
fx = 3*x**2 + 4*x - 7
print(x, fx)
x = -2
fx = 3*x**2 + 4*x - 7
print(x, fx)
x = -1
fx = 3*x**2 + 4*x - 7
print(x, fx)
Explanation: Introduction
Variables,
arithmetic operators
~~~
+ - * / ** % //
~~~
assignment
~~~
=
~~~
assign and increment
~~~
+=
~~~
End of explanation
x = -3
for i in range(13):
fx = 3*x**2 + 4*x - 7
print(x, fx)
x += 0.5
for i in range(10):
for j in range(10):
print(i+1,' x ', j+1, '=', (i+1)*(j+1))
Explanation: Flow Control, Loops
End of explanation
for i in range(20):
for j in range(20):
print((i+1)*(j+1), end=' ')
print()
Explanation: Multiplication Table
2017-09-19
End of explanation
x_prevprev = 1
x_prev = 1
print(x_prevprev)
print(x_prev)
for i in range(10):
x = x_prevprev + x_prev
print(x)
x_prevprev = x_prev
x_prev = x
Explanation: Print the Fibonacci Sequence
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
MAX_TURN = 100
INITIAL_CAPITAL = 10.
X = np.zeros(MAX_TURN)
capital = INITIAL_CAPITAL
for turn in range(MAX_TURN):
gain = 2*np.random.randint(high=2, low=0)-1
capital += gain
X[turn] = capital
if capital==0:
print(turn+1)
break
plt.plot(X)
plt.show()
Explanation: 2017-09-26
Simulate gambling using a coin flip
Model:
\begin{eqnarray}
e_k & \in & {-1, +1} \
x_k & = & x_{k-1} + e_k
\end{eqnarray}
End of explanation
import numpy as np
EPOCHS = 1000
MAX_TURN = 100
INITIAL_CAPITAL = 10
lost = 0
for epoch in range(EPOCHS):
capital = INITIAL_CAPITAL
for turn in range(MAX_TURN):
gain = 2*np.random.randint(high=2, low=0)-1
capital += gain
if capital==0:
lost+=1
break
print("{0:.2f}".format(lost/EPOCHS))
import numpy as np
# This is a comment
EPOCHS = 10000
MAX_TURN = 100
INITIAL_CAPITAL = 10
lost = 0
sigma = 1
for epoch in range(EPOCHS):
capital = INITIAL_CAPITAL
for turn in range(MAX_TURN):
gain = sigma*np.random.randn()
capital += gain
if capital<=0:
lost+=1
break
print("{0:.5f}".format(lost/EPOCHS))
np.random.randn()
Explanation: Estimate the probability of ruin in betting using a fair coin flip
using a Monte Carlo Simulation
End of explanation
S = 10
K = 10
T = 100
sigma = 1
EPOCH = 100000
count_higher_K = 0
for e in range(EPOCH):
gain = np.random.randn()*T**0.5*sigma
if S + gain >= K:
count_higher_K += 1
print("{0:.5f}".format(count_higher_K/EPOCH))
Explanation: Start with an asset of price $S$ at $t=0$.
Assume that the price is evolving according to the brownian motion with volatility $\sigma^2$
Estimate the probability that at time $T$, the price is higher than $K$.
End of explanation
lst = [1,2,3,7,'abc']
for x in lst:
print(x)
lst.append('tail')
lst
lst.clear()
lst
u = lst.copy()
lst
u[0] = 10
lst.count('tail')
lst.extend(range(3))
lst.index('abc')
lst.insert(3,8)
lst
lst = [1,'abc',5,'tail','tail']
lst.pop()
lst.remove(5)
lst.reverse()
lst
lst.sort()
lst = ['hjk','abc','zzs']
lst.sort()
lst
lst = []
for x in range(11):
# lst.append(x**2)
lst.insert(0,x**2)
lst
Explanation: Lists, Tuples and Dictionaries
List
End of explanation
lst = [x**2 for x in reversed(range(11))]
lst
lst = [[x,x**2] for x in range(11)]
lst
lst = []
N = 10+2
for i in range(1,N):
u = [x**2 for x in range(i)]
lst.append(u)
lst
lst
lst.append([0,1,2])
lst = [[x**2 for x in range(u)] for u in range(1,12)]
lst
lst = []
lst2 = []
for x in range(11):
lst.append(x**2)
lst2.insert(x,lst.copy())
lst2
lst = []
lst2 = []
for x in range(11):
lst.append(x**2)
lst2.append(lst)
lst2
fb = [1, 1]
N = 20
for i in range(N-2):
fb.append(fb[-1]+fb[-2])
fb
a = [1,2,3]
b = a
b[0] = 16
a
Explanation: list comprehension
End of explanation
u = (1,4,5)
u[0] = 7
person = {'name': 'Taylan', 'surname': 'Cemgil', 'age': 48}
print(person['name'])
print(person['surname'])
print(person['age'])
30*750/5
Explanation: Tuples
End of explanation
import numpy as np
r = 0.05
S0 = 100
K = 110
sigma = 0.1
T = 1
np.random.randn()
N = 100000
C = 0
P = 0
for i in range(N):
WT = np.random.randn()*np.sqrt(T)
ST = S0*np.exp((r-0.5*sigma**2)*T + sigma*WT)
C += max(0, ST - K)
P += max(0, K - ST)
call_price = (C/N)*np.exp(-r*T)
put_price = (P/N)*np.exp(-r*T)
print('Call price = ', call_price)
print('Put price = ', put_price)
import numpy as np
r = 0.05
S0 = 100
K = 110
sigma = 0.1
T = 1
np.random.randn()
EPOCH = 100000
N = 12
C = 0
P = 0
for i in range(EPOCH):
S_bar = 0
S = S0
for j in range(N):
WT = np.random.randn()*np.sqrt(T/N)
S = S*np.exp((r-0.5*sigma**2)*(T/N) + sigma*WT)
S_bar += S
S_bar /= N
C += max(0, S_bar - K)
P += max(0, K - S_bar)
call_price = (C/EPOCH)*np.exp(-r*T)
put_price = (P/EPOCH)*np.exp(-r*T)
print('Asian')
print('Call price = ', call_price)
print('Put price = ', put_price)
L = [ 3,2,1,0,4,7,9 ]
## Create a list with even numbers only
L2 = []
for x in range(len(L)):
if L[x]%2==0:
L2.append(L[x])
else:
pass
print(L2)
## Create a list with even numbers only
L2 = []
for e in L:
if e%2==0:
L2.append(e)
else:
pass
L2 = list(filter(lambda x: x%2==0, L))
print(L2)
def is_even(x):
return x%2==0
L2 = list(filter(is_even, L))
print(L2)
for x in filter(is_even, L):
print(x, end=' ')
L = [ 3,2,1,0,4,7,9 ]
S = []
for x in L:
if x%2==0:
S.append(x)
for x in L:
if x%2==1:
S.append(x)
S
L = [ 3,2,1,0,4,7,9 ]
S = []
for x in L:
if x%2==0:
S.insert(0,x)
else:
S.append(x)
S
list(filter(is_even, L))+list(filter(lambda x: x%2==1, L))
import numpy as np
U = np.array(L)
U[U%2==0]
from functools import reduce
reduce(lambda x,y: x+y, filter(is_even, L))/len(list(filter(is_even, L)))
L
reduce(lambda x,y: x*y, L)
list(map(lambda x: x**2+1, L))
L*2
Explanation: Option Pricing
Vanilla
\begin{eqnarray}
W_T & \sim & \mathcal{N}(0, T) \
S_T & = & S_0 \exp\left((r-\frac{1}2\sigma^2)T + \sigma W_T \right) \
\end{eqnarray}
European Call
\begin{eqnarray}
U_T & = & \exp(-rT) \left\langle (S_T - K)^+ \right\rangle
\end{eqnarray}
European Put
\begin{eqnarray}
Y_T & = & \exp(-rT) \left\langle (K - S_T)^+ \right\rangle
\end{eqnarray}
Asian
\begin{eqnarray}
W_i & \sim & \mathcal{N}(0, T/N) \
S_i & = & S_{i-1} \exp\left((r-\frac{1}2\sigma^2)(T/N) + \sigma W_i \right) \
\bar{S} & = & \frac{1}{N}\sum_i S_i
\end{eqnarray}
End of explanation
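As a cross-check for the vanilla European prices estimated above, the closed-form Black–Scholes values for the same parameters can be computed directly (a standard formula, shown here as an added check):
```
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100., 110., 0.05, 0.1, 1.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
put = K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)
print('Black-Scholes call =', call)
print('Black-Scholes put =', put)
```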
import numpy as np
N = 100
Z = 3
rho = 1
s = 0
for i in range(N):
x = np.random.exponential(scale=1./rho)
if x>Z:
s+=1
print(s/N)
print(np.exp(-rho*Z))
Explanation: Importance sampling
End of explanation
import numpy as np
N = 100
Z = 20
rho = 1
r = 0.01
s = 0
for i in range(N):
x = np.random.exponential(scale=1./r)
W = rho/r*np.exp(-(rho-r)*x)
if x>Z:
s+= W
print(s/N)
print(np.exp(-rho*Z))
Explanation: Target Density:
$$
p(x) = \rho \exp(-\rho x)
$$
Proposal
$$
q(x) = r \exp(-r x)
$$
Importance sampling identity
$$
E_p{\phi(x)} = E_q{\frac{p(x)}{q(x)}\phi(x)}
$$
$$
W(x) = \frac{p(x)}{q(x)} = \frac{\rho \exp(-\rho x)}{r \exp(-r x)} = \frac{\rho}{r} \exp(-(\rho-r)x)
$$
End of explanation
person = {'name': 'Taylan', 'surname': 'Cemgil', 'age': 48}
person2 = {'name': 'Ahmet', 'surname': 'Ahmetoglu', 'age': 12}
person3 = {'name': 'Cetin', 'surname': 'Cetin', 'age': 62}
print(person['name'])
print(person['surname'])
print(person['age'])
people = [person, person2, person3]
for x in people[0:2]:
print(x['name'], x['surname'])
#txt = 'JSON (JavaScript Object Notation) is a lightweight data-interchange format.'
txt = 'JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999.'
counts = {}
for i in range(len(txt)):
key = txt[i]
if key in counts.keys():
counts[key] += 1
else:
counts[key] = 1
counts
counts = {}
for i in range(len(txt)-1):
key = txt[i]+txt[i+1]
if key in counts.keys():
counts[key] += 1
else:
counts[key] = 1
counts
txt = 'babababcccbcbcbcbcbcaabababababaab'
counts = {'a':0, 'b':0, 'c': 0}
for i in range(len(txt)):
counts[txt[i]]+=1
counts
counts['d'] = 0
temp = txt
remove_list = ['(',')','.',',','-']
while True:
words = temp.partition(' ')
if words[2] == '':
break
w = words[0]
for l in remove_list:
w = w.replace(l,'')
if len(w)>0:
print(w)
temp = words[2]
txt2 = ''
for i in range(len(txt)):
txt2 = txt2+txt[i]*5
print(txt2)
a = (3,'abc',3.4)
a[2] = 5
txt.replace('.','').replace('(','').replace(')','').replace(',','').replace('-','').split()
for w in txt.split():
print(w)
txt.maketrans
Explanation: Dictionaries
json
End of explanation |
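The json heading above has no accompanying code here; a minimal illustration with the standard-library json module and a dictionary like the person records used earlier:
```
import json

person = {'name': 'Taylan', 'surname': 'Cemgil', 'age': 48}
encoded = json.dumps(person)     # dict -> JSON string
print(encoded)
decoded = json.loads(encoded)    # JSON string -> dict
print(decoded['surname'])
```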
11,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
conda install -y numpy
conda install -y scipy
conda install -y matplotlib
conda install -y rasterio
pip install lmdb
conda install -y caffe
conda install -y protobuf==3.0.0b3
pip install tdqm
conda install -y fiona
conda install -y shapely
Step1: Reading in the ground truth locations
Step2: Plotting
This is how we can plot rasters with georeferenced vectors on top
First,
python
import rasterio.plot
Then you can do this | Python Code:
import logging
import os
import numpy as np
import rasterio as rio
import lmdb
from caffe.proto.caffe_pb2 import Datum
import caffe.io
from rasterio._io import RasterReader
from glob import glob
sources =glob('/home/shared/srp/try2/*.tif')
print len(sources)
pos_regions = rasterio.open(r'/home/liux13/Desktop/tmp/pos_regions-epsg-26949.tif')
pos_mask = pos_regions.read(1) > 0
print pos_mask.shape, pos_mask.dtype
imshow(pos_mask[4000:6000, 4000:6000], extent=(4000,5000, 4000,5000), cmap=cm.binary)
colorbar();
from tqdm import tnrange, tqdm_notebook
points = []
for i in tnrange(len(sources)):
ds = rasterio.open(sources[i])
data = ds.read()
mask = data.sum(0) > 1
indices = np.nonzero(mask)
xy = np.c_[ds.affine*(indices[1], indices[0])]
pos_mask_indices = np.c_[~pos_regions.affine*xy.T].T.round().astype(int)
pos_mask_indices = np.roll(pos_mask_indices, 1, 0)
within_pos_mask = (pos_mask_indices[0] >= 0)
within_pos_mask = (pos_mask_indices[1] >= 0)
within_pos_mask &= (pos_mask_indices[0] < pos_mask.shape[0])
within_pos_mask &= (pos_mask_indices[1] < pos_mask.shape[1])
pos_mask_indices = pos_mask_indices[:, within_pos_mask]
negative_mask = pos_mask[pos_mask_indices[0], pos_mask_indices[1]] == False
xy = xy[within_pos_mask,:][negative_mask, :]
points.append(xy)
points= np.concatenate(points)
print "There are {:,d} negative examples".format(len(points))
import fiona
import shapely.geometry
vectors = fiona.open(r'/home/liux13/Desktop/tmp/boxes_section11.shp')
shapes = [shapely.geometry.shape(f['geometry']) for f in vectors if f['geometry'] is not None]
centers = np.row_stack([np.r_[s.centroid.xy] for s in shapes])
print "There are {:,d} positive examples".format(len(centers))
def get_angle(shape):
verts = np.column_stack(s.xy)
verts
dx, dy = (verts[2]-verts[1])
angle = np.degrees(np.arctan2(dy, dx))
return angle
angles = np.r_[[get_angle(s) for s in shapes]]
print angles.astype(int)
np.savez('/home/shared/srp/sample_locations_epsg26949.npz', neg_xy=points, pos_xy=centers, pos_angles=angles)
Explanation: Setup
conda install -y numpy
conda install -y scipy
conda install -y matplotlib
conda install -y rasterio
pip install lmdb
conda install -y caffe
conda install -y protobuf==3.0.0b3
pip install tdqm
conda install -y fiona
conda install -y shapely
End of explanation
gt = np.load('/home/shared/srp/sample_locations_epsg26949.npz')
print gt.keys()
pos_xy = gt['pos_xy']
print pos_xy.shape
Explanation: Reading in the ground truth locations
End of explanation
np.random.randn(2)
Explanation: Plotting
This is how we can plot rasters with goerefernced vectors on top
First,
python
import rasterio.plot
Then you can do this:
python
figsize(15,15)
rasterio.plot.show( (pos_regions, 1), cmap=cm.binary_r, ax=gca())
xlim(232440.0, 232510.0)
ylim(252140.0, 252210.0)
scatter(xy[:,0], xy[:,1], lw=0, s=1)
scatter(pxy[:,0], pxy[:,1], lw=0, c='yellow')
The important part is the second line, where I pass a tuple with the datsaet and the band.
Also important is I pass the current axis in to rasterio.plot.show.
End of explanation |
11,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The 8-Queens Puzzle
We represent solutions to the 8-queens puzzle as tuples of the form
$$ (r_0, \cdots, r_7), $$
where $r_i$ is the row of the queen in column $i$. We start counting from $0$ because this is the way it is done in Python.
In general, states are defined as tuples of the form
$$ s = (r_0, \cdots, r_{c-1}). $$
In the state $s$, there are $c$ queens on the board.
The state start is the empty tuple.
Step1: The function next_states takes a tuple $S$ representing a state
and tries to extend this tuple by placing an additional queen on the board. It returns the set of all such states that do not lead to a conflict. It might be easier if you make use of an auxiliary function in order to implement the function next_states.
Step2: The global variable Solutions is a list that collect all solutions to the 8-Queens puzzle.
Step3: Implement a version of depth first search that is able to find all solutions of the 8-queens puzzle.
We don't need a goal here as we are going to compute all solutions.
Step4: Visualization
The following code assumes that you have installed python-chess. After activating the appropriate
Python environment, this can be done using the following command
Step5: This function takes a state, which is represented as tuple of integers and displays it as a chess board with n queens | Python Code:
start = ()
Explanation: The 8-Queens Puzzle
We represent solutions to the 8-queens puzzle as tuples of the form
$$ (r_0, \cdots, r_7), $$
where $r_i$ is the row of the queen in column $i$. We start counting from $0$ because this is the way it is done in Python.
In general, states are defined as tuples of the form
$$ s = (r_0, \cdots, r_{c-1}). $$
In the state $s$, there are $c$ queens on the board.
The state start is the empty tuple.
End of explanation
def next_states(S):
"your code here"
Explanation: The function next_states takes a tuple $S$ representing a state
and tries to extend this tuple by placing an additional queen on the board. It returns the set of all such states that do not lead to a conflict. It might be easier if you make use of an auxiliary function in order to implement the function next_states.
End of explanation
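One possible sketch for this exercise (only one of many ways to implement it, using a small helper to test for conflicts):
```
def in_conflict(state, row):
    col = len(state)                      # column where the new queen would go
    for c, r in enumerate(state):
        if r == row:                      # same row
            return True
        if abs(r - row) == abs(c - col):  # same diagonal
            return True
    return False

def next_states(S):
    return { S + (row,) for row in range(8) if not in_conflict(S, row) }
```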
Solutions = []
Explanation: The global variable Solutions is a list that collect all solutions to the 8-Queens puzzle.
End of explanation
def dfs(state, next_states):
"your code here"
%%time
dfs(start, next_states)
len(Solutions)
Explanation: Implement a version of depth first search that is able to find all solutions of the 8-queens puzzle.
We don't need a goal here as we are going to compute all solutions.
End of explanation
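A possible sketch of the exhaustive search for this exercise (again, just one way to fill in the template):
```
def dfs(state, next_states):
    if len(state) == 8:            # all eight queens placed: record the solution
        Solutions.append(state)
        return
    for ns in next_states(state):
        dfs(ns, next_states)
```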
import chess
Explanation: Visualization
The following code assumes that you have installed python-chess. After activating the appropriate
Python environment, this can be done using the following command:
pip install python-chess
End of explanation
from IPython.core.display import display
def display_state(state):
board = chess.Board(None) # create empty chess board
queen = chess.Piece(chess.QUEEN, True)
for col in range(len(state)):
row = state[col]
field_number = row * 8 + col
board.set_piece_at(field_number, queen)
display(board)
# Print the first solution
display_state(Solutions[0])
Explanation: This function takes a state, which is represented as tuple of integers and displays it as a chess board with n queens
End of explanation |
11,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Goal
Assessing the error in taxon abundances when using qPCR data + 16S sequence relative abundances to determine taxon proportional absolute abundances
Init
Step2: Making dataset
Step3: Simulating qPCR data | Python Code:
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
def neg_binom_err(m, r, negs=False):
Adding negative binomial distribuiton error, where variance
scales more with the mean than a poisson distribution if (r < inf).
Parameters
----------
m : float
Mean value
r : float
Negative binomial dispersion parameter
negs : bool
Negative values allowed? (otherwise 0)
sigma = np.sqrt(m + m**2 / r)
x = np.random.normal(m, sigma)
if negs==False and x < 0:
x = 0
return x
Explanation: Goal
Assessing the error in taxon abundances when using qPCR data + 16S sequence relative abundances to determine taxon proportional absolute abundances
Init
End of explanation
%%R -w 900 -h 300
n = 500
meanlog = 0.5
sdlog = 1
comm = rlnorm(n, meanlog, sdlog)
comm = data.frame(1:length(comm), comm)
colnames(comm) = c('taxon', 'count')
comm = comm %>%
mutate(taxon = as.character(taxon)) %>%
group_by() %>%
mutate(rel_abund = count / sum(count)) %>%
ungroup()
comm$taxon = reorder(comm$taxon, -comm$rel_abund)
ggplot(comm, aes(taxon, rel_abund)) +
geom_point() +
theme_bw() +
theme(
axis.text.x = element_blank()
)
Explanation: Making dataset
End of explanation
%%R
exp_series = function(e_start, e_end){
sapply(e_start:e_end, function(x) cumprod(rep(10, x))[x])
}
neg_binom_err = function(m, r, n_reps=1, negs=FALSE){
sigma = sqrt(m + (m**2 / r))
print(c(m, sigma))
x = rnorm(n_reps, m, sigma)
if (negs==FALSE & x < 0){
x[x<0] = 0
}
return(x)
}
%%R
# test
exp_series(1,3)
neg_binom_err(1e9, 0.5, n_reps=1)
%%R
# params
r = 0.5
n_reps = 3
total_copies = exp_series(6, 10)
# qPCR
sapply(total_copies, function(x) qPCR(x, r=r, n_reps=n_reps))
Explanation: Simulating qPCR data
End of explanation |
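To connect the simulated qPCR totals back to the stated goal, the per-taxon proportional absolute abundances are just relative abundance times the (noisy) total copy number; a small Python sketch with made-up numbers (the relative abundances and total below are illustrative only):
```
rel_abund = np.random.dirichlet(np.ones(10))          # made-up relative abundances for 10 taxa
true_total = 1e8                                      # made-up total 16S copy number
measured_total = neg_binom_err(true_total, r=0.5)     # noisy qPCR estimate of the total
est_abs = rel_abund * measured_total                  # estimated per-taxon absolute abundances
true_abs = rel_abund * true_total
print(abs(measured_total - true_total) / true_total)  # this qPCR error propagates to every taxon
```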
11,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook presents a working example of adjusting texts for multiple subplots, related to https
Step1: With multiple subplots, run adjust_text for one subplot at a time | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt # Matplotlib 2.0 shown here
from adjustText import adjust_text
import numpy as np
import pandas as pd
Explanation: This notebook presents a working example of adjusting texts for multiple subplots, related to https://github.com/Phlya/adjustText/issues/58
End of explanation
fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharex=True, sharey=True)
axes = axes.ravel()
for k, ax in enumerate(axes):
np.random.seed(0)
x, y = np.random.random((2,30))
ax.plot(x, y, 'bo')
texts = []
for i in range(len(x)):
t = ax.text(x[i], y[i], 'Text%s' %i, ha='center', va='center')
texts.append(t)
%time adjust_text(texts, ax=ax)
Explanation: With multiple subplots, run adjust_text for one subplot at a time
End of explanation |
11,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Mixture Models (GMM) are a kind of hybrid between a clustering estimator and a density estimator. Density estimator is an algorithm which takes a D-dimensional dataset and produces an estimate of the D-dimensional probability distribution which that data is drawn from. GMM algorithm accomplishes by representing the density as a weighted sum of Gaussian distributions.
Kernel density estimation (KDE) is in some senses takes the mixture of Gaussians to its logical extreme
Step1: Motivation for KDE - Histograms
Density estimator is an algorithm which seeks to model the probability distribution that generated a dataset. This is simpler to see in 1-dimensional datam as the histogram. A histogram divides the data into discrete bins, counts the number of points that fall in each bind, and then visualizes the results in an intuitive manner.
Step2: Standard count-based histogram can be viewed from the plt.hist() function. normed parameter of this function makes the heights of the bars to reflect probability density
Step3: This histogram is equal binned, hence this normalization simply changes the scale on the y-axis, keeping the shape of the histogram constant. normed keeps the total area under the histogram to be 1, as we can confirm below
Step4: One problem with histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.
Let's see this example of 20 points, the choice of bins can lead to an entirely different interpretation of the data.
Step5: We can think of histogram as a stack of blocks, where we stack one block within each bins on top of each point in the dataset. Let's view this in the following chart
Step6: The effects fo two binnings comes from the fact that the height of the block stack often reflects not on the actual density of points neaby, but on coincidences of how the bins align with the data points. This mis-alignment between points and their blocks is a potential cause of the poor histogram results.
What if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent? if we do this the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.
Step7: Rough edges are not aesthetically pleasing, nor are they reflecting of any true properties of the data. In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian. Let's use a standard normal curve at each point instead of a block
Step8: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).
Kernel Density Estimation in Practice
Free parameters of the kernel density estimation are the kernal, which specifies the shape of the distribution placed at each point and the kernel bandwith, which controls the size of the kernel at each point. Scikit-Learn has a choice of 6 kernels.
Step9: Selecting the bandwidth via cross-validation
Choice of bandwith within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias-variance trade-off in the estimate of density | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
Explanation: Gaussian Mixture Models (GMM) are a kind of hybrid between a clustering estimator and a density estimator. Density estimator is an algorithm which takes a D-dimensional dataset and produces an estimate of the D-dimensional probability distribution which that data is drawn from. GMM algorithm accomplishes by representing the density as a weighted sum of Gaussian distributions.
Kernel density estimation (KDE) is in some senses takes the mixture of Gaussians to its logical extreme: it uses a mixture consisting of one Gaussian component per point, resulting in an essentially non-parameteric estimator of density
End of explanation
def make_data(N, f=0.3, rseed=1087):
rand = np.random.RandomState(rseed)
x = rand.randn(N)
x[int(f*N):] += 5
return x
x = make_data(1000)
Explanation: Motivation for KDE - Histograms
A density estimator is an algorithm which seeks to model the probability distribution that generated a dataset. This is simpler to see in 1-dimensional data, as with the histogram. A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner.
End of explanation
hist = plt.hist(x, bins=30, normed=True)
Explanation: A standard count-based histogram can be created with the plt.hist() function. The normed parameter of this function makes the heights of the bars reflect probability density:
End of explanation
density, bins, patches = hist
widths = bins[1:] - bins[:-1]
(density * widths).sum()
Explanation: This histogram is equal binned, hence this normalization simply changes the scale on the y-axis, keeping the shape of the histogram constant. normed keeps the total area under the histogram to be 1, as we can confirm below:
End of explanation
x = make_data(20)
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharex=True, sharey=True,
subplot_kw = {'xlim': (-4, 9), 'ylim': (-0.02, 0.3)})
fig.subplots_adjust(wspace=0.05)
for i, offset in enumerate([0.0, 0.6]):
ax[i].hist(x, bins=bins+offset, normed=True)
ax[i].plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
Explanation: One problem with the histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.
In the following example of 20 points, the choice of bins can lead to an entirely different interpretation of the data.
End of explanation
fig, ax = plt.subplots()
bins = np.arange(-3, 8)
ax.plot(x, np.full_like(x, -0.1), '|k',
markeredgewidth=1)
for count, edge in zip(*np.histogram(x, bins)):
for i in range(count):
ax.add_patch(plt.Rectangle((edge, i), 1, 1,
alpha=0.5))
ax.set_xlim(-4, 8)
ax.set_ylim(-0.2, 8)
Explanation: We can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset. Let's view this in the following chart:
End of explanation
x_d = np.linspace(-4, 8, 2000)
density = sum((abs(xi - x_d) < 0.5) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 8]);
x_d[:50]
Explanation: The effect of the two binnings comes from the fact that the height of the block stack often reflects not the actual density of points nearby, but coincidences of how the bins align with the data points. This misalignment between points and their blocks is a potential cause of the poor histogram results.
What if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent? If we do this, the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.
End of explanation
from scipy.stats import norm
x_d = np.linspace(-4, 8, 1000)
density = sum(norm(xi).pdf(x_d) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 5]);
Explanation: Rough edges are not aesthetically pleasing, nor do they reflect any true properties of the data. In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian. Let's use a standard normal curve at each point instead of a block:
End of explanation
from sklearn.neighbors import KernelDensity
# instantiate and fit the KDE model
kde = KernelDensity(bandwidth=1.0, kernel='gaussian')
kde.fit(x[:, None])
# score samples returns the log of the probability density
logprob = kde.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
plt.ylim(-0.02, 0.30);
Explanation: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).
Kernel Density Estimation in Practice
Free parameters of the kernel density estimation are the kernel, which specifies the shape of the distribution placed at each point, and the kernel bandwidth, which controls the size of the kernel at each point. Scikit-Learn has a choice of 6 kernels.
End of explanation
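As a quick added sketch (not part of the original notebook): the same data can be refit with a different kernel shape, for example 'tophat', one of the six kernels scikit-learn provides, keeping the bandwidth fixed so only the kernel choice changes.
# A minimal sketch: same bandwidth, different kernel shape
from sklearn.neighbors import KernelDensity
kde_tophat = KernelDensity(bandwidth=1.0, kernel='tophat')
kde_tophat.fit(x[:, None])
logprob_tophat = kde_tophat.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob_tophat), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
plt.ylim(-0.02, 0.30);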
KernelDensity().get_params().keys()
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import LeaveOneOut
bandwidths = 10 ** np.linspace(-1, 1, 100)
grid = GridSearchCV(KernelDensity(kernel='gaussian'), {'bandwidth': bandwidths}, cv=LeaveOneOut(len(x)))
grid.fit(x[:, None])
grid.best_params_
Explanation: Selecting the bandwidth via cross-validation
Choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias-variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (over-fitting), where the presence or absence of a single point makes a large difference.
In machine learning contexts, we've seen that such hyperparameter tuning is often done empirically via a cross-validation approach.
End of explanation |
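A short follow-up sketch (an addition, not from the original): plug the cross-validated bandwidth back into a KernelDensity estimator and plot the resulting density.
# A usage sketch: refit the KDE with the bandwidth selected by cross-validation
best_bw = grid.best_params_['bandwidth']
kde_cv = KernelDensity(bandwidth=best_bw, kernel='gaussian')
kde_cv.fit(x[:, None])
logprob_cv = kde_cv.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob_cv), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1);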
11,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Removing particles from the simulation
This tutorial shows the different ways to remove particles from a REBOUND simulation. Let us start by setting up a simple simulation with 10 bodies, and assign them unique hashes, so we can keep track of them (see UniquelyIdentifyingParticlesWithHashes.ipynb).
Step1: Let us add one more particle, this time with a custom name
Step2: Now let us perform a short integration to isolate the particles that interest us for a longer simulation
Step3: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions.
Step4: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
Step5: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates.
By running through the planets in reverse order, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0
Step6: We see that the order of the particles array has changed.
Because in general particles can change positions in the particles array, a more robust way of referring to particles (rather than through their index) is through their hash, which won't change. You can pass sim.remove either the hash directly, or if you pass a string, it will be automatically converted to its corresponding hash
Step7: If you try to remove a particle with an invalid index or hash, an exception is thrown, which might be caught using the standard python syntax | Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
sim.add(m=1., hash=0)
for i in range(1,10):
sim.add(a=i, hash=i)
sim.move_to_com()
print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)]))
Explanation: Removing particles from the simulation
This tutorial shows the different ways to remove particles from a REBOUND simulation. Let us start by setting up a simple simulation with 10 bodies, and assign them unique hashes, so we can keep track of them (see UniquelyIdentifyingParticlesWithHashes.ipynb).
End of explanation
sim.add(a=10, hash="Saturn")
print("Particle hashes:{0}".format([sim.particles[i].hash for i in range(sim.N)]))
Explanation: Let us add one more particle, this time with a custom name:
End of explanation
Noutputs = 1000
xs = np.zeros((sim.N, Noutputs))
ys = np.zeros((sim.N, Noutputs))
times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False)
for i, time in enumerate(times):
sim.integrate(time)
xs[:,i] = [sim.particles[j].x for j in range(sim.N)]
ys[:,i] = [sim.particles[j].y for j in range(sim.N)]
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
for i in range(sim.N):
plt.plot(xs[i,:], ys[i,:])
ax.set_aspect('equal')
Explanation: Now let us perform a short integration to isolate the particles that interest us for a longer simulation:
End of explanation
print("Hash\t\tx")
for i in range(sim.N):
print("{0}\t{1}".format(sim.particles[i].hash, xs[i,-1]))
Explanation: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x < 0$ at the end of the preliminary integration. Let's first print out the particle hashes and x positions.
End of explanation
for i in reversed(range(1,sim.N)):
if xs[i,-1] > 0:
sim.remove(i)
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
Explanation: Note that 4066125545 is the hash corresponding to the string "Saturn" we added above. We can use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
End of explanation
sim.remove(2, keepSorted=0)
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
Explanation: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved. Note that this is helpful for example if you use an integrator such as WHFast which uses Jacobi coordinates.
By running through the planets in reverse order, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0:
End of explanation
sim.remove(hash="Saturn")
print("Number of particles after cut = {0}".format(sim.N))
print("Hashes of remaining particles = {0}".format([p.hash for p in sim.particles]))
Explanation: We see that the order of the particles array has changed.
Because in general particles can change positions in the particles array, a more robust way of referring to particles (rather than through their index) is through their hash, which won't change. You can pass sim.remove either the hash directly, or if you pass a string, it will be automatically converted to its corresponding hash:
End of explanation
try:
sim.remove(hash="Planet 9")
except RuntimeError as e:
print("A runtime error occured: {0}".format(e))
Explanation: If you try to remove a particle with an invalid index or hash, an exception is thrown, which might be caught using the standard python syntax:
End of explanation |
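As a small added sketch (not in the original tutorial), the same exception can be wrapped in a helper that reports whether a removal by name actually happened; it reuses only the remove(hash=...) call shown above.
# A minimal sketch: attempt removal by name and report success
def try_remove(sim, name):
    try:
        sim.remove(hash=name)   # string names are converted to hashes automatically
        return True
    except RuntimeError:
        return False

print(try_remove(sim, "Planet 9"))   # False: no particle with this name exists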
11,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Achieving Generalization
Testing and cross-validation
Train-test split
Step1: Cross validation
Step3: Valid options are ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'log_loss', 'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc'
http
Step4: Greedy selection of features
Controlling for over-parameterization
Step5: Madelon dataset
Step6: Univariate selection of features
Step7: Recursive feature selection
Step8: Regularization
Ridge
Step9: Grid search for optimal parameters
Step10: Random Search
Step11: Lasso
Step12: Elasticnet
Step13: Stability selection | Python Code:
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
dataset = pd.DataFrame(boston.data, columns=boston.feature_names)
dataset['target'] = boston.target
observations = len(dataset)
variables = dataset.columns[:-1]
X = dataset.ix[:,:-1]
y = dataset['target'].values
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
print ("Train dataset sample size: %i" % len(X_train))
print ("Test dataset sample size: %i" % len(X_test))
X_train, X_out_sample, y_train, y_out_sample = train_test_split(X, y, test_size=0.40, random_state=101)
X_validation, X_test, y_validation, y_test = train_test_split(X_out_sample, y_out_sample, test_size=0.50, random_state=101)
print ("Train dataset sample size: %i" % len(X_train))
print ("Validation dataset sample size: %i" % len(X_validation))
print ("Test dataset sample size: %i" % len(X_test))
Explanation: Achieving Generalization
Testing and cross-validation
Train-test split
End of explanation
from sklearn.cross_validation import cross_val_score, KFold, StratifiedKFold
from sklearn.metrics import make_scorer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
import numpy as np
def RMSE(y_true, y_pred):
return np.sum((y_true -y_pred)**2)
lm = LinearRegression()
cv_iterator = KFold(n=len(X), n_folds=10, shuffle=True, random_state=101)
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
second_order=PolynomialFeatures(degree=2, interaction_only=False)
third_order=PolynomialFeatures(degree=3, interaction_only=True)
over_param_X = second_order.fit_transform(X)
extra_over_param_X = third_order.fit_transform(X)
cv_score = cross_val_score(lm, over_param_X, y, cv=cv_iterator, scoring='mean_squared_error', n_jobs=1)
print (cv_score)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
cv_score = cross_val_score(lm, over_param_X, y, cv=stratified_cv_iterator, scoring='mean_squared_error', n_jobs=1)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
Explanation: Cross validation
End of explanation
import random
def Bootstrap(n, n_iter=3, random_state=None):
Random sampling with replacement cross-validation generator.
For each iter a sample bootstrap of the indexes [0, n) is
generated and the function returns the obtained sample
and a list of all the excluded indexes.
if random_state:
random.seed(random_state)
for j in range(n_iter):
bs = [random.randint(0, n-1) for i in range(n)]
out_bs = list({i for i in range(n)} - set(bs))
yield bs, out_bs
boot = Bootstrap(n=10, n_iter=5, random_state=101)
for train_idx, validation_idx in boot:
print (train_idx, validation_idx)
import numpy as np
boot = Bootstrap(n=len(X), n_iter=10, random_state=101)
lm = LinearRegression()
bootstrapped_coef = np.zeros((10,13))
for k, (train_idx, validation_idx) in enumerate(boot):
lm.fit(X.ix[train_idx,:],y[train_idx])
bootstrapped_coef[k,:] = lm.coef_
print(bootstrapped_coef[:,10])
print(bootstrapped_coef[:,6])
Explanation: Valid options are ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'log_loss', 'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc'
http://scikit-learn.org/stable/modules/model_evaluation.html
Bootstrapping
End of explanation
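As an added sketch (not in the original text), the excluded indices returned by each bootstrap replication can also serve as a quick out-of-sample check:
# A minimal sketch: out-of-bootstrap mean squared error for each replication
boot = Bootstrap(n=len(X), n_iter=10, random_state=101)
oob_errors = []
for train_idx, validation_idx in boot:
    lm.fit(X.ix[train_idx, :], y[train_idx])
    pred = lm.predict(X.ix[validation_idx, :])
    oob_errors.append(np.mean((y[validation_idx] - pred) ** 2))
print('Average out-of-bootstrap MSE: %0.3f' % np.mean(oob_errors))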
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=3)
lm = LinearRegression()
lm.fit(X_train,y_train)
print ('Train (cases, features) = %s' % str(X_train.shape))
print ('Test (cases, features) = %s' % str(X_test.shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(X_train)))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(X_test)))
from sklearn.preprocessing import PolynomialFeatures
second_order=PolynomialFeatures(degree=2, interaction_only=False)
third_order=PolynomialFeatures(degree=3, interaction_only=True)
lm.fit(second_order.fit_transform(X_train),y_train)
print ('(cases, features) = %s' % str(second_order.fit_transform(X_train).shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(second_order.fit_transform(X_train))))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(second_order.fit_transform(X_test))))
lm.fit(third_order.fit_transform(X_train),y_train)
print ('(cases, features) = %s' % str(third_order.fit_transform(X_train).shape))
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(third_order.fit_transform(X_train))))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(third_order.fit_transform(X_test))))
Explanation: Greedy selection of features
Controlling for over-parameterization
End of explanation
try:
import urllib.request as urllib2
except:
import urllib2
import numpy as np
train_data = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.data'
validation_data = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_valid.data'
train_response = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/MADELON/madelon_train.labels'
validation_response = 'https://archive.ics.uci.edu/ml/machine-learning-databases/madelon/madelon_valid.labels'
try:
Xt = np.loadtxt(urllib2.urlopen(train_data))
yt = np.loadtxt(urllib2.urlopen(train_response))
Xv = np.loadtxt(urllib2.urlopen(validation_data))
yv = np.loadtxt(urllib2.urlopen(validation_response))
except:
# In case downloading the data doesn't works,
# just manually download the files into the working directory
Xt = np.loadtxt('madelon_train.data')
yt = np.loadtxt('madelon_train.labels')
Xv = np.loadtxt('madelon_valid.data')
yv = np.loadtxt('madelon_valid.labels')
print ('Training set: %i observations %i feature' % (Xt.shape))
print ('Validation set: %i observations %i feature' % (Xv.shape))
from scipy.stats import describe
print (describe(Xt))
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
def visualize_correlation_matrix(data, hurdle = 0.0):
R = np.corrcoef(data, rowvar=0)
R[np.where(np.abs(R)<hurdle)] = 0.0
heatmap = plt.pcolor(R, cmap=mpl.cm.coolwarm, alpha=0.8)
heatmap.axes.set_frame_on(False)
plt.xticks(rotation=90)
plt.tick_params(axis='both', which='both', bottom='off', top='off', left = 'off',
right = 'off')
plt.colorbar()
plt.show()
visualize_correlation_matrix(Xt[:,100:150], hurdle=0.0)
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression()
logit.fit(Xt,yt)
from sklearn.metrics import roc_auc_score
print ('Training area under the curve: %0.3f' % roc_auc_score(yt,logit.predict_proba(Xt)[:,1]))
print ('Validation area under the curve: %0.3f' % roc_auc_score(yv,logit.predict_proba(Xv)[:,1]))
Explanation: Madelon dataset
End of explanation
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile=50)
selector.fit(Xt,yt)
variable_filter = selector.get_support()
plt.hist(selector.scores_, bins=50, histtype='bar')
plt.grid()
plt.show()
variable_filter = selector.scores_ > 10
print ("Number of filtered variables: %i" % np.sum(variable_filter))
from sklearn.preprocessing import PolynomialFeatures
interactions = PolynomialFeatures(degree=2, interaction_only=True)
Xs = interactions.fit_transform(Xt[:,variable_filter])
print ("Number of variables and interactions: %i" % Xs.shape[1])
logit.fit(Xs,yt)
Xvs = interactions.fit_transform(Xv[:,variable_filter])
print ('Validation area Under the Curve before recursive selection: %0.3f' % roc_auc_score(yv,logit.predict_proba(Xvs)[:,1]))
Explanation: Univariate selection of features
End of explanation
# Execution time: 3.15 s
from sklearn.feature_selection import RFECV
from sklearn.cross_validation import KFold
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
lm = LinearRegression()
cv_iterator = KFold(n=len(X_train), n_folds=10, shuffle=True, random_state=101)
recursive_selector = RFECV(estimator=lm, step=1, cv=cv_iterator, scoring='mean_squared_error')
recursive_selector.fit(second_order.fit_transform(X_train),y_train)
print ('Initial number of features : %i' % second_order.fit_transform(X_train).shape[1])
print ('Optimal number of features : %i' % recursive_selector.n_features_)
a = second_order.fit_transform(X_train)
print (a)
essential_X_train = recursive_selector.transform(second_order.fit_transform(X_train))
essential_X_test = recursive_selector.transform(second_order.fit_transform(X_test))
lm.fit(essential_X_train, y_train)
print ('cases = %i features = %i' % essential_X_test.shape)
print ('In-sample mean squared error %0.3f' % mean_squared_error(y_train,lm.predict(essential_X_train)))
print ('Out-sample mean squared error %0.3f' % mean_squared_error(y_test,lm.predict(essential_X_test)))
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
essential_X = recursive_selector.transform(second_order.fit_transform(X))
cv_score = cross_val_score(lm, essential_X, y, cv=stratified_cv_iterator, scoring='mean_squared_error', n_jobs=1)
print ('Cv score: mean %0.3f std %0.3f' % (np.mean(np.abs(cv_score)), np.std(cv_score)))
Explanation: Recursive feature selection
End of explanation
from sklearn.linear_model import Ridge
ridge = Ridge(normalize=True)
# The following commented line is to show a logistic regression with L2 regularization
# lr_l2 = LogisticRegression(C=1.0, penalty='l2', tol=0.01)
ridge.fit(second_order.fit_transform(X), y)
lm.fit(second_order.fit_transform(X), y)
print ('Average coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.mean(lm.coef_), np.mean(ridge.coef_)))
print ('Min coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.min(lm.coef_), np.min(ridge.coef_)))
print ('Max coefficient: Non regularized = %0.3f Ridge = %0.3f' % (np.max(lm.coef_), np.max(ridge.coef_)))
Explanation: Regularization
Ridge
End of explanation
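An added sketch (not from the original): re-fitting the ridge model over a small grid of alpha values shows how increasing the penalty shrinks the coefficients; the alpha grid below is an arbitrary choice for illustration.
# A minimal sketch: average absolute coefficient size as the ridge penalty grows
for alpha in [0.001, 0.01, 0.1, 1.0, 10.0]:
    ridge_alpha = Ridge(alpha=alpha, normalize=True)
    ridge_alpha.fit(second_order.fit_transform(X), y)
    print('alpha = %8.3f  mean |coef| = %0.5f' % (alpha, np.mean(np.abs(ridge_alpha.coef_))))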
from sklearn.grid_search import GridSearchCV
edges = np.histogram(y, bins=5)[1]
binning = np.digitize(y, edges)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
search = GridSearchCV(estimator=ridge, param_grid={'alpha':np.logspace(-4,2,7)}, scoring = 'mean_squared_error',
n_jobs=1, refit=True, cv=stratified_cv_iterator)
search.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search.best_score_))
search.grid_scores_
# Alternative: sklearn.linear_model.RidgeCV
from sklearn.linear_model import RidgeCV
auto_ridge = RidgeCV(alphas=np.logspace(-4,2,7), normalize=True, scoring = 'mean_squared_error', cv=None)
auto_ridge.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_ridge.alpha_)
Explanation: Grid search for optimal parameters
End of explanation
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
search_func=RandomizedSearchCV(estimator=ridge, param_distributions={'alpha':np.logspace(-4,2,100)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
Explanation: Random Search
End of explanation
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=1.0, normalize=True, max_iter=2*10**5)
#The following comment shows an example of L1 logistic regression
#lr_l1 = LogisticRegression(C=1.0, penalty='l1', tol=0.01)
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
stratified_cv_iterator = StratifiedKFold(binning, n_folds=10, shuffle=True, random_state=101)
search_func=RandomizedSearchCV(estimator=lasso, param_distributions={'alpha':np.logspace(-5,2,100)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
print ('Zero value coefficients: %i out of %i' % (np.sum(~(search_func.best_estimator_.coef_==0.0)),
len(search_func.best_estimator_.coef_)))
# Alternative: sklearn.linear_model.LassoCV
# Execution time: 54.9 s
from sklearn.linear_model import LassoCV
auto_lasso = LassoCV(alphas=np.logspace(-5,2,100), normalize=True, n_jobs=1, cv=None, max_iter=10**6)
auto_lasso.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_lasso.alpha_)
Explanation: Lasso
End of explanation
# Execution time: 1min 3s
from sklearn.linear_model import ElasticNet
import numpy as np
elasticnet = ElasticNet(alpha=1.0, l1_ratio=0.15, normalize=True, max_iter=10**6, random_state=101)
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import expon
np.random.seed(101)
search_func=RandomizedSearchCV(estimator=elasticnet, param_distributions={'alpha':np.logspace(-5,2,100),
'l1_ratio':np.arange(0.0, 1.01, 0.05)}, n_iter=10,
scoring='mean_squared_error', n_jobs=1, iid=False, refit=True, cv=stratified_cv_iterator)
search_func.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % search_func.best_params_['alpha'])
print ('Best l1_ratio: %0.5f' % search_func.best_params_['l1_ratio'])
print ('Best CV mean squared error: %0.3f' % np.abs(search_func.best_score_))
print ('Zero value coefficients: %i out of %i' % (np.sum(~(search_func.best_estimator_.coef_==0.0)),
len(search_func.best_estimator_.coef_)))
# Alternative: sklearn.linear_model.ElasticNetCV
from sklearn.linear_model import ElasticNetCV
auto_elastic = ElasticNetCV(alphas=np.logspace(-5,2,100), normalize=True, n_jobs=1, cv=None, max_iter=10**6)
auto_elastic.fit(second_order.fit_transform(X), y)
print ('Best alpha: %0.5f' % auto_elastic.alpha_)
print ('Best l1_ratio: %0.5f' % auto_elastic.l1_ratio_)
print(second_order.fit_transform(X).shape)
print(len(y))
print(second_order.fit_transform(X)[0])
print(y[0])
Explanation: Elasticnet
End of explanation
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import RandomizedLogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
threshold = 0.03
stability_selection = RandomizedLogisticRegression(n_resampling=300, n_jobs=1, random_state=101, scaling=0.15,
sample_fraction=0.50, selection_threshold=threshold)
interactions = PolynomialFeatures(degree=4, interaction_only=True)
model = make_pipeline(stability_selection, interactions, logit)
model.fit(Xt,yt)
print(Xt.shape)
print(yt.shape)
#print(Xt)
#print(yt)
#print(model.steps[0][1].all_scores_)
print ('Number of features picked by stability selection: %i' % np.sum(model.steps[0][1].all_scores_ >= threshold))
from sklearn.metrics import roc_auc_score
print ('Area Under the Curve: %0.3f' % roc_auc_score(yv,model.predict_proba(Xv)[:,1]))
Explanation: Stability selection
End of explanation |
11,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Course - First practical case
<img src="http
Step1: And let us consider a list of items that a certain "client" has bought
Step2: Step 1
Step3: Second version - Now they at least know Python
Third version - Don't reinvent the wheel
Step4: Step 2 | Python Code:
prices = {'apple': 0.40, 'banana': 0.50, 'entrada_promocional': 10, 'entrada_simple': 17}
Explanation: Python Course - First practical case
<img src="http://www.telecogresca.com/logo_mail.png"></img>
A heavily synthetic exercise
(Partly from https://wiki.python.org/moin/SimplePrograms, partly home-grown)
Let us consider a dictionary of prices, with some basic products:
End of explanation
compra_marcos = [
'apple',
'banana',
'apple',
'entrada_promocional',
'apple',
'banana',
'entrada_promocional', # they will surely want to rename this one
]
Explanation: And let us consider a list of items that a certain "client" has bought
End of explanation
result = dict()
for i in range(len(compra_marcos)):
producte = compra_marcos[i]
previous_value = result.get(producte)
if previous_value == None:
result[producte] = 1
else:
result[producte] = previous_value + 1
result
Explanation: Step 1: The client's purchase
Imagine we want to know, in a somewhat better-structured way, what this client has bought.
First version - A C++ programmer making a mess
End of explanation
from collections import Counter
Explanation: Second version - Now they at least know Python
Third version - Don't reinvent the wheel
End of explanation
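The original does not spell out those two versions, so here is a sketch of what they could look like (an assumption): the second version iterates the list directly and uses dict.get with a default, and the third simply hands the list to Counter.
# Second version (assumed): idiomatic dict counting, no index arithmetic
result_v2 = {}
for producte in compra_marcos:
    result_v2[producte] = result_v2.get(producte, 0) + 1
print(result_v2)

# Third version (assumed): don't reinvent the wheel, let Counter do the counting
result_v3 = Counter(compra_marcos)
print(result_v3)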
total = 0
for producte, quantitat in result.items():
total += prices[producte] * quantitat
total
Explanation: Step 2: Computing the total
Now we want to compute the total of the client's purchase.
First version - Somewhat Python
End of explanation |
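A possible more Pythonic follow-up (an addition, not in the original): the same total computed in a single expression with a generator.
# A one-line alternative (sketch): add up the price of every purchased item
total_v2 = sum(prices[producte] for producte in compra_marcos)
total_v2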
11,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lemke-Howson
Step1: Two-Player Games in Normal Form
We are going to find Nash equilibria (pure or mixed action) of a Two-Player Game $g = (I, (A_i){i \in I}, (u_i){i \in I})$, where
$I = {0, 1}$
Step2: Given support $I$ and $J$, if for each player, the indiff_mixed_action returns True, then there is a Nash equilirium with support $I$ and $J$, and the mixed actions are just as calculated.
Consider the example from von Stengel (2007)
Step3: To find all Nash equilibria of a normal form game, we just iterate all possible combinations of $I$ and $J$, and then apply indiff_mixed_action to each players with each support pair. If indiff_mixed_action returns True for both players, we store mixed actions in list NEs.
This iteration procedure is wrapped by a function gt.support_enumeration which takes NormalFormGame as argument.
Step4: There exist 3 Nash equilibria for this example
Step5: $((x, v), (y, u)) \in \bar{P} \times \bar{Q}$ is completely labeled if every $k \in M \cup N$ appears as a label of either $(x, v)$ or $(y, u)$.
In the context of best response polyhedra and labels, we can define Nash equilibrium as
Step6: Consider the first list of indices of basic variables as example. The first element is $3$, which means the basic variable for first row of $\text{tableu}_1$ is $s_3$. Similarly, the basic variable for second row of $\text{tableu}_1$ is $s_4$.
Define the min_ratio_test function for deciding the leaving basic variable, given the indice of the entering basic variable in a pivoting process.
Step8: The Pivoting function below is for updating the tableau after we decide the basic variables to be added and dropped.
Step10: We can apply the pivoting recursively to the tableaux until one equilirium is found as we described above. The following gives an example where the initial pivot is set to be 1.
Step12: The red cross markers show the vertices of a pair. By pivoting, we move the point in polytope $P$ and $Q$ in turn. After 4 times of pivoting, one completely labeled pair $(q, d)$ is found.
In the following, we define the function for doing pivoting process recursively until a completely labeled pair is found, taking tableaux and lists of indices of basic variables as arguments.
Step13: After the completely pair is found, we can get mixed actions from tableux and lists of indices of basic variables.
Step15: Finally we wrap all the procedures together, and define a function called lemke_howson_numpy which takes NormalFormGame as argument.
Step16: As it shows, one Nash equilibrium for this nondegenerate game is
Step17: Degenerate Games
In a degenerate game, the minimum ratio test may have more than one minimizers. In this case, arbitrary tie breaking may lead to cycling, causing the algorithm to fall into an infinite loop.
For example, consider a game with the payoff matrices being $C$ and $D$ for Player 0 and 1 respectively
Step18: As such, we replace the minimum ratio test with the lexico-minimum ratio test for determining the leaving basic variable in degenerate games.
Note that the original system can be written as
$$
D x + I s = \mathbf{1},
$$
where $x$ is the mixed action, and $s$ is the slack variable vector.
The lexico-minimum ratio test introduces $(\epsilon, \epsilon^1, \cdots, \epsilon^n)^\prime$ to the right hand
Step19: Comparing quantecon.lemke_howson and lemke_howson_numpy
The lemke_howson routine in quantecon.game_thoery is accelerated by numba, and therefore, the speed of finding a Nash equilibrium is much faster than lemke_howson_numpy. Below is a comparison of these two functions. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import quantecon.game_theory as gt
%matplotlib inline
Explanation: Lemke-Howson: An Algorithm to Find Nash Equilibrium
This notebook introduces the Lemke-Howson algorithm for finding a Nash equilibrium of a two-player normal form game.
End of explanation
def indiff_mixed_action(payoff_matrix, own_supp, opp_supp, out):
# (number of own actions, number of opponent's actions)
nums_actions = payoff_matrix.shape
# Support size
k = len(own_supp)
# Matrix in the left hand side of the linear equation
a = np.empty((k+1, k+1))
a[:-1, :-1] = payoff_matrix[own_supp, :][:, opp_supp]
a[-1, :-1] = 1
a[:-1, -1] = -1
a[-1, -1] = 0
# Vector in the right hand side of the linear equation
b = np.zeros(k+1)
b[-1] = 1
try:
sol = np.linalg.solve(a, b)
except np.linalg.LinAlgError:
return False
# Return False immediately if any of the "probabilities" is not positive
if (sol[:-1] <= 0).any():
return False
own_supp_c = np.setdiff1d(np.arange(nums_actions[0]), own_supp)
# Return False immediately if the solution mixed action is not optimal
if (sol[-1] < payoff_matrix[own_supp_c, :][:, opp_supp] @ sol[:-1]).any():
return False
out.fill(0)
out[opp_supp] = sol[:-1]
return True
Explanation: Two-Player Games in Normal Form
We are going to find Nash equilibria (pure or mixed action) of a Two-Player Game $g = (I, (A_i){i \in I}, (u_i){i \in I})$, where
$I = {0, 1}$: the set of players.
$M = {0, ..., m-1}, N = {m, ..., m+n-1}$: two pure action spaces.
$\Delta^L = {x \in \mathbb{R}^L_+ \mid \sum_{\ell \in L} x_{\ell} = 1}, L=M, N$: the mixed action spaces.
$A \in \mathbb{R}^{M \times N}, B \in \mathbb{R}^{N \times M}$: the payoff matrices for player 0 and 1 respectively.
$x^{\prime}Ay$ and $y^{\prime}Bx$: the expected payoffs for player 0 and 1 respectively with $x \in \Delta^M$ and $y \in \Delta^N$.
Nash Equilibrium
For $x \in \Delta^M$ and $y \in \Delta^N$, they constitute a Nash equilibrium if
$$
x^{\prime}Ay \geq z ^\prime Ay \quad \forall z \in \Delta^M,
$$
and
$$
y^{\prime}Bx \geq z ^\prime Bx \quad \forall z \in \Delta^N.
$$
We define
$$
\bar{x} = \underset{j \in N}{\arg \max}(B x)_j, \quad
x^\circ = {i \in M \mid x_i = 0} \
\bar{y} = \underset{i \in M}{\arg \max}(A y)_i, \quad
y^\circ = {j \in N \mid y_j = 0} \
\text{supp}(x) = {i \mid x_i > 0}, \quad
\text{supp}(y) = {j \mid y_j > 0}
$$
According to von Stengel, B. (2007), we can establish whether $(x, y) \in \Delta^M \times \Delta^N$ is a Nash equilibrium by checking:
$\text{supp}(x) \subset \bar{y}, \text{supp}(y) \subset \bar{x}$, or
$\bar{y} \cup x^\circ = M, \bar{x} \cup y^\circ = N$, which is equivalently $(\bar{x} \cup x^\circ) \cup (\bar{y} \cup y^\circ) = M \cup N$.
These two conditions are equivalent, and therefore, checking only one of them is enough for finding a Nash equilibrium.
These two conditions allow us to use some algorithms to compute a Nash equilibrium for any well-defined two-player normal form game. However, with degenerate games which will be defined formally below, we have to deal with some difficulties as we will see later. Therefore, we start by considering nondegenerate games, which are simpler to address.
Nondegenerate Games
Definition: A two-player game is nondegenerate if for any $x \in \Delta^M$ and any $y \in \Delta^N$, we have
$$
\left| \bar{x} \right| \leq \left| \text{supp}(x) \right|, \quad
\left| \bar{y} \right| \leq \left| \text{supp}(y) \right|,
$$
or equivalently,
$$
\left| x^\circ \right| + \left| \bar{x} \right| \leq m, \quad
\left| y^\circ \right| + \left| \bar{y} \right| \leq n,
$$
where $\left| x \right|$ is the cardinality of set $x$. Otherwise, the game is called as degenerate.
Combined with the condition that a Nash equilibrium should satisfy described above, we know that if $(x, y)$ is a Nash equilibrium of a nondegenerate game, then
$$
\left| \text{supp}(x) \right| = \left| \text{supp}(y) \right|.
$$
Using this result, we can exclude a large number of action support pairs that cannot be a Nash Equilibrium.
Support Enumeration
Before we introduce the Lemke-Howson algorithm, we first describe a method to find all Nash equilibria by iterating over all equal-sized support pairs, and checking whether they satisfy the necessary and sufficient conditions mentioned above to be a Nash Equilibrium. This technique is called Support Enumeration.
For each $k=1, ..., \text{min}{m, n}$ and each pair $(I, J)$, $I \subset M$ and $J \subset N$, such that $\left| I \right| = \left| J \right| = k$, the mixed action $(x, y)$ is a Nash equilibrium if it solves the systems of linear equations
$$
\sum_{j \in J} a_{ij} y_j = u \text{ for } i \in I, \quad
\sum_{j \in J} y_j = 1, \
\sum_{i \in I} b_{ji} x_i = v \text{ for } j \in J, \quad
\sum_{i \in I} x_i = 1.
$$
And also satisfies that
* $x_i > 0$ for all $i \in I$ and $y_j >0$ for all $j \in J$,
* $u \geq \sum_{j \in J} a_{ij} y_j$ for all $i \not \in I$ and $v \geq \sum_{i \in I} b_{ij} x_i$ for all $j \not \in J$.
The systems of equations can be written in matrix form as
$$
C \begin{pmatrix} y_J \ u \end{pmatrix} = e,
$$
and
$$
D \begin{pmatrix} x_I \ v \end{pmatrix} = e,
$$
with
$$
C =
\begin{pmatrix}
A_{IJ} & -\mathbf{1} \
\mathbf{1}' & 0
\end{pmatrix}, \quad
D =
\begin{pmatrix}
B_{JI} & -\mathbf{1} \
\mathbf{1}' & 0
\end{pmatrix}, \quad
e = \begin{pmatrix}\mathbf{0} \ 1\end{pmatrix},
$$
where
$A_{IJ}$ is the submatrix of $A$ given by rows $I$ and columns $J$,
$B_{JI}$ is the submatrix of $B$ given by rows $J$ and columns $I$, and
$\mathbf{0}$ and $\mathbf{1}$ are the $k$-dimensional vectors of zeros and ones, respectively.
Using the algorithm described above, given any well-defined nondegenerate two-player game and a support pair $I, J$, we know whether there is a Nash equilirium corresponding to this support pair, and can calculate the NE $(x, y)$ if there is one.
Implementation
We apply the algorithm by writing a function indiff_mixed_action which solves the system of linear equations for one player, which is the half of the whole system to find a Nash equilibiurm. In other word, it checks whether there exists a mixed action of opponent with support given, against which the actions in the player's own support are all best responses.
The arguments are
numpy array payoff_matrix;
list (or numpy array) own_supp for the support of the player in consideration;
list (or numpy array) opp_supp for the support of the opponent player;
numpy array out that stores the candidate mixed action.
If there is a mixed action of the opponent with support opp_supp
against which the actions in own_supp are best responses,
then return True; otherwise return False.
In the former case, the mixed action is stored in out.
Array out must be of length equal to the number of the opponent's actions.
End of explanation
# Define a two-player normal form game
A = np.array([[3, 3],
[2, 5],
[0 ,6]])
B = np.array([[3, 2, 3],
[2, 6, 1]])
m, n = A.shape # Numbers of actions of players 0 and 1, respectively
M = np.arange(m)
N = np.arange(n)
# Set the equal-sized support pair I, J
I = [0, 1]
J = [0, 1]
out = np.empty(n)
indiff_mixed_action(A, I, J, out)
out
Explanation: Given support $I$ and $J$, if indiff_mixed_action returns True for each player, then there is a Nash equilibrium with support $I$ and $J$, and the mixed actions are just as calculated.
Consider the example from von Stengel (2007):
$$
A =
\begin{bmatrix}
3 & 3 \
2 & 5 \
0 & 6
\end{bmatrix},
\quad
B =
\begin{bmatrix}
3 & 2 & 3 \
2 & 6 & 1 \
\end{bmatrix}.
$$
The action spaces of players 0 and 1 are replaced with Python indices:
$$
M = {0, 1, 2}, \quad
N = {0, 1}.
$$
We can attempt to use indiff_mixed_action to find a Nash equilibrium with $I = {0, 1}$ and $J = {0, 1}$.
End of explanation
g = gt.NormalFormGame((gt.Player(A), gt.Player(B)))
gt.support_enumeration(g)
Explanation: To find all Nash equilibria of a normal form game, we just iterate over all possible combinations of $I$ and $J$, and then apply indiff_mixed_action to each player with each support pair. If indiff_mixed_action returns True for both players, we store the mixed actions in the list NEs.
This iteration procedure is wrapped by a function gt.support_enumeration which takes NormalFormGame as argument.
End of explanation
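A minimal sketch (an addition; the names NEs, I, J here are illustrative) of the enumeration that gt.support_enumeration wraps, built directly on the indiff_mixed_action function defined above:
# A sketch of support enumeration using indiff_mixed_action
from itertools import combinations

NEs = []
for k in range(1, min(m, n) + 1):
    for I in combinations(range(m), k):
        for J in combinations(range(n), k):
            x, y = np.empty(m), np.empty(n)
            # y: candidate mixed action of player 1, x: candidate mixed action of player 0
            if (indiff_mixed_action(A, list(I), list(J), y) and
                    indiff_mixed_action(B, list(J), list(I), x)):
                NEs.append((x, y))
print(NEs)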
# Draw the best response polytope Q
from scipy.spatial import HalfspaceIntersection, ConvexHull
from itertools import combinations
fig = plt.figure(figsize=(15, 6))
ax1 = fig.add_subplot('121')
halfspaces = np.empty((5, 3))
halfspaces[:3, :-1] = A
halfspaces[3:, :-1] = -np.eye(2)
halfspaces[:3, -1] = -1
halfspaces[3:, -1] = 0
feasible_point = np.array([0.05, 0.05])
hs = HalfspaceIntersection(halfspaces, feasible_point)
vertices_ind = np.empty((len(halfspaces), len(hs.intersections)), dtype = bool)
for i, constraint in enumerate(halfspaces):
vertices_ind[i, :] = np.isclose(np.dot(np.hstack((hs.intersections,
np.ones((len(hs.intersections), 1)))),
constraint),
0)
xlim, ylim = (-0.05, 0.40), (-0.05, 0.20)
ax1.set_xlim(xlim)
ax1.set_ylim(ylim)
for ind in vertices_ind:
vertices = hs.intersections[ind]
ax1.plot(vertices[:, 0], vertices[:, 1], c = 'b')
for i in range(len(halfspaces)):
label_xyz = np.average(hs.intersections[vertices_ind[i]], axis=0)
ax1.text(*label_xyz, str(i))
pts_labels = ['0', 's', 'p', 'r', 'q']
for pt, l in zip(hs.intersections, pts_labels):
ax1.scatter(*pt, label=l)
ax1.legend()
ax1.set_title('Best response polytope Q')
# Draw the best response polytope P
from mpl_toolkits.mplot3d import Axes3D
ax2 = fig.add_subplot(122, projection='3d')
halfspaces = np.empty((5, 4))
halfspaces[:3, :-1] = -np.eye(3)
halfspaces[3:, :-1] = B
halfspaces[3:, -1] = -1
halfspaces[:3, -1] = 0
feasible_point = np.array([0.05, 0.05, 0.05])
hs = HalfspaceIntersection(halfspaces, feasible_point)
vertices_ind = np.empty((len(halfspaces), len(hs.intersections)), dtype = bool)
for i, constraint in enumerate(halfspaces):
vertices_ind[i, :] = np.isclose(np.dot(np.hstack((hs.intersections,
np.ones((len(hs.intersections), 1)))),
constraint),
0)
for i, j in combinations(range(vertices_ind.shape[0]), 2):
vertices = hs.intersections[np.logical_and(vertices_ind[i, :], vertices_ind[j, :])]
ax2.plot(vertices[:, 0], vertices[:, 1], vertices[:, 2], c = 'b')
for i in range(len(halfspaces)):
label_xyz = np.average(hs.intersections[vertices_ind[i]], axis=0)
ax2.text(*label_xyz, str(i))
pts_labels = ['e', '0', 'a', 'd', 'c', 'b']
for pt, l in zip(hs.intersections, pts_labels):
ax2.scatter(*pt, label=l)
ax2.view_init(elev=30, azim=25)
ax2.legend(loc=0)
ax2.set_title('Best response polytope P');
Explanation: There exist 3 Nash equilibria for this example:
$$
\left(\left(1,0,0\right),\left(1,0\right)\right), \quad \left(\left(\frac{4}{5}, \frac{1}{5}, 0\right), \left(\frac{2}{3}, \frac{1}{3}\right)\right), \quad \left(\left(0,\frac{1}{3}, \frac{2}{3}\right), \left(\frac{1}{3}, \frac{2}{3}\right)\right).
$$
Support Enumeration has one drawback, that the number of equal-sized support pairs increases quickly as the game gets larger:
If $m = n$, the number of equal-sized support pairs is
$$
\sum^n_{k=1} \binom{n} {k} ^2 = \binom {2n} {n} - 1 \approx \frac{4^n}{\sqrt{\pi n}},
$$
which increase exponentially with the number of actions. This motivates the usage of the vertex enumeration, which is much less computationally intensive. ?Moreover, if we just want to find one Nash Equilibrium, then Lemke-Howson algorithm is even more efficient.?
Lemke-Howson
Using the Lemke-Howson algorithm allows us to alleviate the computational complexity of the support enumeration algorithm. To understand how this algorithm works, we first need to introduce a representation of Nash equilibria which uses Polyhedra and Labels.
Polyhedra and Labels
Given a bimatrix game with payoff matrixs being $A$ and $B$ for player 0, 1 respectively, the best response can be represented as a polyhedron:
$$
\bar{P} = {(x, v) \in \mathbb{R}^M \times \mathbb{R} \mid x\geq \mathbf{0}, B x \leq v \mathbf{1}, \mathbf{1}^{\prime}x = 1}, \
\bar{Q} = {(y, u) \in \mathbb{R}^N \times \mathbb{R} \mid A y \leq u \mathbf{1}, y\geq \mathbf{0}, \mathbf{1}^{\prime}y = 1},
$$
where $x \in \Delta^M$, $y \in \Delta^N$. $v$ is the upper bound of expected payoffs for player 1 choosing pure actions when player 0's mixed action is $x$. Similarly, $u$ is the upper bound of expected payoffs for player 0.
Define the label of a point in best response polyhedron as:
$(x, v) \in \bar{P}$ has label $k \in M \cup N$ if
for $k = j \in N$, $(B x)_{j} = v$, so that $j \in \bar{x}$, or,
for $k = i \in M$, $x_i = 0$, so that $i \in x^{\circ}$.
$(y, u) \in \bar{Q}$ has label $k \in M \cup N$ if
for $k = i \in M$, $(A y)_{i} = u$, so that $i \in \bar{y}$, or,
for $k = j \in N$, $y_j = 0$, so that $j \in y^{\circ}$.
Without loss of generality, we assume that $A$ and $B$ are nonnegative and have no zero column. Dividing by $v$ and $u$, we can convert the best response polyhedra $\bar{P}$ and $\bar{Q}$ into the best response polytopes, that correspond to player 0 and player 1 respectively:
$$
P = {x \in \mathbb{R}^M \mid x \geq \mathbf{0}, B x \leq 1 }, \
Q = {y \in \mathbb{R}^N \mid A y \leq 1, y\geq \mathbf{0} }.
$$
Using the same example as in the Support Enumeration case, where
$$
M = {0, 1, 2}, \quad
N = {3, 4}, \
A =
\begin{bmatrix}
3 & 3 \
2 & 5 \
0 & 6
\end{bmatrix},
\quad
B =
\begin{bmatrix}
3 & 2 & 3 \
2 & 6 & 1 \
\end{bmatrix}.
$$
We can draw the best response polytopes as follows.
End of explanation
def initialize_tableaux(payoff_matrices):
m, n = payoff_matrices[0].shape
tableaux = (np.empty((n, m+n+1)), np.empty((m, n+m+1)))
bases = (np.arange(m, m+n), np.arange(0, m))
# Player 0
# fill the tableau of Player0
tableaux[0][:, :m] = payoff_matrices[1]
tableaux[0][:, m:m+n] = np.identity(n)
tableaux[0][:, -1] = 1
# Player 1
# create tableau of Player1
tableaux[1][:, :m] = np.identity(m)
tableaux[1][:, m:m+n] = payoff_matrices[0]
tableaux[1][:, -1] = 1
return tableaux, bases
tableaux, bases = initialize_tableaux([A, B])
tableaux
bases
Explanation: $((x, v), (y, u)) \in \bar{P} \times \bar{Q}$ is completely labeled if every $k \in M \cup N$ appears as a label of either $(x, v)$ or $(y, u)$.
In the context of best response polyhedra and labels, we can define Nash equilibrium as:
* $(x, y) \in \Delta^{M} \times \Delta^{N}$ is a Nash equilibrium if and only if $((x, v), (y, u))$ with $u = \text{max}_i (A y)_i$ and $v = \text{max}_j (B x)_j$ is completely labeled.
The transformed best response polytope is a special case where $u = v = 1$, and we have
$(x, y) \in P \times Q$, $(x, y) \neq (\mathbf{0}, \mathbf{0})$, is an ("un-normalized") Nash equilibirum if and only if $(x, y)$ is completely labeled.
Note that $(x, y)$ is called as "un-normalized" because the elements in $x(y)$ does not sum up to $1$. We can normalize it to be a Nash equilibrium by
$$
x^* = \frac{1}{\mathbf{1}_M'x}x,\quad v = \frac{1}{\mathbf{1}_M'x}, \
$$
and
$$
y^* = \frac{1}{\mathbf{1}_N'y}y,\quad u = \frac{1}{\mathbf{1}_N'y}, \
$$
where $\mathbf{1}_M$ is a $m$-dimensional vector of ones, and $\mathbf{1}_N$ is a $n$-dimensional vector of ones. $(x^, y^)$ is the Nash equilibrium that we desire.
If the game is nondegenerate, then in $P(Q)$, each vertex have $m(n)$ labels. As completely labeled requires the pair to have $m+n$ distint labels, therefore the Nash equilibrium can only be found in pairs of vertices of two best response polyhedra. This implies that we can find a Nash equilibrium by enumerating each pair of vertices until we find a completely labeled one. However, the Lemke-Howson algorithm provides a more efficient way that avoids enumerating, which is called "pivoting", as we will see in the following.
Lemke-Howson Algorithm for Finding Completely Labeled Pairs of Vertices
By all defintion, $(\mathbf{0}, \mathbf{0}) \in P \times Q$ is completely labeled. However, it is not a Nash equilibrium. We call it "artificial equilibrium". The Lemke-Howson algorithm starts from $(\mathbf{0}, \mathbf{0})$, and moves to a vertex next to the previous one by replacing the labels in $P$ and $Q$ in turn until it reaches a completely labeled pair, and thus an un-normalized Nash equilibrium is found.
In the previous example, this procedure would be as follows:
$(\mathbf{0}, \mathbf{0})$ has labels {0, 1, 2}, {3, 4}
$(\mathbf{0}, \mathbf{0}) \rightarrow (c, \mathbf{0})$ with label {0, 2, 4}, {3, 4}
Drop label 1 in $P$ (label to drop is arbitrarily chosen in the first step). Label 4 is picked up.
$(c, \mathbf{0}) \rightarrow (c, p)$ with label {0, 2, 4}, {2, 3}
Drop label 4 in $Q$ as it is duplicated label of both $x$ and $y$. Label 2 is picked up.
$(c, p) \rightarrow (d, p)$ with label {0, 3, 4}, {2, 3}
Drop label 2 in $P$ as it is duplicated label of both $x$ and $y$. Label 3 is picked up.
... ...
Continue the procedure, until $(d, q)$ is achieved, which is completely labeled, and thus is a un-normalized Nash equilibrium.
Complementary Pivoting
To implement the process of dropping and picking up labels, we introduce a technique called complementary pivoting, using slack variables denoted as $s_3, s_4, r_0, r_1, r_2$:
$$
\begin{matrix}
3 x_0 & + 2 x_1 & + 3 x_2 & + s_3 & & = 1\
2 x_0 & + 6 x_1 & + x_2 & & + s_4 & = 1 \
\
r_0 & & & + 3 y_3 & + 3 y_4 & = 1\
& r_1 & & + 2 y_3 & + 5 y_4 & = 1 \
& & r_2 & & + 6 y_4 & = 1
\end{matrix}
$$
$x \geq \mathbf{0}, s \geq \mathbf{0}, r \geq \mathbf{0}, y \geq \mathbf{0}.$
A solution $(x, s, r, y)$ is completely labeled if and only if
$$
x^\prime r = 0, \quad y^\prime s = 0.
$$
Variables in $(x, s, r, y)$ are called basic variables if they are positive, and nonbasic variables if they are equal to $0$.
In this context, the geometric procedure showed above is equivalent to the algebraic procedure below, which we call pivoting.
During the process, what we need to keep track of are the indices of basic variables, the coefficients of linear equations systems, and the values on the right-hand side. The latter two terms can be saved in arrays, which we denote as tableaux.
$$
\text{tableau}_1 =
\begin{bmatrix}
3 & 2 & 3 & 1 & 0 & 1 \
2 & 6 & 1 & 0 & 1 & 1 \
\end{bmatrix},
$$
and
$$
\text{tableau}_2 =
\begin{bmatrix}
1 & 0 & 0 & 3 & 3 & 1 \
0 & 1 & 0 & 2 & 5 & 1 \
0 & 0 & 1 & 0 & 6 & 1
\end{bmatrix},
$$
respectively.
In detail, the algorithm works as follows:
Given two tableaux and lists of basic variables, we start from $(\mathbf{0}, \mathbf{0})$ with the initial basic variables being ${s_3, s_4}$ and ${r_0, r_1, r_2}$.
Start with pivoting in $P$. The initial pivot indice can be arbitrarily chosen. Because of the nonnegativity constraint, the basic variable to be replaced is decided by minimum ratio test. Then update the tableau and basis of $P$ by dropping and adding basic variables.
Do pivoting in $Q$. Check whether the newly dropped basic variable has the same indice with the first added basic variable by initial pivoting. If it does, then the solution is completely labeled, and a Nash equilibrium has been found.
Repeat step 3 until a Nash equilibrium is found.
Note that we will not end up in an infinite loop, as the existence of a Nash equilibrium is guaranteed.
Implementation
First step, we create tableaux and lists of indices of basic variables. Note that the indices of the list of basic variables correspond to the row indices of tableau in order respectively. (the example below will show this in detail)
End of explanation
def min_ratio_test(tableau, pivot):
ind_nonpositive = tableau[:, pivot] <= 0
# we suppress the "divide by zero" warning message
with np.errstate(divide='ignore', invalid='ignore'):
ratios = tableau[:, -1] / tableau[:, pivot]
# leave out the pivots that have negative ratio
ratios[ind_nonpositive] = np.inf
# find the pivot with minimum ratio, under nonnegativity condition
row_min = ratios.argmin()
return row_min
Explanation: Consider the first list of indices of basic variables as an example. The first element is $3$, which means the basic variable for the first row of $\text{tableau}_1$ is $s_3$. Similarly, the basic variable for the second row of $\text{tableau}_1$ is $s_4$.
Define the min_ratio_test function for deciding the leaving basic variable, given the index of the entering basic variable in a pivoting process.
End of explanation
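A quick usage sketch of the function on the tableaux created above (the pivot index 1 is an arbitrary choice):
# If x_1 enters Player 0's tableau, which row's basic variable has to leave?
row_min = min_ratio_test(tableaux[0], 1)
print(row_min, bases[0][row_min])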
def pivoting(tableau, pivot, pivot_row):
Perform a pivoting step.
Parameters
----------
tableau : ndarray(float, ndim=2)
The tableau to be updated
pivot : scalar(int)
The index of the entering basic variable
pivot_row: scalar(int)
The row index of the tableau chosen by the minimum ratio test
the corresponding basic variable is going to be dropped
Returns
-------
tableau : ndarray(float, ndim=2)
The updated tableau
# Row indices except pivot_row
ind = np.ones(tableau.shape[0], dtype=bool)
ind[pivot_row] = False
# Store the values in the pivot column, except for row_min
# Made 2-dim by np.newaxis
multipliers = tableau[ind, pivot, np.newaxis]
# Update the tableau
tableau[pivot_row, :] /= tableau[pivot_row, pivot]
tableau[ind, :] -= tableau[pivot_row, :] * multipliers
return tableau
Explanation: The Pivoting function below is for updating the tableau after we decide the basic variables to be added and dropped.
End of explanation
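A usage sketch of a single pivoting step, performed on freshly initialized tableaux so the ones above stay untouched: x_1 enters Player 0's tableau, and the minimum ratio test selects the leaving basic variable.
# One pivoting step on a fresh copy of the tableaux
tbl_demo, bases_demo = initialize_tableaux([A, B])
pivot = 1                                      # entering basic variable x_1
row_min = min_ratio_test(tbl_demo[0], pivot)
pivoting(tbl_demo[0], pivot, row_min)
bases_demo[0][row_min], pivot = pivot, bases_demo[0][row_min]   # swap leaving and entering variables
print(bases_demo[0])
print(tbl_demo[0])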
import matplotlib.animation as animation
from IPython.display import HTML
tableaux, bases = initialize_tableaux([A, B])
init_pivot = 1
init_player = int((bases[0]==init_pivot).any())
players = [init_player, 1-init_player]
pivot = init_pivot
# starting from the "artificial equilibrium"
pt1, = ax1.plot([0], [0], c='r', marker='x', markersize=20)
# Second subplot
pt2, = ax2.plot([0], [0], [0], c='r', marker='x', markersize=20)
def init():
pt1.set_data([0], [0])
pt2.set_data([0], [0])
pt2.set_3d_properties([0])
return pt1, pt2
def animate(i):
perform recursive pivoting.
if i == 0:
# the "artificial equilibrium"
return pt1, pt2
global pivot
# Determine the leaving variable
row_min = min_ratio_test(tableaux[players[(i+1)%2]], pivot)
# Pivoting step: modify tableau in place
pivoting(tableaux[players[(i+1)%2]], pivot, row_min)
# Update the basic variables and the pivot
bases[players[(i+1)%2]][row_min], pivot = pivot, bases[players[(i+1)%2]][row_min]
# find the vertices implied by the updated tableaux
out = np.zeros(m+n)
for pl, (start, stop) in enumerate(zip((0, m),
(m, m+n))):
ind = bases[pl] < stop if pl == 0 else start <= bases[pl]
out[bases[pl][ind]] = tableaux[pl][ind, -1]
pt1.set_data(out[3], out[4])
pt2.set_data(out[0], out[1])
pt2.set_3d_properties(out[2])
return pt1, pt2
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=5, interval=2000, blit=True)
HTML(anim.to_jshtml())
Explanation: We can apply the pivoting recursively to the tableaux until one equilibrium is found, as we described above. The following gives an example where the initial pivot is set to be 1.
End of explanation
def lemke_howson_tbl(tableaux, bases, init_pivot, max_iter=10**6):
Main body of the Lemke-Howson algorithm implementation.
Parameters
----------
tableaux : tuple(ndarray(float, ndim=2))
Tuple of two arrays containing the tableaux, of shape (n, m+n+1)
and (m, m+n+1), respectively. Modified in place.
bases : tuple(ndarray(int, ndim=1))
Tuple of two arrays containing the bases, of shape (n,) and
(m,), respectively. Modified in place.
init_pivot : scalar(int)
Initial pivot, an integer k such that 0 <= k < m+n, where
integers 0, ..., m-1 and m, ..., m+n-1 correspond to the actions
of players 0 and 1, respectively.
max_iter : scalar(int), optional(default=10**6)
Maximum number of pivoting steps.
Returns
-------
converged : bool
Whether the pivoting terminated before `max_iter` was reached.
init_player = int((bases[0]==init_pivot).any())
players = [init_player, 1 - init_player]
pivot = init_pivot
num_iter = 0
converged = False
while True:
for i in players:
# Determine the leaving variable
row_min = min_ratio_test(tableaux[i], pivot)
# Pivoting step: modify tableau in place
pivoting(tableaux[i], pivot, row_min)
# Update the basic variables and the pivot
bases[i][row_min], pivot = pivot, bases[i][row_min]
num_iter += 1
if pivot == init_pivot:
converged = True
break
if num_iter >= max_iter:
return converged
else:
continue
break
return converged
Explanation: The red cross markers show the vertices of the current pair. By pivoting, we move the point in polytopes $P$ and $Q$ in turn. After four pivoting steps, a completely labeled pair $(q, d)$ is found.
In the following, we define a function that performs the pivoting process repeatedly until a completely labeled pair is found, taking the tableaux and the lists of indices of basic variables as arguments.
End of explanation
def get_mixed_actions(tableaux, bases):
m, n = tableaux[1].shape[0], tableaux[0].shape[0]
# get the mixed actions and normalize them
out_dtype = np.result_type(*tableaux)
out = np.zeros(m+n, dtype=out_dtype)
for pl, (start, stop) in enumerate(zip((0, m),
(m, m+n))):
ind = bases[pl] < stop if pl == 0 else start <= bases[pl]
out[bases[pl][ind]] = tableaux[pl][ind, -1]
out[bases[pl][ind]] /= sum(out[bases[pl][ind]])
return out[:m], out[m:]
Explanation: After a completely labeled pair is found, we can recover the mixed actions from the tableaux and the lists of indices of basic variables.
End of explanation
def lemke_howson_numpy(g, init_pivot=0, max_iter=10**6):
Wrap the procedure of initializing the tableaux, performing complementary pivoting,
and getting the mixed actions.
Parameters
----------
g : NormalFormGame
NormalFormGame instance with 2 players.
init_pivot : scalar(int), optional(default=0)
Initial pivot, an integer k such that 0 <= k < m+n, where
integers 0, ..., m-1 and m, ..., m+n-1 correspond to the actions
of players 0 and 1, respectively.
max_iter : scalar(int), optional(default=10**6)
Maximum number of pivoting steps.
Returns
-------
NE : tuple(ndarray(float, ndim=1)) or None
Tuple of computed Nash equilibrium mixed actions.
If no Nash equilibrium is found, return None.
payoff_matrices = tuple(g.players[i].payoff_array for i in range(2))
tableaux, bases = initialize_tableaux(payoff_matrices)
converged = lemke_howson_tbl(tableaux, bases, init_pivot, max_iter)
if converged:
NE = get_mixed_actions(tableaux, bases)
return NE
else:
print("Not converged")
return None
# create normal form game
g = gt.NormalFormGame((gt.Player(A), gt.Player(B)))
init_pivot = 1
lemke_howson_numpy(g, init_pivot, 10)
Explanation: Finally, we wrap all the procedures together and define a function called lemke_howson_numpy, which takes a NormalFormGame as its argument.
End of explanation
# use lemke_howson with initial pivot being 1
gt.lemke_howson(g, init_pivot=1)
Explanation: As the output shows, one Nash equilibrium of this nondegenerate game is:
$$
x = (0, \frac{1}{3}, \frac{2}{3}), \quad y = (\frac{1}{3}, \frac{2}{3}), \\
u = \frac{8}{3}, \quad v = 4.
$$
We can use the lemke_howson routine from quantecon.py, which wraps the procedure of creating the tableaux and the index lists of basic variables, pivoting, and normalizing the resulting Nash equilibrium.
End of explanation
C = np.array([[0, 0, 0],
[0, 1, 1],
[1, 1, 0]])
D = np.array([[1, 0, 1],
[1, 1, 0],
[0, 0, 2]])
g = gt.NormalFormGame((gt.Player(C), gt.Player(D)))
lemke_howson_numpy(g, init_pivot=0, max_iter=100)
Explanation: Degenerate Games
In a degenerate game, the minimum ratio test may have more than one minimizer. In this case, arbitrary tie breaking may lead to cycling, causing the algorithm to fall into an infinite loop.
For example, consider a game with the payoff matrices being $C$ and $D$ for Player 0 and 1 respectively:
$$
C =
\begin{bmatrix}
0 & 0 & 0 \\
0 & 1 & 1 \\
1 & 1 & 0
\end{bmatrix},
\quad
D =
\begin{bmatrix}
1 & 0 & 1 \\
1 & 1 & 0 \\
0 & 0 & 2
\end{bmatrix}.
$$
As numpy.argmin() returns the first index when there is a tie in a 1-dimensional array, if we still use lemke_howson_numpy, which relies on min_ratio_test, it may fail to find a Nash equilibrium because of cycling.
End of explanation
gt.lemke_howson(g, init_pivot=0)
Explanation: As such, we replace the minimum ratio test with the lexico-minimum ratio test for determining the leaving basic variable in degenerate games.
Note that the original system can be written as
$$
D x + I s = \mathbf{1},
$$
where $x$ is the mixed action, and $s$ is the slack variable vector.
The lexico-minimum ratio test introduces $(\epsilon, \epsilon^1, \cdots, \epsilon^n)^\prime$ to the right hand:
$$
D x + I s = \mathbf{1} + (\epsilon, \epsilon^1, \cdots, \epsilon^n)^\prime.
$$
After any number of pivoting steps, the system can be represented by pre-multiplying by the inverse of the current basis matrix, $P$:
$$
P D x + P I s = P \mathbf{1} + P (\epsilon, \epsilon^1, \cdots, \epsilon^n)^\prime.
$$
Write $p_{i0} + p_{i1} \epsilon^1 + \cdots + p_{in} \epsilon^n$ for the $i$th entry of the vector on the right-hand side, and let $d_i$ be the $i$th entry of the pivot column.
The lexico-minimum ratio test breaks ties by comparing the ratios $p_{ik} / d_{i}$ in order of $k$:
Choose the minimizers of $p_{i0} / d_{i}$.
If there is more than one, choose among them the minimizers of $p_{i1} / d_{i}$.
Repeat this until there is only one minimizer.
Note that when implementing this in code, the matrix $P$ is the same as the coefficient matrix of the slack variables, so we do not need to extend the tableau to record extra information.
As the lemke_howson routine in quantecon.py always uses the lexico-minimum ratio test, there is no need to worry about degenerate games when using it.
End of explanation
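To make the tie-breaking rule concrete, here is a minimal sketch of a lexicographic minimum ratio test. It is an illustration only — the helper name lex_min_ratio_test, the tolerance, and the assumption that the perturbation part is given by the slack-variable columns starting at slack_start are choices made for this example, not the implementation inside quantecon.
```python
import numpy as np

def lex_min_ratio_test(tableau, pivot, slack_start, n, tol=1e-10):
    # Rows that can leave the basis: positive entry in the pivot column
    rows = np.arange(tableau.shape[0])[tableau[:, pivot] > tol]
    # Ordinary ratio test on the constant column (p_{i0} in the text)
    ratios = tableau[rows, -1] / tableau[rows, pivot]
    rows = rows[np.isclose(ratios, ratios.min())]
    # Break remaining ties with the columns playing the role of p_{i1}, ..., p_{in}
    for j in range(n):
        if len(rows) == 1:
            break
        ratios = tableau[rows, slack_start + j] / tableau[rows, pivot]
        rows = rows[np.isclose(ratios, ratios.min())]
    return rows[0]
```
For a nondegenerate game the first ratio test already yields a single row, so this reduces to the plain min_ratio_test.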
import timeit
n = 10
g = gt.random_game((n, n))
lemke_howson_numpy(g)
gt.lemke_howson(g)
ns = [10, 11, 12]
seed = 1234
for n in ns:
print("{0} by {0} payoff matrices: ".format(n))
g = gt.random_game((n, n), random_state=seed)
print("lemke_howson_numpy : ", end='')
%timeit lemke_howson_numpy(g)
print("gt.lemke_howson : ", end='')
%timeit gt.lemke_howson(g)
print("\n")
Explanation: Comparing quantecon.lemke_howson and lemke_howson_numpy
The lemke_howson routine in quantecon.game_theory is accelerated by Numba, and is therefore much faster at finding a Nash equilibrium than lemke_howson_numpy. Below is a comparison of the two functions.
End of explanation |
11,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
Step3: times
Step4: fluxes
Step5: sigmas
Step6: compute_times / compute_phases
See the Compute Times & Phases tutorial.
Step7: ld_mode
See the Limb Darkening tutorial
Step8: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
Step9: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
Step10: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
Step11: passband
See the Atmospheres & Passbands tutorial
Step12: intens_weighting
See the Intensity Weighting tutorial
Step13: pblum_mode
See the Passband Luminosity tutorial
Step14: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
Step15: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
Step16: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
Step17: l3_mode
See the "Third" Light tutorial
Step18: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
Step19: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
Step20: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere
Step21: lc_method
Step22: irrad_method
Step23: For more details on irradiation, see the Irradiation tutorial
boosting_method
Step24: For more details on boosting, see the Beaming and Boosting example script
atm
Step25: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
Step26: Plotting
By default, LC datasets plot as flux vs time.
Step27: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
Step28: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
Step29: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
Step30: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
Step31: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
Step32: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
Step33: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
Step34: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
Step35: normal_intensities are the relative normal intensities per-element.
abs_intensities
Step36: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
Step37: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: 'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc')
print(b.get_dataset(kind='lc', check_visible=False))
Explanation: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
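As a quick illustration of these visibility rules (a sketch — the exact counts depend on the PHOEBE version and the current parameter values), comparing filters with and without check_visible shows that some dataset parameters stay hidden until other parameters take certain values:
```python
ps_all = b.filter(dataset='lc01', context='dataset', check_visible=False)
ps_visible = b.filter(dataset='lc01', context='dataset', check_visible=True)
print(len(ps_all.twigs), len(ps_visible.twigs))
```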
print(b.get_parameter(qualifier='times'))
Explanation: times
End of explanation
print(b.get_parameter(qualifier='fluxes'))
Explanation: fluxes
End of explanation
print(b.get_parameter(qualifier='sigmas'))
Explanation: sigmas
End of explanation
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='compute_phases_t0'))
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
print(b.get_parameter(qualifier='ld_mode', component='primary'))
Explanation: ld_mode
See the Limb Darkening tutorial
End of explanation
b.set_value('ld_mode', component='primary', value='lookup')
print(b.get_parameter(qualifier='ld_func', component='primary'))
Explanation: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary'))
Explanation: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
End of explanation
b.set_value('ld_mode', component='primary', value='manual')
print(b.get_parameter(qualifier='ld_coeffs', component='primary'))
Explanation: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='passband'))
Explanation: passband
See the Atmospheres & Passbands tutorial
End of explanation
print(b.get_parameter(qualifier='intens_weighting'))
Explanation: intens_weighting
See the Intensity Weighting tutorial
End of explanation
print(b.get_parameter(qualifier='pblum_mode'))
Explanation: pblum_mode
See the Passband Luminosity tutorial
End of explanation
b.set_value('pblum_mode', value='component-coupled')
print(b.get_parameter(qualifier='pblum_component'))
Explanation: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
End of explanation
b.set_value('pblum_mode', value='dataset-coupled')
print(b.get_parameter(qualifier='pblum_dataset'))
Explanation: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
End of explanation
b.set_value('pblum_mode', value='decoupled')
print(b.get_parameter(qualifier='pblum', component='primary'))
Explanation: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
End of explanation
print(b.get_parameter(qualifier='l3_mode'))
Explanation: l3_mode
See the "Third" Light tutorial
End of explanation
b.set_value('l3_mode', value='flux')
print(b.get_parameter(qualifier='l3'))
Explanation: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
End of explanation
b.set_value('l3_mode', value='fraction')
print(b.get_parameter(qualifier='l3_frac'))
Explanation: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
End of explanation
print(b.get_compute())
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision are explained in the section on the mesh dataset
End of explanation
print(b.get_parameter(qualifier='lc_method'))
Explanation: lc_method
End of explanation
print(b.get_parameter(qualifier='irrad_method'))
Explanation: irrad_method
End of explanation
print(b.get_parameter(qualifier='boosting_method'))
Explanation: For more details on irradiation, see the Irradiation tutorial
boosting_method
End of explanation
print(b.get_parameter(qualifier='atm', component='primary'))
Explanation: For more details on boosting, see the Beaming and Boosting example script
atm
End of explanation
b.set_value('times', phoebe.linspace(0,1,101))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', kind='lc', context='model'))
print(b.get_parameter(qualifier='fluxes', kind='lc', context='model'))
Explanation: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Plotting
By default, LC datasets plot as flux vs time.
End of explanation
afig, mplfig = b.plot(x='phases', show=True)
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['intensities@lc01',
'abs_intensities@lc01',
'normal_intensities@lc01',
'abs_normal_intensities@lc01',
'pblum_ext@lc01',
'boost_factors@lc01'])
b.run_compute()
print(b.get_model().datasets)
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
End of explanation
print(b.filter(dataset='lc01', kind='mesh', context='model').twigs)
Explanation: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
End of explanation
afig, mplfig = b.filter(kind='mesh').plot(fc='intensities', ec='None', show=True)
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
End of explanation
print(b.get_parameter(qualifier='pblum_ext',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
End of explanation
print(b.get_parameter(qualifier='abs_normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
End of explanation
print(b.get_parameter(qualifier='normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
End of explanation
print(b.get_parameter(qualifier='abs_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: normal_intensities are the relative normal intensities per-element.
abs_intensities
End of explanation
print(b.get_parameter(qualifier='intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
End of explanation
print(b.get_parameter(qualifier='boost_factors',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
Explanation: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors
End of explanation |
11,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Clean Raw Annotations
Load raw annotations
Step2: Make random and blocked samples disjoint
Step3: Tidy is_harassment_or_attack column
Step4: Remap aggression score
Step5: Remove answers to test questions
Step6: Remove annotations where revision could not be read
Step7: Examine aggression_score or is_harassment_or_attack input
Step8: Drop NAs in aggression_score or is_harassment_or_attack input
Step9: Remove ambivalent is_harassment_or_attack annotations
An annotation is ambivalent if it was labeled as both an attack and not an attack
Step10: Make sure that each rev was only annotated by the same worker once
Step11: Filter out annotations for revisions with duplicated diff content
Step12: Check that labels are not None
Step13: Remove annotations from all revisions that were annotated less than 8 times
Step14: Discard nuisance columns
Step15: Summary Stats | Python Code:
# v4_annotated
user_blocked = [
'annotated_onion_layer_5_rows_0_to_5000_raters_20',
'annotated_onion_layer_5_rows_0_to_10000',
'annotated_onion_layer_5_rows_0_to_10000_raters_3',
'annotated_onion_layer_5_rows_10000_to_50526_raters_10',
'annotated_onion_layer_10_rows_0_to_1000',
'annotated_onion_layer_20_rows_0_to_1000',
'annotated_onion_layer_30_rows_0_to_1000',
]
user_random = [
'annotated_random_data_rows_0_to_5000_raters_20',
'annotated_random_data_rows_5000_to_10000',
'annotated_random_data_rows_5000_to_10000_raters_3',
'annotated_random_data_rows_10000_to_20000_raters_10',
]
article_blocked = ['article_onion_layer_5_all_rows_raters_10',]
article_random = ['article_random_data_all_rows_raters_10',]
user_blocked = [
'user_blocked',
'user_blocked_2',
'user_blocked_3',
'user_blocked_4',
'user_blocked_layer_10',
'user_blocked_layer_20',
'user_blocked_layer_30',
]
user_random = [
'user_random',
'user_random_2',
'user_random_3',
'user_random_4',
'user_random_extra_baselines',
]
article_blocked = [ 'article_blocked',
'article_blocked_layer_5_extra_baselines' ]
article_random = ['article_random',
'article_random_extra_baselines']
files = {
'user': {'blocked': user_blocked, 'random': user_random},
'article': {'blocked': article_blocked, 'random': article_random}
}
dfs = []
for ns, d in files.items():
for sample, files in d.items():
for f in files:
df = pd.read_csv('../../data/annotations/raw/%s/%s.csv' % (ns,f))
df['src'] = f
df['ns'] = ns
df['sample'] = sample
dfs.append(df)
df = pd.concat(dfs)
print('# annotations: ', df.shape[0])
Explanation: Clean Raw Annotations
Load raw annotations
End of explanation
df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts().value_counts()
df.index = df.rev_id
df.sample_count = df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts()
df.sample_count.value_counts()
# just set them all to random
df['sample'][df.sample_count == 2] = 'random'
df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts().value_counts()
del df.sample_count
print('# annotations: ', df.shape[0])
Explanation: Make random and blocked samples disjoint
End of explanation
df = tidy_labels(df)
Explanation: Tidy is_harassment_or_attack column
End of explanation
df['aggression'] = df['aggression_score'].apply(map_aggression_score_to_2class)
Explanation: Remap aggression score
End of explanation
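map_aggression_score_to_2class is imported from the project's helper modules and is not shown in this notebook. A hypothetical stand-in is sketched below; the threshold is an assumption (here, any negative aggression score is treated as the aggressive class), so check the project's own definition before relying on it.
```python
def map_aggression_score_to_2class(score):
    # Hypothetical mapping: negative scores -> aggressive (1), otherwise 0
    return 1.0 if score < 0.0 else 0.0
```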
df = df.query('_golden == False')
print('# annotations: ', df.shape[0])
Explanation: Remove answers to test questions
End of explanation
# remove all annotations for a revision where more than 50% of annotators for that revision could not read the comment
df = remove_na(df)
print('# annotations: ', df.shape[0])
# remove all annotations where the annotator could not read the comment
df = df.query('na==False')
print('# annotations: ', df.shape[0])
Explanation: Remove annotations where revision could not be read
End of explanation
df['aggression_score'].value_counts(dropna=False)
df['is_harassment_or_attack'].value_counts(dropna=False)
Explanation: Examine aggression_score or is_harassment_or_attack input
End of explanation
df = df.dropna(subset = ['aggression_score', 'is_harassment_or_attack'])
print('# annotations: ', df.shape[0])
Explanation: Drop NAs in aggression_score or is_harassment_or_attack input
End of explanation
# remove all annotations from users who are ambivalent in 10% or more of revisions
# we consider these users unreliable
def ambivalent(s):
return 'not_attack' in s and s!= 'not_attack'
df['ambivalent'] = df['is_harassment_or_attack'].apply(ambivalent)
non_ambivalent_workers = df.groupby('_worker_id', as_index = False)['ambivalent'].mean().query('ambivalent < 0.1')
df = df.merge(non_ambivalent_workers[['_worker_id']], how = 'inner', on = '_worker_id')
print('# annotations: ', df.shape[0])
# remove all other ambivalent annotations
df = df.query('ambivalent==False')
print('# annotations: ', df.shape[0])
Explanation: Remove ambivalent is_harassment_or_attack annotations
An annotation is ambivalent if it was labeled as both an attack and not an attack
End of explanation
df.groupby(['rev_id', '_worker_id']).size().value_counts()
df = df.drop_duplicates(subset = ['rev_id', '_worker_id'])
print('# annotations: ', df.shape[0])
Explanation: Make sure that each rev was only annotated by the same worker once
End of explanation
comments = df.drop_duplicates(subset = ['rev_id'])
print(comments.shape[0])
u_comments = comments.drop_duplicates(subset = ['clean_diff'])
print(u_comments.shape[0])
comments[comments.duplicated(subset = ['clean_diff'])].head(5)
df = df.merge(u_comments[['rev_id']], how = 'inner', on = 'rev_id')
print('# annotations: ', df.shape[0])
Explanation: Filter out annotations for revisions with duplicated diff content
End of explanation
df['recipient'].value_counts(dropna=False)
df['attack'].value_counts(dropna=False)
df['aggression'].value_counts(dropna=False)
Explanation: Check that labels are not None
End of explanation
counts = df['rev_id'].value_counts().to_frame()
counts.columns = ['n']
counts['rev_id'] = counts.index
counts.shape
counts['n'].value_counts().head()
counts_enough = counts.query("n>=8")
counts_enough.shape
df = df.merge(counts_enough[['rev_id']], how = 'inner', on = 'rev_id')
print('# annotations: ', df.shape[0])
Explanation: Remove annotations from all revisions that were annotated less than 8 times
End of explanation
df.columns
cols = ['rev_id', '_worker_id', 'ns', 'sample', 'src','clean_diff', 'diff', 'insert_only', 'page_id',
'page_title', 'rev_comment', 'rev_timestamp',
'user_id', 'user_text', 'not_attack', 'other', 'quoting', 'recipient',
'third_party', 'attack', 'aggression', 'aggression_score']
df = df[cols]
Explanation: Discard nuisance columns
End of explanation
df.groupby(['ns', 'sample']).size()
df.to_csv('../../data/annotations/clean/annotations.tsv', index=False, sep='\t')
pd.read_csv('../../data/annotations/clean/annotations.tsv', sep='\t').shape
Explanation: Summary Stats
End of explanation |
11,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyPanair Tutorial#1 Rectangular Wing
In this tutorial we will perform an analysis of a rectangular wing with a NACA0012 airfoil.
A brief overview of the procedure is listed below
Step1: 1.2 Creating a LaWGS file using pyPanair
Next, we will create a LaWGS file using the wgs_creator module of pyPanair. In the wgs_creator module, LaWGS files are created using objects that derive from the four classes, LaWGS, Network, Line, and Point. A brief explanation of these classes are written below.
* LaWGS
Step2: In the next step, the Network of the wing will be defined by interpolating two Lines, the root_airfoil and tip_airfoil. The root_airfoil, which is an NACA0012 airfoil, can be constructed using the naca4digit method.
Step3: The resulting airfoil has a chord length of 100., and its spanwise position (i.e. y-coordinate) is 0..
The upper and lower surfaces are each represented with 25 points.
Below is a plot of the xz-coordinates of the airfoil.
Step4: The tip_airfoil can be defined in the same manner as the root_airfoil.
However, this time, the y-coordinate will be 300..
Step5: The Network of the wing will be defined by interpolating to the two Lines, root_airfoil and tip_airfoil.
To do so, we use the linspace method.
Step6: The wing Network will have 20 lines, which are linear interpolations of the root_airfoil and tip_airfoil.
Networks can be visualized using the plot_wireframe method.
Step7: Along with coordinates of each point in the Network, the corners (e.g. 1) and edges (e.g. edge1) are displayed.
(Read reference 1 for more information on network corners and edges.)
Also, an arrow indicating the front side of the network is depicted.
(Details of "front side" will be mentioned later.)
After defining the Network for the wing, we register it to wgs.
Step8: The first variable, "wing", is the name of the network.
The second variable, wing, is the Network we are registering.
The third variable, 1, is the boundary type of the network. If the network represents a solid wall, the type is 1.
(Read reference 2 for more information on the different types of boundary conditions.)
The next process will be to define the geometry of the wingtip.
To do so, we split the tip_airfoil into upper and lower halves, and linearly interpolate them.
All of this can be done by typing
Step9: The wing tip looks like ...
Step10: The wingtip will also be registered to wgs.
Step11: In addition to the wing and wingtip, we must also define the "wake" of the wing.
In Panair, the wake is defined as a square network stretching out from the trailing edge of the wing.
The length of the wake should be about 25 to 50 times the length of the reference chord of the wing.
The wake can be easily defined by using the method make_wake.
Step12: The edge_number variable, means that we are attaching the wake to edge3 of the Network wing.
The wake_length variable defines how long the wake is.
In this case, the wake stretches from x=100. (the TE of the wing) to x=5000..
The wingwake will also be registered to wgs.
Step13: Notice that this time we are setting the boundary type as 18.
Boundary type 18 is used to define the that network is a "wake" emitted from sharp edges.
Now that we have finished defining the geometry of the rectangular wing,
we will check to see if there are any errors in the model we've constructed.
To make this task easy, we will write the geometry into a STL (STereoLithography) format.
To do so, type
Step14: A stl file named naca0012.stl should be created in the current working directory.
Open this file with a stl viewer. (I recommend Materialise MiniMagics 3.0)
Below is a screen shot of the stl.
Using the stl viewer, we should watch out for the following four points
Step15: Two files, naca0012.wgs and naca0012.aux should be crated in the current directory.
naca0012.wgs is a file that defines the geometry of the model.
naca0012.aux is a file that defines the analysis conditions.
The definition of each variable is listed below
Step16: (The n_wake variable is used to input the number of wakes.
For example, if we enter 2, the last 2 networks in the geometry will not be included in the output vtk file.)
agps.vtk should be created in the current directory, which can be open with ParaView.
Below is a screen shot of ParaView.
4.3 Visualization of the local lift coefficient
Next, we calculate the local lift coefficient from the surface pressure distribution.
This can be done using the section_force method.
Step17: The definition of each variable is as follows | Python Code:
%matplotlib notebook
from pyPanair.preprocess import wgs_creator
delta_wing = wgs_creator.read_wgs("sample1.wgs")
print(delta_wing._networks.keys())
delta_wing._networks["wing"].plot_wireframe(show_normvec=False, show_corners=False, show_edges=False)
Explanation: pyPanair Tutorial#1 Rectangular Wing
In this tutorial we will perform an analysis of a rectangular wing with a NACA0012 airfoil.
A brief overview of the procedure is listed below:
1. Define the geometry of the wing using wgs_creator.py, and create input files naca0012.wgs and naca0012.aux for panin
2. Using the preprocessor panin, create an input file a502.in for panair
3. Run the analysis
4. Visualize the results from the analysis via agps_converter.py, ffmf_converter.py, and calc_section_force.py
1. Defining the geometry
1.1 LaWGS Format
First off, we will begin by defining the geometry of the rectangular wing.
The input geometry for panair is defined in the Langley Wireframe Geometry Standard (LaWGS) format. The format is described in reference 1.
In a nutshell, LaWGS files are a bundle of 3-dimensional arrays, which are referred to as "networks". A network is a stack of "lines", and a line is a group of 3-dimensional "points". If a network has m lines, and each line has n points, the shape of the 3d array corresponding to the network will be (m, n, 3).
Below is an example of a LaWGS file for a delta wing.
sample1.wgs
deltawing created by wgs_creator
wing
1 3 5 0 0 0 0 0 0 0 1 1 1 0
1.0000000e+01 0.0000000e+00 0.0000000e+00
5.0000000e+00 0.0000000e+00 1.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00
5.0000000e+00 0.0000000e+00 -1.0000000e+00
1.0000000e+01 0.0000000e+00 0.0000000e+00
7.5000000e+00 1.0000000e+01 0.0000000e+00
5.0000000e+00 1.0000000e+01 5.0000000e-01
2.5000000e+00 1.0000000e+01 0.0000000e+00
5.0000000e+00 1.0000000e+01 -5.0000000e-01
7.5000000e+00 1.0000000e+01 0.0000000e+00
5.0000000e+00 2.0000000e+01 0.0000000e+00
5.0000000e+00 2.0000000e+01 0.0000000e+00
5.0000000e+00 2.0000000e+01 0.0000000e+00
5.0000000e+00 2.0000000e+01 0.0000000e+00
5.0000000e+00 2.0000000e+01 0.0000000e+00
The first row displays the title of the LaWGS file.
deltawing created by wgs_creator
The second row displays the name of the network.
wing
The third row lists the parameters of the network.
1 3 5 0 0 0 0 0 0 0 1 1 1 0
The definitions of the first three numbers are as follows:
* "1": the id of the network
* "3": the number of lines in the network
* "5": the number of points in each line
The remaining 11 numbers, 0 0 0 0 0 0 0 1 1 1 0, define the local and global axes. (Read reference 1 for more information.)
The fourth and subsequent lines define the coordinates of each point. For example, the fourth line, 1.0000000e+01 0.0000000e+00 0.0000000e+00, means that the coordinate of the first point is (x, y, z) = (1., 0., 0.).
The wireframe defined by the above file looks like ...
End of explanation
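To see the (m, n, 3) layout concretely, the 15 points of the "wing" network above can be stacked into a 3 x 5 x 3 array with plain numpy (this only illustrates the data layout, not how pyPanair actually parses the file):
```python
import numpy as np

points = np.array([
    [10.0, 0.0, 0.0], [5.0, 0.0, 1.0], [0.0, 0.0, 0.0], [5.0, 0.0, -1.0], [10.0, 0.0, 0.0],
    [7.5, 10.0, 0.0], [5.0, 10.0, 0.5], [2.5, 10.0, 0.0], [5.0, 10.0, -0.5], [7.5, 10.0, 0.0],
    [5.0, 20.0, 0.0], [5.0, 20.0, 0.0], [5.0, 20.0, 0.0], [5.0, 20.0, 0.0], [5.0, 20.0, 0.0],
])
network = points.reshape(3, 5, 3)  # (lines, points per line, xyz)
print(network.shape)
```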
wgs = wgs_creator.LaWGS("NACA0012")
Explanation: 1.2 Creating a LaWGS file using pyPanair
Next, we will create a LaWGS file using the wgs_creator module of pyPanair. In the wgs_creator module, LaWGS files are created using objects that derive from the four classes, LaWGS, Network, Line, and Point. A brief explanation of these classes are written below.
* LaWGS: A class that represents a LaWGS format geometry as a list of Networks. Can be used to read/write LaWGS files.
* Network: A class that represents a network as a 3-dimensional array.
* Line: A class that represents a line as a 2-dimensional array.
* Point: A class that represents the xyz coordinates of a point. A 1-dimensional array is used to define the coordinates.
Now we shall begin the actual work of creating a LaWGS file. First, we start off by initializing a LaWGS object. The title of the LaWGS object will be "NACA0012".
End of explanation
root_airfoil = wgs_creator.naca4digit("0012", num=25, chord=100., y_coordinate=0.)
Explanation: In the next step, the Network of the wing will be defined by interpolating two Lines, the root_airfoil and tip_airfoil. The root_airfoil, which is an NACA0012 airfoil, can be constructed using the naca4digit method.
End of explanation
import matplotlib.pyplot as plt
plt.plot(root_airfoil[:,0], root_airfoil[:,2], "s", mfc="None", mec="b")
plt.xlabel("x")
plt.ylabel("z")
plt.grid()
plt.show()
Explanation: The resulting airfoil has a chord length of 100., and its spanwise position (i.e. y-coordinate) is 0..
The upper and lower surfaces are each represented with 25 points.
Below is a plot of the xz-coordinates of the airfoil.
End of explanation
tip_airfoil = wgs_creator.naca4digit("0012", num=25, chord=100., y_coordinate=300.)
Explanation: The tip_airfoil can be defined in the same manner as the root_airfoil.
However, this time, the y-coordinate will be 300..
End of explanation
wing = root_airfoil.linspace(tip_airfoil, num=20)
Explanation: The Network of the wing will be defined by interpolating to the two Lines, root_airfoil and tip_airfoil.
To do so, we use the linspace method.
End of explanation
wing.plot_wireframe()
Explanation: The wing Network will have 20 lines, which are linear interpolations of the root_airfoil and tip_airfoil.
Networks can be visualized using the plot_wireframe method.
End of explanation
wgs.append_network("wing", wing, boun_type=1)
Explanation: Along with coordinates of each point in the Network, the corners (e.g. 1) and edges (e.g. edge1) are displayed.
(Read reference 1 for more information on network corners and edges.)
Also, an arrow indicating the front side of the network is depicted.
(Details of "front side" will be mentioned later.)
After defining the Network for the wing, we register it to wgs.
End of explanation
wingtip_upper, wingtip_lower = tip_airfoil.split_half()
wingtip_lower = wingtip_lower.flip()
wingtip = wingtip_upper.linspace(wingtip_lower, num = 5)
Explanation: The first variable, "wing", is the name of the network.
The second variable, wing, is the Network we are registering.
The third variable, 1, is the boundary type of the network. If the network represents a solid wall, the type is 1.
(Read reference 2 for more information on the different types of boundary conditions.)
The next process will be to define the geometry of the wingtip.
To do so, we split the tip_airfoil into upper and lower halves, and linearly interpolate them.
All of this can be done by typing
End of explanation
wingtip.plot_wireframe()
Explanation: The wing tip looks like ...
End of explanation
wgs.append_network("wingtip", wingtip, 1)
Explanation: The wingtip will also be registered to wgs.
End of explanation
wingwake = wing.make_wake(edge_number=3, wake_length=50*100.)
Explanation: In addition to the wing and wingtip, we must also define the "wake" of the wing.
In Panair, the wake is defined as a square network stretching out from the trailing edge of the wing.
The length of the wake should be about 25 to 50 times the length of the reference chord of the wing.
The wake can be easily defined by using the method make_wake.
End of explanation
wgs.append_network("wingwake", wingwake, 18)
Explanation: The edge_number variable means that we are attaching the wake to edge3 of the Network wing.
The wake_length variable defines how long the wake is.
In this case, the wake stretches from x=100. (the TE of the wing) to x=5000..
The wingwake will also be registered to wgs.
End of explanation
wgs.create_stl()
Explanation: Notice that this time we are setting the boundary type as 18.
Boundary type 18 is used to specify that the network is a "wake" emitted from sharp edges.
Now that we have finished defining the geometry of the rectangular wing,
we will check to see if there are any errors in the model we've constructed.
To make this task easy, we will write the geometry into a STL (STereoLithography) format.
To do so, type
End of explanation
wgs.create_wgs()
wgs.create_aux(alpha=6, mach=0.2, cbar=100., span=600.,
sref=60000., xref=25., zref=0.)
Explanation: A stl file named naca0012.stl should be created in the current working directory.
Open this file with an STL viewer. (I recommend Materialise MiniMagics 3.0)
Below is a screen shot of the stl.
Using the stl viewer, we should watch out for the following four points:
The stl model is watertight (e.g. No holes between each network)
There are no intersecting networks (e.g. No networks crossing over each other)
The front side of the network (excluding wake networks) is facing outwards.
In the picture above, the grey side of the stl (which corresponds to the front side of the networks) is facing the flow.
The corners of each network abut on corners from other networks
If you've followed the instructions, the naca0012.stl should fulfill all four points. (If not try again.)
Finally, we will write the input files for panin.
This can be accomplished using the create_wgs and create_aux methods.
End of explanation
from pyPanair.postprocess import write_vtk
write_vtk(n_wake=1)
Explanation: Two files, naca0012.wgs and naca0012.aux, should be created in the current directory.
naca0012.wgs is a file that defines the geometry of the model.
naca0012.aux is a file that defines the analysis conditions.
The definition of each variable is listed below:
* alpha: The angle of attack (AoA) of the analysis.
When conducting multiple cases, the variable is a tuple defining the AoAs. (e.g. (2, 4, 6, 8))
Up to four cases can be conducted in one run.
* mach: The mach number of the flow
* cbar: The reference chord of the wing
* span: The reference span of the wing
* sref: The reference area of the wing
* xref: The x-axis coordinate of the center of rotation
* zref: The z-axis coordinate of the center of rotation
2. Creating an input file for panair
In this chapter we will create an input file for panair, using its preprocessor panin.
If you do not have a copy of panin, download it from PDAS, and compile it.
(I'm using cygwin, so the code blocks below are for cygwin environment.)
bash
$ gfortran -o panin.exe panin.f90
After compiling panin, place panin.exe, naca0012.wgs, and naca0012.aux under the tutorial1/panin/ directory.
Then run panin.
bash
$ ./panin
It will display
Prepare input for PanAir
Version 1.0 (4Jan2000)
Ralph L. Carmichael, Public Domain Aeronautical Software
Enter the name of the auxiliary file:
Enter naca0012.aux. If everything goes fine, it should display
10 records copied from auxiliary file.
9 records in the internal data file.
Geometry data to be read from NACA0012.wgs
Reading WGS file...
Reading network wing
Reading network wingtip
Reading network wingwake
Reading input file instructions...
Command 1 MACH 0.2
Command 11 ALPHA 6
Command 6 CBAR 100.0
Command 7 SPAN 600.0
Command 2 SREF 60000.0
Command 3 XREF 25.0
Command 5 ZREF 0.0
Command 35 BOUN 1 1 18
Writing PanAir input file...
Files a502.in added to your directory.
Also, file panin.dbg
Normal termination of panin, version 1.0 (4Jan2000)
Normal termination of panin
and a502.in (an input file for panair) should be created under the current directory.
3. Running panair
Now it's time to run the analysis.
If you do not have a copy of panair, download it from PDAS, and compile it.
(A bunch of warnings will appear, but it should work.)
bash
$ gfortran -o panair.exe -Ofast -march=native panair.f90
Place panair.exe and a502.in under the tutorial1/panair/ directory, and run panair.
bash
$ ./panair
panair will display the below text.
bash
Panair High Order Panel Code, Version 15.0 (10 December 2009)
Enter name of input file:
Enter a502.in. The analysis will end in a few seconds.
After the analysis ends, output files such as panair.out, agps, and ffmf will be created in the current directory.
panair.out contains the output of the whole analysis (e.g. source and doublet strength of each panel)
agps contains the surface pressure distribution for each case
ffmf contains the aerodynamic coefficients for each case
Warning: Along with the output files, you will also notice the existence of intermediate files (e.g. rwms01).
Users should always delete these intermediate files when running new cases.
(To do so, run clean502.bat or clean502.sh which is contained in the archive file panair.zip)
4. Visualizing the output
4.1 Validation
In this chapter we will visualize the results of the analysis, but before we do so, we will validate the results by checking the aerodynamic coefficients.
Open the ffmf file contained in the tutorial1/panair/ directory with a text editor.
After the headers of the file, you shall see
```
sol-no alpha beta cl cdi cy fx fy fz
mx my mz area
------ ------- ------- ------- ------- --------- --------- --------- --------- ------------
1 6.0000 0.0000 0.47895 0.01268 0.00000 -0.03745 0.00000 0.47765
0.00000 0.00196 0.00000 123954.39335
```
This area shows the aerodynamic coefficients of each case.
A brief explanation of each column is listed below:
sol-no: The case number
alpha: The AoA of the case
beta: The side slip angle of the case
cl: The lift coefficient of the entire geometry
cdi: The induced drag coefficient of the entire geometry
cy: The side force coefficient of the entire geometry
fx, fy, fz: The non-dimensional force in x, y, z direction, respectively
mx, my, mz: The non-dimensional torque in x, y, z direction, respectively
According to lifting line theory (reference 3), when the AoA is $\alpha\mathrm{[rad]}$, the lift coefficient ($C_L$) and induced drag coefficient ($C_{D_i}$) for an untwisted, uncambered rectangular wing with an aspect ratio of $6$ are
$$C_L = 0.9160\frac{\pi^2}{2}\alpha$$
$$C_{D_i} = 0.8744\frac{\pi^3}{24}\alpha^2$$
In the analysis, the AoA is $0.1047 \mathrm{[rad]}$, so the lift and drag coefficients should be $C_L = 0.4734$ and $C_{D_i} = 0.01239$.
The analysis predicted a fairly close value of $C_L = 0.4790$ and $C_{D_i} = 0.01268$.
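This comparison can be reproduced with a few lines of numpy (a quick check of the arithmetic above, using the coefficients from the ffmf excerpt):
```python
import numpy as np

alpha = np.deg2rad(6)                           # AoA used in naca0012.aux
cl_llt = 0.9160 * np.pi**2 / 2 * alpha          # ~0.4734
cdi_llt = 0.8744 * np.pi**3 / 24 * alpha**2     # ~0.01239
print(cl_llt, cdi_llt)
print(0.47895 - cl_llt, 0.01268 - cdi_llt)      # differences from the panair results
```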
4.2 Visualization of the surface pressure distribution
Now we shall move on to the visualization of the result.
First, we begin by converting the agps file into a format that can be used in common visualization applications.
The agps file can be converted into three formats:
vtk: Legacy paraview format
vtu: Multi-block paraview format
dat: Multi-block tecplot format
In this tutorial we choose the vtk format.
To convert the agps file, first move the agps file to the tutorial1/ directory.
Then, use the write_vtk method of pyPanair. (If you wish to use tecplot, enter write_tec instead of write_vtk.)
End of explanation
from pyPanair.postprocess import calc_section_force
calc_section_force(aoa=6, mac=100., rot_center=(25,0,0), casenum=1, networknum=1)
Explanation: (The n_wake variable is used to input the number of wakes.
For example, if we enter 2, the last 2 networks in the geometry will not be included in the output vtk file.)
agps.vtk should be created in the current directory, which can be opened with ParaView.
Below is a screen shot of ParaView.
4.3 Visualization of the local lift coefficient
Next, we calculate the local lift coefficient from the surface pressure distribution.
This can be done using the section_force method.
End of explanation
import pandas as pd
section_force = pd.read_csv("section_force.csv")
section_force
plt.plot(section_force.pos, section_force.cl, "s", mfc="None", mec="b")
plt.xlabel("spanwise position")
plt.ylabel("local lift coefficient")
plt.grid()
plt.show()
Explanation: The definition of each variable is as follows:
aoa: The AoA of the case
mac: The mean aerodynamic chord of the wing
rot_center: The xyz-coordinates of the center of rotation
casenum: The case number of the analysis (e.g. 2 if the sol-num for the case is 2 in ffmf)
networknum: The network number of the wing (e.g. 1 if the wing is the first network in the LaWGS file)
section_force.csv should be created in the current directory.
To visualize it, we will use pandas.
End of explanation |
11,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook presents how to perform maximum-likelihood parameter estimation for multiple neurons. The neurons depend on each other through a set of weights.
Step1: Reading input-output data
Step2: Extracting a spike train from spike positions
Step3: Creating filters
Step4: Conditional Intensity (spike rate)
Step5: <!---Simulating a neuron spike trains
Step6: Conditional intensity as a function of the covariates
Step7: Fitting the likelihood
Step8: Specifying the true parameters
Step9: Extracting the weight matrix | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import random
import csv
%matplotlib inline
import os
import sys
sys.path.append(os.path.join(os.getcwd(),'..'))
sys.path.append(os.path.join(os.getcwd(),'..','code'))
sys.path.append(os.path.join(os.getcwd(),'..','data'))
import filters
import likelihood_functions as lk
import PoissonProcessClasses as PP
import auxiliary_functions as auxfun
import imp
imp.reload(filters)
imp.reload(lk)
imp.reload(auxfun)
imp.reload(PP)
# Number of neurons
nofCells = 2
Explanation: This notebook presents how to perform maximum-likelihood parameter estimation for multiple neurons. The neurons depend on each other through a set of weights.
End of explanation
# creating the path to the data
data_path = os.path.join(os.getcwd(),'..','data')
# reading stimulus
Stim = np.array(pd.read_csv(os.path.join(data_path,'Stim2.csv'),header = None))
# reading location of spikes
# (lengths of tsp sequences are not equal so reading them line by line)
tsp_list = []
with open(os.path.join(data_path,'tsp2.csv')) as csvfile:
tspreader = csv.reader(csvfile)
for row in tspreader:
tsp_list.append(row)
Explanation: Reading input-output data:
End of explanation
dt = 0.01
y_list = []
for tsp in tsp_list:
tsp = np.array(tsp).astype(np.float)
tsp_int = np.ceil((tsp - dt*0.001)/dt)
tsp_int = np.reshape(tsp_int,(tsp_int.shape[0],1))
tsp_int = tsp_int.astype(int)
y_list.append(np.array([item in tsp_int for item in np.arange(Stim.shape[0]/dt)+1]).astype(int))
Explanation: Extracting a spike train from spike positions:
End of explanation
# create a stimulus filter
kpeaks = np.array([0,round(20/3)])
pars_k = {'neye':5,'n':5,'kpeaks':kpeaks,'b':3}
K,K_orth,kt_domain = filters.createStimulusBasis(pars_k, nkt = 20)
# create a post-spike filter
hpeaks = np.array([0.1,2])
pars_h = {'n':5,'hpeaks':hpeaks,'b':.4}
H,H_orth,ht_domain = filters.createPostSpikeBasis(pars_h,dt)
# Interpolate Post Spike Filter
MSP = auxfun.makeInterpMatrix(len(ht_domain),1)
MSP[0,0] = 0
H_orth = np.dot(MSP,H_orth)
Explanation: Creating filters:
End of explanation
M_k = lk.construct_M_k(Stim,K,dt)
M_h_list = []
for tsp in tsp_list:
tsp = np.array(tsp).astype(np.float)
M_h_list.append(lk.construct_M_h(tsp,H_orth,dt,Stim))
# creating a matrix of output covariates
Y = np.array(y_list).T
Explanation: Conditional Intensity (spike rate):
$$\lambda_{\beta}(i) = \exp(K(\beta_k)Stim + H(\beta_h)y + \sum_{j\ne i}w_j I(\beta_{I})*y_j + \mu)$$
$$\lambda_{\beta}(i) = \exp(M_k\beta_k + M_h \beta_h + Y w + \mu)$$
Creating a matrix of covariates:
End of explanation
# tsp_list = []
# for i in range(nofCells):
# tsp_list.append(auxfun.simSpikes(np.hstack((coeff_k,coeff_h)),M,dt))
M_list = []
for i in range(len(M_h_list)):
# exclude the i'th spike-train
M_list.append(np.hstack((M_k,M_h_list[i],np.delete(Y,i,1),np.ones((M_k.shape[0],1)))))
#M_list.append(np.hstack((M_k,M_h_list[i],np.ones((M_h.shape[0],1)))))
Explanation: <!---Simulating a neuron spike trains:-->
End of explanation
coeff_k0 = np.array([ 0.061453,0.284916,0.860335,1.256983,0.910615,0.488660,-0.887091,0.097441,0.026607,-0.090147])
coeff_h0 = np.zeros((5,))
coeff_w0 = np.zeros((nofCells,))
mu_0 = 0
pars0 = np.hstack((coeff_k0,coeff_h0,coeff_w0,mu_0))
pars0 = np.hstack((coeff_k0,coeff_h0,mu_0))
pars0 = np.zeros((17,))
Explanation: Conditional intensity as a function of the covariates:
$$ \lambda_{\beta} = \exp(M\beta) $$
Create a Poisson process model with this intensity:
Setting initial parameters:
End of explanation
res_list = []
for i in range(len(y_list)):
model = PP.PPModel(M_list[i].T,dt = dt/100)
res_list.append(model.fit(y_list[i],start_coef = pars0,maxiter = 500, method = 'L-BFGS-B'))
Explanation: Fitting the likelihood:
End of explanation
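The objective that PPModel.fit optimizes is not spelled out in this notebook. For reference, a minimal sketch of the discretized Poisson-process negative log-likelihood is given below; the function name and the small constant added inside the logarithm are assumptions for this illustration, and PPModel's internal implementation may differ.
```python
import numpy as np

def neg_log_likelihood(beta, M, y, dt):
    # lambda_t = exp(M_t . beta); minimize -sum_t [y_t*log(lambda_t*dt) - lambda_t*dt]
    rate = np.exp(M.dot(beta))
    return -(y.dot(np.log(rate * dt + 1e-12)) - dt * rate.sum())
```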
k_coeff = np.array([0.061453, 0.284916, 0.860335, 1.256983, 0.910615, 0.488660, -0.887091, 0.097441, 0.026607, -0.090147])
h_coeff = np.array([-15.18,38.24,-67.58,-14.06,-3.36])
for i in range(len(res_list)):
k_coeff_predicted = res_list[i].x[:10]
h_coeff_predicted = res_list[i].x[10:15]
print('Estimated dc for neuron '+str(i)+': '+str(res_list[i].x[-1]))
fig,axs = plt.subplots(1,2,figsize = (10,5))
fig.suptitle('Neuron%d'%(i+1))
axs[0].plot(-kt_domain[::-1],np.dot(K,k_coeff_predicted),'r',label = 'predicted')
axs[0].set_title('Stimulus Filter')
axs[0].hold(True)
axs[0].plot(-kt_domain[::-1],np.dot(K,k_coeff),'b',label = 'true')
axs[0].plot(-kt_domain[::-1],np.dot(K,pars0[:10]),'g',label = 'initial')
axs[0].set_xlabel('Time')
axs[0].legend(loc = 'upper left')
axs[1].set_title('Post-Spike Filter')
axs[1].plot(ht_domain,np.dot(H_orth,h_coeff_predicted),'r',label = 'predicted')
axs[1].plot(ht_domain,np.dot(H_orth,h_coeff),'b',label = 'true')
axs[1].plot(ht_domain,np.dot(H_orth,coeff_h0[:H_orth.shape[1]]),'g',label = 'initial')
axs[1].set_title('Post-Spike Filter')
axs[1].set_xlabel('Time')
axs[1].legend(loc = 'upper right')
Explanation: Specifying the true parameters:
End of explanation
W = np.array([np.hstack((res_list[i].x[-(nofCells):-nofCells+i],0,res_list[i].x[-nofCells+i:-1])) for i in range(len(res_list))])
print(W)
Explanation: Extracting the weight matrix:
End of explanation |
11,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reflection and Heating
For a comparison between "Horvat" and "Wilson" methods in the "irad_method" parameter, see the tutorial on Lambert Scattering.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
The parameters that define reflection and heating are all prefaced by "irrad_frac" (fraction of incident flux) and suffixed by "bol" to indicate that they all refer to a bolometric (rather than passband-dependent) process. For this reason, they are not stored in the dataset, but rather directly in the component.
Each of these parameters dictates how much incident flux will be handled by each of the available processes. For now these only include reflection (heating with immediate re-emission, without heat distribution) and lost flux. In the future, heating with distribution and scattering will also be supported.
For each component, these parameters must add up to exactly 1.0 - and this is handled by a constraint which by default constrains the "lost" parameter.
Step3: In order to see the effect of reflection, let's set "irrad_frac_refl_bol" of both of our stars to 0.9 - that is 90% of the incident flux will go towards reflection and 10% will be ignored.
Step4: Since reflection can be a computationally expensive process and in most cases is a low-order effect, there is a switch in the compute options that needs to be enabled in order for reflection to be taken into account. If this switch is False (which it is by default), the albedos are completely ignored and will be treated as if all incident light is lost/ignored.
Step5: Reflection has the most noticeable effect when the two stars are close to each other and have a large temperature ratio.
Step6: Influence on Light Curves (fluxes)
Step7: Let's run models with the reflection switch both turned on and off so that we can compare the two results. We'll also override delta to be a larger number since the computation time required by delta depends largely on the number of surface elements.
Step8: Influence on Meshes (Intensities) | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Reflection and Heating
For a comparison between "Horvat" and "Wilson" methods in the "irad_method" parameter, see the tutorial on Lambert Scattering.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
#logger = phoebe.logger('error')
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
print(b['irrad_frac_refl_bol'])
print(b['irrad_frac_lost_bol'])
print(b['irrad_frac_refl_bol@primary'])
print(b['irrad_frac_lost_bol@primary@component'])
Explanation: Relevant Parameters
The parameters that define reflection and heating are all prefaced by "irrad_frac" (fraction of incident flux) and suffixed by "bol" to indicate that they all refer to a bolometric (rather than passband-dependent) process. For this reason, they are not stored in the dataset, but rather directly in the component.
Each of these parameters dictates how much incident flux will be handled by each of the available processes. For now these only include reflection (heating with immediate re-emission, without heat distribution) and lost flux. In the future, heating with distribution and scattering will also be supported.
For each component, these parameters must add up to exactly 1.0 - and this is handled by a constraint which by default constrains the "lost" parameter.
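If you would rather set the "lost" fraction directly and let the "refl" fraction be constrained instead, you could in principle flip the constraint. The line below is only a hedged sketch - the exact twig syntax should be checked against the PHOEBE documentation for your version:
b.flip_constraint('irrad_frac_lost_bol@primary', solve_for='irrad_frac_refl_bol@primary')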
End of explanation
b.set_value_all('irrad_frac_refl_bol', 0.9)
Explanation: In order to see the effect of reflection, let's set "irrad_frac_refl_bol" of both of our stars to 0.9 - that is 90% of the incident flux will go towards reflection and 10% will be ignored.
End of explanation
print(b['irrad_method@compute'])
Explanation: Since reflection can be a computationally expensive process and in most cases is a low-order effect, there is a switch in the compute options that needs to be enabled in order for reflection to be taken into account. If this switch is False (which it is by default), the albedos are completely ignored and will be treated as if all incident light is lost/ignored.
End of explanation
b['sma@orbit'] = 4.0
b['teff@primary'] = 10000
b['teff@secondary'] = 5000
Explanation: Reflection has the most noticeable effect when the two stars are close to each other and have a large temperature ratio.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101))
Explanation: Influence on Light Curves (fluxes)
End of explanation
b.run_compute(irrad_method='none', ntriangles=700, model='refl_false')
b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true')
afig, mplfig = b.plot(show=True, legend=True)
artists = plt.plot(b['value@times@refl_false'], b['value@fluxes@refl_true']-b['value@fluxes@refl_false'], 'r-')
Explanation: Let's run models with the reflection switch both turned on and off so that we can compare the two results. We'll also override ntriangles to use fewer surface elements, since the computation time depends largely on the number of surface elements.
End of explanation
b.add_dataset('mesh', times=[0.2], columns=['teffs', 'intensities@lc01'])
b.disable_dataset('lc01')
b.run_compute(irrad_method='none', ntriangles=700, model='refl_false', overwrite=True)
b.run_compute(irrad_method='wilson', ntriangles=700, model='refl_true', overwrite=True)
#phoebe.logger('debug')
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false',
fc='intensities', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true',
fc='intensities', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_false',
fc='teffs', ec='face',
draw_sidebars=True, show=True)
afig, mplfig = b.plot(component='secondary', kind='mesh', model='refl_true',
fc='teffs', ec='face',
draw_sidebars=True, show=True)
Explanation: Influence on Meshes (Intensities)
End of explanation |
11,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Exercise 1
Step2: You can use as input the sound files from the sounds directory, thus using a relative path to it. If you run the read_audio_samples() function using the piano.wav sound file as input, with the default arguments, it should return the following samples
Step4: Part 2 - Basic operations with audio
The function minMaxAudio() should read an audio file and return the minimum and maximum values of the audio samples in that file. The input to the function is the wav file name (including the path) and the output should be two floating point values returned as a tuple.
Step5: If you run min_max_audio() using oboe-A4.wav as input, it should return the following output
Step7: Part 3 - Python array indexing
For the function hop_samples(), given a numpy array x, it should return every Mth element of x, starting from the first element. The input arguments to this function are a numpy array x and a positive integer M such that M < number of elements in x. The output of this function should be a numpy array.
Step8: If you run the functionhop_samples() with x = np.arange(10) and M = 2 as inputs, it should return
Step10: Part 4 - Downsampling
One of the required processes to represent an analog signal inside a computer is sampling. The sampling rate is the number of samples obtained in one second when sampling a continuous analog signal to a discrete digital signal. As mentioned we will be working with wav audio files that have a sampling rate of 44100 Hz, which is a typical value. Here you will learn a simple way of changing the original sampling rate of a sound to a lower sampling rate, and will learn the implications it has in the audio quality.
The function down_sample_audio() has as input an audio file with a given sampling rate, it should apply downsampling by a factor of M and return a down-sampled version of the input samples. The sampling rates and downsampling factors to use have to be integer values.
From the output samples if you need to create a wav audio file from an array, you can use the wavwrite() function from the utilFunctions.py module. However, in this exercise there is no need to write an audio file, we will be able to hear the sound without creating a file, just playing the array of samples.
Step12: Test cases for down_sample_audio() | Python Code:
import sys
import os
import numpy as np
# to use this notebook with colab uncomment the next line
# !git clone https://github.com/MTG/sms-tools.git
# and change the next line to sys.path.append('sms-tools/software/models/')
sys.path.append('../software/models/')
from utilFunctions import wavread, wavwrite
# E1 - 1.1: Complete the read_audio_samples() function
def read_audio_samples(input_file, first_sample=50001, num_samples=10):
    """
    Read num_samples samples from an audio file starting at sample first_sample
    Args:
        input_file (str): path of a wav file
    Returns:
        np.array: numpy array containing the selected samples
    """
### Your code here
Explanation: Exercise 1: Python and sounds
This exercise aims to get familiar with some basic audio operations using Python. There are four parts to it: 1) Reading an audio file, 2) Basic operations with audio, 3) Python array indexing, and 4) Downsampling audio - Changing the sampling rate.
Before doing the exercise, please go through the general information for all the exercises given in README.txt of the notebooks directory.
Relevant concepts
Python: Python is a powerful and easy to learn programming language, which is used in a wide variety of application areas. More information in https://www.python.org/. We will use python in all the exercises and in this first one you will start learning about it by performing some basic operations with sound files.
Jupyter notebooks: Jupiter notebooks are interactive documents containing live code, equations, visualizations and narrative text. More information in https://jupyter.org/. It supports Python and all the exercises here use it.
Wav file: The wav file format is a lossless format to store sounds on a hard drive. Each audio sample is stored as a 16 bit integer number (sometimes also as 24 bit integer or 32 bit float). In this course we will work with only one type of audio files. All the sound files we use in the assignments should be wav files that are mono (one channel), in which the samples are stored in 16 bits, and that use (most of the time) the sampling rate of 44100 Hz. Once read into python, the samples will be converted to floating point values with a range from -1 to 1, resulting in a one-dimensional array of floating point values.
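For intuition, the int16-to-float conversion that wavread() performs is essentially a division by the 16-bit full-scale value. Here is a simplified sketch using scipy.io.wavfile directly, assuming the standard sms-tools directory layout (the exact normalization inside utilFunctions.wavread may differ slightly):
import numpy as np
from scipy.io import wavfile
fs, x_int16 = wavfile.read('../sounds/piano.wav')  # sampling rate and 16-bit integer samples
x = np.float32(x_int16) / 2**15                    # scale to approximately the range [-1, 1]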
Part 1 - Reading in an audio file
The read_audio_samples() function bellow should read an audio file and return a specified number of consecutive samples of the file starting at a given sample.
The input to the function is the file name (including the path), plus the location of first sample and the number of consecutive samples to take, and the output should be a numpy array.
If you use the wavread() function from the utilFunctions module available in the software/models directory, the input samples will be automatically converted to a numpy array of floating point numbers with a range from -1 to 1, which is what we want.
Remember that in python, the index of the first sample of an array is 0 and not 1.
End of explanation
# E1 - 1.2: Call read_audio_samples() with the proposed input sound and default arguments
### Your code here
Explanation: You can use as input the sound files from the sounds directory, thus using a relative path to it. If you run the read_audio_samples() function using the piano.wav sound file as input, with the default arguments, it should return the following samples:
array([-0.06213569, -0.04541154, -0.02734458, -0.0093997, 0.00769066, 0.02319407, 0.03503525, 0.04309214, 0.04626606, 0.0441908], dtype=float32)
End of explanation
# E1 - 2.1: Complete function minMaxAudio()
def min_max_audio(input_file):
    """
    Compute the minimum and maximum values of the audio samples in the input file
    Args:
        input_file (str): file name of the wav file (including path)
    Returns:
        tuple: minimum and maximum value of the audio samples, like: (min_val, max_val)
    """
### Your code here
Explanation: Part 2 - Basic operations with audio
The function minMaxAudio() should read an audio file and return the minimum and maximum values of the audio samples in that file. The input to the function is the wav file name (including the path) and the output should be two floating point values returned as a tuple.
End of explanation
# E1 - 2.2: Plot input sound with x-axis in seconds, and call min_max_audio() with the proposed sound file
### Your code here
Explanation: If you run min_max_audio() using oboe-A4.wav as input, it should return the following output:
(-0.83486432, 0.56501967)
End of explanation
# E1 - 3.1: Complete the function hop_samples()
def hop_samples(x, M):
    """
    Return every Mth element of the input array
    Args:
        x (np.array): input numpy array
        M (int): hop size (positive integer)
    Returns:
        np.array: array containing every Mth element in x, starting from the first element in x
    """
### Your code here
Explanation: Part 3 - Python array indexing
For the function hop_samples(), given a numpy array x, it should return every Mth element of x, starting from the first element. The input arguments to this function are a numpy array x and a positive integer M such that M < number of elements in x. The output of this function should be a numpy array.
End of explanation
# E1 - 3.2: Plot input array, call hop_samples() with proposed input, and plot output array
### Your code here
Explanation: If you run the functionhop_samples() with x = np.arange(10) and M = 2 as inputs, it should return:
array([0, 2, 4, 6, 8])
End of explanation
# E1 - 4.1: Complete function down_sample_audio()
def down_sample_audio(input_file, M):
    """
    Downsample by a factor of M the input signal
    Args:
        input_file (str): file name of the wav file (including path)
        M (int): downsampling factor (positive integer)
    Returns:
        tuple: input samples (np.array), original sampling rate (int), down-sampled signal (np.array),
            and new sampling rate (int), like: (x, fs, y, fs_new)
    """
### Your code here
Explanation: Part 4 - Downsampling
One of the required processes to represent an analog signal inside a computer is sampling. The sampling rate is the number of samples obtained in one second when sampling a continuous analog signal to a discrete digital signal. As mentioned we will be working with wav audio files that have a sampling rate of 44100 Hz, which is a typical value. Here you will learn a simple way of changing the original sampling rate of a sound to a lower sampling rate, and will learn the implications it has in the audio quality.
The function down_sample_audio() has as input an audio file with a given sampling rate, it should apply downsampling by a factor of M and return a down-sampled version of the input samples. The sampling rates and downsampling factors to use have to be integer values.
From the output samples if you need to create a wav audio file from an array, you can use the wavwrite() function from the utilFunctions.py module. However, in this exercise there is no need to write an audio file, we will be able to hear the sound without creating a file, just playing the array of samples.
End of explanation
import IPython.display as ipd
import matplotlib.pyplot as plt
# E1 - 4.2: Plot and play input sounds, call the function down_sample_audio() for the two test cases,
# and plot and play the output sounds.
### Your code here
# E1 - 4.3: Explain the results of part 4. What happened to the output signals compared to the input ones?
# Is there a difference between the 2 cases? Why? How could we avoid damaging the signal when downsampling it?
Explanation: Test cases for down_sample_audio():
Test Case 1: Use the file from the sounds directory vibraphone-C6.wav and a downsampling factor of M=14.
Test Case 2: Use the file from the sounds directory sawtooth-440.wav and a downsampling factor of M=14.
To play the output samples, import the Ipython.display package and use ipd.display(ipd.Audio(data=y, rate=fs_new)). To visualize the output samples import the matplotlib.pyplot package and use plt.plot(x).
You can find some related information in https://en.wikipedia.org/wiki/Downsampling_(signal_processing)
End of explanation |
11,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Will be focusing on LINEAR linear_0006 for now, until I better understand how they compare.
Step1: Focusing on one of the periapse tables for now | Python Code:
df = df[df.BIN_PATTERN_INDEX == 'LINEAR linear_0006']
# now can drop that column
df = df.drop('BIN_PATTERN_INDEX', axis=1)
bin_tables = df.BIN_TBL.value_counts()
bin_tables
for ind in bin_tables.index:
print(ind)
print(df[df.BIN_TBL==ind].orbit_segment.value_counts())
Explanation: Will be focusing on LINEAR linear_0006 for now, until I better understand how they compare.
End of explanation
df = df[df.BIN_TBL=='LINEAR 7,8 linear_0006']
df = df.drop('BIN_TBL', axis=1)
df.orbit_segment.value_counts()
df.index
df.columns
df.CHANNEL.value_counts()
df.INT_TIME.value_counts()
df.BINNING_SET.value_counts()
df.NAXIS1.value_counts()
df.NAXIS2.value_counts()
to_drop = []
for col in df.columns:
length = len(df[col].value_counts())
if length == 1:
to_drop.append(col)
df = df.drop(to_drop, axis=1)
df.columns
from iuvs import calib
df.DET_TEMP = df.DET_TEMP.map(calib.convert_det_temp_to_C) +273.15
df.CASE_TEMP = df.CASE_TEMP.map(calib.convert_case_temp_to_C) + 273.15
%matplotlib nbagg
import seaborn as sns
sns.set_context('talk')
from sklearn.preprocessing import normalize
df.index
df = df.reset_index()
df.set_index('TIME_OF_INT', inplace=True)
df['normalized_mean'] = normalize(df['mean']).T
df[['mean']].plot(style='*')
df.plot(kind='scatter', x='CASE_TEMP', y='mean')
df.plot(kind='scatter',x='DET_TEMP', y='CASE_TEMP')
df.plot(kind='scatter', x='SOLAR_LONGITUDE',y='mean')
df.plot(kind='scatter', x='SOLAR_LONGITUDE', y='DET_TEMP')
from sklearn import linear_model, decomposition, datasets
pca = decomposition.RandomizedPCA()
df.columns
Xcols = 'case_temp det_temp fov_deg lya_centroid mirror_deg mirror_dn mir_deg solar_longitude'.upper().split()
Xcols += ['mean']
Xcols
pca.fit(df[Xcols].values)
plt.close('all')
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.semilogy(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
Explanation: Focusing on one of the periapse tables for now:
End of explanation |
11,029 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How does one convert a list of Z-scores from the Z-distribution (standard normal distribution, Gaussian distribution) to left-tailed p-values? Original data is sampled from X ~ N(mu, sigma). I have yet to find the magical function in Scipy's stats module to do this, but one must be there. | Problem:
import scipy.stats
import numpy as np
z_scores = [-3, -2, 0, 2, 2.5]
mu = 3
sigma = 4
temp = np.array(z_scores)
p_values = scipy.stats.norm.cdf(temp) |
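As a quick sanity check (illustrative only), a z-score of 0 should map to a left-tailed p-value of 0.5, and more negative z-scores to smaller p-values:
print(p_values)  # approximately [0.00135, 0.02275, 0.5, 0.97725, 0.99379]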
11,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 11
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: In this chapter, we develop a model of an epidemic as it spreads in a
susceptible population, and use it to evaluate the effectiveness of
possible interventions.
My presentation of the model in the next few chapters is based on an excellent article by David Smith and Lang Moore, [^1]
Step2: And then convert the numbers to fractions by dividing by the total
Step3: For now, let's assume we know the time between contacts and time between
recoveries
Step4: We can use them to compute the parameters of the model
Step5: Now we need a System object to store the parameters and initial
conditions. The following function takes the system parameters and returns a new System object
Step6: The default value for t_end is 14 weeks, about the length of a
semester.
Here's what the System object looks like.
Step7: The update function
At any point in time, the state of the system is represented by a
State object with three variables, S, I and R. So I'll define an
update function that takes as parameters a State object, the current
time, and a System object
Step8: The first line uses a feature we have not seen before, multiple
assignment. The value on the right side is a State object that
contains three values. The left side is a sequence of three variable
names. The assignment does just what we want
Step9: You might notice that this version of update_func does not use one of its parameters, t. I include it anyway because update functions
sometimes depend on time, and it is convenient if they all take the same parameters, whether they need them or not.
Running the simulation
Now we can simulate the model over a sequence of time steps
Step10: The parameters of run_simulation are the System object and the
update function. The System object contains the parameters, initial
conditions, and values of t0 and t_end.
We can call run_simulation like this
Step11: The result is the final state of the system
Step12: This result indicates that after 14 weeks (98 days), about 52% of the
population is still susceptible, which means they were never infected,
less than 1% are actively infected, and 48% have recovered, which means they were infected at some point.
Collecting the results
The previous version of run_simulation only returns the final state,
but we might want to see how the state changes over time. We'll consider two ways to do that
Step13: First, we create TimeSeries objects to store the results. Notice that
the variables S, I, and R are TimeSeries objects now.
Next we initialize state, t0, and the first elements of S, I and
R.
Inside the loop, we use update_func to compute the state of the system
at the next time step, then use multiple assignment to unpack the
elements of state, assigning each to the corresponding TimeSeries.
At the end of the function, we return the values S, I, and R. This
is the first example we have seen where a function returns more than one
value.
Now we can run the function like this
Step14: We'll use the following function to plot the results
Step15: And run it like this
Step16: Notice that it takes about three weeks (21 days) for the outbreak to get going, and about six weeks (42 days) before it peaks. The fraction of the population that's infected is never very high, but it adds up. In total, almost half the population gets sick.
Now with a TimeFrame
If the number of state variables is small, storing them as separate
TimeSeries objects might not be so bad. But a better alternative is to use a TimeFrame, which is another object defined in the ModSim
library.
A TimeFrame is a kind of a DataFrame, which we used in.
Here's a more concise version of run_simulation using a TimeFrame
Step17: The first line creates an empty TimeFrame with one column for each
state variable. Then, before the loop starts, we store the initial
conditions in the TimeFrame at t0. Based on the way we've been using
TimeSeries objects, it is tempting to write
Step18: And plot the results like this
Step19: As with a DataFrame, we can use the dot operator to select columns
from a TimeFrame.
Summary
Exercises
Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?
Hint | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 11
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
init = State(S=89, I=1, R=0)
init
Explanation: In this chapter, we develop a model of an epidemic as it spreads in a
susceptible population, and use it to evaluate the effectiveness of
possible interventions.
My presentation of the model in the next few chapters is based on an excellent article by David Smith and Lang Moore, [^1]: Smith and Moore, "The SIR Model for Spread of Disease," Journal of Online Mathematics and its Applications, December 2001, available at http://modsimpy.com/sir.
The Freshman Plague
Every year at Olin College, about 90 new students come to campus from
around the country and the world. Most of them arrive healthy and happy, but usually at least one brings with them some kind of infectious disease. A few weeks later, predictably, some fraction of the incoming class comes down with what we call "The Freshman Plague".
In this chapter we introduce a well-known model of infectious disease,
the Kermack-McKendrick model, and use it to explain the progression of
the disease over the course of the semester, predict the effect of
possible interventions (like immunization) and design the most effective intervention campaign.
So far we have done our own modeling; that is, we've chosen physical
systems, identified factors that seem important, and made decisions
about how to represent them. In this chapter we start with an existing
model and reverse-engineer it. Along the way, we consider the modeling
decisions that went into it and identify its capabilities and
limitations.
The SIR model
The Kermack-McKendrick model is a simple version of an SIR model,
so-named because it considers three categories of people:
S: People who are "susceptible\", that is, capable of
contracting the disease if they come into contact with someone who
is infected.
I: People who are "infectious\", that is, capable of passing
along the disease if they come into contact with someone
susceptible.
R: People who are "recovered\". In the basic version of the
model, people who have recovered are considered to be immune to
reinfection. That is a reasonable model for some diseases, but not
for others, so it should be on the list of assumptions to reconsider
later.
Let's think about how the number of people in each category changes over time. Suppose we know that people with the disease are infectious for a period of 4 days, on average. If 100 people are infectious at a
particular point in time, and we ignore the particular time each one
became infected, we expect about 1 out of 4 to recover on any particular day.
Putting that a different way, if the time between recoveries is 4 days, the recovery rate is about 0.25 recoveries per day, which we'll denote with the Greek letter gamma, $\gamma$, or the variable name gamma.
If the total number of people in the population is $N$, and the fraction currently infectious is $i$, the total number of recoveries we expect per day is $\gamma i N$.
Now let's think about the number of new infections. Suppose we know that each susceptible person comes into contact with 1 person every 3 days, on average, in a way that would cause them to become infected if the other person is infected. We'll denote this contact rate with the Greek letter beta, $\beta$.
It's probably not reasonable to assume that we know $\beta$ ahead of
time, but later we'll see how to estimate it based on data from previous outbreaks.
If $s$ is the fraction of the population that's susceptible, $s N$ is
the number of susceptible people, $\beta s N$ is the number of contacts per day, and $\beta s i N$ is the number of those contacts where the other person is infectious.
In summary:
The number of recoveries we expect per day is $\gamma i N$; dividing by $N$ yields the fraction of the population that recovers in a day, which is $\gamma i$.
The number of new infections we expect per day is $\beta s i N$;
dividing by $N$ yields the fraction of the population that gets
infected in a day, which is $\beta s i$.
This model assumes that the population is closed; that is, no one
arrives or departs, so the size of the population, $N$, is constant.
The SIR equations
If we treat time as a continuous quantity, we can write differential
equations that describe the rates of change for $s$, $i$, and $r$ (where $r$ is the fraction of the population that has recovered):
$$\begin{aligned}
\frac{ds}{dt} &= -\beta s i \\
\frac{di}{dt} &= \beta s i - \gamma i \\
\frac{dr}{dt} &= \gamma i
\end{aligned}$$
To avoid cluttering the equations, I leave it implied that $s$ is a function of time, $s(t)$, and likewise for $i$ and $r$.
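To make these rates concrete, here is a quick numeric check with made-up example values (an illustrative sketch, not part of the model code):
N = 90                   # population size
s, i = 0.85, 0.10        # example fractions susceptible and infectious
beta, gamma = 1/3, 1/4   # contact rate and recovery rate per day
print(gamma * i * N)     # expected recoveries per day: 2.25
print(beta * s * i * N)  # expected new infections per day: 2.55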
SIR models are examples of compartment models, so-called because
they divide the world into discrete categories, or compartments, and
describe transitions from one compartment to another. Compartments are
also called stocks and transitions between them are called
flows.
In this example, there are three stocks---susceptible, infectious, and
recovered---and two flows---new infections and recoveries. Compartment
models are often represented visually using stock and flow diagrams (see http://modsimpy.com/stock).
The following figure shows the stock and flow diagram for an SIR
model.
Stocks are represented by rectangles, flows by arrows. The widget in the middle of the arrows represents a valve that controls the rate of flow; the diagram shows the parameters that control the valves.
Implementation
For a given physical system, there are many possible models, and for a
given model, there are many ways to represent it. For example, we can
represent an SIR model as a stock-and-flow diagram, as a set of
differential equations, or as a Python program. The process of
representing a model in these forms is called implementation. In
this section, we implement the SIR model in Python.
I'll represent the initial state of the system using a State object
with state variables S, I, and R; they represent the fraction of
the population in each compartment.
We can initialize the State object with the number of people in each compartment, assuming there is one infected student in a class of 90:
End of explanation
from numpy import sum
init /= sum(init)
init
Explanation: And then convert the numbers to fractions by dividing by the total:
End of explanation
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
Explanation: For now, let's assume we know the time between contacts and time between
recoveries:
End of explanation
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
Explanation: We can use them to compute the parameters of the model:
End of explanation
#export
def make_system(beta, gamma):
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
Explanation: Now we need a System object to store the parameters and initial
conditions. The following function takes the system parameters and returns a new System object:
End of explanation
system = make_system(beta, gamma)
system
Explanation: The default value for t_end is 14 weeks, about the length of a
semester.
Here's what the System object looks like.
End of explanation
#export
def update_func(state, t, system):
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
Explanation: The update function
At any point in time, the state of the system is represented by a
State object with three variables, S, I and R. So I'll define an
update function that takes as parameters a State object, the current
time, and a System object:
End of explanation
state = update_func(init, 0, system)
state
Explanation: The first line uses a feature we have not seen before, multiple
assignment. The value on the right side is a State object that
contains three values. The left side is a sequence of three variable
names. The assignment does just what we want: it assigns the three
values from the State object to the three variables, in order.
The variables s, i and r, are lowercase to distinguish them
from the state variables, S, I and R.
The update function computes infected and recovered as a fraction of the population, then updates s, i and r. The return value is a State that contains the updated values.
When we call update_func like this:
End of explanation
#export
from numpy import arange
def run_simulation(system, update_func):
state = system.init
for t in arange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
Explanation: You might notice that this version of update_func does not use one of its parameters, t. I include it anyway because update functions
sometimes depend on time, and it is convenient if they all take the same parameters, whether they need them or not.
Running the simulation
Now we can simulate the model over a sequence of time steps:
End of explanation
system = make_system(beta, gamma)
final_state = run_simulation(system, update_func)
Explanation: The parameters of run_simulation are the System object and the
update function. The System object contains the parameters, initial
conditions, and values of t0 and t_end.
We can call run_simulation like this:
End of explanation
final_state
Explanation: The result is the final state of the system:
End of explanation
from modsim import TimeSeries
def run_simulation(system, update_func):
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in arange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
Explanation: This result indicates that after 14 weeks (98 days), about 52% of the
population is still susceptible, which means they were never infected,
less than 1% are actively infected, and 48% have recovered, which means they were infected at some point.
Collecting the results
The previous version of run_simulation only returns the final state,
but we might want to see how the state changes over time. We'll consider two ways to do that: first, using three TimeSeries objects, then using a new object called a TimeFrame.
Here's the first version:
End of explanation
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
Explanation: First, we create TimeSeries objects to store the results. Notice that
the variables S, I, and R are TimeSeries objects now.
Next we initialize state, t0, and the first elements of S, I and
R.
Inside the loop, we use update_func to compute the state of the system
at the next time step, then use multiple assignment to unpack the
elements of state, assigning each to the corresponding TimeSeries.
At the end of the function, we return the values S, I, and R. This
is the first example we have seen where a function returns more than one
value.
Now we can run the function like this:
End of explanation
#export
from modsim import decorate
def plot_results(S, I, R):
S.plot(style='--', label='Susceptible')
I.plot(style='-', label='Infected')
R.plot(style=':', label='Resistant')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
Explanation: We'll use the following function to plot the results:
End of explanation
plot_results(S, I, R)
Explanation: And run it like this:
End of explanation
#export
from modsim import TimeFrame
def run_simulation(system, update_func):
frame = TimeFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in arange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], t, system)
return frame
Explanation: Notice that it takes about three weeks (21 days) for the outbreak to get going, and about six weeks (42 days) before it peaks. The fraction of the population that's infected is never very high, but it adds up. In total, almost half the population gets sick.
Now with a TimeFrame
If the number of state variables is small, storing them as separate
TimeSeries objects might not be so bad. But a better alternative is to use a TimeFrame, which is another object defined in the ModSim
library.
A TimeFrame is a kind of a DataFrame, which we used in an earlier chapter.
Here's a more concise version of run_simulation using a TimeFrame:
End of explanation
results = run_simulation(system, update_func)
Explanation: The first line creates an empty TimeFrame with one column for each
state variable. Then, before the loop starts, we store the initial
conditions in the TimeFrame at t0. Based on the way we've been using
TimeSeries objects, it is tempting to write:
frame[system.t0] = system.init
But when you use the bracket operator with a TimeFrame or DataFrame, it selects a column, not a row.
To select a row, we have to use loc, like this:
frame.loc[system.t0] = system.init
Since the value on the right side is a State, the assignment matches
up the index of the State with the columns of the TimeFrame; that
is, it assigns the S value from system.init to the S column of
frame, and likewise with I and R.
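To see the column-versus-row behavior in isolation, here is a minimal pandas-only illustration (separate from the model code):
import pandas as pd
frame = pd.DataFrame(columns=['S', 'I', 'R'], dtype=float)
frame.loc[0] = [0.98, 0.02, 0.0]   # loc assigns a row, matching values to columns
print(frame['S'])                  # the bracket operator selects the 'S' column
print(frame.loc[0])                # loc selects the row with label 0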
We use the same feature to write the loop more concisely, assigning the State we get from update_func directly to the next row of
frame.
Finally, we return frame. We can call this version of run_simulation like this:
End of explanation
plot_results(results.S, results.I, results.R)
Explanation: And plot the results like this:
End of explanation
# Solution
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
s_0 = system.init.S
final = run_simulation(system, update_func)
s_end = final.S[system.t_end]
s_0 - s_end
Explanation: As with a DataFrame, we can use the dot operator to select columns
from a TimeFrame.
Summary
Exercises
Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?
Hint: what is the change in S between the beginning and the end of the simulation?
End of explanation |
11,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MCMC
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: For most of this book we've been using grid methods to approximate posterior distributions.
For models with one or two parameters, grid algorithms are fast and the results are precise enough for most practical purposes.
With three parameters, they start to be slow, and with more than three they are usually not practical.
In the previous chapter we saw that we can solve some problems using conjugate priors.
But the problems we can solve this way tend to be the same ones we can solve with grid algorithms.
For problems with more than a few parameters, the most powerful tool we have is MCMC, which stands for "Markov chain Monte Carlo".
In this context, "Monte Carlo" refers to to methods that generate random samples from a distribution.
Unlike grid methods, MCMC methods don't try to compute the posterior distribution; they sample from it instead.
It might seem strange that you can generate a sample without ever computing the distribution, but that's the magic of MCMC.
To demonstrate, we'll start by solving the World Cup problem.
Yes, again.
The World Cup Problem
In <<_PoissonProcesses>> we modeled goal scoring in football (soccer) as a Poisson process characterized by a goal-scoring rate, denoted $\lambda$.
We used a gamma distribution to represent the prior distribution of $\lambda$, then we used the outcome of the game to compute the posterior distribution for both teams.
To answer the first question, we used the posterior distributions to compute the "probability of superiority" for France.
To answer the second question, we computed the posterior predictive distributions for each team, that is, the distribution of goals we expect in a rematch.
In this chapter we'll solve this problem again using PyMC3, which is a library that provide implementations of several MCMC methods.
But we'll start by reviewing the grid approximation of the prior and the prior predictive distribution.
Grid Approximation
As we did in <<_TheGammaDistribution>> we'll use a gamma distribution with parameter $\alpha=1.4$ to represent the prior.
Step2: I'll use linspace to generate possible values for $\lambda$, and pmf_from_dist to compute a discrete approximation of the prior.
Step3: We can use the Poisson distribution to compute the likelihood of the data; as an example, we'll use 4 goals.
Step4: Now we can do the update in the usual way.
Step5: Soon we will solve the same problem with PyMC3, but first it will be useful to introduce something new
Step6: The result is an array of possible values for the goal-scoring rate, $\lambda$.
For each value in sample_prior, I'll generate one value from a Poisson distribution.
Step7: sample_prior_pred is a sample from the prior predictive distribution.
To see what it looks like, we'll compute the PMF of the sample.
Step8: And here's what it looks like
Step9: One reason to compute the prior predictive distribution is to check whether our model of the system seems reasonable.
In this case, the distribution of goals seems consistent with what we know about World Cup football.
But in this chapter we have another reason
Step10: After importing pymc3, we create a Model object named model.
If you are not familiar with the with statement in Python, it is a way to associate a block of statements with an object.
In this example, the two indented statements are associated with the new Model object. As a result, when we create the distribution objects, Gamma and Poisson, they are added to the Model.
Inside the with statement
Step11: In this visualization, the ovals show that lam is drawn from a gamma distribution and goals is drawn from a Poisson distribution.
The arrow shows that the values of lam are used as parameters for the distribution of goals.
Sampling the Prior
PyMC3 provides a function that generates samples from the prior and prior predictive distributions.
We can use a with statement to run this function in the context of the model.
Step12: The result is a dictionary-like object that maps from the variables, lam and goals, to the samples.
We can extract the sample of lam like this
Step14: The following figure compares the CDF of this sample to the CDF of the sample we generated using the gamma object from SciPy.
Step15: The results are similar, which confirms that the specification of the model is correct and the sampler works as advertised.
From the trace we can also extract goals, which is a sample from the prior predictive distribution.
Step16: And we can compare it to the sample we generated using the poisson object from SciPy.
Because the quantities in the posterior predictive distribution are discrete (number of goals) I'll plot the CDFs as step functions.
Step17: Again, the results are similar, so we have some confidence we are using PyMC3 right.
When Do We Get to Inference?
Finally, we are ready for actual inference. We just have to make one small change.
Here is the model we used to generate the prior predictive distribution
Step18: And here is the model we'll use to compute the posterior distribution.
Step19: The difference is that we mark goals as observed and provide the observed data, 4.
And instead of calling sample_prior_predictive, we'll call sample, which is understood to sample from the posterior distribution of lam.
Step20: Although the specification of these models is similar, the sampling process is very different.
I won't go into the details of how PyMC3 works, but here are a few things you should be aware of
Step21: And we can compare the CDF of this sample to the posterior we computed by grid approximation
Step22: The results from PyMC3 are consistent with the results from the grid approximation.
Posterior Predictive Distribution
Finally, to sample from the posterior predictive distribution, we can use sample_posterior_predictive
Step23: The result is a dictionary that contains a sample of goals.
Step24: I'll also generate a sample from the posterior distribution we computed by grid approximation.
Step25: And we can compare the two samples.
Step26: Again, the results are consistent.
So we've established that we can compute the same results using a grid approximation or PyMC3.
But it might not be clear why.
In this example, the grid algorithm requires less computation than MCMC, and the result is a pretty good approximation of the posterior distribution, rather than a sample.
However, this is a simple model with just one parameter.
In fact, we could have solved it with even less computation, using a conjugate prior.
The power of PyMC3 will be clearer with a more complex model.
Happiness
Recently I read "Happiness and Life Satisfaction"
by Esteban Ortiz-Ospina and Max Roser, which discusses (among many other things) the relationship between income and happiness, both between countries, within countries, and over time.
It cites the "World Happiness Report", which includes results of a multiple regression analysis that explores the relationship between happiness and six potentially predictive factors
Step27: We can use Pandas to read the data into a DataFrame.
Step28: The DataFrame has one row for each of 153 countries and one column for each of 20 variables.
The column called 'Ladder score' contains the measurements of happiness we will try to predict.
Step29: Simple Regression
To get started, let's look at the relationship between happiness and income as represented by gross domestic product (GDP) per person.
The column named 'Logged GDP per capita' represents the natural logarithm of GDP for each country, divided by population, corrected for purchasing power parity (PPP).
Step30: The following figure is a scatter plot of score versus log_gdp, with one marker for each country.
Step31: It's clear that there is a relationship between these variables
Step32: And here are the results.
Step33: The estimated slope is about 0.72, which suggests that an increase of one unit in log-GDP, which is a factor of $e \approx 2.7$ in GDP, is associated with an increase of 0.72 units on the happiness ladder.
Now let's estimate the same parameters using PyMC3.
We'll use the same regression model as in Section <<_RegressionModel>>
Step34: The prior distributions for the parameters a, b, and sigma are uniform with ranges that are wide enough to cover the posterior distributions.
y_est is the estimated value of the dependent variable, based on the regression equation.
And y is a normal distribution with mean y_est and standard deviation sigma.
Notice how the data are included in the model
Step35: When you run the sampler, you might get warning messages about "divergences" and the "acceptance probability".
You can ignore them for now.
The result is an object that contains samples from the joint posterior distribution of a, b, and sigma.
Step36: ArviZ provides plot_posterior, which we can use to plot the posterior distributions of the parameters.
Here are the posterior distributions of slope, a, and intercept, b.
Step37: The graphs show the distributions of the samples, estimated by KDE, and 94% credible intervals. In the figure, "HDI" stands for "highest-density interval".
The means of these samples are consistent with the parameters we estimated with linregress.
Step38: Finally, we can check the marginal posterior distribution of sigma
Step39: The values in the posterior distribution of sigma seem plausible.
The simple regression model has only three parameters, so we could have used a grid algorithm.
But the regression model in the happiness report has six predictive variables, so it has eight parameters in total, including the intercept and sigma.
It is not practical to compute a grid approximation for a model with eight parameters.
Even a coarse grid, with 20 points along each dimension, would have more than 25 billion points.
And with 153 countries, we would have to compute almost 4 trillion likelihoods.
But PyMC3 can handle a model with eight parameters comfortably, as we'll see in the next section.
Step40: Multiple Regression
Before we implement the multiple regression model, I'll select the columns we need from the DataFrame.
Step41: The predictive variables have different units
Step42: Now let's build the model.
I'll extract the dependent variable.
Step43: And the dependent variables.
Step44: And here's the model. b0 is the intercept; b1 through b6 are the parameters associated with the predictive variables.
Step45: We could express this model more concisely using a vector of predictive variables and a vector of parameters, but I decided to keep it simple.
Now we can sample from the joint posterior distribution.
Step46: Because we standardized the data, we expect the intercept to be 0, and in fact the posterior mean of b0 is close to 0.
Step47: We can also check the posterior mean of sigma
Step48: From trace4 we can extract samples from the posterior distributions of the parameters and compute their means.
Step50: We can also compute 94% credible intervals (between the 3rd and 97th percentiles).
Step51: The following table summarizes the results.
Step52: It looks like GDP has the strongest association with happiness (or satisfaction), followed by social support, life expectancy, and freedom.
After controlling for those other factors, the parameters of the other factors are substantially smaller, and since the CI for generosity includes 0, it is plausible that generosity is not substantially related to happiness, at least as they were measured in this study.
This example demonstrates the power of MCMC to handle models with more than a few parameters.
But it does not really demonstrate the power of Bayesian regression.
If the goal of a regression model is to estimate parameters, there is no great advantage to Bayesian regression compared to conventional least squares regression.
Bayesian methods are more useful if we plan to use the posterior distribution of the parameters as part of a decision analysis process.
Summary
In this chapter we used PyMC3 to implement two models we've seen before
Step53: Exercise
Step54: Exercise
Step55: Exercise
Step56: I'll use groupby to separate the treated group from the control group.
Step57: Now estimate the parameters for the treated group.
Step58: Exercise
Step59: In total, 32 bugs have been discovered | Python Code:
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: MCMC
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from scipy.stats import gamma
alpha = 1.4
prior_dist = gamma(alpha)
Explanation: For most of this book we've been using grid methods to approximate posterior distributions.
For models with one or two parameters, grid algorithms are fast and the results are precise enough for most practical purposes.
With three parameters, they start to be slow, and with more than three they are usually not practical.
In the previous chapter we saw that we can solve some problems using conjugate priors.
But the problems we can solve this way tend to be the same ones we can solve with grid algorithms.
For problems with more than a few parameters, the most powerful tool we have is MCMC, which stands for "Markov chain Monte Carlo".
In this context, "Monte Carlo" refers to to methods that generate random samples from a distribution.
Unlike grid methods, MCMC methods don't try to compute the posterior distribution; they sample from it instead.
It might seem strange that you can generate a sample without ever computing the distribution, but that's the magic of MCMC.
To demonstrate, we'll start by solving the World Cup problem.
Yes, again.
The World Cup Problem
In <<_PoissonProcesses>> we modeled goal scoring in football (soccer) as a Poisson process characterized by a goal-scoring rate, denoted $\lambda$.
We used a gamma distribution to represent the prior distribution of $\lambda$, then we used the outcome of the game to compute the posterior distribution for both teams.
To answer the first question, we used the posterior distributions to compute the "probability of superiority" for France.
To answer the second question, we computed the posterior predictive distributions for each team, that is, the distribution of goals we expect in a rematch.
In this chapter we'll solve this problem again using PyMC3, which is a library that provide implementations of several MCMC methods.
But we'll start by reviewing the grid approximation of the prior and the prior predictive distribution.
Grid Approximation
As we did in <<_TheGammaDistribution>> we'll use a gamma distribution with parameter $\alpha=1.4$ to represent the prior.
End of explanation
import numpy as np
from utils import pmf_from_dist
lams = np.linspace(0, 10, 101)
prior_pmf = pmf_from_dist(prior_dist, lams)
Explanation: I'll use linspace to generate possible values for $\lambda$, and pmf_from_dist to compute a discrete approximation of the prior.
End of explanation
from scipy.stats import poisson
data = 4
likelihood = poisson.pmf(data, lams)
Explanation: We can use the Poisson distribution to compute the likelihood of the data; as an example, we'll use 4 goals.
End of explanation
posterior = prior_pmf * likelihood
posterior.normalize()
Explanation: Now we can do the update in the usual way.
End of explanation
sample_prior = prior_dist.rvs(1000)
Explanation: Soon we will solve the same problem with PyMC3, but first it will be useful to introduce something new: the prior predictive distribution.
Prior Predictive Distribution
We have seen the posterior predictive distribution in previous chapters; the prior predictive distribution is similar except that (as you might have guessed) it is based on the prior.
To estimate the prior predictive distribution, we'll start by drawing a sample from the prior.
End of explanation
from scipy.stats import poisson
sample_prior_pred = poisson.rvs(sample_prior)
Explanation: The result is an array of possible values for the goal-scoring rate, $\lambda$.
For each value in sample_prior, I'll generate one value from a Poisson distribution.
End of explanation
from empiricaldist import Pmf
pmf_prior_pred = Pmf.from_seq(sample_prior_pred)
Explanation: sample_prior_pred is a sample from the prior predictive distribution.
To see what it looks like, we'll compute the PMF of the sample.
End of explanation
from utils import decorate
pmf_prior_pred.bar()
decorate(xlabel='Number of goals',
ylabel='PMF',
title='Prior Predictive Distribution')
Explanation: And here's what it looks like:
End of explanation
import pymc3 as pm
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=1.4, beta=1.0)
goals = pm.Poisson('goals', lam)
Explanation: One reason to compute the prior predictive distribution is to check whether our model of the system seems reasonable.
In this case, the distribution of goals seems consistent with what we know about World Cup football.
But in this chapter we have another reason: computing the prior predictive distribution is a first step toward using MCMC.
Introducing PyMC3
PyMC3 is a Python library that provides several MCMC methods.
To use PyMC3, we have to specify a model of the process that generates the data.
In this example, the model has two steps:
First we draw a goal-scoring rate from the prior distribution,
Then we draw a number of goals from a Poisson distribution.
Here's how we specify this model in PyMC3:
End of explanation
pm.model_to_graphviz(model)
Explanation: After importing pymc3, we create a Model object named model.
If you are not familiar with the with statement in Python, it is a way to associate a block of statements with an object.
In this example, the two indented statements are associated with the new Model object. As a result, when we create the distribution objects, Gamma and Poisson, they are added to the Model.
Inside the with statement:
The first line creates the prior, which is a gamma distribution with the given parameters.
The second line creates the prior predictive, which is a Poisson distribution with the parameter lam.
The first parameter of Gamma and Poisson is a string variable name.
PyMC3 provides a function that generates a visual representation of the model.
End of explanation
with model:
trace = pm.sample_prior_predictive(1000)
Explanation: In this visualization, the ovals show that lam is drawn from a gamma distribution and goals is drawn from a Poisson distribution.
The arrow shows that the values of lam are used as parameters for the distribution of goals.
Sampling the Prior
PyMC3 provides a function that generates samples from the prior and prior predictive distributions.
We can use a with statement to run this function in the context of the model.
End of explanation
sample_prior_pymc = trace['lam']
sample_prior_pymc.shape
Explanation: The result is a dictionary-like object that maps from the variables, lam and goals, to the samples.
We can extract the sample of lam like this:
End of explanation
from empiricaldist import Cdf
def plot_cdf(sample, **options):
    """
    Plot the CDF of a sample.
    sample: sequence of quantities
    """
Cdf.from_seq(sample).plot(**options)
plot_cdf(sample_prior,
label='SciPy sample',
color='C5')
plot_cdf(sample_prior_pymc,
label='PyMC3 sample',
color='C0')
decorate(xlabel=r'Goals per game ($\lambda$)',
ylabel='CDF',
title='Prior distribution')
Explanation: The following figure compares the CDF of this sample to the CDF of the sample we generated using the gamma object from SciPy.
End of explanation
sample_prior_pred_pymc = trace['goals']
sample_prior_pred_pymc.shape
Explanation: The results are similar, which confirms that the specification of the model is correct and the sampler works as advertised.
From the trace we can also extract goals, which is a sample from the prior predictive distribution.
End of explanation
def plot_pred(sample, **options):
Cdf.from_seq(sample).step(**options)
plot_pred(sample_prior_pred,
label='SciPy sample',
color='C5')
plot_pred(sample_prior_pred_pymc,
label='PyMC3 sample',
color='C13')
decorate(xlabel='Number of goals',
ylabel='PMF',
title='Prior Predictive Distribution')
Explanation: And we can compare it to the sample we generated using the poisson object from SciPy.
Because the quantities in the posterior predictive distribution are discrete (number of goals) I'll plot the CDFs as step functions.
End of explanation
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=1.4, beta=1.0)
goals = pm.Poisson('goals', lam)
Explanation: Again, the results are similar, so we have some confidence we are using PyMC3 right.
When Do We Get to Inference?
Finally, we are ready for actual inference. We just have to make one small change.
Here is the model we used to generate the prior predictive distribution:
End of explanation
with pm.Model() as model2:
lam = pm.Gamma('lam', alpha=1.4, beta=1.0)
goals = pm.Poisson('goals', lam, observed=4)
Explanation: And here is the model we'll use to compute the posterior distribution.
End of explanation
options = dict(return_inferencedata=False)
with model2:
trace2 = pm.sample(500, **options)
Explanation: The difference is that we mark goals as observed and provide the observed data, 4.
And instead of calling sample_prior_predictive, we'll call sample, which is understood to sample from the posterior distribution of lam.
End of explanation
sample_post_pymc = trace2['lam']
sample_post_pymc.shape
Explanation: Although the specification of these models is similar, the sampling process is very different.
I won't go into the details of how PyMC3 works, but here are a few things you should be aware of:
Depending on the model, PyMC3 uses one of several MCMC methods; in this example, it uses the No U-Turn Sampler (NUTS), which is one of the most efficient and reliable methods we have.
When the sampler starts, the first values it generates are usually not a representative sample from the posterior distribution, so these values are discarded. This process is called "tuning".
Instead of using a single Markov chain, PyMC3 uses multiple chains. Then we can compare results from multiple chains to make sure they are consistent.
Although we asked for a sample of 500, PyMC3 generated two samples of 1000, discarded half of each, and returned the remaining 1000.
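Those defaults can be made explicit through keyword arguments of pm.sample; the following sketch is illustrative (draws, tune, and chains are PyMC3's argument names, and trace2_explicit is a name introduced here, not part of the original example):
with model2:
    trace2_explicit = pm.sample(draws=500,  # samples kept per chain
                                tune=500,   # tuning steps discarded per chain
                                chains=2,   # independent chains to compare
                                **options)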
From trace2 we can extract a sample from the posterior distribution, like this:
End of explanation
posterior.make_cdf().plot(label='posterior grid',
color='C5')
plot_cdf(sample_post_pymc,
label='PyMC3 sample',
color='C4')
decorate(xlabel=r'Goals per game ($\lambda$)',
ylabel='CDF',
title='Posterior distribution')
Explanation: And we can compare the CDF of this sample to the posterior we computed by grid approximation:
End of explanation
with model2:
post_pred = pm.sample_posterior_predictive(trace2)
Explanation: The results from PyMC3 are consistent with the results from the grid approximation.
Posterior Predictive Distribution
Finally, to sample from the posterior predictive distribution, we can use sample_posterior_predictive:
End of explanation
sample_post_pred_pymc = post_pred['goals']
sample_post_pred_pymc.shape
Explanation: The result is a dictionary that contains a sample of goals.
End of explanation
sample_post = posterior.sample(1000)
sample_post_pred = poisson(sample_post).rvs()
Explanation: I'll also generate a sample from the posterior distribution we computed by grid approximation.
End of explanation
plot_pred(sample_post_pred,
label='grid sample',
color='C5')
plot_pred(sample_post_pred_pymc,
label='PyMC3 sample',
color='C12')
decorate(xlabel='Number of goals',
ylabel='PMF',
title='Posterior Predictive Distribution')
Explanation: And we can compare the two samples.
End of explanation
# Get the data file
download('https://happiness-report.s3.amazonaws.com/2020/WHR20_DataForFigure2.1.xls')
Explanation: Again, the results are consistent.
So we've established that we can compute the same results using a grid approximation or PyMC3.
But it might not be clear why.
In this example, the grid algorithm requires less computation than MCMC, and the result is a pretty good approximation of the posterior distribution, rather than a sample.
However, this is a simple model with just one parameter.
In fact, we could have solved it with even less computation, using a conjugate prior.
The power of PyMC3 will be clearer with a more complex model.
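As an aside, the conjugate shortcut for this model is just arithmetic: a gamma prior with a Poisson likelihood gives a gamma posterior, with the observed count added to alpha and the number of games added to beta. The following check is mine, not part of the original analysis.
from scipy.stats import gamma
alpha, beta = 1.4, 1.0        # prior parameters used above
k, n_games = 4, 1             # observed goals and number of games
conjugate_posterior = gamma(a=alpha + k, scale=1/(beta + n_games))
conjugate_posterior.mean()    # about 2.7 goals per game, which should match the grid and MCMC results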
Happiness
Recently I read "Happiness and Life Satisfaction"
by Esteban Ortiz-Ospina and Max Roser, which discusses (among many other things) the relationship between income and happiness, both between countries, within countries, and over time.
It cites the "World Happiness Report", which includes results of a multiple regression analysis that explores the relationship between happiness and six potentially predictive factors:
Income as represented by per capita GDP
Social support
Healthy life expectancy at birth
Freedom to make life choices
Generosity
Perceptions of corruption
The dependent variable is the national average of responses to the "Cantril ladder question" used by the Gallup World Poll:
Please imagine a ladder with steps numbered from zero at the bottom to 10 at the top. The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?
I'll refer to the responses as "happiness", but it might be more precise to think of them as a measure of satisfaction with quality of life.
In the next few sections we'll replicate the analysis in this report using Bayesian regression.
The data from this report can be downloaded from here.
End of explanation
import pandas as pd
filename = 'WHR20_DataForFigure2.1.xls'
df = pd.read_excel(filename)
df.head(3)
df.shape
Explanation: We can use Pandas to read the data into a DataFrame.
End of explanation
score = df['Ladder score']
Explanation: The DataFrame has one row for each of 153 countries and one column for each of 20 variables.
The column called 'Ladder score' contains the measurements of happiness we will try to predict.
End of explanation
log_gdp = df['Logged GDP per capita']
Explanation: Simple Regression
To get started, let's look at the relationship between happiness and income as represented by gross domestic product (GDP) per person.
The column named 'Logged GDP per capita' represents the natural logarithm of GDP for each country, divided by population, corrected for purchasing power parity (PPP).
End of explanation
import matplotlib.pyplot as plt
plt.plot(log_gdp, score, '.')
decorate(xlabel='Log GDP per capita at PPP',
ylabel='Happiness ladder score')
Explanation: The following figure is a scatter plot of score versus log_gdp, with one marker for each country.
End of explanation
from scipy.stats import linregress
result = linregress(log_gdp, score)
Explanation: It's clear that there is a relationship between these variables: people in countries with higher GDP generally report higher levels of happiness.
We can use linregress from SciPy to compute a simple regression of these variables.
End of explanation
pd.DataFrame([result.slope, result.intercept],
index=['Slope', 'Intercept'],
columns=[''])
Explanation: And here are the results.
End of explanation
x_data = log_gdp
y_data = score
with pm.Model() as model3:
a = pm.Uniform('a', 0, 4)
b = pm.Uniform('b', -4, 4)
sigma = pm.Uniform('sigma', 0, 2)
y_est = a * x_data + b
y = pm.Normal('y',
mu=y_est, sd=sigma,
observed=y_data)
Explanation: The estimated slope is about 0.72, which suggests that an increase of one unit in log-GDP, which is a factor of $e \approx 2.7$ in GDP, is associated with an increase of 0.72 units on the happiness ladder.
Now let's estimate the same parameters using PyMC3.
We'll use the same regression model as in Section <<_RegressionModel>>:
$$y = a x + b + \epsilon$$
where $y$ is the dependent variable (ladder score), $x$ is the predictive variable (log GDP) and $\epsilon$ is a series of values from a normal distribution with standard deviation $\sigma$.
$a$ and $b$ are the slope and intercept of the regression line.
They are unknown parameters, so we will use the data to estimate them.
The following is the PyMC3 specification of this model.
End of explanation
with model3:
trace3 = pm.sample(500, **options)
Explanation: The prior distributions for the parameters a, b, and sigma are uniform with ranges that are wide enough to cover the posterior distributions.
y_est is the estimated value of the dependent variable, based on the regression equation.
And y is a normal distribution with mean y_est and standard deviation sigma.
Notice how the data are included in the model:
The values of the predictive variable, x_data, are used to compute y_est.
The values of the dependent variable, y_data, are provided as the observed values of y.
Now we can use this model to generate a sample from the posterior distribution.
End of explanation
trace3
Explanation: When you run the sampler, you might get warning messages about "divergences" and the "acceptance probability".
You can ignore them for now.
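If the warnings persist, a common remedy (an aside, not something this example needs) is to tune for longer or raise the target acceptance rate, both of which are keyword arguments of pm.sample:
with model3:
    trace3_alt = pm.sample(500, tune=2000, target_accept=0.95, **options)  # hypothetical re-run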
The result is an object that contains samples from the joint posterior distribution of a, b, and sigma.
End of explanation
import arviz as az
with model3:
az.plot_posterior(trace3, var_names=['a', 'b']);
Explanation: ArviZ provides plot_posterior, which we can use to plot the posterior distributions of the parameters.
Here are the posterior distributions of slope, a, and intercept, b.
End of explanation
print('Sample mean:', trace3['a'].mean())
print('Regression slope:', result.slope)
print('Sample mean:', trace3['b'].mean())
print('Regression intercept:', result.intercept)
Explanation: The graphs show the distributions of the samples, estimated by KDE, and 94% credible intervals. In the figure, "HDI" stands for "highest-density interval".
The means of these samples are consistent with the parameters we estimated with linregress.
End of explanation
az.plot_posterior(trace3['sigma']);
Explanation: Finally, we can check the marginal posterior distribution of sigma
End of explanation
20 ** 8 / 1e9
153 * 20 ** 8 / 1e12
Explanation: The values in the posterior distribution of sigma seem plausible.
The simple regression model has only three parameters, so we could have used a grid algorithm.
But the regression model in the happiness report has six predictive variables, so it has eight parameters in total, including the intercept and sigma.
It is not practical to compute a grid approximation for a model with eight parameters.
Even a coarse grid, with 20 points along each dimension, would have more than 25 billion points.
And with 153 countries, we would have to compute almost 4 trillion likelihoods.
But PyMC3 can handle a model with eight parameters comfortably, as we'll see in the next section.
End of explanation
columns = ['Ladder score',
'Logged GDP per capita',
'Social support',
'Healthy life expectancy',
'Freedom to make life choices',
'Generosity',
'Perceptions of corruption']
subset = df[columns]
subset.head(3)
Explanation: Multiple Regression
Before we implement the multiple regression model, I'll select the columns we need from the DataFrame.
End of explanation
standardized = (subset - subset.mean()) / subset.std()
Explanation: The predictive variables have different units: log-GDP is in log-dollars, life expectancy is in years, and the other variables are on arbitrary scales.
To make these factors comparable, I'll standardize the data so that each variable has mean 0 and standard deviation 1.
End of explanation
y_data = standardized['Ladder score']
Explanation: Now let's build the model.
I'll extract the dependent variable.
End of explanation
x1 = standardized[columns[1]]
x2 = standardized[columns[2]]
x3 = standardized[columns[3]]
x4 = standardized[columns[4]]
x5 = standardized[columns[5]]
x6 = standardized[columns[6]]
Explanation: And the predictive variables.
End of explanation
with pm.Model() as model4:
b0 = pm.Uniform('b0', -4, 4)
b1 = pm.Uniform('b1', -4, 4)
b2 = pm.Uniform('b2', -4, 4)
b3 = pm.Uniform('b3', -4, 4)
b4 = pm.Uniform('b4', -4, 4)
b5 = pm.Uniform('b5', -4, 4)
b6 = pm.Uniform('b6', -4, 4)
sigma = pm.Uniform('sigma', 0, 2)
y_est = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5 + b6*x6
y = pm.Normal('y',
mu=y_est, sd=sigma,
observed=y_data)
Explanation: And here's the model. b0 is the intercept; b1 through b6 are the parameters associated with the predictive variables.
End of explanation
with model4:
trace4 = pm.sample(500, **options)
Explanation: We could express this model more concisely using a vector of predictive variables and a vector of parameters, but I decided to keep it simple.
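For the record, here is roughly what that more concise version could look like; shape and pm.math.dot are standard PyMC3 features, but this formulation and the names X, b, and model4_vec are a sketch of mine rather than the book's code:
X = standardized[columns[1:]].values      # 153 x 6 matrix of predictors
with pm.Model() as model4_vec:
    b0 = pm.Uniform('b0', -4, 4)
    b = pm.Uniform('b', -4, 4, shape=6)   # one coefficient per predictor
    sigma = pm.Uniform('sigma', 0, 2)
    y_est = b0 + pm.math.dot(X, b)
    y = pm.Normal('y', mu=y_est, sd=sigma, observed=y_data)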
Now we can sample from the joint posterior distribution.
End of explanation
trace4['b0'].mean()
Explanation: Because we standardized the data, we expect the intercept to be 0, and in fact the posterior mean of b0 is close to 0.
End of explanation
trace4['sigma'].mean()
Explanation: We can also check the posterior mean of sigma:
End of explanation
param_names = ['b1', 'b2', 'b3', 'b4', 'b5', 'b6']
means = [trace4[name].mean()
for name in param_names]
Explanation: From trace4 we can extract samples from the posterior distributions of the parameters and compute their means.
End of explanation
def credible_interval(sample):
    """Compute 94% credible interval."""
    ci = np.percentile(sample, [3, 97])
    return np.round(ci, 3)
cis = [credible_interval(trace4[name])
for name in param_names]
Explanation: We can also compute 94% credible intervals (between the 3rd and 97th percentiles).
End of explanation
index = columns[1:]
table = pd.DataFrame(index=index)
table['Posterior mean'] = np.round(means, 3)
table['94% CI'] = cis
table
Explanation: The following table summarizes the results.
End of explanation
# Solution
n = 250
k_obs = 140
with pm.Model() as model5:
x = pm.Beta('x', alpha=1, beta=1)
k = pm.Binomial('k', n=n, p=x, observed=k_obs)
trace5 = pm.sample(500, **options)
az.plot_posterior(trace5)
Explanation: It looks like GDP has the strongest association with happiness (or satisfaction), followed by social support, life expectancy, and freedom.
After controlling for those other factors, the parameters of the other factors are substantially smaller, and since the CI for generosity includes 0, it is plausible that generosity is not substantially related to happiness, at least as they were measured in this study.
This example demonstrates the power of MCMC to handle models with more than a few parameters.
But it does not really demonstrate the power of Bayesian regression.
If the goal of a regression model is to estimate parameters, there is no great advantage to Bayesian regression compared to conventional least squares regression.
Bayesian methods are more useful if we plan to use the posterior distribution of the parameters as part of a decision analysis process.
Summary
In this chapter we used PyMC3 to implement two models we've seen before: a Poisson model of goal-scoring in soccer and a simple regression model.
Then we implemented a multiple regression model that would not have been possible to compute with a grid approximation.
MCMC is more powerful than grid methods, but that power comes with some disadvantages:
MCMC algorithms are fiddly. The same model might behave well with some priors and less well with others. And the sampling process often produces warnings about tuning steps, divergences, "r-hat statistics", acceptance rates, and effective samples. It takes some expertise to diagnose and correct these issues.
I find it easier to develop models incrementally using grid algorithms, checking intermediate results along the way. With PyMC3, it is not as easy to be confident that you have specified a model correctly.
For these reasons, I recommend a model development process that starts with grid algorithms and resorts to MCMC if necessary.
As we saw in the previous chapters, you can solve a lot of real-world problems with grid methods.
But when you need MCMC, it is useful to have a grid algorithm to compare to (even if it is based on a simpler model).
All of the models in this book can be implemented in PyMC3, but some of them are easier to translate than others.
In the exercises, you will have a chance to practice.
Exercises
Exercise: As a warmup, let's use PyMC3 to solve the Euro problem.
Suppose we spin a coin 250 times and it comes up heads 140 times.
What is the posterior distribution of $x$, the probability of heads?
For the prior, use a beta distribution with parameters $\alpha=1$ and $\beta=1$.
See the PyMC3 documentation for the list of continuous distributions.
End of explanation
# Solution
k = 23
n = 19
x = 4
with pm.Model() as model6:
N = pm.DiscreteUniform('N', 50, 500)
y = pm.HyperGeometric('y', N=N, k=k, n=n, observed=x)
trace6 = pm.sample(1000, **options)
az.plot_posterior(trace6)
Explanation: Exercise: Now let's use PyMC3 to replicate the solution to the Grizzly Bear problem in <<_TheGrizzlyBearProblem>>, which is based on the hypergeometric distribution.
I'll present the problem with slightly different notation, to make it consistent with PyMC3.
Suppose that during the first session, k=23 bears are tagged. During the second session, n=19 bears are identified, of which x=4 had been tagged.
Estimate the posterior distribution of N, the number of bears in the environment.
For the prior, use a discrete uniform distribution from 50 to 500.
See the PyMC3 documentation for the list of discrete distributions.
Note: HyperGeometric was added to PyMC3 after version 3.8, so you might need to update your installation to do this exercise.
End of explanation
data = [0.80497283, 2.11577082, 0.43308797, 0.10862644, 5.17334866,
3.25745053, 3.05555883, 2.47401062, 0.05340806, 1.08386395]
# Solution
with pm.Model() as model7:
lam = pm.Uniform('lam', 0.1, 10.1)
k = pm.Uniform('k', 0.1, 5.1)
y = pm.Weibull('y', alpha=k, beta=lam, observed=data)
trace7 = pm.sample(1000, **options)
az.plot_posterior(trace7)
Explanation: Exercise: In <<_TheWeibullDistribution>> we generated a sample from a Weibull distribution with $\lambda=3$ and $k=0.8$.
Then we used the data to compute a grid approximation of the posterior distribution of those parameters.
Now let's do the same with PyMC3.
For the priors, you can use uniform distributions as we did in <<_SurvivalAnalysis>>, or you could use HalfNormal distributions provided by PyMC3.
Note: The Weibull class in PyMC3 uses different parameters than SciPy. The parameter alpha in PyMC3 corresponds to $k$, and beta corresponds to $\lambda$.
Here's the data again:
End of explanation
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/drp_scores.csv')
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
Explanation: Exercise: In <<_ImprovingReadingAbility>> we used data from a reading test to estimate the parameters of a normal distribution.
Make a model that defines uniform prior distributions for mu and sigma and uses the data to estimate their posterior distributions.
Here's the data again.
End of explanation
grouped = df.groupby('Treatment')
responses = {}
for name, group in grouped:
responses[name] = group['Response']
Explanation: I'll use groupby to separate the treated group from the control group.
End of explanation
data = responses['Treated']
# Solution
with pm.Model() as model8:
mu = pm.Uniform('mu', 20, 80)
sigma = pm.Uniform('sigma', 5, 30)
y = pm.Normal('y', mu, sigma, observed=data)
trace8 = pm.sample(500, **options)
# Solution
with model8:
az.plot_posterior(trace8)
Explanation: Now estimate the parameters for the treated group.
End of explanation
k10 = 20 - 3
k01 = 15 - 3
k11 = 3
Explanation: Exercise: In <<_TheLincolnIndexProblem>> we used a grid algorithm to solve the Lincoln Index problem as presented by John D. Cook:
"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There's no way to know with one tester. But if you have two testers, you can get a good idea, even if you don't know how skilled the testers are."
Suppose the first tester finds 20 bugs, the second finds 15, and they
find 3 in common; use PyMC3 to estimate the number of bugs.
Note: This exercise is more difficult than some of the previous ones. One of the challenges is that the data includes k00, which depends on N:
k00 = N - num_seen
So we have to construct the data as part of the model.
To do that, we can use pm.math.stack, which makes an array:
data = pm.math.stack((k00, k01, k10, k11))
Finally, you might find it helpful to use pm.Multinomial.
I'll use the following notation for the data:
k11 is the number of bugs found by both testers,
k10 is the number of bugs found by the first tester but not the second,
k01 is the number of bugs found by the second tester but not the first, and
k00 is the unknown number of undiscovered bugs.
Here are the values for all but k00:
End of explanation
num_seen = k01 + k10 + k11
num_seen
# Solution
with pm.Model() as model9:
p0 = pm.Beta('p0', alpha=1, beta=1)
p1 = pm.Beta('p1', alpha=1, beta=1)
N = pm.DiscreteUniform('N', num_seen, 350)
q0 = 1-p0
q1 = 1-p1
ps = [q0*q1, q0*p1, p0*q1, p0*p1]
k00 = N - num_seen
data = pm.math.stack((k00, k01, k10, k11))
y = pm.Multinomial('y', n=N, p=ps, observed=data)
# Solution
with model9:
trace9 = pm.sample(1000, **options)
# Solution
with model9:
az.plot_posterior(trace9)
Explanation: In total, 32 bugs have been discovered:
End of explanation |
11,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using scipy iterative solvers
The aim of this notebook is to show how the scipy.sparse.linalg module can be used to solve iteratively the Lippmann–Schwinger equation.
The problem at hand is a single ellipsoid in a periodic unit-cell, subjected to a macroscopic strain $\mathbf E$. Again, we strive to make the implementation dimension independent. We therefore introduce the dimension dim of the physical space, and the dimension sym = (dim*(dim+1))//2 of the space of second-order, symmetric tensors.
We start by importing a few modules, including h5py, since input and output data are stored in HDF5 format.
Step1: Generating the microstructure
The microstructure is generated by means of the gen_ellipsoid.py script, which is listed below.
Step2: The above script should be invoked as follows
python gen_ellipsoid.py tutorial.json
where the tutorial.json file holds the geometrical parameters.
Step3: The resulting microstructure is to be saved into the tutorial.h5 file.
Step4: The microstructure is then retrieved as follows.
Step5: We can retrieve dim and sym from the dimensions of the microstructure.
Step6: And we can check visually that everything went all right.
Step7: Creating the basic objects for the simulations
We first select the elastic properties of the inclusion, the matrix, and the reference material. For the latter, we select a material which is close to the matrix, but not equal, owing to the $(\mathbf C_{\mathrm m}-\mathbf C_{\mathrm{ref}})^{-1}$ factor in the Lippmann–Schwinger equation.
Step8: We then define instances of the IsotropicLinearElasticMaterial class for all three materials.
Step9: We want to solve the Lippmann–Schwinger equation, which reads
\begin{equation}
\bigl(\mathbf C-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau+\boldsymbol\Gamma_{\mathrm{ref}}[\boldsymbol\tau]=\mathbf E.
\end{equation}
Step10: The constructor of the above class first computes the local map $\boldsymbol\tau^h\mapsto\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}$. Then, it implements the operator $\boldsymbol\tau^h\mapsto\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau^h$. The resulting operator is called tau2eps.
Step11: We then create the macroscopic strain $\mathbf E$ that is imposed to the unit-cell. In the present case, we take $E_{xy}=1$ (beware the Mandel–Voigt notation!).
Step12: We then populate the right-hand side vector, $b$. b_arr is the column-vector $b$, viewed as a discrete, second order 2D tensor field. It is then flattened through the ravel method.
Step13: We know that the linear operator $A$ is definite. We can therefore use the conjugate gradient method to solve $A\cdot x=b$.
Step14: The resulting solution, $x$, must be reshaped into a $N_1\times\cdots\times N_d\times s$ array.
Step15: We can plot the $\tau_{xy}$ component.
Step16: And compute the associated strain field.
Step17: And plot the $\varepsilon_{xy}$ component. | Python Code:
import h5py as h5
import matplotlib.pyplot as plt
import numpy as np
import janus
import janus.material.elastic.linear.isotropic as material
import janus.operators as operators
import janus.fft.serial as fft
import janus.green as green
from scipy.sparse.linalg import cg, LinearOperator
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
Explanation: Using scipy iterative solvers
The aim of this notebook is to show how the scipy.sparse.linalg module can be used to solve iteratively the Lippmann–Schwinger equation.
The problem at hand is a single ellipsoid in a periodic unit-cell, subjected to a macroscopic strain $\mathbf E$. Again, we strive to make the implementation dimension independent. We therefore introduce the dimension dim of the physical space, and the dimension sym = (dim*(dim+1))//2 of the space of second-order, symmetric tensors.
We start by importing a few modules, including h5py, since input and output data are stored in HDF5 format.
End of explanation
%pfile gen_ellipsoid.py
Explanation: Generating the microstructure
The microstructure is generated by means of the gen_ellipsoid.py script, which is listed below.
End of explanation
%pfile tutorial.json
Explanation: The above script should be invoked as follows
python gen_ellipsoid.py tutorial.json
where the tutorial.json file holds the geometrical parameters.
End of explanation
%run gen_ellipsoid.py tutorial.json
Explanation: The resulting microstructure is to be saved into the tutorial.h5 file.
End of explanation
with h5.File('./tutorial.h5', 'r') as f:
phase = np.asarray(f['phase'])
Explanation: The microstructure is then retrieved as follows.
End of explanation
dim = phase.ndim
sym = (dim*(dim+1))//2
Explanation: We can retrieve dim and sym from the dimensions of the microstructure.
End of explanation
plt.imshow(phase);
Explanation: And we can check visually that everything went all right.
End of explanation
mu_i, nu_i = 10., 0.3 # Elastic properties of the ellipsoidal inclusion
mu_m, nu_m = 1., 0.2 # Elastic properties of the matrix
mu_ref, nu_ref = 0.99*mu_m, nu_m # Elastic properties of the reference material
Explanation: Creating the basic objects for the simulations
We first select the elastic properties of the inclusion, the matrix, and the reference material. For the latter, we select a material which is close to the matrix, but not equal, owing to the $(\mathbf C_{\mathrm m}-\mathbf C_{\mathrm{ref}})^{-1}$ factor in the Lippmann–Schwinger equation.
End of explanation
C_i = material.create(mu_i, nu_i, dim=dim)
C_m = material.create(mu_m, nu_m, dim=dim)
C_ref = material.create(mu_ref, nu_ref, dim=dim)
type(C_i)
Explanation: We then define instances of the IsotropicLinearElasticMaterial class for all three materials.
End of explanation
class MyLinearOperator(LinearOperator):
def __init__(self, phase, C_m, C_i, C_ref):
dim = phase.ndim
sym = (dim*(dim+1))//2
alpha_i = 1./dim/(C_i.k-C_ref.k)
beta_i = 1./2./(C_i.g-C_ref.g)
alpha_m = 1./dim/(C_m.k-C_ref.k)
beta_m = 1./2./(C_m.g-C_ref.g)
T = np.array([operators.isotropic_4(alpha_i, beta_i, dim),
operators.isotropic_4(alpha_m, beta_m, dim)])
self.tau2eps = operators.block_diagonal_operator(T[phase])
self.green = green.filtered(C_ref.green_operator(),
phase.shape, 1.,
fft.create_real(phase.shape))
self.arr_shape = phase.shape+(sym,)
n = np.product(self.arr_shape)
super().__init__(np.float64, (n, n))
def _matvec(self, x):
tau = x.reshape(self.arr_shape)
eta = np.zeros_like(tau)
self.tau2eps.apply(tau, eta)
eta += self.green.apply(tau)
y = eta.ravel()
return y
def _rmatvec(self, x):
return self._matvec(x)
def empty_arr(self):
return np.empty(self.arr_shape)
Explanation: We want to solve the Lippmann–Schwinger equation, which reads
\begin{equation}
\bigl(\mathbf C-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau+\boldsymbol\Gamma_{\mathrm{ref}}[\boldsymbol\tau]=\mathbf E,
\end{equation}
where $\mathbf C=\mathbf C_{\mathrm i}$ in the inclusion, $\mathbf C=\mathbf C_{\mathrm m}$ in the matrix, and $\boldsymbol\Gamma_{\mathrm{ref}}$ is the fourth-order Green operator for strains. After suitable discretization, the above problem reads
\begin{equation}
\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau^h+\boldsymbol\Gamma_{\mathrm{ref}}^h[\boldsymbol\tau^h]=\mathbf E,
\end{equation}
where $\mathbf C^h$ denotes the local stiffness, discretized over a cartesian grid of size $N_1\times\cdots\times N_d$; in other words, it can be viewed as an array of size $N_1\times\cdot\times N_d\times s\times s$ and $\boldsymbol\Gamma_{\mathrm{ref}}^h$ is the discrete Green operator. The unknown discrete polarization field $\boldsymbol\tau^h$ ($N_1\times\cdots\times N_d\times s$ array) is constant over each cell of the cartesian grid. It can be assembled into a column vector, $x$. Likewise, $\mathbf E$ should be understood as a macroscopic strain field which is equal to $\mathbf E$ in each cell of the grid; it can be assembled into a column vector, $b$.
Finally, the operator $\boldsymbol\tau^h\mapsto\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau^h+\boldsymbol\Gamma_{\mathrm{ref}}^h[\boldsymbol\tau^h]$ is linear in $\boldsymbol\tau^h$ (or, equivalently, $x$); it can be assembled as a matrix, $A$. Then, the discrete Lippmann–Schwinger equation reads
\begin{equation}
A\cdot x=b,
\end{equation}
which can be solved by means of any linear solver. However, two observations should be made. First, the matrix $A$ is full; its assembly and storage might be extremely costly. Second, the matrix-vector product $x\mapsto A\cdot x$ can efficiently be implemented. This is the raison d'être of a library like Janus!
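To make the first observation concrete, here is a back-of-the-envelope estimate; the 256 by 256 grid is an assumed size for illustration only:
n_dofs = 256 * 256 * 3          # assumed grid size, times sym = 3 components per cell in 2D
dense_bytes = n_dofs**2 * 8     # float64 storage of a dense A
print(dense_bytes / 1e9, 'GB')  # roughly 300 GB, versus a few MB for the fields themselves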
These observations suggest implementing $A$ as a LinearOperator, in the sense of the SciPy library (see reference).
End of explanation
a = MyLinearOperator(phase, C_m, C_i, C_ref)
Explanation: The constructor of the above class first computes the local map $\boldsymbol\tau^h\mapsto\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}$. Then, it implements the operator $\boldsymbol\tau^h\mapsto\bigl(\mathbf C^h-\mathbf C_{\mathrm{ref}}\bigr)^{-1}:\boldsymbol\tau^h$. The resulting operator is called tau2eps.
The constructor also implements a discrete Green operator, associated with the reference material. Several discretization options are offered in Janus. The filtered Green operator is a good option. TODO Use the Willot operator instead.
Finally, the operator $A$ is implemented in the _matvec method, where attention should be paid to the fact that x is a column-vector, while green and tau2eps both operate on fields that have the shape of a symmetric, second-order tensor field defined over the whole grid, hence the reshape operation. It is known that the operator $A$ is symmetric by construction. Therefore, the _rmatvec method calls _matvec.
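That symmetry claim can also be checked numerically on a pair of random vectors, since <A u, v> should equal <u, A v>; this is an extra sanity check, not part of the original notebook:
rng = np.random.default_rng(0)
u = rng.standard_normal(a.shape[0])
v = rng.standard_normal(a.shape[0])
print(np.allclose(np.dot(a.matvec(u), v), np.dot(u, a.matvec(v))))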
Solving the Lippmann–Schwinger equation
We are now ready to solve the equation. We first create an instance of the linear operator $A$.
End of explanation
eps_macro = np.zeros((sym,), dtype=np.float64)
eps_macro[-1] = np.sqrt(2.)
Explanation: We then create the macroscopic strain $\mathbf E$ that is imposed to the unit-cell. In the present case, we take $E_{xy}=1$ (beware the Mandel–Voigt notation!).
End of explanation
b_arr = a.empty_arr()
b_arr[...] = eps_macro
b = b_arr.ravel()
Explanation: We then populate the right-hand side vector, $b$. b_arr is the column-vector $b$, viewed as a discrete, second order 2D tensor field. It is then flattened through the ravel method.
End of explanation
x, info = cg(a, b)
assert info == 0
Explanation: We know that the linear operator $A$ is definite. We can therefore use the conjugate gradient method to solve $A\cdot x=b$.
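As an aside, cg also accepts a callback that is invoked once per iteration, which is convenient for monitoring convergence; the counter below is illustrative:
n_iter = 0
def count_iterations(xk):
    # cg calls this once per iteration with the current iterate xk
    global n_iter
    n_iter += 1
x, info = cg(a, b, callback=count_iterations)
print(info, n_iter)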
End of explanation
tau = x.reshape(a.arr_shape)
Explanation: The resulting solution, $x$, must be reshaped into a $N_1\times\cdots\times N_d\times s$ array.
End of explanation
plt.imshow(tau[..., -1]);
Explanation: We can plot the $\tau_{xy}$ component.
End of explanation
eps = a.tau2eps.apply(tau)
Explanation: And compute the associated strain field.
End of explanation
plt.imshow(eps[..., -1]);
plt.plot(eps[63, :, -1])
Explanation: And plot the $\varepsilon_{xy}$ component.
End of explanation |
11,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Characteristics of Autonomous Market Makers
Date
Step1: From the Balancer whitepaper
Step2: We can specify that swaps happen on some invariant surface $V(x,y)$ which allows us to replace the spot price $-\frac{\partial{x}}{\partial{y}}$ in $x - y\frac{\partial{x}}{\partial{y}} = w_x$, substituting $-\frac{\partial{x}}{\partial{y}} = \frac{\partial{V}}{\partial{y}}/\frac{\partial{V}}{\partial{x}}$ via the implicit function theorem.
Step3: SymPy's PDE solver balks at this equation as written, so multiply through to make things easier.
Step4: It turns out that SymPy is capable of solving the PDE directly to give a general solution.
Step5: We can simplify the solution
Step6: We show below that the spot price is as expected regardless of the exact form of $F$ and if a specific form is chosen we achieve the desired Cobb-Douglas form.
Step7: By taking the exponential of the constant we obtain the general Uniswap invariant.
Step8: Interestingly, the general solution offers the opportunity to use different functional forms to achieve the same constant share of value constraint.
Step9: We could derive the swap formulae for each form. Would the formulae from the log form of the invariant be easier to implement on the Ethereum blockchain? This is left as an exercise for the reader.
Balancer
We now consider the general case of a Balancer pool with $n$ assets and weights summing to 1. Here we examine the three asset case and identify the geometric constraints imposed by the share of value conditions
$$
w_x = \frac{x}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\
w_y = \frac{-\frac{\partial{x}}{\partial{y}}y}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\
w_z = \frac{-\frac{\partial{x}}{\partial{z}}z}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}
$$
for tokens X, Y and Z having total value $v_x(x,y,z) = x + p_{x}^{y}y + p_{x}^{z}z = x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z$.
As before we can replace the spot prices given by the partial derivatives with expressions for the invariant surface V. Below we show the $w_x$ condition; the others are similar. The condition that the weights sum to 1 again allows us to eliminate one of the constraint equations.
Step10: We can simplify this to a constant coefficient first order PDE by the change of variables
$$
u = \xi(x) = \log{x}\
v = \eta(y) = \log{y}\
w = \zeta(z) = \log{z}
$$
The general form for the change of variables is given by
Step11: where $\mathcal{V} = V(\xi(x), \eta(y), \zeta(z))$
Substituting the logarithmic functional form for the transformed variables gives
Step12: The share of value weight constraints can be written as a matrix equation
Step13: We can use the weights summing to 1 condition to eliminate one of these equations. The nullspace of the constraint matrix then defines a plane of constant value of the invariant in $u,v,w$ space which we can then map back to the original X, Y, Z token balances.
Step14: It can be seen by inspection that exponentiating this invariant results in the original Balancer form. As for the two asset case it is also possible to retain the value function in this form and derive new forms for the trading functions, which is again an exercise left to the reader.
Curve
Curve uses the StableSwap invariant to reduce slippage for stablecoins all having equivalent value. For example a pool could consist of USDC, USDT, BUSD and DAI, all of which are designed to track USD. Pools tracking other assets are possible e.g. a BTC pool backed by sBTC, renBTC, wBTC. The market maker token pool ideally consists of a balanced mix of each token type. We consider a two asset pool below, this analysis extends to an arbitrary number of assets.
The StableSwap invariant is designed to act as a constant sum market maker $x+y=1$ for small imbalances, and a constant product Uniswap market maker $xy=k$ as the pool becomes more imbalanced. These are the price constraints defining the system i.e.
at small imbalance $-\frac{\partial{x}}{\partial{y}}=1$ and tokens are freely interchangeable
at larger imbalance $-\frac{\partial{x}}{\partial{y}}=x/y$ as for Uniswap (or in general an equal weight Balancer pool)
The StableSwap invariant can be written as $V{\left(x,y \right)} = s \left(x + y\right) + x^{w_{x}} y^{w_{y}}$ where $s$ is an amplification parameter that determines the transition between constant sum and constant product behaviour.
Step15: Below we show the spot price of Y tokens in terms of X tokens that results from this invariant function. We can see that the limit $s \rightarrow \infty$ gives the constant sum behaviour while the $s \rightarrow 0$ limit gives constant product behaviour.
Step16: Following the previous procedure we'd hope to be able to solve the PDE for $V(x,y)$ from the spot price constraint
Step17: A new Curve
We can look at the form of the PDE from the StableSwap constraint and explore related functional forms. It looks like the same limiting behaviour could be achieved with any ratio of $x/y$ i.e. without requiring the $\surd$. We hence try $$\left(s x y + y\right) \frac{\partial}{\partial y} V{\left(x,y \right)} = \left(s x y + x\right) \frac{\partial}{\partial x} V{\left(x,y \right)}$$
which is easily solvable by SymPy and results in a new Curve invariant
Step18: We see that the spot price has the same desired limiting behaviour | Python Code:
from IPython.display import HTML
# Hide code cells https://gist.github.com/uolter/970adfedf44962b47d32347d262fe9be
def hide_code():
return HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$("div.input").hide();
} else {
$("div.input").show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
import sympy as sp
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
%matplotlib inline
hide_code()
Explanation: Characteristics of Autonomous Market Makers
Date: 2021-03-26
(Date started: December 2020 Christmas holidays)
Author: @mattmcd
This notebook describes an approach using method of characteristics solutions of partial differential equations (PDEs) to examine existing AMM invariants such as Uniswap, Balancer and Curve (a.k.a. Stableswap).
End of explanation
x, y, w_x, w_y = sp.symbols('x y w_x w_y', positive=True)
k = sp.symbols('k', real=True)
X, Y = map(sp.Function, 'XY')
V = sp.Function('V')
Explanation: From the Balancer whitepaper:
The bedrock of Balancer’s exchange functions is a surface defined by constraining a value function $V$
— a function of the pool’s weights and balances — to a constant. We will prove that this surface implies a spot price at each point such that, no matter what exchanges are carried out, the share of value of each token in the pool remains constant.
The Balancer whitepaper shows that the value function
$$V = \prod_{i=1}^{n} x_{i}^{w_{i}}$$
is related to the token spot prices by the ratio of partial derivatives.
Starting from the Constraints
The idea of constant level sets of a value function creating constraints on system state (including prices) is discussed in 'From Curved Bonding to Configuration Spaces' by Zargham, Shorish, and Paruch (2020).
The existing Balancer value function implicit state constraint is 'the share of value of each token in the pool remains constant'.
In this section we look at starting from a set of constraints and see if it is possible to derive the corresponding value function. We can then use this value function to determine allowed state changes e.g. for a swap the number of output tokens for an initial state and given number of input tokens.
This approach feels familiar to the Lagrangian dynamics approach in classical physics (the author's background). In economics the standard approach seems to be start from a value function (a.k.a. utility function) and derive substitution functions that give prices. Here we attempt to solve the inverse problem.
As a starting point, we consider deriving the Balancer value function (a Cobb-Douglas Utility Function) from the set of constraints for swaps 'the share of value of each token in the pool remains constant'.
We consider below two and three asset pools with tokens $X, Y, Z$. The state of the system can be defined by three token balances $x$, $y$, $z$, and three weights $w_x$, $w_y$, $w_z$.
Uniswap
The two asset case is a generalized form of Uniswap having token weights $w_x$ and $w_y$ summing to 1. Uniswap uses $w_x = w_y = \frac{1}{2}$. Here we use $x$ and $y$ to represent the token X and token Y balances.
The total value of the pool in terms of token X is
$$v_{x}(x,y) = x + p_{x}^{y}y = x - \frac{\partial{x}}{\partial{y}}y$$
i.e. number of tokens X plus the spot price of converting Y tokens into X tokens.
The constant share of value constraint is hence represented by the equations:
$$w_{x} = \frac{x}{x - \frac{\partial{x}}{\partial{y}}y} \
w_y = \frac{- \frac{\partial{x}}{\partial{y}}y}{x - \frac{\partial{x}}{\partial{y}}y}$$
where in the two asset case the second equation is redundant since the weights sum to 1.
End of explanation
sp.Eq(x/(x + y*V(x,y).diff(y)/V(x,y).diff(x)), w_x)
Explanation: We can specify that swaps happen on some invariant surface $V(x,y)$ which allows us to replace the spot price $-\frac{\partial{x}}{\partial{y}}$ in $x - y\frac{\partial{x}}{\partial{y}} = w_x$, substituting $-\frac{\partial{x}}{\partial{y}} = \frac{\partial{V}}{\partial{y}}/\frac{\partial{V}}{\partial{x}}$ via the implicit function theorem.
End of explanation
const_share_eq = sp.Eq(x*V(x,y).diff(x), w_x*(x*V(x,y).diff(x) + y*V(x,y).diff(y)))
const_share_eq
Explanation: SymPy's PDE solver balks at this equation as written, so multiply through to make things easier.
End of explanation
V_sol = sp.pdsolve(const_share_eq).subs({(1-w_x): w_y})
V_sol
Explanation: It turns out that SymPy is capable of solving the PDE directly to give a general solution.
End of explanation
sp.Eq(V(x,y), V_sol.rhs.simplify())
Explanation: We can simplify the solution:
End of explanation
sp.Eq((V(x,y).diff(y)/V(x,y).diff(x)), V_sol.rhs.diff(y)/V_sol.rhs.diff(x))
Explanation: We show below that the spot price is as expected regardless of the exact form of $F$ and if a specific form is chosen we achieve the desired Cobb-Douglas form.
End of explanation
sp.Eq(V(x,y), sp.exp(w_y*V_sol.rhs.args[0]).simplify())
Explanation: By taking the exponential of the constant we obtain the general Uniswap invariant.
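As a quick check (not part of the original derivation; V_cd is a name introduced here), the Cobb-Douglas form reproduces the Balancer spot price $w_y x / (w_x y)$ exactly:
V_cd = x**w_x * y**w_y
sp.simplify(V_cd.diff(y)/V_cd.diff(x) - w_y*x/(w_x*y))  # expect 0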
End of explanation
sp.Eq(V(x,y), (w_y*V_sol.rhs.args[0]).expand())
Explanation: Interestingly, the general solution offers the opportunity to use different functional forms to achieve the same constant share of value constraint.
End of explanation
x, y, z, w_x, w_y, w_z = sp.symbols('x y z w_x w_y w_z', positive=True)
u, v, w = sp.symbols('u v w', positive=True)
xi = sp.Function('xi')
eta = sp.Function('eta')
zeta = sp.Function('zeta')
V = sp.Function('V')
V_d = sp.Function('\mathcal{V}')
const_share_eq_3_1 = sp.Eq(x*V(x,y,z).diff(x), w_x*(x*V(x,y,z).diff(x) + y*V(x,y,z).diff(y) + z*V(x,y,z).diff(z)))
const_share_eq_3_1
Explanation: We could derive the swap formulae for each form. Would the formulae from the log form of the invariant be easier to implement on the Ethereum blockchain? This is left as an exercise for the reader.
Balancer
We now consider the general case of a Balancer pool with $n$ assets and weights summing to 1. Here we examine the three asset case and identify the geometric constraints imposed by the share of value conditions
$$
w_x = \frac{x}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\
w_y = \frac{-\frac{\partial{x}}{\partial{y}}y}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}\
w_z = \frac{-\frac{\partial{x}}{\partial{z}}z}{x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z}
$$
for tokens X, Y and Z having total value $v_x(x,y,z) = x + p_{x}^{y}y + p_{x}^{z}z = x - \frac{\partial{x}}{\partial{y}}y - \frac{\partial{x}}{\partial{z}}z$.
As before we can replace the spot prices given by the partial derivatives with expressions for the invariant surface V. Below we show the $w_x$ condition; the others are similar. The condition that the weights sum to 1 again allows us to eliminate one of the constraint equations.
End of explanation
const_share_eq_3_1.subs({
V(x,y,z): V(xi(x),eta(y),zeta(z)),
}).simplify().subs({V(xi(x),eta(y),zeta(z)): V_d })
Explanation: We can simplify this to a constant coefficient first order PDE by the change of variables
$$
u = \xi(x) = \log{x}\
v = \eta(y) = \log{y}\
w = \zeta(z) = \log{z}
$$
The general form for the change of variables is given by
End of explanation
const_share_eq_3_1.subs({
V(x,y,z): V(xi(x),eta(y),zeta(z)),
xi(x): sp.log(x),
eta(y): sp.log(y),
zeta(z): sp.log(z)
}).simplify()
Explanation: where $\mathcal{V} = V(\xi(x), \eta(y), \zeta(z))$
Substituting the logarithmic functional form for the transformed variables gives
End of explanation
balancer_constraints = sp.ImmutableMatrix([
[1-w_x, -w_x, -w_x],
[-w_y, 1-w_y, -w_y],
[-w_z, -w_z, 1-w_z]
])
v_grad = sp.ImmutableMatrix([V(u,v,w).diff(i) for i in [u,v,w]])
sp.Eq(sp.MatMul(balancer_constraints, sp.UnevaluatedExpr(v_grad), evaluate=False).subs({V(u,v,w): V_d }),
(balancer_constraints*v_grad).subs({V(u,v,w): V_d }),
evaluate=False)
Explanation: The share of value weight constraints can be written as a matrix equation:
End of explanation
(sp.simplify(sp.ImmutableMatrix([
# [1-w_x, -w_x, -w_x],
[-w_y, 1-w_y, -w_y],
[-w_z, -w_z, 1-w_z]
]).nullspace()[0]).subs({(1-w_y-w_z): w_x})*w_z
).T*sp.ImmutableMatrix([sp.log(x), sp.log(y), sp.log(z)])
Explanation: We can use the weights summing to 1 condition to eliminate one of these equations. The nullspace of the constraint matrix then defines a plane of constant value of the invariant in $u,v,w$ space which we can then map back to the original X, Y, Z token balances.
End of explanation
s = sp.symbols('s', positive=True)
V_ss = s*(x+y) + x**w_x*y**w_y
sp.Eq(V(x,y), V_ss)
Explanation: It can be seen by inspection that exponentiating this invariant results in the original Balancer form. As for the two asset case it is also possible to retain the value function in this form and derive new forms for the trading functions, which is again an exercise left to the reader.
Curve
Curve uses the StableSwap invariant to reduce slippage for stablecoins all having equivalent value. For example a pool could consist of USDC, USDT, BUSD and DAI, all of which are designed to track USD. Pools tracking other assets are possible e.g. a BTC pool backed by sBTC, renBTC, wBTC. The market maker token pool ideally consists of a balanced mix of each token type. We consider a two asset pool below, this analysis extends to an arbitrary number of assets.
The StableSwap invariant is designed to act as a constant sum market maker $x+y=1$ for small imbalances, and a constant product Uniswap market maker $xy=k$ as the pool becomes more imbalanced. These are the price constraints defining the system i.e.
at small imbalance $-\frac{\partial{x}}{\partial{y}}=1$ and tokens are freely interchangeable
at larger imbalance $-\frac{\partial{x}}{\partial{y}}=x/y$ as for Uniswap (or in general an equal weight Balancer pool)
The StableSwap invariant can be written as $V{\left(x,y \right)} = s \left(x + y\right) + x^{w_{x}} y^{w_{y}}$ where $s$ is an amplification parameter that determines the transition between constant sum and constant product behaviour.
End of explanation
V_x = (V_ss).diff(x).subs({w_x: sp.Rational(1,2), w_y: sp.Rational(1,2)}).simplify()
V_y = (V_ss).diff(y).subs({w_x: sp.Rational(1,2), w_y: sp.Rational(1,2)}).simplify()
ss_spot = V_y/V_x.simplify()
ss_spot_eq = sp.Eq(V(x,y).diff(y)/V(x,y).diff(x), ss_spot)
ss_spot_eq
sp.Eq(sp.Limit(ss_spot, s, sp.oo), (ss_spot).limit(s, sp.oo))
sp.Eq(sp.Limit(ss_spot, s, 0), (ss_spot).limit(s, 0))
Explanation: Below we show the spot price of Y tokens in terms of X tokens that results from this invariant function. We can see that the limit $s \rightarrow \infty$ gives the constant sum behaviour while the $s \rightarrow 0$ limit gives constant product behaviour.
End of explanation
ss_spot_denom = sp.denom(ss_spot_eq.rhs)* sp.denom(ss_spot_eq.lhs)
sp.Eq(ss_spot_eq.lhs * ss_spot_denom, ss_spot_eq.rhs * ss_spot_denom)
Explanation: Following the previous procedure we'd hope to be able to solve the PDE for $V(x,y)$ from the spot price constraint:
$$\left(s + \frac{\sqrt{y}}{2 \sqrt{x}}\right) \frac{\partial}{\partial y} V{\left(x,y \right)} = \left(s + \frac{\sqrt{x}}{2 \sqrt{y}}\right) \frac{\partial}{\partial x} V{\left(x,y \right)}$$
Attempting this with the actual StableSwap spot price above doesn't immediately work although it should be possible to solve numerically.
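One numerical route (a sketch only, with $s=10$, the starting point $(1,1)$ chosen arbitrarily, and variable names introduced here) is to integrate the level curves of $V$ directly, since the constraint fixes their slope $\frac{dy}{dx} = -\frac{s + \sqrt{y}/(2\sqrt{x})}{s + \sqrt{x}/(2\sqrt{y})}$ and $V$ is constant along them; this is the method of characteristics in numerical form.
from scipy.integrate import solve_ivp
s_num = 10.0  # assumed amplification parameter
def curve_slope(x_val, y_vec):
    y_val = y_vec[0]
    num = s_num + np.sqrt(y_val) / (2*np.sqrt(x_val))
    den = s_num + np.sqrt(x_val) / (2*np.sqrt(y_val))
    return [-num / den]
level_curve = solve_ivp(curve_slope, (1.0, 1.5), [1.0], dense_output=True)  # keep the x range modest so y stays positive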
End of explanation
sol2 = sp.pdsolve((s*x*y + y)*V(x,y).diff(y) - (s*x*y + x)*V(x,y).diff(x), V(x,y)).rhs
V_ss_new = sp.log(sol2.args[0]).expand()
sp.Eq(V(x,y),V_ss_new)
Explanation: A new Curve
We can look at the form of the PDE from the StableSwap constraint and explore related functional forms. It looks like the same limiting behaviour could be achieved with any ratio of $x/y$ i.e. without requiring the $\surd$. We hence try $$\left(s x y + y\right) \frac{\partial}{\partial y} V{\left(x,y \right)} = \left(s x y + x\right) \frac{\partial}{\partial x} V{\left(x,y \right)}$$
which is easily solvable by SymPy and results in a new Curve invariant:
End of explanation
ss_spot_new = (V_ss_new.diff(y)/V_ss_new.diff(x)).simplify()
ss_spot_new
sp.Eq(sp.Limit(ss_spot_new, s, sp.oo), (ss_spot_new).limit(s, sp.oo))
sp.Eq(sp.Limit(ss_spot_new, s, 0), (ss_spot_new).limit(s, 0))
Explanation: We see that the spot price has the same desired limiting behaviour:
End of explanation |
11,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The frequency of a Ricker wavelet
We often use Ricker wavelets to model seismic, for example when making a synthetic seismogram with which to help tie a well. One simple way to guesstimate the peak or central frequency of the wavelet that will model a particular seismic section is to count the peaks per unit time in the seismic. But this tends to overestimate the actual frequency because the maximum frequency of a Ricker wavelet is more than the peak frequency. The question is, how much more?
To investigate, let's make a Ricker wavelet and see what it looks like in the time and frequency domains.
Step1: When we count the peaks in a section, the assumption is that this apparent frequency — that is, the reciprocal of apparent period or distance between the extrema — tells us the dominant or peak frequency.
To help see why this assumption is wrong, let's compare the Ricker with a signal whose apparent frequency does match its peak frequency
Step2: Notice that the signal is much narrower in bandwidth. If we allowed more oscillations, it would be even narrower. If it lasted forever, it would be a spike in the frequency domain.
Let's overlay the signals to get a picture of the difference in the relative periods
Step3: The practical consequence of this is that if we estimate the peak frequency to be $f\ \mathrm{Hz}$, then we need to reduce $f$ by some factor if we want to design a wavelet to match the data. To get this factor, we need to know the apparent period of the Ricker function, as given by the time difference between the two minima.
Let's look at a couple of different ways to find those minima
Step4: Check that the wavelet looks like it did before, by comparing the output of this function when f is 25 with the wavelet w we were using before
Step5: Now we call SciPy's minimize function on our ricker function. It iteratively searches for a minimum solution, then gives us the x (which is really t in our case) at that minimum
Step6: So the minimum amplitude, given by fun, is $-0.44626$ and it occurs at an x (time) of $\pm 0.01559\ \mathrm{s}$.
In comparison, the minima of the cosine function occur at a time of $\pm 0.02\ \mathrm{s}$. In other words, the period appears to be $0.02 - 0.01559 = 0.00441\ \mathrm{s}$ shorter than the pure waveform, which is...
Step7: ...about 22% shorter. This means that if we naively estimate frequency by counting peaks or zero crossings, we'll tend to overestimate the peak frequency of the wavelet by about 22% — assuming it is approximately Ricker-like; if it isn't we can use the same method to estimate the error for other functions.
This is good to know, but it would be interesting to know if this parameter depends on frequency, and also to have a more precise way to describe it than a decimal. To get at these questions, we need an analytic solution.
Find minima analytically
Python's SymPy package is a bit like Maple — it understands math symbolically. We'll use sympy.solve to find an analytic solution. It turns out that it needs the Ricker function writing in yet another way, using SymPy symbols and expressions for $\mathrm{e}$ and $\pi$.
Step8: Now we can easily find the solutions to the Ricker equation, that is, the times at which the function is equal to zero
Step9: But this is not quite what we want. We need the minima, not the zero-crossings.
Maybe there's a better way to do this, but here's one way. Note that the gradient (slope or derivative) of the Ricker function is zero at the minima, so let's just solve the first time derivative of the Ricker function. That will give us the three times at which the function has a gradient of zero.
Step10: In other words, the non-zero minima of the Ricker function are at
Step11: The solutions agree.
While we're looking at this, we can also compute the analytic solution to the amplitude of the minima, which SciPy calculated as -0.446. We just substitute one of the expressions for the minimum time into the expression for r
Step12: Apparent frequency
So what's the result of all this? What's the correction we need to make?
The minima of the Ricker wavelet are $\sqrt{6}\ /\ \pi f_\mathrm{actual}\ \mathrm{s}$ apart — this is the apparent period. If we're assuming a pure tone, this period corresponds to an apparent frequency of $\pi f_\mathrm{actual}\ /\ \sqrt{6}\ \mathrm{Hz}$. For $f = 25\ \mathrm{Hz}$, this apparent frequency is
Step13: If we were to try to model the data with a Ricker of 32 Hz, the frequency will be too high. We need to multiply the frequency by a factor of $\sqrt{6} / \pi$, like so
Step14: This gives the correct frequency of 25 Hz.
To sum up, rearranging the expression above | Python Code:
T, dt, f = 0.256, 0.001, 25
import bruges
w, t = bruges.filters.ricker(T, dt, f, return_t=True)
import scipy.signal
f_W, W = scipy.signal.welch(w, fs=1/dt, nperseg=256)
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, w)
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_W[:25], W[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
Explanation: The frequency of a Ricker wavelet
We often use Ricker wavelets to model seismic, for example when making a synthetic seismogram with which to help tie a well. One simple way to guesstimate the peak or central frequency of the wavelet that will model a particular seismic section is to count the peaks per unit time in the seismic. But this tends to overestimate the actual frequency because the maximum frequency of a Ricker wavelet is more than the peak frequency. The question is, how much more?
To investigate, let's make a Ricker wavelet and see what it looks like in the time and frequency domains.
End of explanation
c = np.cos(2*25*np.pi*t)
f_C, C = scipy.signal.welch(c, fs=1/dt, nperseg=256)
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, c, c="C2")
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_C[:25], C[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
Explanation: When we count the peaks in a section, the assumption is that this apparent frequency — that is, the reciprocal of apparent period or distance between the extrema — tells us the dominant or peak frequency.
To help see why this assumption is wrong, let's compare the Ricker with a signal whose apparent frequency does match its peak frequency: a pure cosine:
End of explanation
plt.figure(figsize=(15, 5))
plt.plot(t, c, c='C2')
plt.plot(t, w)
plt.xlabel("Time [s]")
plt.show()
Explanation: Notice that the signal is much narrower in bandwidth. If we allowed more oscillations, it would be even narrower. If it lasted forever, it would be a spike in the frequency domain.
Let's overlay the signals to get a picture of the difference in the relative periods:
End of explanation
def ricker(t, f):
return (1 - 2*(np.pi*f*t)**2) * np.exp(-(np.pi*f*t)**2)
Explanation: The practical consequence of this is that if we estimate the peak frequency to be $f\ \mathrm{Hz}$, then we need to reduce $f$ by some factor if we want to design a wavelet to match the data. To get this factor, we need to know the apparent period of the Ricker function, as given by the time difference between the two minima.
Let's look at a couple of different ways to find those minima: numerically and analytically.
Find minima numerically
We'll use scipy.optimize.minimize to find a numerical solution. In order to use it, we'll need a slightly different expression for the Ricker function — casting it in terms of a time basis t. We'll also keep f as a variable, rather than hard-coding it in the expression, to give us the flexibility of computing the minima for different values of f.
Here's the equation we're implementing:
$$w(t, f) = (1 - 2\pi^2 f^2 t^2)\ e^{-\pi^2 f^2 t^2}$$
End of explanation
f = 25
np.allclose(w, ricker(t, f=25))
plt.figure(figsize=(15, 5))
plt.plot(w, lw=3)
plt.plot(ricker(t, f), '--', c='C4', lw=3)
plt.show()
Explanation: Check that the wavelet looks like it did before, by comparing the output of this function when f is 25 with the wavelet w we were using before:
End of explanation
import scipy.optimize
f = 25
scipy.optimize.minimize(ricker, x0=0, args=(f))
Explanation: Now we call SciPy's minimize function on our ricker function. It iteratively searches for a minimum solution, then gives us the x (which is really t in our case) at that minimum:
End of explanation
(0.02 - 0.01559) / 0.02
Explanation: So the minimum amplitude, given by fun, is $-0.44626$ and it occurs at an x (time) of $\pm 0.01559\ \mathrm{s}$.
In comparison, the minima of the cosine function occur at a time of $\pm 0.02\ \mathrm{s}$. In other words, the period appears to be $0.02 - 0.01559 = 0.00441\ \mathrm{s}$ shorter than the pure waveform, which is...
End of explanation
import sympy as sp
t = sp.Symbol('t')
f = sp.Symbol('f')
r = (1 - 2*(sp.pi*f*t)**2) * sp.exp(-(sp.pi*f*t)**2)
Explanation: ...about 22% shorter. This means that if we naively estimate frequency by counting peaks or zero crossings, we'll tend to overestimate the peak frequency of the wavelet — the apparent frequency comes out higher than the true peak frequency by a factor of π/√6, about 28% — assuming the wavelet is approximately Ricker-like; if it isn't, we can use the same method to estimate the error for other functions.
This is good to know, but it would be interesting to know if this parameter depends on frequency, and also to have a more precise way to describe it than a decimal. To get at these questions, we need an analytic solution.
Find minima analytically
Python's SymPy package is a bit like Maple — it understands math symbolically. We'll use sympy.solve to find an analytic solution. It turns out that it needs the Ricker function writing in yet another way, using SymPy symbols and expressions for $\mathrm{e}$ and $\pi$.
End of explanation
sp.solvers.solve(r, t)
Explanation: Now we can easily find the solutions to the Ricker equation, that is, the times at which the function is equal to zero:
End of explanation
dwdt = sp.diff(r, t)
sp.solvers.solve(dwdt, t)
Explanation: But this is not quite what we want. We need the minima, not the zero-crossings.
Maybe there's a better way to do this, but here's one way. Note that the gradient (slope or derivative) of the Ricker function is zero at the minima, so let's just solve the first time derivative of the Ricker function. That will give us the three times at which the function has a gradient of zero.
End of explanation
np.sqrt(6) / (2 * np.pi * 25)
Explanation: In other words, the non-zero minima of the Ricker function are at:
$$\pm \frac{\sqrt{6}}{2\pi f}$$
Let's just check that this evaluates to the same answer we got from scipy.optimize, which was 0.01559.
End of explanation
r.subs({t: sp.sqrt(6)/(2*sp.pi*f)})
Explanation: The solutions agree.
While we're looking at this, we can also compute the analytic solution to the amplitude of the minima, which SciPy calculated as -0.446. We just substitute one of the expressions for the minimum time into the expression for r:
End of explanation
(np.pi * 25) / np.sqrt(6)
Explanation: Apparent frequency
So what's the result of all this? What's the correction we need to make?
The minima of the Ricker wavelet are $\sqrt{6}\ /\ \pi f_\mathrm{actual}\ \mathrm{s}$ apart — this is the apparent period. If we're assuming a pure tone, this period corresponds to an apparent frequency of $\pi f_\mathrm{actual}\ /\ \sqrt{6}\ \mathrm{Hz}$. For $f = 25\ \mathrm{Hz}$, this apparent frequency is:
End of explanation
32.064 * np.sqrt(6) / (np.pi)
Explanation: If we were to try to model the data with a Ricker of 32 Hz, the frequency will be too high. We need to multiply the frequency by a factor of $\sqrt{6} / \pi$, like so:
End of explanation
np.sqrt(6) / np.pi
Explanation: This gives the correct frequency of 25 Hz.
To sum up, rearranging the expression above:
$$f_\mathrm{actual} = f_\mathrm{apparent} \frac{\sqrt{6}}{\pi}$$
Expressed as a decimal, the factor we were seeking is therefore $\sqrt{6}\ /\ \pi$:
End of explanation |
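As a small illustrative wrap-up (not part of the original notebook), the correction can be packaged as a helper function; the 32.064 Hz input is the apparent frequency computed above:
def ricker_peak_from_apparent(f_apparent):
    # Convert an apparent (peak-counting) frequency into the Ricker peak frequency to model with.
    return f_apparent * np.sqrt(6) / np.pi

print(ricker_peak_from_apparent(32.064))  # ~25 Hz, matching the example above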
11,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>Step 1</b> Get the links of different app categories on iTunes.
Step1: <b>Step2</b>
Get the links for all popular apps of different categories on iTunes.
Step2: <b>Step3</b> Extract the information for all popular apps. | Python Code:
import urllib
import pandas as pd
from bs4 import BeautifulSoup
r = urllib.urlopen('https://itunes.apple.com/us/genre/ios-books/id6018?mt=8').read()
soup = BeautifulSoup(r)
print type(soup)
all_categories = soup.find_all("div", class_="nav")
category_url = all_categories[0].find_all(class_ = "top-level-genre")
categories_url = pd.DataFrame()
for itm in category_url:
category = itm.get_text()
url = itm.attrs['href']
d = {'category':[category], 'url':[url]}
df = pd.DataFrame(d)
categories_url = categories_url.append(df, ignore_index = True)
print categories_url
categories_url['url'][0]
Explanation: <b>Step 1</b> Get the links of different app categories on iTunes.
End of explanation
def extract_apps(url):
r = urllib.urlopen(url).read()
soup = BeautifulSoup(r)
apps = soup.find_all("div", class_="column")
apps_link = apps[0].find_all('a')
column_first = pd.DataFrame()
for itm in apps_link:
app_name = itm.get_text()
url = itm.attrs['href']
d = {'category':[app_name], 'url':[url]}
df = pd.DataFrame(d)
column_first = column_first.append(df, ignore_index = True)
apps_link2 = apps[1].find_all('a')
column_second = pd.DataFrame()
for itm in apps_link2:
app_name = itm.get_text()
url = itm.attrs['href']
d = {'category':[app_name], 'url':[url]}
df = pd.DataFrame(d)
column_second = column_second.append(df, ignore_index = True)
apps_link3 = apps[2].find_all('a')
column_last = pd.DataFrame()
for itm in apps_link3:
app_name = itm.get_text()
url = itm.attrs['href']
d = {'category':[app_name], 'url':[url]}
df = pd.DataFrame(d)
column_last = column_last.append(df, ignore_index = True)
Final_app_link = pd.DataFrame()
Final_app_link = Final_app_link.append(column_first, ignore_index = True)
Final_app_link = Final_app_link.append(column_second, ignore_index = True)
Final_app_link = Final_app_link.append(column_last, ignore_index = True)
return Final_app_link
app_url = pd.DataFrame()
for itm in categories_url['url']:
apps = extract_apps(itm)
app_url = app_url.append(apps, ignore_index = True)
app_url['url'][0]
Explanation: <b>Step2</b>
Get the links for all popular apps of different categories on iTunes.
End of explanation
def get_content(url):
r = urllib.urlopen(url).read()
soup = BeautifulSoup(r)
des = soup.find_all('div', id = "content")
apps = soup.find_all("div", class_="lockup product application")
rate = soup.find_all("div", class_="extra-list customer-ratings")
dic = []
global app_name, descript, link, price, category, current_rate, current_count, total_count, total_rate, seller,mul_dev,mul_lang,new_ver_des
for itm in des:
try:
descript = itm.find_all('div',{'class':"product-review"})[0].get_text().strip().split('\n')[2].encode('utf-8')
except:
descript = ''
try:
new_ver_des = itm.find_all('div',{'class':"product-review"})[1].get_text().strip().split('\n')[2].encode('utf-8')
except:
new_ver_des = ''
try:
app_name = itm.find_all('div',{'class':"left" })[0].get_text().split('\n')[1]
except:
app_name = ''
for itm in apps:
category = itm.find_all('span',{'itemprop':"applicationCategory" })[0].get_text()
price = itm.find_all('div',{'class':"price" })[0].get_text()
link = itm.a["href"]
seller = itm.find_all("span", itemprop="name")[0].get_text()
try:
device = itm.find_all("span", itemprop="operatingSystem")[0].get_text()
if 'and' in device.lower():
mul_dev = 'Y'
else:
mul_dev = "N"
except:
mul_dev = "N"
try:
lang = itm.find_all("li",class_ = "language")[0].get_text().split(',')
if len(lang) >1:
mul_lang = "Y"
else:
mul_lang = "N"
except:
mul_lang = "N"
for itm in rate:
try:
current_rate = itm.find_all('span',{'itemprop':"ratingValue"})[0].get_text()
except:
current_rate = ''
try:
current_count = itm.find_all('span',{'itemprop':"reviewCount"})[0].get_text()
except:
current_count = ''
try:
total_count = itm.find_all('span',{'class':"rating-count"})[1].get_text()
except:
try:
total_count = itm.find_all('span',{'class':"rating-count"})[0].get_text()
except:
total_count = ''
try:
total_rate = itm.find_all('div', class_="rating",itemprop = False)[0]['aria-label'].split(',')[0]
except:
total_rate = ''
for i in range(3):
try:
globals()['user_{0}'.format(i)] = soup.find_all("div", class_="customer-reviews")[0].find_all("span", class_='user-info')[i].get_text().strip( ).split(' ')[-1]
except:
globals()['user_{0}'.format(i)] = ''
try:
globals()['star_{0}'.format(i)] = soup.find_all("div", class_="customer-reviews")[0].find_all("div", class_="rating")[i]['aria-label']
except:
globals()['star_{0}'.format(i)] = ''
try:
globals()['comm_{0}'.format(i)] = soup.find_all("div", class_="customer-reviews")[0].find_all("p", class_="content")[i].get_text()
except:
globals()['comm_{0}'.format(i)] = ''
dic.append({'app':app_name,'link':link, 'price':price,'category':category,'current rating':current_rate,
'current reviews':current_count,'overall rating':total_rate,'overall reviews':total_count,
'description':descript,'seller':seller,'multiple languages':mul_lang,
'multiple devices':mul_dev,'new version description':new_ver_des,'user 1':user_0,
'rate 1':star_0,'comment 1':comm_0,'user 2':user_1,'rate 2':star_1,'comment 2':comm_1,
'user 3':user_2,'rate 3':star_2,'comment 3':comm_2})
dic = pd.DataFrame(dic)
return dic
full_content = pd.DataFrame()
for itm in app_url['url']:
content = get_content(itm)
full_content = full_content.append(content, ignore_index = True)
full_content
full_content.to_csv('app.csv',encoding='utf-8',index=True)
Explanation: <b>Step3</b> Extract the information for all popular apps.
End of explanation |
11,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Purpose
The purpose of this notebook is to work out the code for combining tetrode pairs and averaging their connectivity over brain areas across multiple sessions
Step1: Make sure we can get the ripple-triggered connectivity for two epochs
Step2: Now figure out how to combine the epochs
Step3: Use tetrode info to index into the epoch arrays to select out the relevant brain areas
Step4: Show that the indexing works by getting all the CA1-PFC tetrode pairs and averaging the coherence over the two epochs to get the average CA1-PFC coherence. In this case we show two plots of two different frequency bands to show the flexibility of using the xarray package.
Step5: Alternatively we can use the transform function in the read_netcdf function to select brain areas | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import xarray as xr
from src.analysis import (decode_ripple_clusterless,
detect_epoch_ripples,
ripple_triggered_connectivity,
connectivity_by_ripple_type)
from src.data_processing import (get_LFP_dataframe, make_tetrode_dataframe,
save_ripple_info,
save_tetrode_info)
from src.parameters import (ANIMALS, SAMPLING_FREQUENCY,
MULTITAPER_PARAMETERS, FREQUENCY_BANDS,
RIPPLE_COVARIATES)
epoch_keys = [('HPa', 6, 2), ('HPa', 6, 4)]
def estimate_ripple_coherence(epoch_key):
ripple_times = detect_epoch_ripples(
epoch_key, ANIMALS, sampling_frequency=SAMPLING_FREQUENCY)
tetrode_info = make_tetrode_dataframe(ANIMALS)[epoch_key]
tetrode_info = tetrode_info[
~tetrode_info.descrip.str.endswith('Ref').fillna(False)]
lfps = {tetrode_key: get_LFP_dataframe(tetrode_key, ANIMALS)
for tetrode_key in tetrode_info.index}
# Compare all ripples
for parameters_name, parameters in MULTITAPER_PARAMETERS.items():
ripple_triggered_connectivity(
lfps, epoch_key, tetrode_info, ripple_times, parameters,
FREQUENCY_BANDS,
multitaper_parameter_name=parameters_name,
group_name='all_ripples')
# save_tetrode_info(epoch_key, tetrode_info)
Explanation: Purpose
The purpose of this notebook is to work out the code for combining tetrode pairs and averaging their connectivity over brain areas across multiple sessions
End of explanation
for epoch_key in epoch_keys:
estimate_ripple_coherence(epoch_key)
Explanation: Make sure we can get the ripple-triggered connectivity for two epochs
End of explanation
from glob import glob
def read_netcdfs(files, dim, transform_func=None, group=None):
def process_one_path(path):
# use a context manager, to ensure the file gets closed after use
with xr.open_dataset(path, group=group) as ds:
# transform_func should do some sort of selection or
# aggregation
if transform_func is not None:
ds = transform_func(ds)
# load all data from the transformed dataset, to ensure we can
# use it after closing each original file
ds.load()
return ds
paths = sorted(glob(files))
datasets = [process_one_path(p) for p in paths]
return xr.concat(datasets, dim)
combined = read_netcdfs('../Processed-Data/*.nc', dim='session',
group='4Hz_Resolution/all_ripples/coherence',
transform_func=None)
Explanation: Now figure out how to combine the epochs
End of explanation
tetrode_info = pd.concat(make_tetrode_dataframe(ANIMALS).values())
tetrode_info = tetrode_info[
~tetrode_info.descrip.str.endswith('Ref').fillna(False)]
tetrode_info = tetrode_info.loc[
(tetrode_info.animal=='HPa') &
(tetrode_info.day == 6) &
(tetrode_info.epoch.isin((2, 4)))]
Explanation: Use tetrode info to index into the epoch arrays to select out the relevant brain areas
End of explanation
coh = (
combined
.sel(
tetrode1=tetrode_info.query('area == "CA1"').tetrode_id.values,
tetrode2=tetrode_info.query('area == "PFC"').tetrode_id.values)
.coherence_magnitude
.mean(dim=['tetrode1', 'tetrode2', 'session']))
fig, axes = plt.subplots(2, 1, figsize=(12, 9))
coh.sel(frequency=slice(0, 30)).plot(x='time', y='frequency', ax=axes[0]);
coh.sel(frequency=slice(30, 125)).plot(x='time', y='frequency', ax=axes[1]);
coh = (
combined
.sel(
tetrode1=tetrode_info.query('area == "iCA1"').tetrode_id.values,
tetrode2=tetrode_info.query('area == "PFC"').tetrode_id.values)
.coherence_magnitude
.mean(dim=['tetrode1', 'tetrode2', 'session']))
fig, axes = plt.subplots(2, 1, figsize=(12, 9))
coh.sel(frequency=slice(0, 30)).plot(x='time', y='frequency', ax=axes[0]);
coh.sel(frequency=slice(30, 125)).plot(x='time', y='frequency', ax=axes[1]);
coh_diff = ((combined - combined.isel(time=0))
.sel(
tetrode1=tetrode_info.query('area == "iCA1"').tetrode_id.values,
tetrode2=tetrode_info.query('area == "PFC"').tetrode_id.values)
.coherence_magnitude
.mean(dim=['tetrode1', 'tetrode2', 'session']))
fig, axes = plt.subplots(2, 1, figsize=(12, 9))
coh_diff.sel(frequency=slice(0, 30)).plot(x='time', y='frequency', ax=axes[0]);
coh_diff.sel(frequency=slice(30, 125)).plot(x='time', y='frequency', ax=axes[1]);
Explanation: Show that the indexing works by getting all the CA1-PFC tetrode pairs and averaging the coherence over the two epochs to get the average CA1-PFC coherence. In this case we show two plots of two different frequency bands to show the flexibility of using the xarray package.
End of explanation
from functools import partial
def select_brain_areas(dataset, area1='', area2=''):
if 'tetrode1' in dataset.coords:
return dataset.sel(
tetrode1=dataset.tetrode1[dataset.brain_area1==area1],
tetrode2=dataset.tetrode2[dataset.brain_area2==area2]
)
else:
# The dataset is power
return dataset.sel(
tetrode=dataset.tetrode[dataset.brain_area==area1],
)
CA1_PFC = partial(select_brain_areas, area1='CA1', area2='PFC')
combined = read_netcdfs('../Processed-Data/*.nc', dim='session',
group='4Hz_Resolution/all_ripples/coherence',
transform_func=CA1_PFC)
combined
print(combined.brain_area1)
print(combined.brain_area2)
coh = combined.mean(['tetrode1', 'tetrode2', 'session']).coherence_magnitude
fig, axes = plt.subplots(2, 1, figsize=(12, 9))
coh.sel(frequency=slice(0, 30)).plot(x='time', y='frequency', ax=axes[0]);
coh.sel(frequency=slice(30, 125)).plot(x='time', y='frequency', ax=axes[1]);
Explanation: Alternatively we can use the transform function in the read_netcdf function to select brain areas
End of explanation |
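The same pattern extends to any pair of areas — for example (illustrative only, assuming the same file layout and group names as above):
iCA1_PFC = partial(select_brain_areas, area1='iCA1', area2='PFC')
combined_icapfc = read_netcdfs('../Processed-Data/*.nc', dim='session',
                               group='4Hz_Resolution/all_ripples/coherence',
                               transform_func=iCA1_PFC)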
11,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Configuration
Step1: Get the Trace
Step2: FTrace Object
Step3: Assertions
Step4: Assertion
Step5: Assertion
Step6: Statistics
Check if 95% of the temperature readings are below CONTROL_TEMP + MARGIN
Step7: Check if the mean temperature is less than CONTROL_TEMP
Step8: We can also use getStatement to get the absolute values. Here we are getting the standard deviation expressed as a percentage of the mean
Step9: Thermal Residency | Python Code:
import trappy
import numpy
config = {}
# TRAPpy Events
config["THERMAL"] = trappy.thermal.Thermal
config["OUT"] = trappy.cpu_power.CpuOutPower
config["IN"] = trappy.cpu_power.CpuInPower
config["PID"] = trappy.pid_controller.PIDController
config["GOVERNOR"] = trappy.thermal.ThermalGovernor
# Control Temperature
config["CONTROL_TEMP"] = 77000
# A temperature margin of 2.5 degrees Celsius
config["TEMP_MARGIN"] = 2500
# The Sustainable power at the control Temperature
config["SUSTAINABLE_POWER"] = 2500
# Expected percentile of CONTROL_TEMP + TEMP_MARGIN
config["EXPECTED_TEMP_QRT"] = 95
# Maximum expected Standard Deviation as a percentage
# of mean temperature
config["EXPECTED_STD_PCT"] = 5
Explanation: Configuration
End of explanation
import urllib
import os
TRACE_DIR = "example_trace_dat_thermal"
TRACE_FILE = os.path.join(TRACE_DIR, 'bart_thermal_trace.dat')
TRACE_URL = 'http://cdn.rawgit.com/sinkap/4e0a69cbff732b57e36f/raw/7dd0ed74bfc17a34a3bd5ea6b9eb3a75a42ddbae/bart_thermal_trace.dat'
if not os.path.isdir(TRACE_DIR):
os.mkdir(TRACE_DIR)
if not os.path.isfile(TRACE_FILE):
print "Fetching trace file.."
urllib.urlretrieve(TRACE_URL, filename=TRACE_FILE)
Explanation: Get the Trace
End of explanation
# Create a Trace object
ftrace = trappy.FTrace(TRACE_FILE, "SomeBenchMark")
Explanation: FTrace Object
End of explanation
# Create an Assertion Object
from bart.common.Analyzer import Analyzer
t = Analyzer(ftrace, config)
BIG = '000000f0'
LITTLE = '0000000f'
Explanation: Assertions
End of explanation
result = t.getStatement("((IN:load0 + IN:load1 + IN:load2 + IN:load3) == 0) \
& (IN:dynamic_power > 0)",reference=True, select=BIG)
if len(result):
print "FAIL: Dynamic Power is NOT Zero when load is Zero for the BIG cluster"
else:
print "PASS: Dynamic Power is Zero when load is Zero for the BIG cluster"
result = t.getStatement("((IN:load0 + IN:load1 + IN:load2 + IN:load3) == 0) \
& (IN:dynamic_power > 0)",reference=True, select=LITTLE)
if len(result):
print "FAIL: Dynamic Power is NOT Zero when load is Zero for the LITTLE cluster"
else:
print "PASS: Dynamic Power is Zero when load is Zero for the LITTLE cluster"
Explanation: Assertion: Load and Dynamic Power
<html>
This assertion makes sure that the dynamic power for each cluster is zero when the sum of the "loads" of each CPU is 0
$$\forall\ t\ |\ Load(t) = \sum\limits_{i=0}^{cpus} Load_i(t) = 0 \implies dynamic\ power(t)=0 $$
</html>
End of explanation
result = t.getStatement("(GOVERNOR:current_temperature > CONTROL_TEMP) &\
(PID:output > SUSTAINABLE_POWER)", reference=True, select=0)
if len(result):
print "FAIL: The Governor is allocating power > sustainable when T > CONTROL_TEMP"
else:
print "PASS: The Governor is allocating power <= sustainable when T > CONTROL_TEMP"
Explanation: Assertion: Control Temperature and Sustainable Power
<html>
When the temperature is greater than the control temperature, the total power granted to all cooling devices should be less than sustainable_power
$$\forall\ t\ |\ Temperature(t) > control\_temp \implies Total\ Granted\ Power(t) < sustainable\_power$$
</html>
End of explanation
t.assertStatement("numpy.percentile(THERMAL:temp, 95) < (CONTROL_TEMP + TEMP_MARGIN)")
Explanation: Statistics
Check if 95% of the temperature readings are below CONTROL_TEMP + MARGIN
End of explanation
t.assertStatement("numpy.mean(THERMAL:temp) <= CONTROL_TEMP", select=0)
Explanation: Check if the mean temperature is less than CONTROL_TEMP
End of explanation
t.getStatement("(numpy.std(THERMAL:temp) * 100.0) / numpy.mean(THERMAL:temp)", select=0)
Explanation: We can also use getStatement to get the absolute values. Here we are getting the standard deviation expressed as a percentage of the mean
End of explanation
from bart.thermal.ThermalAssert import ThermalAssert
t_assert = ThermalAssert(ftrace)
end = ftrace.get_duration()
LOW = 0
HIGH = 78000
# The thermal residency gives the percentage (or absolute time) spent in the
# specified temperature range.
result = t_assert.getThermalResidency(temp_range=(0, 78000),
window=(0, end),
percent=True)
for tz_id in result:
print "Thermal Zone: {} spends {:.2f}% time in the temperature range [{}, {}]".format(tz_id,
result[tz_id],
LOW/1000,
HIGH/1000)
pct_temp = numpy.percentile(t.getStatement("THERMAL:temp")[tz_id], result[tz_id])
print "The {:.2f}th percentile temperature is {:.2f}".format(result[tz_id], pct_temp / 1000.0)
Explanation: Thermal Residency
End of explanation |
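A possible follow-up check (illustrative; the 80% threshold is an assumption, not from the original analysis) is to assert a minimum in-range residency for every thermal zone:
for tz_id in result:
    assert result[tz_id] >= 80.0, \
        "Thermal zone {} spent only {:.2f}% of the time in range".format(tz_id, result[tz_id])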
11,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Global parameters
Step1: Initialize weights
If needed, an L2 loss is added to the weights. So that it can be used later when computing the network's total loss, each weight loss is stored in a single collection.
Load the data
Use cifar10_input to fetch the data; this file comes from the TensorFlow GitHub repository and can be downloaded and used directly. If the distorted_inputs method is used, the returned data has been augmented: images are randomly cropped, flipped, and have their brightness and contrast adjusted, which diversifies the training data.
This yields a tensor holding a batch of batch_size examples, and the next batch can be read iteratively.
Step2: First convolutional layer
As before, we use a 5x5 convolution kernel with 3 input channels and 64 output channels. The first layer's parameters are not regularized, so lambda_value is set to 0. One small trick is used in the pooling layer: a 3x3 ksize with a 2x2 stride, which enriches the data. Finally, LRN is applied. LRN first appeared in Alex Krizhevsky's CNN paper for the ImageNet competition, where he explained that the LRN layer mimics the "lateral inhibition" mechanism of biological neural systems: it creates competition among local neuron activations so that larger responses become relatively larger while neurons with smaller responses are suppressed, improving the model's generalization. The later VGGNet paper compared models with and without LRN and found that LRN does not improve performance, but it is kept here to follow the AlexNet design.
Step3: Second convolutional layer
The input has 64 channels and the output is still 64 channels.
Set the bias to 0.1.
Swap the order of the max-pooling layer and LRN: apply LRN first, then max pooling.
Why this should be done is not at all clear.
Read more papers.
Step4: First fully connected layer
Flatten the output of the convolutional layers.
Fully connect it to a new hidden layer with 384 nodes.
Set the normal-distribution stddev to 0.04 and the bias to 0.1.
Importantly, we also set the weight-loss lambda value to 0.04 here.
Step5: Second fully connected layer
Reduce to 192 nodes, half as many.
Step6: Output layer
There are 10 classes at the end.
Step7: Use in_top_k to report the top-k accuracy; top 1 is used by default, and top 5 is also commonly used.
Step8: Start the thread queues needed by cifar10_input, mainly for image data augmentation; 16 threads in total are used to process the images.
Step9: Before each step, first run the training batch tensors to fetch a batch of batch_size training examples, then feed them into train_op and loss to train on them. Every 10 iterations some useful information is printed. | Python Code:
import math, os, time
import numpy as np
import tensorflow as tf
import cifar10_input
max_steps = 3000
batch_size = 128
data_dir = 'data/cifar10/cifar-10-batches-bin/'
model_dir = 'model/_cifar10_v2/'
Explanation: Global parameters
End of explanation
X_train, y_train = cifar10_input.distorted_inputs(data_dir, batch_size)
X_test, y_test = cifar10_input.inputs(eval_data=True, data_dir=data_dir, batch_size=batch_size)
image_holder = tf.placeholder(tf.float32, [batch_size, 24, 24, 3])
label_holder = tf.placeholder(tf.int32, [batch_size])
Explanation: Initialize weights
If needed, an L2 loss is added to the weights. So that it can be used later when computing the network's total loss, each weight loss is stored in a single collection.
Load the data
Use cifar10_input to fetch the data; this file comes from the TensorFlow GitHub repository and can be downloaded and used directly. If the distorted_inputs method is used, the returned data has been augmented: images are randomly cropped, flipped, and have their brightness and contrast adjusted, which diversifies the training data.
This yields a tensor holding a batch of batch_size examples, and the next batch can be read iteratively.
End of explanation
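The weight-initialization helper described above is not included in this excerpt. A minimal sketch of what variable_with_weight_loss might look like, based on that description (the exact implementation here is an assumption):
import tensorflow as tf  # assumed already imported above

def variable_with_weight_loss(shape, stddev, lambda_value):
    # Create a weight variable; if lambda_value is non-zero, attach an L2 weight loss
    # to the 'losses' collection so it can be picked up when computing the total loss.
    var = tf.Variable(tf.truncated_normal(shape, stddev=stddev))
    if lambda_value is not None and lambda_value != 0:
        weight_loss = tf.multiply(tf.nn.l2_loss(var), lambda_value, name='weight_loss')
        tf.add_to_collection('losses', weight_loss)
    return var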
weight1 = variable_with_weight_loss([5, 5, 3, 64], stddev=0.05, lambda_value=0)
kernel1 = tf.nn.conv2d(image_holder, weight1, [1, 1, 1, 1], padding='SAME')
bias1 = tf.Variable(tf.constant(0.0, shape=[64]))
conv1 = tf.nn.relu(tf.nn.bias_add(kernel1, bias1))
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
Explanation: First convolutional layer
As before, we use a 5x5 convolution kernel with 3 input channels and 64 output channels. The first layer's parameters are not regularized, so lambda_value is set to 0. One small trick is used in the pooling layer: a 3x3 ksize with a 2x2 stride, which enriches the data. Finally, LRN is applied. LRN first appeared in Alex Krizhevsky's CNN paper for the ImageNet competition, where he explained that the LRN layer mimics the "lateral inhibition" mechanism of biological neural systems: it creates competition among local neuron activations so that larger responses become relatively larger while neurons with smaller responses are suppressed, improving the model's generalization. The later VGGNet paper compared models with and without LRN and found that LRN does not improve performance, but it is kept here to follow the AlexNet design.
End of explanation
weight2 = variable_with_weight_loss(shape=[5, 5, 64, 64], stddev=5e-2, lambda_value=0.0)
kernel2 = tf.nn.conv2d(norm1, weight2, strides=[1, 1, 1, 1], padding='SAME')
bias2 = tf.Variable(tf.constant(0.1, shape=[64]))
conv2 = tf.nn.relu(tf.nn.bias_add(kernel2, bias2))
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
Explanation: Second convolutional layer
The input has 64 channels and the output is still 64 channels.
Set the bias to 0.1.
Swap the order of the max-pooling layer and LRN: apply LRN first, then max pooling.
Why this should be done is not at all clear.
Read more papers.
End of explanation
flattern = tf.reshape(pool2, [batch_size, -1])
dim = flattern.get_shape()[1].value
weight3 = variable_with_weight_loss(shape=[dim, 384], stddev=0.04, lambda_value=0.04)
bias3 = tf.Variable(tf.constant(0.1, shape=[384]))
local3 = tf.nn.relu(tf.matmul(flattern, weight3) + bias3)
Explanation: First fully connected layer
Flatten the output of the convolutional layers.
Fully connect it to a new hidden layer with 384 nodes.
Set the normal-distribution stddev to 0.04 and the bias to 0.1.
Importantly, we also set the weight-loss lambda value to 0.04 here.
End of explanation
weight4 = variable_with_weight_loss(shape=[384, 192], stddev=0.04, lambda_value=0.04)
bias4 = tf.Variable(tf.constant(0.1, shape=[192]))
local4 = tf.nn.relu(tf.matmul(local3, weight4) + bias4)
Explanation: Second fully connected layer
Reduce to 192 nodes, half as many.
End of explanation
weight5 = variable_with_weight_loss(shape=[192, 10], stddev=1/192.0, lambda_value=0.0)
bias5 = tf.Variable(tf.constant(0.0, shape=[10]))
logits = tf.add(tf.matmul(local4, weight5), bias5)
def loss(logits, labels):
labels = tf.cast(labels, tf.int64)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels,
name = 'cross_entropy_per_example'
)
cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
tf.add_to_collection('losses', cross_entropy_mean)
return tf.add_n(tf.get_collection('losses'), name='total_loss')
loss = loss(logits, label_holder)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
Explanation: Output layer
There are 10 classes at the end.
End of explanation
top_k_op = tf.nn.in_top_k(logits, label_holder, 1)
sess = tf.InteractiveSession()
saver = tf.train.Saver()
tf.global_variables_initializer().run()
Explanation: Use in_top_k to report the top-k accuracy; top 1 is used by default, and top 5 is also commonly used.
End of explanation
tf.train.start_queue_runners()
Explanation: Start the thread queues needed by cifar10_input, mainly for image data augmentation; 16 threads in total are used to process the images.
End of explanation
for step in range(max_steps):
start_time = time.time()
image_batch, label_batch = sess.run([X_train, y_train])
_, loss_value = sess.run([train_op, loss],
feed_dict={image_holder: image_batch, label_holder: label_batch})
duration = time.time() - start_time
if step % 10 == 0:
examples_per_sec = batch_size / duration
sec_this_batch = float(duration)
format_str = ('step %d, loss = %.2f (%.1f examples/sec; %.3f sec/batch)')
print(format_str % (step, loss_value, examples_per_sec, sec_this_batch))
saver.save(sess, save_path=os.path.join(model_dir, 'model.chpt'), global_step=max_steps)
num_examples = 10000
num_iter = int(math.ceil(num_examples / batch_size))
true_count = 0
total_sample_count = num_iter * batch_size
step = 0
while step < num_iter:
image_batch, label_batch = sess.run([X_test, y_test])
predictions = sess.run([top_k_op],
feed_dict={image_holder: image_batch, label_holder: label_batch})
true_count += np.sum(predictions)
step += 1
precision = true_count / total_sample_count
print("Precision @ 1 = %.3f" % precision)
sess.close()
Explanation: Before each step, first run the training batch tensors (X_train and y_train here) to fetch a batch of batch_size training examples, then feed them into train_op and loss to train on them. Every 10 iterations some useful information is printed.
End of explanation |
11,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 3
Step1: It's easy to determine the name of the variable; in this case, the name is $x$. It can be a bit more complicated to determine the type of the variable, as it depends on the value the variable is storing. In this case, it's storing the number 2. Since there's no decimal point on the number, we call this number an integer, or int for short.
Numerical types
What other types of variables are there?
Step2: y is assigned a value of 2.0
Step3: In this case, we've defined two variables x and y and assigned them integer values, so they are both of type int. However, we've used them both in a division operation and assigned the result to a variable named z. If we were to check the type of z, what type do you think it would be?
z is a float!
Step4: How does that happen? Shouldn't an operation involving two ints produce an int?
In general, yes it does. However, in cases where a decimal number is outputted, Python implicitly "promotes" the variable storing the result.
This is known as casting, and it can take two forms
Step5: Explicit casting, on the other hand, is a little trickier. In this case, it's you the programmer who are making explicit (hence the name) what type you want your variables to be.
Python has a couple special built-in functions for performing explicit casting on variables, and they're named what you would expect
Step6: Any idea what's happening here?
With explicit casting, you are telling Python to override its default behavior. In doing so, it has to make some decisions as to how to do so in a way that still makes sense.
When you cast a float to an int, some information is lost; namely, the decimal. So the way Python handles this is by quite literally discarding the entire decimal portion.
In this way, even if your number was 9.999999999 and you perfomed an explicit cast to int(), Python would hand you back a 9.
Language typing mechanisms
Python as a language is known as dynamically typed. This means you don't have to specify the type of the variable when you define it; rather, Python infers the type based on how you've defined it and how you use it.
As we've already seen, Python creates a variable of type int when you assign it an integer number like 5, and it automatically converts the type to a float whenever the operations produce decimals.
Other languages, like C++ and Java, are statically typed, meaning in addition to naming a variable when it is declared, the programmer must also explicitly state the type of the variable.
Pros and cons of dynamic typing (as opposed to static typing)?
Pros
Step7: Unlike numerical types like ints and floats, you can't really perform arithmetic operations on strings, with one exception
Step8: The + operator, when applied to strings, is called string concatenation.
This means that it glues or concatenates two strings together to create a new string. In this case, we took the string in x, concatenated it to an empty space " ", and concatenated that again to the string in y, storing the whole thing in a final string z.
Other than the + operator, the other arithmetic operations aren't defined for strings, so I wouldn't recommend trying them...
Step9: Casting, however, is alive and well with strings. In particular, if you know the string you're working with is a string representation of a number, you can cast it from a string to a numeric type
Step10: And back again
Step11: Strings also have some useful methods that numeric types don't for doing some basic text processing.
Step12: A very useful method that will come in handy later in the course when we do some text processing is strip().
Often when you're reading text from a file and splitting it into tokens, you're left with strings that have leading or trailing whitespace
Step13: Anyone who looked at these three strings would say they're the same, but the whitespace before and after the word python in each of them results in Python treating them each as unique. Thankfully, we can use the strip method
Step14: You can also delimit strings using either single-quotes or double-quotes. Either is fine and largely depends on your preference.
Step15: Python also has a built-in method len() that can be used to return the length of a string. The length is simply the number of individual characters (including any whitespace) in the string.
Step16: Variable comparisons and Boolean types
We can also compare variables! By comparing variables, we can ask whether two things are equal, or greater than or less than some other value.
This sort of true-or-false comparison gives rise to yet another type in Python
Step17: Hooray! The == sign is the equality comparison operator, and it will return True or False depending on whether or not the two values are exactly equal. This works for strings as well
Step18: We can also ask if variables are less than or greater than each other, using the < and > operators, respectively.
Step19: In a small twist of relative magnitude comparisons, we can also ask if something is less than or equal to or greater than or equal to some other value. To do this, in addition to the comparison operators < or >, we also add an equal sign
Step20: Interestingly, these operators also work for strings. Be careful, though
Step21: Part 2
Step22: Comments are important to good coding style and should be used often for clarification.
However, even more preferable to the liberal use of comments is a good variable naming convention. For instance, instead of naming a variable "x" or "y" or "c", give it a name that describes its purpose.
Step23: I could've used a comment to explain how this variable was storing the length of the string, but by naming the variable itself in terms of what it was doing, I don't even need such a comment. It's self-evident from the name itself what this variable is doing.
Part 3 | Python Code:
x = 2
Explanation: Lecture 3: Python Variables and Syntax
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
In this lecture, we'll get into more detail on Python variables, as well as language syntax. By the end, you should be able to:
Define variables of string and numerical types, convert between them, and use them in basic operations
Explain the different variants of typing in programming languages, and what "duck-typing" in Python does
Understand how Python uses whitespace in its syntax
Demonstrate how smart variable-naming and proper use of comments can effectively document your code
This week will effectively be a "crash-course" in Python basics; there's a lot of ground to cover!
Part 1: Variables and Types
We saw in the last lecture how to define variables, as well as a few of the basic variable "types" available in Python. It's important to keep in mind that each variable you define has a "type", and this type will dictate much (if not all) of the operations you can perform on and with that variable.
To recap: a variable in Python is a sort of placeholder that stores a value. Critically, a variable has both a name and a type. For example:
End of explanation
y = 2.0
Explanation: It's easy to determine the name of the variable; in this case, the name is $x$. It can be a bit more complicated to determine the type of the variable, as it depends on the value the variable is storing. In this case, it's storing the number 2. Since there's no decimal point on the number, we call this number an integer, or int for short.
Numerical types
What other types of variables are there?
End of explanation
x = 2
y = 3
z = x / y
Explanation: y is assigned a value of 2.0: it is referred to as a floating-point variable, or float for short.
Floats do the heavy-lifting of much of the computation in data science. Whenever you're computing probabilities or fractions or normalizations, floats are the types of variables you're using. In general, you tend to use floats for heavy computation, and ints for counting things.
There is an explicit connection between ints and floats. Let's illustrate with an example:
End of explanation
type(z)
Explanation: In this case, we've defined two variables x and y and assigned them integer values, so they are both of type int. However, we've used them both in a division operation and assigned the result to a variable named z. If we were to check the type of z, what type do you think it would be?
z is a float!
End of explanation
x = 2
y = 3
z = x * y
type(z)
x = 2.5
y = 3.5
z = x * y
type(z)
Explanation: How does that happen? Shouldn't an operation involving two ints produce an int?
In general, yes it does. However, in cases where a decimal number is outputted, Python implicitly "promotes" the variable storing the result.
This is known as casting, and it can take two forms: implicit casting (as we just saw), or explicit casting.
Casting
Implicit casting is done in such a way as to try to abide by "common sense": if you're dividing two numbers, you would all but expect to receive a fraction, or decimal, on the other end. If you're multiplying two numbers, the type of the output depends on the types of the inputs--two floats multiplied will likely produce a float, while two ints multiplied will produce an int.
End of explanation
x = 2.5
y = 3.5
z = x * y
print("Float z:\t{}\nInteger z:\t{}".format(z, int(z)))
Explanation: Explicit casting, on the other hand, is a little trickier. In this case, it's you the programmer who are making explicit (hence the name) what type you want your variables to be.
Python has a couple special built-in functions for performing explicit casting on variables, and they're named what you would expect: int() for casting a variable as an int, and float for casting it as a float.
End of explanation
x = "this is a string"
type(x)
Explanation: Any idea what's happening here?
With explicit casting, you are telling Python to override its default behavior. In doing so, it has to make some decisions as to how to do so in a way that still makes sense.
When you cast a float to an int, some information is lost; namely, the decimal. So the way Python handles this is by quite literally discarding the entire decimal portion.
In this way, even if your number was 9.999999999 and you perfomed an explicit cast to int(), Python would hand you back a 9.
Language typing mechanisms
Python as a language is known as dynamically typed. This means you don't have to specify the type of the variable when you define it; rather, Python infers the type based on how you've defined it and how you use it.
As we've already seen, Python creates a variable of type int when you assign it an integer number like 5, and it automatically converts the type to a float whenever the operations produce decimals.
Other languages, like C++ and Java, are statically typed, meaning in addition to naming a variable when it is declared, the programmer must also explicitly state the type of the variable.
Pros and cons of dynamic typing (as opposed to static typing)?
Pros:
- Streamlined
- Flexible
Cons:
- Easier to make mistakes
- Potential for malicious bugs
For type-checking, Python implements what is known as duck typing: if it walks like a duck and quacks like a duck, it's a duck.
This brings us to a concept known as type safety. This is an important point, especially in dynamically typed languages where the type is not explicitly set by the programmer: there are countless examples of nefarious hacking that has exploited a lack of type safety in certain applications in order to execute malicious code.
A particularly fun example is known as a roundoff error, or more specifically to our case, a representation error. This occurs when we are attempting to represent a value for which we simply don't have enough precision to accurately store.
When there are too many decimal values to represent (usually because the number we're trying to store is very, very small), we get an underflow error.
When there are too many whole numbers to represent (usually because the number we're trying to store is very, very large), we get an overflow error.
One of the most popular examples of an overflow error was the Y2K bug. In this case, most Windows machines internally stored the year as simply the last two digits. Thus, when the year 2000 rolled around, the two numbers representing the year overflowed and reset to 00. A similar problem is anticipated for 2038, when 32-bit Unix machines will also see their internal date representations overflow to 0.
In these cases, and especially in dynamically typed languages like Python, it is very important to know what types of variables you're working with and what the limitations of those types are.
String types
Strings, as we've also seen previously, are the variable types used in Python to represent text.
End of explanation
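As a small illustrative aside (not from the original lecture), here is duck typing in action: the function below never checks types, it only relies on the object having the method it needs.
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I can quack, too!"

def make_it_quack(thing):
    # No isinstance() check -- if it quacks like a duck, that's good enough.
    return thing.quack()

print(make_it_quack(Duck()), make_it_quack(Person()))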
x = "some string"
y = "another string"
z = x + " " + y
print(z)
Explanation: Unlike numerical types like ints and floats, you can't really perform arithmetic operations on strings, with one exception:
End of explanation
s = "2"
t = "divisor"
x = s / t
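# (Division isn't defined for strings, so the line above raises a TypeError -- which is exactly the point of this example.)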
Explanation: The + operator, when applied to strings, is called string concatenation.
This means that it glues or concatenates two strings together to create a new string. In this case, we took the string in x, concatenated it to an empty space " ", and concatenated that again to the string in y, storing the whole thing in a final string z.
Other than the + operator, the other arithmetic operations aren't defined for strings, so I wouldn't recommend trying them...
End of explanation
s = "2"
x = int(s)
print("x = {} and has type {}.".format(x, type(x)))
Explanation: Casting, however, is alive and well with strings. In particular, if you know the string you're working with is a string representation of a number, you can cast it from a string to a numeric type:
End of explanation
x = 2
s = str(x)
print("s = {} and has type {}.".format(s, type(s)))
Explanation: And back again:
End of explanation
s = "Some string with WORDS"
print(s.upper()) # make all the letters uppercase
print(s.lower()) # make all the letters lowercase
Explanation: Strings also have some useful methods that numeric types don't for doing some basic text processing.
End of explanation
s1 = " python "
s2 = " python"
s3 = "python "
Explanation: A very useful method that will come in handy later in the course when we do some text processing is strip().
Often when you're reading text from a file and splitting it into tokens, you're left with strings that have leading or trailing whitespace:
End of explanation
print("|" + s1.strip() + "|")
print("|" + s2.strip() + "|")
print("|" + s3.strip() + "|")
Explanation: Anyone who looked at these three strings would say they're the same, but the whitespace before and after the word python in each of them results in Python treating them each as unique. Thankfully, we can use the strip method:
End of explanation
s = "some string"
t = 'this also works'
Explanation: You can also delimit strings using either single-quotes or double-quotes. Either is fine and largely depends on your preference.
End of explanation
s = "some string"
len(s)
Explanation: Python also has a built-in method len() that can be used to return the length of a string. The length is simply the number of individual characters (including any whitespace) in the string.
End of explanation
x = 2
y = 2
x == y
Explanation: Variable comparisons and Boolean types
We can also compare variables! By comparing variables, we can ask whether two things are equal, or greater than or less than some other value.
This sort of true-or-false comparison gives rise to yet another type in Python: the boolean type. A variable of this type takes only two possible values: True or False.
Let's say we have two numeric variables, x and y, and want to check if they're equal. To do this, we use a variation of the assginment operator:
End of explanation
s1 = "a string"
s2 = "a string"
s1 == s2
s3 = "another string"
s1 == s3
Explanation: Hooray! The == sign is the equality comparison operator, and it will return True or False depending on whether or not the two values are exactly equal. This works for strings as well:
End of explanation
x = 1
y = 2
x < y
x > y
Explanation: We can also ask if variables are less than or greater than each other, using the < and > operators, respectively.
End of explanation
x = 2
y = 3
x <= y
x = 3
x <= y
x = 3.00001
x <= y
Explanation: In a small twist of relative magnitude comparisons, we can also ask if something is less than or equal to or greater than or equal to some other value. To do this, in addition to the comparison operators < or >, we also add an equal sign:
End of explanation
s1 = "some string"
s2 = "another string"
s1 > s2
s1 = "Some string"
s1 > s2
Explanation: Interestingly, these operators also work for strings. Be careful, though: their behavior may be somewhat unexpected until you figure out what actual trick is happening:
End of explanation
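A quick illustrative peek at the trick (my own example): strings compare character by character using their underlying character codes, so uppercase letters sort before lowercase ones.
print(ord('S'), ord('a'), ord('s'))       # 83 97 115
print("Some string" < "another string")   # True, because 'S' (83) comes before 'a' (97)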
# Adds two numbers that are initially strings by converting them to an int and a float,
# then converting the final result to an int and storing it in the variable x.
x = int(int("1345") + float("31.5"))
print(x)
Explanation: Part 2: Variable naming conventions and documentation
There are some rules regarding what can and cannot be used as a variable name.
Beyond those rules, there are guidelines.
Variable naming rules
Names can contain only letters, numbers, and underscores.
All the letters a-z (upper and lowercase), the numbers 0-9, and underscores are at your disposal. Anything else is illegal. No special characters like pound signs, dollar signs, or percents are allowed. Hashtag alphanumerics only.
Variable names can only start with letters or underscores.
Numbers cannot be the first character of a variable name. message_1 is a perfectly valid variable name; however, 1_message is not and will throw an error.
Spaces are not allowed in variable names.
Underscores are how Python programmers tend to "simulate" spaces in variable names, but simply put there's no way to name a variable with multiple words separated by spaces.
Avoid using Python keywords or function names as variables.
This might take some trial-and-error. Basically, if you try to name a variable print or float or str, you'll run into a lot of problems down the road.
Technically this isn't outlawed in Python, but it will cause a lot of headaches later in your program.
Variable naming conventions
These are not hard-and-fast rules, but rather suggestions to help "standardize" code and make it easier to read by people who aren't necessarily familiar with the code you've written.
Make variable names short, but descriptive.
I've been giving a lot of examples using variables named x, s, and so forth. This is bad. Don't do it--unless, for example, you're defining x and y to be points in a 2D coordinate axis, or as a counter; one-letter variable names for counters are quite common.
Outside of those narrow use-cases, the variable names should constitute a pithy description that reflects their function in your program. A variable storing a name, for example, could be name or even student_name, but don't go as far as to use the_name_of_the_student.
Be careful with the lowercase l or uppercase O.
This is one of those annoying rules that largely only applies to one-letter variables: stay away from using letters that also bear striking resemblance to numbers. Naming your variable l or O may confuse downstream readers of your code, making them think you're sprinkling 1s and 0s throughout your code.
Variable names should be all lowercase, using underscores for multiple words.
Java programmers may take umbrage with this point: the convention there is to useCamelCase for multi-word variable names.
Since Python takes quite a bit from the C language (and its back-end is implemented in C), it also borrows a lot of C conventions, one of which is to use underscores and all lowercase letters in variable names. So rather than multiWordVariable, we do multi_word_variable.
The one exception to this rule is when you define variables that are constant; that is, their values don't change. In this case, the variable name is usually in all-caps. For example: PI = 3.14159.
Self-documenting code
The practice of pithy but precise variable naming strategies is known as "self-documenting code."
We've learned before that we can insert comments into our code to explain things that might otherwise be confusing:
End of explanation
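A few illustrative examples of these conventions (my own, not from the lecture):
PI = 3.14159                      # a constant: all caps
student_name = "Ada"              # short but descriptive, lowercase with underscores
studentName = "Ada"               # legal, but camelCase is not the usual Python style
the_name_of_the_student = "Ada"   # legal, but needlessly long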
str_length = len("some string")
Explanation: Comments are important to good coding style and should be used often for clarification.
However, even more preferable to the liberal use of comments is a good variable naming convention. For instance, instead of naming a variable "x" or "y" or "c", give it a name that describes its purpose.
End of explanation
x = 5
x += 10
Explanation: I could've used a comment to explain how this variable was storing the length of the string, but by naming the variable itself in terms of what it was doing, I don't even need such a comment. It's self-evident from the name itself what this variable is doing.
Part 3: Whitespace in Python
Whitespace (no, not that Whitespace) is important in the Python language.
Some languages like C++ and Java use semi-colons to delineate the end of a single statement. Python, however, does not, but still needs some way to identify when we've reached the end of a statement.
In Python, it's the return key that denotes the end of a statement.
Returns, tabs, and spaces are all collectively known as "whitespace", and each can drastically change how your Python program runs. Especially when we get into loops, conditionals, and functions, this will become critical and may be the source of many insidious bugs.
For example, the following code won't run:
End of explanation |
11,040 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
use one hot encoding on the given dataset named 'onehotend_data.csv' on column 'town'
| Python Code::
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
ohe = OneHotEncoder()
df = pd.read_csv('onehotend_data.csv')
ohe.fit(df[['town']])
ct = make_column_transformer((OneHotEncoder(categories = ohe.categories_), ['town']), remainder = 'passthrough')
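# Illustrative usage (assumed follow-up, not part of the original snippet):
# apply the fitted column transformer to get the one-hot-encoded array.
encoded = ct.fit_transform(df)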
|
11,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
load in evaluation dataset
sub-sample a large set of features
calculate PCA and save out for loading in other places.
Step1: How similar are PCs on 2 sub-samples of data?
Step2: After component 2, they are not that correlated which is surprising.
Calculate PCA on all view items | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import pandas as pd
from sklearn.decomposition import PCA
import pickle
%matplotlib inline
# load smaller user behavior dataset
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')
user_sample = user_profile.user_id.unique()
print(len(user_profile))
print(len(user_sample))
user_profile.head()
# read nn features
spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl")
Explanation: load in evaluation dataset
sub-sample a large set of features
calculate PCA and save out for loading in other places.
End of explanation
# sub-sample possible items
np.random.seed(1000)
item_sample = np.random.choice(user_profile.view_spu.unique(),size=3000)
# get item X feature matrix #
X_item_feature = np.empty((len(item_sample),len(spu_fea.features.as_matrix()[0])))
for ii,item_spu in enumerate(item_sample):
X_item_feature[ii,:]=spu_fea.loc[spu_fea.spu_id==item_spu,'features'].as_matrix()[0]
# calculate PC's
pca1 = PCA()
pca1.fit(X_item_feature)
# sub-sample possible items
np.random.seed(2000)
item_sample = np.random.choice(user_profile.view_spu.unique(),size=3000)
# get item X feature matrix #
X_item_feature = np.empty((len(item_sample),len(spu_fea.features.as_matrix()[0])))
for ii,item_spu in enumerate(item_sample):
X_item_feature[ii,:]=spu_fea.loc[spu_fea.spu_id==item_spu,'features'].as_matrix()[0]
# calculate PC's
pca2 = PCA()
pca2.fit(X_item_feature)
for i in range(10):
print(np.corrcoef(pca1.components_[i,:],pca2.components_[i,:])[0,1])
Explanation: How similar are PCs on 2 sub-samples of data?
End of explanation
# get item X feature matrix for all
item_sample = user_profile.view_spu.unique()
X_item_feature = np.empty((len(item_sample),len(spu_fea.features.as_matrix()[0])))
for ii,item_spu in enumerate(item_sample):
X_item_feature[ii,:]=spu_fea.loc[spu_fea.spu_id==item_spu,'features'].as_matrix()[0]
X_item_feature.shape
# calculate PC's
pca_all = PCA()
pca_all.fit(X_item_feature)
pickle.dump(pca_all,open( "../data_nn_features/pca_all_items_sample1000.pkl", "wb" ))
pca_all = pickle.load(open('../data_nn_features/pca_all_items_sample1000.pkl','rb'))
plt.plot(pca_all.explained_variance_ratio_.cumsum())
plt.ylabel('cumulative percent explained variance')
plt.xlabel('component #')
plt.xlim([0,500])
%%bash
#jupyter nbconvert --to Plotting_Sequences_in_low_dimensions.ipynb && mv Plotting_Sequences_in_low_dimensions.slides.html ../notebook_slides/Plotting_Sequences_in_low_dimensions_v1.slides.html
jupyter nbconvert --to html Dimensionality_Reduction_on_Features.ipynb && mv Dimensionality_Reduction_on_Features.html ../notebook_htmls/Dimensionality_Reduction_on_Features_v1.html
cp Dimensionality_Reduction_on_Features.ipynb ../notebook_versions/Dimensionality_Reduction_on_Features_v1.ipynb
Explanation: After component 2, they are not that correlated which is surprising.
Calculate PCA on all view items
End of explanation |
11,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 3
Warmup
Write a function that simulates a dice roll every time the function is called.
Step1: Rewrite your function to take in an int n and simulate n dice rolls.
Write a function that takes in a list of red and blue balls and on each call, pulls a ball randomly out of the list and updates the list.
Step6: Simulate n coin flips for n = 10, 100, 1000, 10000. Are the ratios of heads to tails what you would expect?
Step11: Write a function that pulls the text from http
Step17: Be careful what you __init__
After init has finished, the caller can rightly assume that the object is ready to use. That is, after jeff = Customer('Jeff Knupp', 1000.0), we can start making deposit and withdraw calls on jeff; jeff is a fully-initialized object.
Step19: A change in perspective
Objects can have state
Exercise
With your partner
Step20: Class attributes
Class attributes are attributes that are set at the class-level, as opposed to the instance-level. Normal attributes are introduced in the init method, but some attributes of a class hold for all instances in all cases. For example, consider the following definition of a Car object
Step21: Static methods | Python Code:
import random
def dice():
return random.randint(1,6)
def roll_dice(n):
for i in range(n):
print(dice())
roll_dice(5)
Explanation: Lecture 3
Warmup
Write a function that simulates a dice roll every time the function is called.
End of explanation
balls = ['r', 'r', 'b', 'b', 'b']
def ball_game(num_red, num_blue, select_k):
balls = ['r']*num_red + ['b']*num_blue
res = []
for k in range(select_k):
random.shuffle(balls)
res.append(balls.pop())
print('ratio of blue to red in sack', balls.count('b')/balls.count('r') )
assert len(balls) + len(res) == num_red + num_blue
return ("balls = ", balls,"result = ", res)
#ball_game(1000,100, 1000)
balls.count('r')
Explanation: Rewrite your function to take in an int n and simulate n dice rolls.
Write a function that takes in a list of red and blue balls and on each call, pulls a ball randomly out of the list and updates the list.
End of explanation
def coin(n):
res = []
for i in range(n):
res.append(random.randint(0,1))
return res
# %load ../examples/compressor
def groupby_char(lst):
Returns a list of strings containing identical characters.
Takes a list of characters produced by running split on a string.
Groups runs (in order sequences) of identical
characters into string elements in the list.
Parameters
---------
Input:
lst: list
A list of single character strings.
Output:
grouped: list
A list of strings containing grouped characters.
new_lst = []
count = 1
for i in range(len(lst) - 1): # we range to the second to last index since we're checking if lst[i] == lst[i + 1].
if lst[i] == lst[i + 1]:
count += 1
else:
new_lst.append([lst[i],count]) # Create a lst of lists. Each list contains a character and the count of adjacent identical characters.
count = 1
new_lst.append((lst[-1],count)) # Return the last character (we didn't reach it with our for loop since indexing until second to last).
grouped = [char*count for [char, count] in new_lst]
return grouped
def compress_group(string):
    """Returns a compressed two character string containing a character and a number.
    Takes in a string of identical characters and returns the compressed string consisting of the character and the length of the original string.
    Example
    -------
    "AAA"-->"A3"
    Parameters:
    -----------
    Input:
    string: str
    A string of identical characters.
    Output:
    ------
    compressed_str: str
    A compressed string of length two containing a character and a number.
    """
return str(string[0]) + str(len(string))
def compress(string):
    """Returns a compressed representation of a string.
    Compresses the string by mapping each run of identical characters to a
    single character and a count.
    Ex.
    --
    compress('AAABBCDDD')--> 'A3B2C1D3'.
    Only compresses string if the compression is shorter than the original string.
    Ex.
    --
    compress('A')--> 'A' # not 'A1'.
    Parameters
    ----------
    Input:
    string: str
    The string to compress
    Output:
    compressed: str
    The compressed representation of the string.
    """
try:
split_str = [char for char in string] # Create list of single characters.
grouped = groupby_char(split_str) # Group characters if characters are identical.
compressed = ''.join( # Compress each element of the grouped list and join to a string.
[compress_group(elem) for elem in grouped])
if len(compressed) < len(string): # Only return compressed if compressed is actually shorter.
return compressed
else:
return string
except IndexError: # If our input string is empty, return an empty string.
return ""
except TypeError: # If we get something that's not compressible (including NoneType) return None.
return None
if __name__ == "__main__":
import sys
print(sys.argv[0])
string = sys.argv[1]
print("string is", string)
print("compression is", compress(string))
lst = [0,0,0,0,1,1,1,1,1, 0, 0, 0, 0, 0]
lst_str = [str(elem) for elem in lst]
lst_str
grouped = groupby_char(lst_str)
grouped
lengths = [len(elem) for elem in grouped]
lengths
max(lengths)
def get_max_run(n):
    """Generates n coin flips and returns the max run over all the coin flips"""
lst = coin(n)
print(lst)
lst_str = [str(elem) for elem in lst]
grouped = groupby_char(lst_str)
lengths = [len(elem) for elem in grouped]
return max(lengths)
get_max_run(10)
Explanation: Simulate n coin flips for n = 10, 100, 1000, 10000. Are the ratios of heads to tails what you would expect?
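A minimal sketch of how you might eyeball that ratio, re-using the coin() helper defined above (each 1 stands in for heads):
```python
for n in [10, 100, 1000, 10000]:
    flips = coin(n)
    heads = sum(flips)              # each 1 counts as a head
    tails = n - heads
    print(n, heads / tails if tails else float('inf'))
```
For small n the ratio bounces around quite a bit; by n = 10000 it should sit close to 1.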
End of explanation
class Customer(object):
    """A customer of ABC Bank with a checking account. Customers have the
    following properties:

    Attributes:
        name: A string representing the customer's name.
        balance: A float tracking the current balance of the customer's account.
    """
    def __init__(self, name, balance=0.0):
        """Return a Customer object whose name is *name* and starting
        balance is *balance*."""
        self.name = name
        self.balance = balance
    def withdraw(self, amount):
        """Return the balance remaining after withdrawing *amount*
        dollars."""
        if amount > self.balance:
            raise RuntimeError('Amount greater than available balance.')
        self.balance -= amount
        return self.balance
    def deposit(self, amount):
        """Return the balance remaining after depositing *amount*
        dollars."""
        self.balance += amount
        return self.balance

jeff = Customer('Jeff Knupp', 1000.0)  # create an instance so help(jeff) below has an object to inspect
help(jeff)
Explanation: Write a function that pulls the text from http://www.py4inf.com/code/romeo-full.txt and displays all the lines containing the word 'love'. Use the requests library.
Find a built in python library that you haven't heard of before. Learn how some of the functions work. Write a small script testing out the functions.
Introducing classes
Python: everything is an object
So everything has a class???
Classes: a blueprint for creating objects
End of explanation
class Customer(object):
    """A customer of ABC Bank with a checking account. Customers have the
    following properties:

    Attributes:
        name: A string representing the customer's name.
        balance: A float tracking the current balance of the customer's account.
    """
    def __init__(self, name):
        """Return a Customer object whose name is *name*."""
        self.name = name
    def set_balance(self, balance=0.0):
        """Set the customer's starting balance."""
        self.balance = balance
    def withdraw(self, amount):
        """Return the balance remaining after withdrawing *amount*
        dollars."""
        if amount > self.balance:
            raise RuntimeError('Amount greater than available balance.')
        self.balance -= amount
        return self.balance
    def deposit(self, amount):
        """Return the balance remaining after depositing *amount*
        dollars."""
        self.balance += amount
        return self.balance
Explanation: Be careful what you __init__
After init has finished, the caller can rightly assume that the object is ready to use. That is, after jeff = Customer('Jeff Knupp', 1000.0), we can start making deposit and withdraw calls on jeff; jeff is a fully-initialized object.
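A quick usage sketch against the first Customer definition above (the version whose __init__ accepts a starting balance):
```python
jeff = Customer('Jeff Knupp', 1000.0)   # ready to use immediately
jeff.withdraw(100.0)                    # -> 900.0
jeff.deposit(50.0)                      # -> 950.0
```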
End of explanation
class SMS_store(object):
def __init__(self):
self.inbox = []
def add_new_arrival(self, from_number, time_arrived, text_of_SMS, has_been_viewed=False):
# Makes new SMS tuple, inserts it after other messages
# in the store. When creating this message, its
# has_been_viewed status is set False.
msg = (has_been_viewed, from_number, time_arrived, text_of_SMS)
self.inbox.append(msg)
def message_count(self):
# Returns the number of sms messages in my_inbox
return len(self.inbox)
def get_unread_indexes(self):
# Returns list of indexes of all not-yet-viewed SMS messages
return [i for i, elem in enumerate(self.inbox) if not elem[0]]
    def get_message(self, i):
        # Return (from_number, time_arrived, text_of_sms) for message[i];
        # mark it as viewed; return None if there is no message at position i.
        if i >= len(self.inbox):
            return None
        _, from_number, time_arrived, text_of_SMS = self.inbox[i]
        self.inbox[i] = (True, from_number, time_arrived, text_of_SMS)
        return (from_number, time_arrived, text_of_SMS)
    def delete(self, i):
        # Delete the message at index i
        del self.inbox[i]
    def clear(self):
        # Delete all messages from inbox
        self.inbox = []
my_inbox = SMS_store()
my_inbox.add_new_arrival('adasf', 'asdf', 'asdf')
my_inbox.inbox
my_inbox.get_unread_indexes()
class WhatDoesThe(object):
def cow_say():
return "MOOO"
def elephant_say():
return "PFHARGLE"
def seal_say():
return "AUGHAUGHAUGH"
def fox_say():
return "tingalingalingalinga"
    @staticmethod
    def FoxSay():
        print(WhatDoesThe.cow_say())
        print(WhatDoesThe.elephant_say())
        print(WhatDoesThe.seal_say())
        print(WhatDoesThe.fox_say())
WhatDoesThe.FoxSay()
what_does_the = WhatDoesThe()
what_does_the.FoxSay()
Explanation: A change in perspective
Objects can have state
Exercise
With your partner:
Create a new class, SMS_store. The class will instantiate SMS_store objects, similar to an inbox or outbox on a cellphone:
my_inbox = SMS_store()
This store can hold multiple SMS messages (i.e. its internal state will just be a list of messages). Each message will be represented as a tuple:
(has_been_viewed, from_number, time_arrived, text_of_SMS)
The inbox object should provide these methods:
```
my_inbox.add_new_arrival(from_number, time_arrived, text_of_SMS)
# Makes new SMS tuple, inserts it after other messages
# in the store. When creating this message, its
# has_been_viewed status is set False.
my_inbox.message_count()
# Returns the number of sms messages in my_inbox
my_inbox.get_unread_indexes()
# Returns list of indexes of all not-yet-viewed SMS messages
my_inbox.get_message(i)
# Return (from_number, time_arrived, text_of_sms) for message[i]
# Also change its state to "has been viewed".
# If there is no message at position i, return None
my_inbox.delete(i) # Delete the message at index i
my_inbox.clear() # Delete all messages from inbox
```
Write the class, create a message store object, write tests for these methods, and implement the methods.
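One possible set of quick checks (a sketch only, assuming the method names from the spec above are all implemented):
```python
box = SMS_store()
box.add_new_arrival('555-1234', '10:02', 'hello')
box.add_new_arrival('555-9876', '10:05', 'lunch?')
assert box.message_count() == 2
assert box.get_unread_indexes() == [0, 1]
assert box.get_message(0) == ('555-1234', '10:02', 'hello')
assert box.get_unread_indexes() == [1]   # message 0 is now marked as viewed
box.delete(0)
assert box.message_count() == 1
box.clear()
assert box.message_count() == 0
```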
End of explanation
class Car(object):
wheels = 4
def __init__(self, make, model):
self.make = make
self.model = model
mustang = Car('Ford', 'Mustang')
print(mustang.wheels)
print(Car.wheels)
Explanation: Class attributes
Class attributes are attributes that are set at the class-level, as opposed to the instance-level. Normal attributes are introduced in the init method, but some attributes of a class hold for all instances in all cases. For example, consider the following definition of a Car object:
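As a quick follow-up to the Car definition shown above: because wheels lives on the class, reassigning it once is visible from every instance that hasn't shadowed it (a small sketch):
```python
civic = Car('Honda', 'Civic')
Car.wheels = 3            # change the class attribute once...
print(mustang.wheels)     # ...and existing instances see 3
print(civic.wheels)       # 3 here too
Car.wheels = 4            # restore the original value
```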
End of explanation
class Car(object):
wheels = 4
def make_car_sound():
print('VRooooommmm!')
def __init__(self, make, model):
self.make = make
self.model = model
my_car = Car('ford', 'mustang')
# my_car.make_car_sound() # This will break
Car.make_car_sound()
class Car(object):
wheels = 4
@staticmethod
def make_car_sound():
print('VRooooommmm!')
def __init__(self, make, model):
self.make = make
self.model = model
my_car = Car('ford', 'mustang')
my_car.make_car_sound()
Explanation: Static methods
End of explanation |
11,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Divide continuous data into equally-spaced epochs
This tutorial shows how to segment continuous data into a set of epochs spaced
equidistantly in time. The epochs will not be created based on experimental
events; instead, the continuous data will be "chunked" into consecutive epochs
(which may be temporally overlapping, adjacent, or separated).
We will also briefly demonstrate how to use these epochs in connectivity
analysis.
First, we import necessary modules and read in a sample raw data set.
This data set contains brain activity that is event-related, i.e.,
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
<div class="alert alert-info"><h4>Note</h4><p>Starting in version 1.0, all functions in the ``mne.connectivity``
    sub-module are housed in a separate package called mne-connectivity. Download it by running ``pip install mne-connectivity``.</p></div>
Step1: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
Step2: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
Step3: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
Step4: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs
Step5: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is the default return if you use
Step6: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import compute_proj_ecg
from mne_connectivity import envelope_correlation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Divide continuous data into equally-spaced epochs
This tutorial shows how to segment continuous data into a set of epochs spaced
equidistantly in time. The epochs will not be created based on experimental
events; instead, the continuous data will be "chunked" into consecutive epochs
(which may be temporally overlapping, adjacent, or separated).
We will also briefly demonstrate how to use these epochs in connectivity
analysis.
First, we import necessary modules and read in a sample raw data set.
This data set contains brain activity that is event-related, i.e.,
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
<div class="alert alert-info"><h4>Note</h4><p>Starting in version 1.0, all functions in the ``mne.connectivity``
sub-module are housed in a separate package called
    :mod:`mne-connectivity <mne_connectivity>`. Download it by running:</p></div>

```console
$ pip install mne-connectivity
```
End of explanation
raw.crop(tmax=150).resample(100).pick('meg')
ecg_proj, _ = compute_proj_ecg(raw, ch_name='MEG 0511') # No ECG chan
raw.add_proj(ecg_proj)
raw.apply_proj()
Explanation: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
End of explanation
epochs = mne.make_fixed_length_epochs(raw, duration=30, preload=False)
Explanation: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
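For reference, the same call with every option spelled out might look like the sketch below; the keyword names (reject_by_annotation, proj, verbose) are taken from the current MNE API, so check them against your installed version:
```python
epochs = mne.make_fixed_length_epochs(raw, duration=30, preload=False,
                                      reject_by_annotation=True, proj=True,
                                      verbose=False)
```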
End of explanation
event_related_plot = epochs.plot_image(picks=['MEG 1142'])
Explanation: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
End of explanation
epochs.load_data().filter(l_freq=8, h_freq=12)
alpha_data = epochs.get_data()
Explanation: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs: Connectivity Analysis
Fixed lengths epochs are suitable for many types of analysis, including
frequency or time-frequency analyses, connectivity analyses, or
classification analyses. Here we briefly illustrate their utility in a sensor
space connectivity analysis.
The data from our epochs object has shape (n_epochs, n_sensors, n_times)
and is therefore an appropriate basis for using MNE-Python's envelope
correlation function to compute power-based connectivity in sensor space. The
long duration of our fixed length epochs, 30 seconds, helps us reduce edge
artifacts and achieve better frequency resolution when filtering must
be applied after epoching.
Let's examine the alpha band. We allow default values for filter parameters
(for more information on filtering, please see tut-filter-resample).
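A quick sanity check on the array we are about to correlate (a sketch; the exact epoch count depends on the cropping done earlier):
```python
print(alpha_data.shape)   # (n_epochs, n_sensors, n_times)
```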
End of explanation
corr_matrix = envelope_correlation(alpha_data).get_data()
print(corr_matrix.shape)
Explanation: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is the default return if you use
:meth:mne-connectivity:mne_connectivity.EpochConnectivity.get_data:
End of explanation
first_30 = corr_matrix[0]
last_30 = corr_matrix[-1]
corr_matrices = [first_30, last_30]
color_lims = np.percentile(np.array(corr_matrices), [5, 95])
titles = ['First 30 Seconds', 'Last 30 Seconds']
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.suptitle('Correlation Matrices from First 30 Seconds and Last 30 Seconds')
for ci, corr_matrix in enumerate(corr_matrices):
ax = axes[ci]
mpbl = ax.imshow(corr_matrix, clim=color_lims)
ax.set_xlabel(titles[ci])
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.85, 0.2, 0.025, 0.6])
cbar = fig.colorbar(ax.images[0], cax=cax)
cbar.set_label('Correlation Coefficient')
Explanation: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording:
End of explanation |
11,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Assignment 3
Step1: Goal
The goal is to define with TensorFlow a vanilla recurrent neural network (RNN) model
Step2: Step 1
Initialize input variables of the computational graph
Step3: Step 2
Define the variables of the computational graph
Step4: Step 3
Implement the recursive formula
Step5: Step 4
Perplexity loss is implemented as
Step6: Step 5
Implement the optimization of the loss function.
Hint
Step7: Step 6
Implement the prediction scheme
Step8: Step 7
Run the computational graph with batches of training data.<br>
Predict the sequence of characters starting from the character "h".<br>
Hints | Python Code:
# Import libraries
import tensorflow as tf
import numpy as np
import collections
import os
# Load text data
data = open(os.path.join('datasets', 'text_ass_6.txt'), 'r').read() # must be simple plain text file
print('Text data:',data)
chars = list(set(data))
print('\nSingle characters:',chars)
data_len, vocab_size = len(data), len(chars)
print('\nText data has %d characters, %d unique.' % (data_len, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
print('\nMapping characters to numbers:',char_to_ix)
print('\nMapping numbers to characters:',ix_to_char)
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Assignment 3 : Recurrent Neural Networks
End of explanation
# hyperparameters of RNN
batch_size = 3 # batch size
batch_len = data_len // batch_size # batch length
T = 5 # temporal length
epoch_size = (batch_len - 1) // T # nb of iterations to get one epoch
D = vocab_size # data dimension = nb of unique characters
H = 5*D # size of hidden state, the memory layer
print('data_len=',data_len,' batch_size=',batch_size,' batch_len=',
batch_len,' T=',T,' epoch_size=',epoch_size,' D=',D)
Explanation: Goal
The goal is to define with TensorFlow a vanilla recurrent neural network (RNN) model:
$$
\begin{aligned}
h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\
y_t &= W_y h_t + b_y
\end{aligned}
$$
to predict a sequence of characters. $x_t \in \mathbb{R}^D$ is the input character of the RNN in a dictionary of size $D$. $y_t \in \mathbb{R}^D$ is the predicted character (through a distribution function) by the RNN system. $h_t \in \mathbb{R}^H$ is the memory of the RNN, called hidden state at time $t$. Its dimensionality is arbitrarly chosen to $H$. The variables of the system are $W_h \in \mathbb{R}^{H\times H}$, $W_x \in \mathbb{R}^{H\times D}$, $W_y \in \mathbb{R}^{D\times H}$, $b_h \in \mathbb{R}^D$, and $b_y \in \mathbb{R}^D$. <br>
The number of time steps of the RNN is $T$, that is we will learn a sequence of data of length $T$: $x_t$ for $t=0,...,T-1$.
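Before wiring this up in TensorFlow, a plain NumPy sketch of a single time step may help make the recursion concrete (illustrative only; batches are stored row-wise, matching the shapes of $W_x$, $W_h$, $W_y$ above):
```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, Wy, bh, by):
    # x_t: (batch, D), h_prev: (batch, H)
    h_t = np.tanh(h_prev @ Wh + x_t @ Wx + bh)   # new hidden state
    y_t = h_t @ Wy + by                          # unnormalized character scores
    return h_t, y_t
```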
End of explanation
# input variables of computational graph (CG)
Xin = tf.placeholder(tf.float32, [batch_size,T,D]); #print('Xin=',Xin) # Input
Ytarget = tf.placeholder(tf.int64, [batch_size,T]); #print('Y_=',Y_) # target
hin = tf.placeholder(tf.float32, [batch_size,H]); #print('hin=',hin.get_shape())
Explanation: Step 1
Initialize input variables of the computational graph:<br>
(1) Xin of size batch_size x T x D and type tf.float32. Each input character is encoded on a vector of size D.<br>
(2) Ytarget of size batch_size x T and type tf.int64. Each target character is encoded by a value in {0,...,D-1}.<br>
(3) hin of size batch_size x H and type tf.float32<br>
End of explanation
# Model variables
Wx = tf.Variable(tf.random_normal([D,H], stddev=tf.sqrt(6./tf.to_float(D+H)))); print('Wx=',Wx.get_shape())
Wh = tf.Variable(0.01*np.identity(H, np.float32)); print('Wh=',Wh.get_shape())
Wy = tf.Variable(tf.random_normal([H,D], stddev=tf.sqrt(6./tf.to_float(H+D)))); print('Wy=',Wy.get_shape())
bh = tf.Variable(tf.zeros([H])); print('bh=',bh.get_shape())
by = tf.Variable(tf.zeros([D])); print('by=',by.get_shape())
Explanation: Step 2
Define the variables of the computational graph:<br>
(1) $W_x$ is a random variable of shape D x H with normal distribution of variance $\frac{6}{D+H}$<br>
(2) $W_h$ is an identity matrix multiplies by constant $0.01$<br>
(3) $W_y$ is a random variable of shape H x D with normal distribution of variance $\frac{6}{D+H}$<br>
(4) $b_h$, $b_y$ are zero vectors of size H, and D<br>
End of explanation
# Vanilla RNN implementation
Y = []
ht = hin
for t, xt in enumerate(tf.split(1, T, Xin)):
if batch_size>1:
xt = tf.squeeze(xt); #print('xt=',xt)
else:
xt = tf.squeeze(xt)[None,:]
ht = tf.matmul(ht, Wh); #print('ht1=',ht)
ht += tf.matmul(xt, Wx); #print('ht2=',ht)
ht += bh; #print('ht3=',ht)
ht = tf.tanh(ht); #print('ht4=',ht)
yt = tf.matmul(ht, Wy); #print('yt1=',yt)
yt += by; #print('yt2=',yt)
Y.append(yt)
#print('Y=',Y)
Y = tf.pack(Y);
if batch_size>1:
Y = tf.squeeze(Y);
Y = tf.transpose(Y, [1, 0, 2])
print('Y=',Y.get_shape())
print('Ytarget=',Ytarget.get_shape())
Explanation: Step 3
Implement the recursive formula:
$$
\begin{aligned}
h_t &= \textrm{tanh}(W_h h_{t-1} + W_x x_t + b_h)\
y_t &= W_y h_t + b_y
\end{aligned}
$$
with $h_{t=0}=hin$.<br>
Hints: <br>
(1) You may use functions tf.split(), enumerate(), tf.squeeze(), tf.matmul(), tf.tanh(), tf.transpose(), append(), pack().<br>
(2) You may use a matrix Y of shape batch_size x T x D. We recall that Ytarget should have the shape batch_size x T.<br>
End of explanation
# perplexity
logits = tf.reshape(Y,[batch_size*T,D])
weights = tf.ones([batch_size*T])
cross_entropy_perplexity = tf.nn.seq2seq.sequence_loss_by_example([logits],[Ytarget],[weights])
cross_entropy_perplexity = tf.reduce_sum(cross_entropy_perplexity) / batch_size
loss = cross_entropy_perplexity
Explanation: Step 4
Perplexity loss is implemented as:
End of explanation
# Optimization
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
Explanation: Step 5
Implement the optimization of the loss function.
Hint: You may use function tf.train.GradientDescentOptimizer().
End of explanation
# Predict
idx_pred = tf.placeholder(tf.int64) # input seed
xtp = tf.one_hot(idx_pred,depth=D); #print('xtp1=',xtp.get_shape())
htp = tf.zeros([1,H])
Ypred = []
for t in range(T):
htp = tf.matmul(htp, Wh); #print('htp1=',htp)
htp += tf.matmul(xtp, Wx); #print('htp2=',htp)
htp += bh; #print('htp3=',htp) # (1, 100)
htp = tf.tanh(htp); #print('htp4=',htp) # (1, 100)
ytp = tf.matmul(htp, Wy); #print('ytp1=',ytp)
ytp += by; #print('ytp2=',ytp)
ytp = tf.nn.softmax(ytp); #print('yt1=',ytp)
ytp = tf.squeeze(ytp); #print('yt2=',ytp)
seed_idx = tf.argmax(ytp,dimension=0); #print('seed_idx=',seed_idx)
xtp = tf.one_hot(seed_idx,depth=D)[None,:]; #print('xtp2=',xtp.get_shape())
Ypred.append(seed_idx)
Ypred = tf.convert_to_tensor(Ypred)
# Prepare train data matrix of size "batch_size x batch_len"
data_ix = [char_to_ix[ch] for ch in data[:data_len]]
train_data = np.array(data_ix)
print('original train set shape',train_data.shape)
train_data = np.reshape(train_data[:batch_size*batch_len], [batch_size,batch_len])
print('pre-processed train set shape',train_data.shape)
# The following function tansforms an integer value d between {0,...,D-1} into an one hot vector, that is a
# vector of dimension D x 1 which has value 1 for index d-1, and 0 otherwise
from scipy.sparse import coo_matrix
def convert_to_one_hot(a,max_val=None):
N = a.size
data = np.ones(N,dtype=int)
sparse_out = coo_matrix((data,(np.arange(N),a.ravel())), shape=(N,max_val))
return np.array(sparse_out.todense())
Explanation: Step 6
Implement the prediction scheme: from an input character e.g. "h" then the RNN should predict "ello". <br>
Hints: <br>
(1) You should use the learned RNN.<br>
(2) You may use functions tf.one_hot(), tf.nn.softmax(), tf.argmax().
End of explanation
# Run CG
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
h0 = np.zeros([batch_size,H])
indices = collections.deque()
costs = 0.0; epoch_iters = 0
for n in range(50):
# Batch extraction
if len(indices) < 1:
indices.extend(range(epoch_size))
costs = 0.0; epoch_iters = 0
i = indices.popleft()
batch_x = train_data[:,i*T:(i+1)*T]
batch_x = convert_to_one_hot(batch_x,D); batch_x = np.reshape(batch_x,[batch_size,T,D])
batch_y = train_data[:,i*T+1:(i+1)*T+1]
#print(batch_x.shape,batch_y.shape)
# Train
idx = char_to_ix['h'];
loss_value,_,Ypredicted = sess.run([loss,train_step,Ypred], feed_dict={Xin: batch_x, Ytarget: batch_y, hin: h0, idx_pred: [idx]})
# Perplexity
costs += loss_value
epoch_iters += T
perplexity = np.exp(costs/epoch_iters)
if not n%1:
idx_char = Ypredicted
txt = ''.join(ix_to_char[ix] for ix in list(idx_char))
print('\nn=',n,', perplexity value=',perplexity)
print('starting char=',ix_to_char[idx], ', predicted sequences=',txt)
sess.close()
Explanation: Step 7
Run the computational graph with batches of training data.<br>
Predict the sequence of characters starting from the character "h".<br>
Hints:<br>
(1) Initial memory is $h_{t=0}$ is 0.<br>
(2) Run the computational graph to optimize the perplexity loss, and to predict the the sequence of characters starting from the character "h".<br>
End of explanation |
11,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data
Step1: Load dataframes
Step2: 1. What are most popular categories?
Step3: 2. What are the most common restaurant chains?
Step4: 2a. Correlations in chain properties
higher rating --> more reviews, fewer branches
more branches --> fewer reviews
'u' relationship between cost and review count
Step6: 2b. Number of franchises vs rating (bokeh)
Step7: 3. Distributions of ratings, review counts, and costs
3a. Distributions
Step8: 3b. Correlations (histograms) | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import glob
import os
import scipy as sp
from scipy import stats
from tools.plt import color2d #from the 'srcole/tools' repo
from matplotlib import cm
Explanation: Data: 1000 restaurants for each city
Cuisines: most popular (bar chart)
Chains: Rating and # franchises (bokeh)
Distributions of features
2D distributions of features
Cuisines: price vs rating (bokeh... see next notebook)
TODO
* Scatter plot: average rating and cost for each cuisine, cuisine at least N samples
* Which cities have the greatest concentration of mexican, ethiopian, etc.
* Determine restaurants with single category and then look for relationships between category and price, rating, review count, etc.
* Explore data using a bokeh plot in http://localhost:8889/notebooks/examples/app/movies/Untitled.ipynb
* This is a plot I could have on my webpage (not a dashboard)!!
* What are the most popular and least popular?
* Which cities are nicest, best restaurants? (may be sampling bias. maybe should use sort by alphabet?)
* Which cities are cheapest?
* plot poke on maps
Notes
* Per capita analysis may not be valid because yelp searches around a city, not just where the population was counted
* e.g. South San Francisco search on Yelp likely brings up restaurants outside the range of population counted
* This could be assuaged if I instead delineate restaurants by the city their address says
* Categories - might be overlapping
* I should look through top 100 and manually collapse some (deli and sandwich; japanese and sushi). One is a subset of another
* Bokeh sometimes stops working in its exported HTML when you make multiple plots in one notebook
End of explanation
# Load cities info
df_cities = pd.read_csv('/gh/data2/yelp/city_pop.csv', index_col=0)
df_cities.head()
# Load restaurants
df_restaurants = pd.read_csv('/gh/data2/yelp/food_by_city/df_restaurants.csv', index_col=0)
df_restaurants.head()
# Load categories by restaurant
df_categories = pd.read_csv('/gh/data2/yelp/food_by_city/df_categories.csv', index_col=0)
df_categories.head()
Explanation: Load dataframes
End of explanation
# Manually concatenate categories with at least 500 counts
# Find categories D and V such that category 'D' should be counted as vategory 'V'
category_subsets = {'delis': 'sandwiches',
'sushi': 'japanese',
'icecream': 'desserts',
'cafes': 'coffee',
'sportsbars': 'bars',
'hotdog': 'hotdogs',
'wine_bars': 'bars',
'pubs': 'bars',
'cocktailbars': 'bars',
'beerbar': 'bars',
'tacos': 'mexican',
'gastropubs': 'bars',
'ramen': 'japanese',
'chocolate': 'desserts',
'dimsum': 'chinese',
'cantonese': 'chinese',
'szechuan': 'chinese',
'coffeeroasteries': 'coffee',
'hookah_bars': 'bars',
'irish_pubs': 'bars'}
for k in category_subsets.keys():
df_categories[category_subsets[k]] = np.logical_or(df_categories[k], df_categories[category_subsets[k]])
# Remove some categories
category_remove = ['hotdog', 'cafes']
for k in category_remove:
df_categories.drop(k, axis=1, inplace=True)
# Top categories
N = 20
category_counts = df_categories.sum().sort_values(ascending=False)
top_N_categories = list(category_counts.head(N).keys())
top_N_categories_counts = category_counts.head(N).values
category_counts.head(N)
# Bar chart
plt.figure(figsize=(12,5))
plt.bar(np.arange(N), top_N_categories_counts / len(df_restaurants), color='k', ecolor='.5')
plt.xticks(np.arange(N), top_N_categories)
plt.ylabel('Fraction of restaurants', size=20)
plt.xlabel('Restaurant category', size=20)
plt.xticks(size=15, rotation='vertical')
plt.yticks(size=15);
Explanation: 1. What are most popular categories?
End of explanation
gb = df_restaurants.groupby('name')
df_chains = gb.mean()[['rating', 'review_count', 'cost']]
df_chains['count'] = gb.size()
df_chains.sort_values('count', ascending=False, inplace=True)
df_chains.head(10)
Explanation: 2. What are the most common restaurant chains?
End of explanation
# Only consider restaurants with at least 50 locations
min_count = 50
df_temp = df_chains[df_chains['count'] >= min_count]
plt.figure(figsize=(8,12))
plt_num = 1
for i, k1 in enumerate(df_temp.keys()):
for j, k2 in enumerate(df_temp.keys()[i+1:]):
if k1 in ['review_count', 'count']:
if k2 in ['review_count', 'count']:
plot_f = plt.loglog
else:
plot_f = plt.semilogx
else:
if k2 in ['review_count', 'count']:
plot_f = plt.semilogy
else:
plot_f = plt.plot
plt.subplot(3, 2, plt_num)
plot_f(df_temp[k1], df_temp[k2], 'k.')
plt.xlabel(k1)
plt.ylabel(k2)
plt_num += 1
r, p = stats.spearmanr(df_temp[k1], df_temp[k2])
plt.title(r)
plt.tight_layout()
Explanation: 2a. Correlations in chain properties
higher rating --> more reviews, fewer branches
more branches --> fewer reviews
'u' relationship between cost and review count: lots of reviews to branches that are consistently 1, or consistently 2, but not those that are inconsistent
true for min_count = 40, 50, 100, 200,
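A sketch of the numbers behind those claims, re-using df_chains from above and sweeping the cutoff:
```python
for min_count in [40, 50, 100, 200]:
    df_tmp = df_chains[df_chains['count'] >= min_count]
    print(min_count,
          stats.spearmanr(df_tmp['rating'], df_tmp['review_count'])[0],
          stats.spearmanr(df_tmp['rating'], df_tmp['count'])[0],
          stats.spearmanr(df_tmp['count'], df_tmp['review_count'])[0])
```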
End of explanation
from bokeh.io import output_notebook
from bokeh.layouts import row, widgetbox
from bokeh.models import CustomJS, Slider, Legend, HoverTool
from bokeh.plotting import figure, output_file, show, ColumnDataSource
output_notebook()
# Slider variables
min_N_franchises = 100
# Determine dataframe sources
df_chains2 = df_chains[df_chains['count'] > 10].reset_index()
df_temp = df_chains2[df_chains2['count'] >= min_N_franchises]
# Create data source for plotting and Slider callback
source1 = ColumnDataSource(df_temp, id='source1')
source2 = ColumnDataSource(df_chains2, id='source2')
hover = HoverTool(tooltips=[
("Name", "@name"),
("Avg Stars", "@rating"),
("# locations", "@count")])
# Make initial scatter figure of average rating vs number of locations
plot = figure(plot_width=400, plot_height=400,
x_axis_label='Number of locations',
y_axis_label='Average rating',
x_axis_type="log", tools=[hover])
plot.scatter('count', 'rating', source=source1, line_width=3, line_alpha=0.6, line_color='black')
# Declare how to update plot on slider change
callback = CustomJS(args=dict(s1=source1, s2=source2), code="""
    var d1 = s1.get("data");
    var d2 = s2.get("data");
    var N = N.value;
    d1["count"] = [];
    d1["rating"] = [];
    for(i=0;i <=d2["count"].length; i++){
        if (d2["count"][i] >= N) {
            d1["count"].push(d2["count"][i]);
            d1["rating"].push(d2["rating"][i]);
            d1["name"].push(d2["name"][i]);
        }
    }
    s1.change.emit();
    """)
N_slider = Slider(start=10, end=1000, value=min_N_franchises, step=10,
title="minimum number of franchises", callback=callback)
callback.args["N"] = N_slider
# Define layout of plot and sliders
layout = row(plot, widgetbox(N_slider))
# Output and show
output_file("/gh/srcole.github.io/assets/misc/yelp_bokeh.html", title="Yelp WIP")
show(layout)
Explanation: 2b. Number of franchises vs rating (bokeh)
End of explanation
N_bins_per_factor10 = 8
bins_by_key = {'rating': np.arange(0.75, 5.75, .5),
'review_count': np.logspace(1, 5, num=N_bins_per_factor10*4+1),
'cost': np.arange(.5, 5, 1)}
log_by_key = {'rating': False,
'review_count': True,
'cost': False}
plt.figure(figsize=(12, 4))
for i, k in enumerate(bins_by_key.keys()):
weights = np.ones_like(df_restaurants[k].values)/float(len(df_restaurants[k].values))
plt.subplot(1, 3, i+1)
plt.hist(df_restaurants[k].values, bins_by_key[k], log=log_by_key[k],
color='k', edgecolor='.5', weights=weights)
if k == 'review_count':
plt.semilogx(1,1)
plt.xlim((10, 40000))
elif i == 0:
plt.ylabel('Probability')
plt.xlabel(k)
plt.tight_layout()
Explanation: 3. Distributions of ratings, review counts, and costs
3a. Distributions
End of explanation
# Prepare histogram analysis
gb_cost = df_restaurants.groupby('cost').groups
gb_rating = df_restaurants.groupby('rating').groups
# Remove 0 from gb_rating
gb_rating.pop(0.0)
N_bins_cost = len(gb_cost.keys())
N_bins_count = len(bins_by_key['review_count']) - 1
N_bins_rate = len(bins_by_key['rating']) - 1
# Hist: review count and rating as fn of cost
hist_count_by_cost = np.zeros((N_bins_cost, N_bins_count))
hist_rate_by_cost = np.zeros((N_bins_cost, N_bins_rate))
points_count_by_cost = np.zeros((N_bins_cost, 3))
points_rate_by_cost = np.zeros((N_bins_cost, 3))
for i, k in enumerate(gb_cost.keys()):
# Make histogram of review count as fn of cost
x = df_restaurants.loc[gb_cost[k]]['review_count'].values
hist_temp, _ = np.histogram(x, bins=bins_by_key['review_count'])
# Make each cost sum to 1
hist_count_by_cost[i] = hist_temp / np.sum(hist_temp)
# Compute percentiles
points_count_by_cost[i,0] = np.mean(x)
points_count_by_cost[i,1] = np.std(x)
points_count_by_cost[i,2] = np.min([np.std(x), 5-np.mean(x)])
# Repeat for rating
x = df_restaurants.loc[gb_cost[k]]['rating'].values
hist_temp, _ = np.histogram(x, bins=bins_by_key['rating'])
hist_rate_by_cost[i] = hist_temp / np.sum(hist_temp)
points_rate_by_cost[i,0] = np.mean(x)
points_rate_by_cost[i,1] = np.std(x)
points_rate_by_cost[i,2] = np.min([np.std(x), 5-np.mean(x)])
# Make histograms of review count as fn of rating
hist_count_by_rate = np.zeros((N_bins_rate, N_bins_count))
points_count_by_rate = np.zeros((N_bins_rate, 3))
for i, k in enumerate(gb_rating.keys()):
# Make histogram of review count as fn of cost
x = df_restaurants.loc[gb_rating[k]]['review_count'].values
hist_temp, _ = np.histogram(x, bins=bins_by_key['review_count'])
# Make each cost sum to 1
hist_count_by_rate[i] = hist_temp / np.sum(hist_temp)
points_count_by_rate[i,0] = np.mean(x)
points_count_by_rate[i,1] = np.std(x)
points_count_by_rate[i,2] = np.min([np.std(x), 5-np.mean(x)])
# Make a 2d colorplot
plt.figure(figsize=(10,4))
color2d(hist_rate_by_cost, cmap=cm.viridis,
clim=[0,.4], cticks = np.arange(0,.41,.05), color_label='Probability',
plot_xlabel='Rating', plot_ylabel='Cost ($)',
plot_xticks_locs=range(N_bins_rate), plot_xticks_labels=gb_rating.keys(),
plot_yticks_locs=range(N_bins_cost), plot_yticks_labels=gb_cost.keys(),
interpolation='none', fontsize_minor=14, fontsize_major=19)
# On top, plot the mean and st. dev.
# plt.errorbar(points_rate_by_cost[:,0] / , np.arange(N_bins_cost), fmt='.', color='w', ms=10,
# xerr=points_rate_by_cost[:,1:].T, ecolor='w', alpha=.5)
# Make a 2d colorplot
xbins_label = np.arange(0,N_bins_per_factor10*2+1, N_bins_per_factor10)
plt.figure(figsize=(10,4))
color2d(hist_count_by_cost, cmap=cm.viridis,
clim=[0,.2], cticks = np.arange(0,.21,.05), color_label='Probability',
plot_xlabel='Number of reviews', plot_ylabel='Cost ($)',
plot_xticks_locs=xbins_label, plot_xticks_labels=bins_by_key['review_count'][xbins_label].astype(int),
plot_yticks_locs=range(N_bins_cost), plot_yticks_labels=gb_cost.keys(),
interpolation='none', fontsize_minor=14, fontsize_major=19)
plt.xlim((-.5,N_bins_per_factor10*2 + .5))
# Make a 2d colorplot
xbins_label = np.arange(0,N_bins_per_factor10*2+1, N_bins_per_factor10)
plt.figure(figsize=(10,6))
color2d(hist_count_by_rate, cmap=cm.viridis,
clim=[0,.4], cticks = np.arange(0,.41,.1), color_label='Probability',
plot_xlabel='Number of reviews', plot_ylabel='Rating',
plot_xticks_locs=xbins_label, plot_xticks_labels=bins_by_key['review_count'][xbins_label].astype(int),
plot_yticks_locs=range(N_bins_rate), plot_yticks_labels=gb_rating.keys(),
interpolation='none', fontsize_minor=14, fontsize_major=19)
plt.xlim((-.5,N_bins_per_factor10*2 + .5))
Explanation: 3b. Correlations (histograms)
End of explanation |
11,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Catch that asteroid!
Step1: First, we need to increase the timeout time to allow the download of data to occur properly
Step2: Two problems
Step3: We first propagate
Step4: And now we have to convert to another reference frame, using http://docs.astropy.org/en/stable/coordinates/.
Step5: The NASA servers give the orbital elements of the asteroids in an Heliocentric Ecliptic frame. Fortunately, it is already defined in Astropy
Step6: Now we just have to convert to ICRS, which is the "standard" reference in which poliastro works
Step7: Let us compute the distance between Florence and the Earth
Step9: <div class="alert alert-success">This value is consistent with what ESA says! $7\,060\,160$ km</div>
Step10: And now we can plot!
Step11: The difference between doing it well and doing it wrong is clearly visible
Step12: And now let's do something more complicated
Step13: Notice that the ephemerides of the Moon is also given in ICRS, and therefore yields a weird hyperbolic orbit!
Step14: So we have to convert again.
Step15: And finally, we plot the Moon
Step16: And now for the final plot | Python Code:
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.utils.data import conf
conf.dataurl
conf.remote_timeout
Explanation: Catch that asteroid!
End of explanation
conf.remote_timeout = 10000
from astropy.coordinates import solar_system_ephemeris
solar_system_ephemeris.set("jpl")
from poliastro.bodies import *
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot
EPOCH = Time("2017-09-01 12:05:50", scale="tdb")
earth = Orbit.from_body_ephem(Earth, EPOCH)
earth
plot(earth, label=Earth)
from poliastro.neos import neows
florence = neows.orbit_from_name("Florence")
florence
Explanation: First, we need to increase the timeout time to allow the download of data to occur properly
End of explanation
florence.epoch
florence.epoch.iso
florence.inc
Explanation: Two problems: the epoch is not the one we desire, and the inclination is with respect to the ecliptic!
End of explanation
florence = florence.propagate(EPOCH)
florence.epoch.tdb.iso
Explanation: We first propagate:
End of explanation
from astropy.coordinates import (
ICRS, GCRS,
CartesianRepresentation, CartesianDifferential
)
from poliastro.frames import HeliocentricEclipticJ2000
Explanation: And now we have to convert to another reference frame, using http://docs.astropy.org/en/stable/coordinates/.
End of explanation
florence_heclip = HeliocentricEclipticJ2000(
x=florence.r[0], y=florence.r[1], z=florence.r[2],
v_x=florence.v[0], v_y=florence.v[1], v_z=florence.v[2],
representation=CartesianRepresentation,
differential_type=CartesianDifferential,
obstime=EPOCH
)
florence_heclip
Explanation: The NASA servers give the orbital elements of the asteroids in an Heliocentric Ecliptic frame. Fortunately, it is already defined in Astropy:
End of explanation
florence_icrs_trans = florence_heclip.transform_to(ICRS)
florence_icrs_trans.representation = CartesianRepresentation
florence_icrs_trans
florence_icrs = Orbit.from_vectors(
Sun,
r=[florence_icrs_trans.x, florence_icrs_trans.y, florence_icrs_trans.z] * u.km,
v=[florence_icrs_trans.v_x, florence_icrs_trans.v_y, florence_icrs_trans.v_z] * (u.km / u.s),
epoch=florence.epoch
)
florence_icrs
florence_icrs.rv()
Explanation: Now we just have to convert to ICRS, which is the "standard" reference in which poliastro works:
End of explanation
from poliastro.util import norm
norm(florence_icrs.r - earth.r) - Earth.R
Explanation: Let us compute the distance between Florence and the Earth:
End of explanation
from IPython.display import HTML
HTML("""
<blockquote class="twitter-tweet" data-lang="en"><p lang="es" dir="ltr">La <a href="https://twitter.com/esa_es">@esa_es</a> ha preparado un resumen del asteroide <a href="https://twitter.com/hashtag/Florence?src=hash">#Florence</a> 😍 <a href="https://t.co/Sk1lb7Kz0j">pic.twitter.com/Sk1lb7Kz0j</a></p>— AeroPython (@AeroPython) <a href="https://twitter.com/AeroPython/status/903197147914543105">August 31, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
""")
Explanation: <div class="alert alert-success">This value is consistent with what ESA says! $7\,060\,160$ km</div>
End of explanation
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(Orbit.from_body_ephem(Mars, EPOCH))
frame.plot(Orbit.from_body_ephem(Venus, EPOCH))
frame.plot(Orbit.from_body_ephem(Mercury, EPOCH))
frame.plot(florence_icrs, label="Florence")
Explanation: And now we can plot!
End of explanation
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(florence, label="Florence (Ecliptic)")
frame.plot(florence_icrs, label="Florence (ICRS)")
Explanation: The difference between doing it well and doing it wrong is clearly visible:
End of explanation
florence_gcrs_trans = florence_heclip.transform_to(GCRS(obstime=EPOCH))
florence_gcrs_trans.representation = CartesianRepresentation
florence_gcrs_trans
florence_hyper = Orbit.from_vectors(
Earth,
r=[florence_gcrs_trans.x, florence_gcrs_trans.y, florence_gcrs_trans.z] * u.km,
v=[florence_gcrs_trans.v_x, florence_gcrs_trans.v_y, florence_gcrs_trans.v_z] * (u.km / u.s),
epoch=EPOCH
)
florence_hyper
Explanation: And now let's do something more complicated: express our orbit with respect to the Earth! For that, we will use GCRS, with care of setting the correct observation time:
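As a quick cross-check (a small sketch), the geocentric distance implied by this Earth-centred orbit should agree with the Earth–Florence separation computed earlier:
```python
norm(florence_hyper.r) - Earth.R
```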
End of explanation
moon = Orbit.from_body_ephem(Moon, EPOCH)
moon
moon.a
moon.ecc
Explanation: Notice that the ephemeris of the Moon is also given in ICRS, and therefore yields a weird hyperbolic orbit!
End of explanation
moon_icrs = ICRS(
x=moon.r[0], y=moon.r[1], z=moon.r[2],
v_x=moon.v[0], v_y=moon.v[1], v_z=moon.v[2],
representation=CartesianRepresentation,
differential_type=CartesianDifferential
)
moon_icrs
moon_gcrs = moon_icrs.transform_to(GCRS(obstime=EPOCH))
moon_gcrs.representation = CartesianRepresentation
moon_gcrs
moon = Orbit.from_vectors(
Earth,
[moon_gcrs.x, moon_gcrs.y, moon_gcrs.z] * u.km,
[moon_gcrs.v_x, moon_gcrs.v_y, moon_gcrs.v_z] * (u.km / u.s),
epoch=EPOCH
)
moon
Explanation: So we have to convert again.
End of explanation
plot(moon, label=Moon)
plt.gcf().autofmt_xdate()
Explanation: And finally, we plot the Moon:
End of explanation
frame = OrbitPlotter()
# This first plot sets the frame
frame.plot(florence_hyper, label="Florence")
# And then we add the Moon
frame.plot(moon, label=Moon)
plt.xlim(-1000000, 8000000)
plt.ylim(-5000000, 5000000)
plt.gcf().autofmt_xdate()
Explanation: And now for the final plot:
End of explanation |
11,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WaveSolver - slider
M. Lamoureux May 31, 2016. Pacific Institute for the Mathematical Sciences
Updated June 2017, to remove all reference to Bokeh, cuz it doesn't work now. (deprecated = broken)
(Bokeh = Brokehn)
This code does some numerical simulation of wave propagation in 1D, suitable for demoing in a class.
This is a rewrite of some of my old Julia code, translated to run in Python. My belief is that Python is a more mature system and I will likely be more productive by sticking to Python instead of Julia. Let's see if this is true.
The code is based on formulas from the textbook Linear Partial Differential Equations for Scientists and Engineers, by Myint-U and Debnath. It's a good book -- you should buy it. Some extracts are also available online.
In this Jupyter notebook, animation has been added, using GUI sliders to advance the waveform in time.
This was also written as a test of how far we can push Bokeh and the Jupyter Hub in making some calculations and graphics. There is some dangerous code at the bottom, that can bring down our Jupyter Hub, probably because it eats up too much memory. Interesting for debugging purposes, but you have been warned.
Introduction to the 1D wave equation.
The basic idea is to represent the vibrations of a (horizontal) string under tension by a function $u(x,t)$ where $x$ is the horizontal displacement along the string, $t$ is the time parameter, and $y=u(x,t)$ indicates the (vertical) displacement of the string at point $x$ along the string, at time $t$
A typical wave would look something like
$$u(x,t) = \sin(x-ct)$$
which represents a sinusoidal wave moving to the right at velocity $c$.
From Newton's law of motion (mass times acceleration equals force), we can derive the wave equation for the string in the form
$$ \rho \cdot u_{tt} = k \cdot u_{xx}$$
where $\rho$ is a mass density, $u_{tt}$ is an acceleration (the second derivative w.r.t. time), $k$ is the modulus of elasticity and $u_{xx}$ is a measure of curvature (the second derivative w.r.t. $x$), causing the bending force.
Including boundary and initial conditions, while setting the parameter $c^2 = k/\rho$, we obtain the usual 1D wave equation on an interval as this
Step1: We also include some code from SciPy for numerical calculations
Step2: And we include some code to create the graphical user interface -- namely, sliders. You can read about them here
Step3: Explicit method of solution in finite differences
The finite difference method is probably the most common numerical method for solving PDEs. The derivatives are approximated by Newton difference ratios, and you step through in time to progress the solution from $t=0$ to some ending time.
The following defines a function that solves the wave equation, using an explicit finite difference method. We follow the notes in the referenced book by Myint-U and Debnath.
The differencing in spatial variable $x$ is done by a convolution, as this will be fast.
We print out the value of the CFL constant as a sanity check. The method is stable and convergent provided the CFL value is less than one. (This means $dt$ needs to be small enough.) See the book for details.
The input parameters are velocity squared $c2$, spatial step size $dx$, temporal step size $dt$, number of time steps $t$_$len$, initial position $u0$ and initial velocity $u1$. $u0$ and $u1$ are arrays of some length $N$, which the code will use to deduce everything it needs to know.
I tend to think of these as dimensionless variables, but you can use real physical values if you like, so long as you are consistent. For instance $dx$ in meters, $dt$ in seconds, and $c2$ in (meters/second) squared.
Step4: Let's try a real wave equation solution.
We start with a simple triangle waveform.
Step5: Now we call up our wave equation solver, using the parameters above
Step6: We can plot the initial waveform, just to see what it looks like.
Step7: And the next cell sets up a slider which controls the above graphs (it moves time along)
Step8: Derivative initial condition test
Step9: We can use the same update function, since nothing has changed.
Step10: Implicit method
Here we try an implicit method for solving the wave equation. Again from Myint-U and Debnath's book, in Section 14.5, part (B) on Hyperbolic equations.
We need to use scipy libraries as we have to solve a system of linear equations. In fact the system is tridiagonal and Toeplitz, so this should be fast. I see how to use Toeplitz in scipy, but I don't know how to tell it that the system is only tridiagonal. It should be possible to speed this up.
Step11: Derivative initial condition
Step12: D'Alembert's solution
Since the velocity $c$ is constant in these examples, we can get the exact solution via D'Alembert. The general solution will be of the form
$$u(x,t) = \phi(x+ct) + \psi(x-ct). $$
Initial conditions tell use that
$$u(x,0) = \phi(x) + \psi(x) = f(x), $$ and
$$u_t(x,0) = c\phi'(x) - c\psi'(x) = g(x). $$
With $G(x)$ the antiderivative of $g(x)$ with appropriate zero at zero, we get a 2x2 system
$$\phi(x) + \psi(x) = f(x), \ c(\phi(x) - \psi(x)) = G(x),$$
which we solve as
$$\phi(x) = \frac{1}{2}\left( f(x) + \frac{1}{c} G(x) \right), \
\psi(x) = \frac{1}{2}\left( f(x) - \frac{1}{c} G(x) \right).$$
Now $f(x)$ is given as the argument $u0$ in the code. $G(x)$ can be computed using scipy. The arguments $x+ct$ and $x-ct$ must be converted to integer indices. They have to wrap around. And with the zero boundary condition, we need to wrap around with a negative reflection.
There is the messy question as to whether we require $u(0,t)$ to actually equal zero, or do we require it to be zero one index "to the left" of x=0. Let's not think too much about that just yet.
Step13: Derivative initial condition
Step14: Comparing solutions
In principle, we want these different solution methods to be directly comparable.
So let's try this out, by computing the difference of two solution.
Here we compare the explicit f.d. method with d'Alambert's method.
Step15: A moving wavefront
Let's try an actual wave. We want something like
$$u(x,t) = \exp(-(x -x_a-ct)^2/w^2), $$
where $x_a$ is the center of the Gaussian at $t=0$, $w$ is the width of the Gaussian, $c$ is the velocity of the wave.
This gives
$$u_0(x) = \exp(-(x -x_a)^2/w^2) \
u_1(x) = \frac{2c(x-x_a)}{w^2}\exp(-(x -x_a)^2/w^2) = \frac{2c(x-x_a)}{w^2}u_0(x).$$ | Python Code:
%matplotlib inline
import numpy as np
from matplotlib.pyplot import *
Explanation: WaveSolver - slider
M. Lamoureux May 31, 2016. Pacific Institute for the Mathematical Sciences
Updated June 2017, to remove all reference to Bokeh, cuz it doesn't work now. (deprecated = broken)
(Bokeh = Brokehn)
This code does some numerical simulation of wave propagation in 1D, suitable for demoing in a class.
This is a rewrite of some of my old Julia code, translated to run in Python. My belief is that Python is a more mature system and I will likely be more productive by sticking to Python instead of Julia. Let's see if this is true.
The code is based on formulas from the textbook Linear Partial Differential Equations for Scientists and Engineers, by Myint-U and Debnath. It's a good book -- you should buy it. Some extracts are also available online.
In this Jupyter notebook, animation has been added, using GUI sliders to advance the waveform in time.
This was also written as a test of how far we can push Bokeh and the Jupyter Hub in making some calculations and graphics. There is some dangerous code at the bottom, that can bring down our Jupyter Hub, probably because it eats up too much memory. Interesting for debugging purposes, but you have been warned.
Introduction to the 1D wave equation.
The basic idea is to represent the vibrations of a (horizontal) string under tension by a function $u(x,t)$ where $x$ is the horizontal displacement along the string, $t$ is the time parameter, and $y=u(x,t)$ indicates the (vertical) displacement of the string at point $x$ along the string, at time $t$
A typical wave would look something like
$$u(x,t) = \sin(x-ct)$$
which represents a sinusoidal wave moving to the right at velocity $c$.
From Newton's law of motion (mass times acceleration equals force), we can derive the wave equation for the string in the form
$$ \rho \cdot u_{tt} = k \cdot u_{xx}$$
where $\rho$ is a mass density, $u_{tt}$ is an acceleration (the second derivative w.r.t. time), $k$ is the modulus of elasticity and $u_{xx}$ is a measure of curvature (the second derivative w.r.t. $x$), causing the bending force.
Including boundary and initial conditions, while setting the parameter $c^2 = k/\rho$, we obtain the usual 1D wave equation on an interval as this:
$$u_{tt} = c^2 u_{xx} \mbox{ on interval } [x_0,x_1]$$
subject to boundary conditions
$$u(x_0,t) = u(x_1, t) = 0 \mbox{ for all $t$. }$$
and initial conditions
$$u(x,0) = u_0(x), \quad u_t(x,t) = u_1(x) \mbox{ on interval } [x_0,x_1].$$
Let's write some PDE solvers for this 1D wave equation.
Start with some simple solvers, then get more complex.
The function to solve the PDE needs to know $c^2, u_0, u_1$, the sampling intervals $dx$ and $dt$, and the number of time steps to execute. Everything else can be inferred from those inputs. The output should be a 2D array, indexed in x and t steps.
Maybe we could actually output the solution $u(x,t)$ as well as the vector of indices for $x$ and $t$.
Software tools
We import some tools for numerical work (NumPy) and plotting (Matplotlib), using Matplotlib because Bokeh seems to be broken. At least, it broke my code with deprecations.
End of explanation
from scipy.linalg import solve_toeplitz # a matrix equation solver
from scipy.integrate import cumtrapz # a numerical integrator, using trapezoid rule
Explanation: We also include some code from SciPy for numerical calculations
End of explanation
from ipywidgets import interact
Explanation: And we include some code to create the graphical user interface -- namely, sliders. You can read about them here:
http://bokeh.pydata.org/en/0.10.0/_images/notebook_interactors.png
End of explanation
# Based on Section 14.4 in Myint-U and Debnath's book
def w_solve1(c2,dx,dt,t_len,u0,u1):
x_len = np.size(u0) # the length of u0 implicitly defines the num of points in x direction
u = np.zeros((x_len,t_len),order='F') # output array initialized to zero
e2 = c2*dt*dt/(dx*dx) # Courant parameter squared (test for convergence!)
print("CFL value is ",np.sqrt(e2))
kern = np.array([e2, 2*(1-e2), e2]) # the convolution kernel we need for laplacian solver
u[:,0] = u0 # t=0 initial condition
u[:,1] = np.convolve(u0,kern/2)[1:x_len+1] + dt*u1 # t=0 derivative condition, eqn 14.4.6
for j in range(2,t_len):
u[:,j] = np.convolve(u[:,j-1],kern)[1:x_len+1] - u[:,j-2] # eqn 14.4.3
# let's produce the x and t vectors, in case we need them
x = np.linspace(0,dx*x_len,x_len,endpoint=False)
t = np.linspace(0,dt*t_len,t_len,endpoint=False)
return u,x,t
Explanation: Explicit method of solution in finite differences
The finite difference method is probably the most common numerical method for solving PDEs. The derivatives are approximated by Newton difference ratios, and you step through in time to progress the solution from $t=0$ to some ending time.
The following defines a function that solves the wave equation, using an explicit finite difference method. We follow the notes in the referenced book by Myint-U and Debnath.
The differencing in spatial variable $x$ is done by a convolution, as this will be fast.
We print out the value of the CFL constant as a sanity check. The method is stable and convergent provided the CFL value is less than one. (This means $dt$ needs to be small enough.) See the book for details.
The input parameters are velocity squared $c2$, spatial step size $dx$, temporal step size $dt$, number of time steps $t$_$len$, initial position $u0$ and initial velocity $u1$. $u0$ and $u1$ are arrays of some length $N$, which the code will use to deduce everything it needs to know.
I tend to think of these as dimensionless variables, but you can use real physical values if you like, so long as you are consistent. For instance $dx$ in meters, $dt$ in seconds, and $c2$ in (meters/second) squared.
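A quick back-of-the-envelope check of that stability condition, using the parameter values chosen a little further below (a sketch):
```python
import numpy as np
c2, dx, dt = 0.5, 1.0/1000, 1.0/1000
print(np.sqrt(c2) * dt / dx)   # about 0.707, comfortably below 1
```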
End of explanation
x_len = 1000
t_len = 1000
dx = 1./x_len
dt = 1./t_len
x = np.linspace(0,1,x_len)
t = np.linspace(0,1,t_len)
triangle = np.maximum(0.,.1-np.absolute(x-.4))
Explanation: Let's try a real wave equation solution.
We start with a simple triangle waveform.
End of explanation
# Here we solve the wave equation, with initial position $u(x,0)$ set to the triangle waveform
(u,x,t)=w_solve1(.5,dx,dt,t_len,triangle,0*triangle)
Explanation: Now we call up our wave equation solver, using the parameters above
End of explanation
plot(x,u[:,0])
def update(k=0):
plot(x,u[:,k])
show()
Explanation: We can plot the inital waveform, just to see what it looks like.
End of explanation
# This runs an animation, controlled by a slider which advances time
interact(update,k=(0,t_len-1))
Explanation: And the next cell sets up a slider which controls the above graphs (it moves time along)
End of explanation
# We try again, but this time with the $u_t$ initial condition equal to the triangle impulse
(u,x,t)=w_solve1(.5,dx,dt,3*t_len,0*triangle,1*triangle)
Explanation: Derivative initial condition test
End of explanation
interact(update,k=(0,3*t_len-1))
Explanation: We can use the same update function, since nothing has changed.
End of explanation
# Based on Section 14.5 (B) in Myint-U and Debnath's book
def w_solve2(c2,dx,dt,t_len,u0,u1):
x_len = np.size(u0) # the length of u0 implicitly defines the num of points in x direction
u = np.zeros((x_len,t_len),order='F') # output array initialized to zero
e2 = c2*dt*dt/(dx*dx) # Courant parameter squared (test for convergence!)
print("CFL value is ",np.sqrt(e2))
kern = np.array([e2, 2*(1-e2), e2]) # the convolution kernel we need for laplacian solver
u[:,0] = u0 # t=0 initial condition
u[:,1] = np.convolve(u0,kern/2)[1:x_len+1] + dt*u1 # t=0 derivative condition, eqn 14.4.6
    # Note: the above is a cheat; we are using the explicit method to find u[:,1]. Ideally this would be done implicitly as well.
kern2 = np.array([e2, -2*(1+e2), e2]) # the convolution kernel we need for implicit solver. It is different.
toepk = np.zeros(x_len) # this will hold the entries for the tridiagonal Toeplitz matrix
    toepk[0] = 2*(1+e2)
toepk[1] = -e2
for j in range(2,t_len):
rhs = np.convolve(u[:,j-2],kern2)[1:x_len+1] + 4*u[:,j-1] # eqn 14.5.17
u[:,j] = solve_toeplitz(toepk, rhs) # here is a linear system solver (hence an implicit method)
# let's produce the x and t vectors, in case we need them
x = np.linspace(0,dx*x_len,x_len,endpoint=False)
t = np.linspace(0,dt*t_len,t_len,endpoint=False)
return u,x,t
(u,x,t)=w_solve2(.5,dx,dt,t_len,1*triangle,0*triangle)
interact(update,k=(0,3*t_len-1))
Explanation: Implicit method
Here we try an implicit method for solving the wave equation. Again from Myint-U and Debnath's book, in Section 14.5, part (B) on Hyperbolic equations.
We need to use scipy routines as we have to solve a system of linear equations. In fact the system is tridiagonal and Toeplitz, so this should be fast. I see how to use the Toeplitz solver in scipy, but I don't know how to tell it that the system is only tridiagonal. It should be possible to speed this up.
End of explanation
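As a possible speed-up exploiting the tridiagonal structure mentioned above (this is only a sketch of an alternative, not what w_solve2 actually does), scipy.linalg.solve_banded stores just the three diagonals and ignores the rest of the matrix; the helper name and its arguments diag and off are my own:
# Sketch: solve a constant-coefficient tridiagonal system via banded storage. Illustrative only.
import numpy as np
from scipy.linalg import solve_banded

def solve_tridiag_toeplitz(diag, off, rhs):
    n = rhs.size
    ab = np.zeros((3, n))   # banded storage: row 0 = superdiagonal, 1 = main diagonal, 2 = subdiagonal
    ab[0, 1:] = off
    ab[1, :] = diag
    ab[2, :-1] = off
    return solve_banded((1, 1), ab, rhs)

# With the coefficients from w_solve2 this would be called as
# solve_tridiag_toeplitz(2*(1+e2), -e2, rhs)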
(u,x,t)=w_solve2(.5,dx,dt,3*t_len,0*triangle,1*triangle)
interact(update,k=(0,3*t_len-1))
Explanation: Derivative initial condition
End of explanation
# Based on D'Alembert's solution, as described above
def w_solve3(c2,dx,dt,t_len,u0,u1):
x_len = np.size(u0) # the length of u0 implicitly defines the num of points in x direction
u = np.zeros((x_len,t_len),order='F') # output array initialized to zero
c = np.sqrt(c2) # the actual velocity parameter is needed
f = u0 # use notation from above notes
G = cumtrapz(u1,dx=dx,initial=0) # the antiderivative, using cumulative trapezoidal rule
f2 = np.append(f,-f[::-1]) # odd symmetry
G2 = np.append(G,G[::-1]) # even symmetry
phi2 = (f2 + G2/c)/2
psi2 = (f2 - G2/c)/2
x = np.linspace(0,dx*x_len,x_len,endpoint=False)
t = np.linspace(0,dt*t_len,t_len,endpoint=False)
    # in the loop, we convert x+ct and x-ct to indices into vectors phi2, psi2, modulo the vector length
for j in range(t_len):
ii1 = np.mod( np.round((x+c*t[j])/dx), 2*x_len)
ii2 = np.mod( np.round((x-c*t[j])/dx), 2*x_len)
u[:,j] = phi2[ii1.astype(int)] + psi2[ii2.astype(int)]
return u,x,t
(u,x,t)=w_solve3(.5,dx,dt,t_len,1*triangle,0*triangle)
interact(update,k=(0,t_len-1))
Explanation: D'Alembert's solution
Since the velocity $c$ is constant in these examples, we can get the exact solution via D'Alembert. The general solution will be of the form
$$u(x,t) = \phi(x+ct) + \psi(x-ct). $$
Initial conditions tell use that
$$u(x,0) = \phi(x) + \psi(x) = f(x), $$ and
$$u_t(x,0) = c\phi'(x) - c\psi'(x) = g(x). $$
With $G(x)$ the antiderivative of $g(x)$ with appropriate zero at zero, we get a 2x2 system
$$\phi(x) + \psi(x) = f(x), \ c(\phi(x) - \psi(x)) = G(x),$$
which we solve as
$$\phi(x) = \frac{1}{2}\left( f(x) + \frac{1}{c} G(x) \right), \
\psi(x) = \frac{1}{2}\left( f(x) - \frac{1}{c} G(x) \right).$$
Now $f(x)$ is given as the argument $u0$ in the code. $G(x)$ can be computed using scipy. The arguments $x+ct$ and $x-ct$ must be converted to integer indices. They have to wrap around. And with the zero boundary condition, we need to wrap around with a negative reflection.
There is the messy question of whether we require $u(0,t)$ to actually equal zero, or whether we require it to be zero one index "to the left" of $x=0$. Let's not think too much about that just yet.
End of explanation
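Before trusting the implementation, it is reassuring to check symbolically that any function of the form $\phi(x+ct)+\psi(x-ct)$ really does satisfy the wave equation. A small sketch using sympy (assumed to be installed; it plays no role in w_solve3 itself, and I use the names xs, ts, cs to avoid clobbering the x, t, c arrays of this notebook):
# Symbolic check that u = phi(x+ct) + psi(x-ct) satisfies u_tt = c^2 u_xx
import sympy as sp

xs, ts, cs = sp.symbols('x t c')
phi = sp.Function('phi')
psi = sp.Function('psi')
u_sym = phi(xs + cs*ts) + psi(xs - cs*ts)
residual = sp.diff(u_sym, ts, 2) - cs**2 * sp.diff(u_sym, xs, 2)
print(sp.simplify(residual))  # prints 0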
(u,x,t)=w_solve3(.5,dx,dt,3*t_len,0*triangle,1*triangle)
interact(update,k=(0,3*t_len-1))
Explanation: Derivative initial condition
End of explanation
(u_exp,x,t)=w_solve1(.5,dx,dt,t_len,1*triangle,0*triangle)
(u_dal,x,t)=w_solve3(.5,dx,dt,t_len,1*triangle,0*triangle)
def update2(k=0):
plot(x,u_dal[:,k]-u_exp[:,k])
show()
interact(update2,k=(0,t_len-1))
Explanation: Comparing solutions
In principle, we want these different solution methods to be directly comparable.
So let's try this out, by computing the difference of two solutions.
Here we compare the explicit finite difference method with d'Alembert's method.
End of explanation
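Besides eyeballing the difference with the slider, a single summary number is handy. A minimal sketch using the u_exp and u_dal arrays computed above:
# Scalar measures of the discrepancy between the explicit and d'Alembert solutions
max_err = np.max(np.abs(u_dal - u_exp))          # worst-case pointwise difference
rms_err = np.sqrt(np.mean((u_dal - u_exp)**2))   # root-mean-square difference
print("max abs error:", max_err, "  rms error:", rms_err)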
c = .707 # velocity
x_len = 1000
t_len = 1000
dx = 1./x_len
dt = 1./t_len
x = np.linspace(0,1,x_len)
t = np.linspace(0,1,t_len)
u0 = np.exp(-(x-.5)*(x-.5)/.01)
u1 = 2*c*u0*(x-.5)/.01
(u,x,t)=w_solve3(c*c,dx,dt,t_len,u0,u1) # notice we input the velocity squared!
interact(update,k=(0,t_len-1))
Explanation: A moving wavefront
Let's try an actual wave. We want something like
$$u(x,t) = \exp(-(x -x_a-ct)^2/w^2), $$
where $x_a$ is the center of the Gaussian at $t=0$, $w$ is the width of the Gaussian, $c$ is the velocity of the wave.
This gives
$$u_0(x) = \exp(-(x -x_a)^2/w^2) \
u_1(x) = \frac{2c(x-x_a)}{w^2}\exp(-(x -x_a)^2/w^2) = \frac{2c(x-x_a)}{w^2}u_0(x).$$
End of explanation |
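As a consistency check (purely illustrative, not needed by the solver) that the u1 defined above really is the time derivative of the exact travelling Gaussian at t = 0, we can compare it against a centred finite difference in time; exact_gauss below is a helper of my own, using the same xa = 0.5 and w^2 = 0.01 as the code above:
# Verify u1 against a centred time difference of the exact travelling Gaussian
def exact_gauss(x, t, c=0.707, xa=0.5, w2=0.01):
    return np.exp(-(x - xa - c*t)**2 / w2)

eps = 1e-6
u1_fd = (exact_gauss(x, eps) - exact_gauss(x, -eps)) / (2*eps)
print("max deviation from u1:", np.max(np.abs(u1_fd - u1)))  # should be tiny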
11,048 | Given the following text description, write Python code to implement the functionality described.
Description:
For a given string, flip lowercase characters to uppercase and uppercase to lowercase.
This is how the function will work:
flip_case('Hello')
'hELLO'
| Python Code:
def flip_case(string: str) -> str:
return string.swapcase() |
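A couple of quick, purely illustrative checks of the function:
# Simple usage checks for flip_case
assert flip_case('Hello') == 'hELLO'
assert flip_case('PyThOn 3!') == 'pYtHoN 3!'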
11,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM4-8
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
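As a hypothetical illustration of a BOOLEAN cell such as 30.4 above, the value is passed as a Python bool rather than a quoted string; whether it should be True or False depends entirely on the model being documented:
```python
# Illustrative only -- set according to whether the scheme actually uses a counter-gradient term
DOC.set_value(True)
```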
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
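Purely as an illustration of a FLOAT cell, a CloudSat-like 94 GHz cloud radar simulator would be recorded in Hz roughly as below; the number is an example, not a prescribed value:
```python
# Illustrative only -- use the frequency actually configured in the radar simulator
DOC.set_value(94.0e9)
```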
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
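For scale, a fixed solar constant near the modern total solar irradiance (about 1361 W m-2) could be entered as follows; the exact figure is model-specific and this is only an illustration:
```python
DOC.set_value(1361.0)  # W m-2, illustrative present-day value
```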
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
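As an illustration of an INTEGER cell, a pre-industrial reference year would be entered as a plain integer; 1850 is a common choice but the value must match the experiment set-up being described:
```python
DOC.set_value(1850)  # yyyy, illustrative reference year
```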
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
11,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Incrementally saving sampling progress
Can I save intermediate MCMC results for long runs, to avoid catastrophic loss of samples?
gully
February 2016
In this notebook I explore how to save intermediate results of the MCMC sampling to an intermediate hdf5 file.
This scenario is described in emcee Advanced Patterns as "Incrementally saving progress"
Step1: Based on reading the documentation and experimentation, I found no keyword arg (kwarg), that could be passed to achieve effortlessly the desired behavior. But it did show a path forward
Step2: That's slightly annoying... it saves as a flattened 1-D array. We know there are 6 parameters, so we could do
Step3: We sampled 310 iterations. The code does not get to the final 10 samples because it only saves every 100. | Python Code:
from emcee.sampler import Sampler
def bogus_lnprob(p):
return 1.0
samp = Sampler(3, bogus_lnprob)
samp.run_mcmc  # Put the cursor inside run_mcmc( ) and hit shift-tab for the signature... also peek at samp.sample(), etc...
Explanation: Incrementally saving sampling progress
Can I save intermediate MCMC results for long runs, to avoid catastrophic loss of samples?
gully
February 2016
In this notebook I explore how to save intermediate results of the MCMC sampling to an intermediate hdf5 file.
This scenario is described in emcee Advanced Patterns as "Incrementally saving progress":
It is often useful to incrementally save the state of the chain to a file. This makes it easier to monitor the chain’s progress and it makes things a little less disastrous if your code/computer crashes somewhere in the middle of an expensive MCMC run. If you just want to append the walker positions to the end of a file, you could do something like:
```python3
f = open("chain.dat", "w")
f.close()
for result in sampler.sample(pos0, iterations=500, storechain=False):
position = result[0]
f = open("chain.dat", "a")
for k in range(position.shape[0]):
f.write("{0:4d} {1:s}\n".format(k, " ".join(position[k])))
f.close()
```
Where would I tell the code to save intermediate samples?
Let's start with the demo1 run01 in the current repository (starfish-demo). Try to sample a few hundred samples.
bash
$ star.py --sample=ThetaPhi --samples=310
Now bring up the star.py script in the Starfish scripts/ repo.
The --samples flag to star.py is passed to this main part of the code:
```python
sampler = StateSampler(lnprob, p0, cov, query_lnprob=query_lnprob, acceptfn=acceptfn, rejectfn=rejectfn, debug=True, outdir=Starfish.routdir)
p, lnprob, state = sampler.run_mcmc(p0, N=args.samples)
```
One key insight is that StateSampler is a subclass of emcee.Sampler...
End of explanation
import numpy as np
! ls
chain = np.fromfile('chain_backup.npy')
chain.shape
Explanation: Based on reading the documentation and experimentation, I found no keyword arg (kwarg), that could be passed to achieve effortlessly the desired behavior. But it did show a path forward: Modify the StateSampler.sample class.
A solution
I arrived at a fair solution: Save every 100 samples to a numpy binary file.
vals.tofile('chain_backup.npy')
This strategy has the disadvantage that it has to rewrite (rather than append) the entire chain, which can grow quite large. But since it only happens every 100 samples, that performance hit should not be too noticeable.
Specifically I modified this section of Starfish/samplers.py:
```python
if storechain and i % thin == 0:
ind = i0 + int(i / thin)
self._chain[ind, :] = p
self._lnprob[ind] = lnprob0
# Save every 100 samples (hardcoded!):
if ((i % 100) == 0) & (i > 100):
self._chain.tofile('chain_backup.npy')
# Heavy duty iterator action going on right here...
yield p, lnprob0, self.random_state
```
I hardcoded 100 as the incremental spacing, but this could easily be made into a commandline argument and keyword arg with a default.
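A minimal sketch of that generalisation, with the interval and file name pulled out into parameters; the helper name and signature are hypothetical, not part of Starfish:
```python
import numpy as np

def maybe_backup(chain, i, save_every=100, backup_file='chain_backup.npy'):
    # Rewrite the whole chain to disk every `save_every` iterations.
    if save_every and i > save_every and i % save_every == 0:
        np.asarray(chain).tofile(backup_file)
```
Inside StateSampler.sample the hardcoded block would then reduce to a single maybe_backup(self._chain, i, ...) call, with save_every exposed as a keyword argument (and, if wanted, as a command-line flag in star.py).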
Try it out
End of explanation
n_samples, dims = (chain.shape[0]//6, 6)
flatchain = chain.reshape((n_samples, dims))
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'svg' #could also do retina...
plt.plot(flatchain[:,0])
plt.ylabel('$T_{\mathrm{eff}}$'); plt.xlabel('Sample');
Explanation: That's slightly annoying... it saves as a flattened 1-D array. We know there are 6 parameters, so we could do:
End of explanation
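An alternative sketch (not what was actually run here): writing with np.save instead of ndarray.tofile stores the shape and dtype in the .npy header, so the reload needs no manual reshape:
```python
import numpy as np

demo = np.random.randn(310, 6)             # stand-in for self._chain, just for this sketch
np.save('chain_backup_shaped.npy', demo)   # np.save records shape and dtype in the header
restored = np.load('chain_backup_shaped.npy')
print(restored.shape)                      # (310, 6), no reshape required
```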
plt.plot(flatchain[0:300,0])
plt.ylabel('$T_{\mathrm{eff}}$'); plt.xlabel('Sample');
Explanation: We sampled 310 iterations. The code does not get to the final 10 samples because it only saves every 100.
End of explanation |
11,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Classification
https
Step1: Hypothesis
$$
H(X) = \frac {1} {1+e^{-W^T X}}
$$
https
Step2: Evaluation | Python Code:
import tensorflow as tf
import numpy as np
xy = np.loadtxt('../data/logistic_data.txt',unpack=True, dtype='float32')
x_data = xy[0:-1]
y_data = xy[-1]
Explanation: Logistic Classification
https://ko.wikipedia.org/wiki/%EB%A1%9C%EC%A7%80%EC%8A%A4%ED%8B%B1_%ED%9A%8C%EA%B7%80
End of explanation
x_data = [ [1,2], [2,3], [3,1], [4,3], [5,3], [6,2] ]
y_data = [ [0], [0], [0], [1], [1], [1] ]
X = tf.placeholder(tf.float32,shape=[None,2])
Y = tf.placeholder(tf.float32,shape=[None,1])
W = tf.Variable(tf.random_normal([2,1], name='weight'))
b = tf.Variable(tf.random_normal([1]), name='bias')
h = tf.matmul(X, W) + b   # logits; used by the commented-out manual sigmoid below
hypothesis = tf.sigmoid(tf.matmul(X,W)+b)
#hypothesis = tf.div(1., 1.+tf.exp(-h))
cost = -tf.reduce_mean(Y*tf.log(hypothesis) + (1-Y)*tf.log(1-hypothesis))
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = opt.minimize(cost)
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted,Y), dtype=tf.float32))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(2000):
cost_val, _ = sess.run([cost, train], feed_dict={X:x_data, Y:y_data})
if step % 20 ==0:
print(step, cost_val)
h, c, a = sess.run([hypothesis, predicted, accuracy], feed_dict = {X: x_data, Y:y_data})
print("H:",h,"\nC:",c,"\nA:",a)
Explanation: Hypothesis
$$
H(X) = \frac {1} {1+e^{-W^T X}}
$$
https://www.desmos.com/calculator
Cost
$$
C(H(x), y) =
\begin{cases}
-\log(H(x)) & \text{if } y = 1 \\
-\log(1 - H(x)) & \text{if } y = 0
\end{cases}
$$
$$
cost(W) = -y\log(H(x)) - (1-y)\log(1-H(x))
$$
$$
cost(W) = -\frac{1}{m} \sum \left[\, y\log(H(x)) + (1-y)\log(1-H(x)) \,\right]
$$
$$
W := W - \alpha\frac{\partial}{\partial W} cost(W)
$$
End of explanation
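To make the formulas above concrete, a small NumPy check of the hypothesis and cost, independent of the TensorFlow graph; the weights and bias here are made-up numbers chosen only for illustration:
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X_np = np.array([[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]], dtype=float)
Y_np = np.array([[0], [0], [0], [1], [1], [1]], dtype=float)
W_np = np.array([[1.0], [1.0]])   # hypothetical weights
b_np = -5.0                       # hypothetical bias

H = sigmoid(X_np.dot(W_np) + b_np)                                   # hypothesis
cost = -np.mean(Y_np * np.log(H) + (1 - Y_np) * np.log(1 - H))       # cross-entropy cost
print(cost)
```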
print(sess.run(predicted, feed_dict={X:[[2,2]]})>0.5)
print(sess.run(predicted, feed_dict={X:[[3,4]]})>0.5)
print(sess.run(hypothesis, feed_dict={X:[[6,6],[7,7]]})>0.5)
Explanation: Evaluation
End of explanation |
11,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 3
Python Basic, Lesson 3,
* v1.0, 2016
* v1.1, 2020.2,3,4, 6.13 edit by David Yi
Key points of this chapter
The for loop and the range() function
Common data types
Strings
String handling
Things to think about
The for loop and the range() function
Python's main loop statement is the for...in loop: it takes the list, tuple, or other iterable after in and iterates over its elements one by one.
Python also has a while loop, which is rarely needed in everyday code.
A for x in ... loop assigns each element to the variable x in turn and then executes the indented block that follows.
For a simple loop over a fixed number of iterations, the range() function is normally used to produce the sequence of numbers to iterate over.
Let's look at some concrete examples.
Step1: The while loop
The while loop is one of Python's loop constructs. while keeps executing the loop body until its condition becomes false.
The condition is a logical expression that evaluates to True or False.
Be careful when writing the condition: if it is always True, the loop runs forever; this is called an infinite loop.
In most situations while is not the recommended loop statement, but occasionally it makes a program very concise.
Step2: The range() function
The range() function produces an arithmetic sequence: range(x, y, z) goes from x up to (but not including) y with step z, which may be negative.
Change the start, stop, and step values of range() in the examples below to see the different results.
Step3: Strings
A string is an ordered collection of characters, used to store or represent text-based information. Strings are used extensively in program development.
In Python either single or double quotes may be used.
Step4: String handling
Strings are a data type that comes up constantly in programs; much of their handling resembles lists, although there are some differences.
Step5: Exercise
Problem: with the four digits 1, 2, 3 and 4, how many distinct three-digit numbers with no repeated digits can be formed?
Analysis: each of the hundreds, tens and units digits can be 1, 2, 3 or 4; generate all arrangements and then discard those that do not satisfy the condition. | Python Code:
# 按照字符串进行迭代循环
s = 'abcdef'
for i in s:
print(i)
# 按照列表进行循环,列表内容为字符
s = ['a', 'b', 'c']
for i in s:
print(i)
# 按照列列表进行循环,列表内容为数字
for i in range(3):
print(i)
Explanation: Lesson 3
Python Basic, Lesson 3,
* v1.0, 2016
* v1.1, 2020.2,3,4, 6.13 edit by David Yi
Key points of this chapter
The for loop and the range() function
Common data types
Strings
String handling
Things to think about
The for loop and the range() function
Python's main loop statement is the for...in loop: it takes the list, tuple, or other iterable after in and iterates over its elements one by one.
Python also has a while loop, which is rarely needed in everyday code.
A for x in ... loop assigns each element to the variable x in turn and then executes the indented block that follows.
For a simple loop over a fixed number of iterations, the range() function is normally used to produce the sequence of numbers to iterate over.
Let's look at some concrete examples.
End of explanation
# 用 while 进行循环
count = 0
while (count < 9):
print('The count is:', count)
count += 1
print("Good bye!")
Explanation: The while loop
The while loop is one of Python's loop constructs. while keeps executing the loop body until its condition becomes false.
The condition is a logical expression that evaluates to True or False.
Be careful when writing the condition: if it is always True, the loop runs forever; this is called an infinite loop.
In most situations while is not the recommended loop statement, but occasionally it makes a program very concise.
End of explanation
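A tiny sketch of the usual safeguard against an infinite loop: make sure something inside the body eventually falsifies the condition, or break out explicitly.
```python
n = 0
while True:          # the condition alone never becomes False...
    n += 1
    if n >= 5:       # ...so an explicit break is the safety valve
        break
print(n)
```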
# for 循环,使用range()
for i in range(1,10,3):
print(i)
# for 循环,使用range()
for i in range(10,2,-1):
print(i)
# print 时候输出内容,可以不换行
for i in range(10,2,-1):
print(i,end='')
# print 时候不换行,优雅的分割
for i in range(10,2,-1):
print(i,end=',')
# 同时获得列表中的序号和内容,可以这样写
# len() 是获得列表的长度,可以理解为元素个数
s = ['Mary ', 'had', 'a', 'little ', 'lamb']
for i in range(len(s)):
print(i, s[i])
# 更加好的写法, 使用 enumerate()
s = ['Mary ', 'had', 'a', 'little ', 'lamb']
for i, item in enumerate(s):
print(i, item)
Explanation: The range() function
The range() function produces an arithmetic sequence: range(x, y, z) goes from x up to (but not including) y with step z, which may be negative.
Change the start, stop, and step values of range() in the examples below to see the different results.
End of explanation
# 单引号
a = 'spam'
print(a)
# 双引号
a = "spam"
print(a)
b = "spam's log"
print(b)
# 多行字符串
a = '''multiple lines
this is an example'''
print(a)
s = 'a\nb\tc' # 转义字符串
print(s)
# 输出内容因为转义而发生变化
a = 'C:\new\text.txt'
print(a)
# 输出内容不带转义
b = r'C:\new\text.txt'
print(b)
# 常用的字符串表达式
s = 'ABC.txt'
print(s + '123') # 字符串拼接
print(s * 2) # 重复
print(s[0]) # 索引
print(s[-1])
print(s[::-1]) # 反转
print(s[0:2]) # 切片
print(len(s)) # 长度
print(s.lower()) # 小写转换
print(s.upper()) # 大写转换
print(s.endswith('.txt')) # 后缀测试
print('AB' in s) # 成员关系测试
Explanation: Strings
A string is an ordered collection of characters, used to store or represent text-based information. Strings are used extensively in program development.
In Python either single or double quotes may be used.
End of explanation
# 判断是否是字母
s1 = 'abcde'
s2 = '12'
s3 = '12s'
print(s1.isalpha())
print(s2.isalpha())
print(s3.isalpha())
# 判断是否是字母、是否是数字
s1 = 'abcde'
s2 = '12'
s3 = '12s'
print(s1.isalpha())
print(s2.isdigit())
print(s3.isalpha())
# 判断是否是小写
s1 = 'abc'
print(s1.islower())
print(''.islower())
# 判断是否是大写
s1 = 'ABC'
print(s1.isupper())
# 是否字母数字混合
s1 ='123abc'
print(s1.isalnum())
# 字符串查找
s1 = 'What is your name'
if s1.find('your') != -1:
print('find it')
print(s1.find('your'))
# 字符串查找的另外一种方式
s1 = 'What is your name'
if 'your' in s1:
print('find it')
# 字符串替换
s1 = 'What is your name'
s2 = s1.replace('your', 'my')
print(s2)
# 字符串切片
s1 = 'abcdef'
s2 = s1[::-1]
print(s2)
# 字符串和列表的转换
s1 = ' a b c d e f'
s2 = s1.split()
s3 = s1.split(' ')
print(s2)
print(s3)
# 字符串转换为小写、大写
s1 = 'aBCdef'
print(s1.lower())
print(s1.upper())
# 字符串去除空格
s1 = ' a b c d e f '
print(s1.strip(' '),len(s1.strip(' ')))
print(s1.lstrip(' '),len(s1.lstrip(' ')))
print(s1.rstrip(' '))
# 字符串对齐
s1 = '12'
s2 = '2302'
print(s1.zfill(3))
print(s2.zfill(3))
print(s1.ljust(4))
print(s2.ljust(4))
print(s1.rjust(4))
print(s2.rjust(4))
a='12345679012456'
print(a[8:4:-1])
Explanation: String handling
Strings are a data type that comes up constantly in programs; much of their handling resembles lists, although there are some differences.
End of explanation
for i in range(1,5):
for j in range(1,5):
for k in range(1,5):
if( i != k ) and (i != j) and (j != k):
print(i,j,k)
Explanation: Exercise
Problem: with the four digits 1, 2, 3 and 4, how many distinct three-digit numbers with no repeated digits can be formed?
Analysis: each of the hundreds, tens and units digits can be 1, 2, 3 or 4; generate all arrangements and then discard those that do not satisfy the condition.
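For comparison, the same count can be obtained with the standard library; this is an alternative sketch, not the nested-loop solution used in the lesson:
```python
from itertools import permutations

count = sum(1 for p in permutations([1, 2, 3, 4], 3))
print(count)   # 24 three-digit numbers with all digits distinct
```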
End of explanation |
11,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
learning_rate = 0.001
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
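The compressed codes are fetched above but never shown; a quick sketch (an addition, not part of the original notebook) of viewing each 32-value encoding as a small block:
```python
fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(20, 2))
for code, ax in zip(compressed, axes):
    ax.imshow(code.reshape((4, 8)), cmap='Greys_r')   # 32 values displayed as a 4x8 block
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
```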
End of explanation |
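# The sess.run call above also fetched the compressed codes, but only the
# reconstructions were plotted. This is an added sketch (not from the original
# notebook) that shows the hidden-layer codes as thin strips; it only needs the
# `compressed` NumPy array, so it still works after the session is closed.
fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(20, 2))
for code, ax in zip(compressed, axes):
    ax.imshow(code.reshape((1, -1)), cmap='Greys_r', aspect='auto')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)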
11,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basics of Machine Learning
1.Data
Step1: Broadcasting
Step2: nd array <-> Numpy
Step3: Dealing with data on the GPU
Step4: Scala, Vector, Matrices, Tensors | Python Code:
import mxnet as mx
from mxnet import nd
import numpy as np
mx.random.seed(1)
x = nd.empty((3, 4))
print(x)
x = nd.ones((3, 4))
x
y = nd.random_normal(0, 1, shape=(3, 4))
print y
print y.shape
print y.size
x * y
nd.exp(y)
nd.dot(x, y.T)
# In-place vs. out-of-place updates: id() shows whether y still lives at the same
# memory location after each operation
print "The memory address of y is {}".format(id(y))
y[:] = x + y
print "After in-place update (y[:] = x + y) the address is {}".format(id(y))
y = x + y
print "After reassignment (y = x + y) the address is {}".format(id(y))
print y
print y[1:3]
print y[1:3,1:2]
print x
x[1,2] = 9
print x
x[1:2,1:3] = 5
print x
Explanation: Basics of Machine Learning
1. Data:
Image, Text, Audio, Video, Structured data
2. A model of how to transform the data
3. A loss function to measure how well we're doing
4. An algorithm to tweak the model parameters such that the loss function is minimized
ND Array in MXNet
End of explanation
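# The four ingredients listed above (data, model, loss, learning algorithm) can be
# tied together in a few lines of NDArray code. This is only an illustrative sketch
# added here -- the synthetic data, the linear model and the learning rate are
# assumptions, not part of the original notebook:
from mxnet import autograd

X_toy = nd.random_normal(shape=(100, 2))            # 1. data (synthetic)
w_true = nd.array([2.0, -3.4])
y_toy = nd.dot(X_toy, w_true) + 4.2

w = nd.random_normal(shape=(2,))                    # 2. model parameters
b = nd.zeros((1,))
w.attach_grad()
b.attach_grad()

for epoch in range(5):                              # 4. algorithm: gradient descent
    with autograd.record():
        y_hat = nd.dot(X_toy, w) + b                # 2. model: linear predictor
        loss = nd.mean((y_hat - y_toy) ** 2)        # 3. loss: mean squared error
    loss.backward()
    w[:] = w - 0.1 * w.grad
    b[:] = b - 0.1 * b.grad
    print("epoch %d, loss %f" % (epoch, loss.asscalar()))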
x = nd.ones(shape=(3,3))
print('x = ', x)
y = nd.arange(3)
print('y = ', y)
print('x + y = ', x + y)
Explanation: Broadcasting
End of explanation
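# Broadcasting aligns shapes from the trailing dimensions: a (3, 1) array combined
# with a (1, 4) array expands to (3, 4). A tiny added check (illustrative sketch):
a = nd.arange(3).reshape((3, 1))
b = nd.arange(4).reshape((1, 4))
print((a + b).shape)
print(a + b)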
a = x.asnumpy()
print "The type of a is {}".format(type(a))
y = nd.array(a)
print "The type of a is {}".format(type(y))
Explanation: nd array <-> Numpy
End of explanation
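# One detail worth knowing: asnumpy() copies the data, so later changes to the
# NDArray are not reflected in the NumPy array (added illustrative check):
x_nd = nd.ones((2, 2))
x_np = x_nd.asnumpy()
x_nd[:] = 5
print(x_np)   # still all ones -- the NumPy copy is independent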
# z = nd.ones(shape=(3,3), ctx=mx.gpu(0))
# z
# x_gpu = x.copyto(mx.gpu(0))
# print(x_gpu)
Explanation: Dealing with data on the GPU
End of explanation
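# The GPU lines above are commented out because they fail on machines without a
# GPU. A defensive pattern (an added sketch, not from the original notebook) is to
# probe for a working GPU context and fall back to the CPU:
def try_gpu():
    try:
        ctx = mx.gpu(0)
        _ = nd.zeros((1,), ctx=ctx)   # force an allocation to verify the device
    except Exception:                 # MXNet raises MXNetError when no GPU exists
        ctx = mx.cpu()
    return ctx

ctx = try_gpu()
z = nd.ones(shape=(3, 3), ctx=ctx)
print(z.context)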
# scalars
x = nd.array([3.0])
y = nd.array([2.0])
print 'x + y = ', x + y
print 'x * y = ', x * y
print 'x / y = ', x / y
print 'x ** y = ', nd.power(x,y)
# convert it to python scala
x.asscalar()
# Vector
u = nd.arange(4)
print('u = ', u)
print u[3]
print len(u)
print u.shape
a = 2
x = nd.array([1,2,3])
y = nd.array([10,20,30])
print(a * x)
print(a * x + y)
# Matrices
x = nd.arange(20)
A = x.reshape((5, 4))
print A
print 'A[2, 3] = ', A[2, 3]
print('row 2', A[2, :])
print('column 3', A[:, 3])
print A.T
# Tensor
X = nd.arange(24).reshape((2, 3, 4))
print 'X.shape =', X.shape
print 'X =', X
u = nd.array([1, 2, 4, 8])
v = nd.ones_like(u) * 2
print 'v =', v
print 'u + v', u + v
print 'u - v', u - v
print 'u * v', u * v
print 'u / v', u / v
print nd.sum(u)
print nd.mean(u)
print nd.sum(u) / u.size
print nd.dot(u, v)
print nd.sum(u * v)
# Matrices multiple Vector
print nd.dot(A, u)
print nd.dot(A, A.T)
print nd.norm(u)
print nd.sqrt(nd.sum(u**2))
print nd.sum(nd.abs(u))
Explanation: Scalars, Vectors, Matrices, Tensors
End of explanation |
11,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-MM
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
11,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The OpenFermion Developers
Step1: The Jordan-Wigner and Bravyi-Kitaev Transforms
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Ladder operators and the canonical anticommutation relations
A system of $N$ fermionic modes is
described by a set of fermionic annihilation operators
${a_p}{p=0}^{N-1}$ satisfying the canonical anticommutation relations
$$\begin{align}
{a_p, a_q} &= 0, \label{eq
Step3: The parity transform
By comparing the action of $\tilde{a}p$ on $\lvert z_0, \ldots, z{N-1} \rangle$ in the JWT with the action of $a_p$ on $\lvert n_0, \ldots, n_{N-1} \rangle$ (described in the first section of this demo), we can see that the JWT is associated with a particular mapping of bitstrings $e
Step4: The purpose of the string of Pauli $Z$'s is to introduce the phase factor $(-1)^{\sum_{q=0}^{p-1} n_q}$ when acting on a computational basis state; when $e$ is the identity encoder, the modulo-2 sum $\sum_{q=0}^{p-1} n_q$ is computed as $\sum_{q=0}^{p-1} z_q$, which requires reading $p$ bits and leads to a Pauli $Z$ string with weight $p$. A simple solution to this problem is to consider instead the encoder defined by
$$e(x)p = \sum{q=0}^p x_q \quad (\text{mod 2}),$$
which is associated with the mapping of basis vectors
$\lvert n_0, \ldots, n_{N-1} \rangle \mapsto \lvert z_0, \ldots, z_{N-1} \rangle,$
where $z_p = \sum_{q=0}^p n_q$ (again addition is modulo 2). With this encoding, we can compute the sum $\sum_{q=0}^{p-1} n_q$ by reading just one bit because this is the value stored by $z_{p-1}$. The associated transform is called the parity transform because the $p$-th qubit is storing the parity (modulo-2 sum) of modes $0, \ldots, p$. Under the parity transform, annihilation operators are mapped as follows
Step5: Now let's map one of the FermionOperators again but with the total number of modes set to 100.
Step6: Note that with the JWT, it is not necessary to specify the total number of modes in the system because $\tilde{a}_p$ only acts on qubits $0, \ldots, p$ and not any higher ones.
The Bravyi-Kitaev transform
The discussion above suggests that we can think of the action of a transformed annihilation operator $\tilde{a}p$ on a computational basis vector $\lvert z \rangle$ as a 4-step classical algorithm
Step7: For the JWT, $U(j) = {j}$ and $P(j) = {0, \ldots, j}$, whereas for the parity transform, $U(j) = {j, \ldots, N-1}$ and $P(j) = {j}$. The size of these sets can be as large as $N$, the total number of modes. These sets are determined by the encoding function $e$.
It is possible to pick a clever encoder with the property that these sets have size $O(\log N)$. The corresponding transform will map annihilation operators to qubit operators with weight $O(\log N)$, which is much smaller than the $\Omega(N)$ weight associated with the JWT and parity transforms. This fact was noticed by Bravyi and Kitaev, and later Havlíček and others pointed out that the encoder which achieves this is implemented by a classical data structure called a Fenwick tree. The transforms described in these two papers actually correspond to different variants of the Fenwick tree data structure and give different results when the total number of modes is not a power of 2. OpenFermion implements the one from the first paper as bravyi_kitaev and the one from the second paper as bravyi_kitaev_tree. Generally, the first one (bravyi_kitaev) is preferred because it results in operators with lower weight and is faster to compute.
Let's transform our previously instantiated Majorana operator using the Bravyi-Kitaev transform.
Step8: The advantage of the Bravyi-Kitaev transform is not apparent in a system with so few modes. Let's look at a system with 100 modes.
Step9: Now let's go back to a system with 10 modes and check that the Bravyi-Kitaev transformed operators satisfy the expected relations. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The OpenFermion Developers
End of explanation
try:
import openfermion
except ImportError:
!pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
Explanation: The Jordan-Wigner and Bravyi-Kitaev Transforms
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/tutorials/jordan_wigner_and_bravyi_kitaev_transforms"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/jordan_wigner_and_bravyi_kitaev_transforms.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/jordan_wigner_and_bravyi_kitaev_transforms.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/jordan_wigner_and_bravyi_kitaev_transforms.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Setup
Install the OpenFermion package:
End of explanation
from openfermion import *
# Create some ladder operators
annihilate_2 = FermionOperator('2')
create_2 = FermionOperator('2^')
annihilate_5 = FermionOperator('5')
create_5 = FermionOperator('5^')
# Construct occupation number operators
num_2 = create_2 * annihilate_2
num_5 = create_5 * annihilate_5
# Map FermionOperators to QubitOperators using the JWT
annihilate_2_jw = jordan_wigner(annihilate_2)
create_2_jw = jordan_wigner(create_2)
annihilate_5_jw = jordan_wigner(annihilate_5)
create_5_jw = jordan_wigner(create_5)
num_2_jw = jordan_wigner(num_2)
num_5_jw = jordan_wigner(num_5)
# Create QubitOperator versions of zero and identity
zero = QubitOperator()
identity = QubitOperator(())
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_jw, annihilate_2_jw) == zero
assert anticommutator(annihilate_5_jw, annihilate_5_jw) == zero
assert anticommutator(annihilate_5_jw, create_2_jw) == zero
assert anticommutator(annihilate_5_jw, create_5_jw) == identity
# Check that the occupation number operators commute
assert commutator(num_2_jw, num_5_jw) == zero
# Print some output
print("annihilate_2_jw = \n{}".format(annihilate_2_jw))
print('')
print("create_2_jw = \n{}".format(create_2_jw))
print('')
print("annihilate_5_jw = \n{}".format(annihilate_5_jw))
print('')
print("create_5_jw = \n{}".format(create_5_jw))
print('')
print("num_2_jw = \n{}".format(num_2_jw))
print('')
print("num_5_jw = \n{}".format(num_5_jw))
Explanation: Ladder operators and the canonical anticommutation relations
A system of $N$ fermionic modes is
described by a set of fermionic annihilation operators
${a_p}{p=0}^{N-1}$ satisfying the canonical anticommutation relations
$$\begin{align}
{a_p, a_q} &= 0, \label{eq:car1} \
{a_p, a^\dagger_q} &= \delta{pq}, \label{eq:car2}
\end{align}$$ where ${A, B} := AB + BA$. The adjoint
$a^\dagger_p$ of an annihilation operator $a_p$ is called a creation
operator, and we refer to creation and annihilation operators as
fermionic ladder operators.
In a finite-dimensional vector space the anticommutation relations have the following consequences:
The operators ${a^\dagger_p a_p}_{p=0}^{N-1}$ commute with each
other and have eigenvalues 0 and 1. These are called the occupation
number operators.
There is a normalized vector $\lvert{\text{vac}}\rangle$, called the vacuum
state, which is a mutual 0-eigenvector of all
the $a^\dagger_p a_p$.
If $\lvert{\psi}\rangle$ is a 0-eigenvector of $a_p^\dagger a_p$, then
$a_p^\dagger\lvert{\psi}\rangle$ is a 1-eigenvector of $a_p^\dagger a_p$.
This explains why we say that $a_p^\dagger$ creates a fermion in
mode $p$.
If $\lvert{\psi}\rangle$ is a 1-eigenvector of $a_p^\dagger a_p$, then
$a_p\lvert{\psi}\rangle$ is a 0-eigenvector of $a_p^\dagger a_p$. This
explains why we say that $a_p$ annihilates a fermion in mode $p$.
$a_p^2 = 0$ for all $p$. One cannot create or annihilate a fermion
in the same mode twice.
The set of $2^N$ vectors
$$\lvert n_0, \ldots, n_{N-1} \rangle :=
(a^\dagger_0)^{n_0} \cdots (a^\dagger_{N-1})^{n_{N-1}} \lvert{\text{vac}}\rangle,
\qquad n_0, \ldots, n_{N-1} \in \{0, 1\}$$
are orthonormal. We can assume they form a basis for the entire vector space.
The annihilation operators $a_p$ act on this basis as follows:
$$\begin{aligned} a_p \lvert n_0, \ldots, n_{p-1}, 1, n_{p+1}, \ldots, n_{N-1} \rangle &= (-1)^{\sum_{q=0}^{p-1} n_q} \lvert n_0, \ldots, n_{p-1}, 0, n_{p+1}, \ldots, n_{N-1} \rangle \,, \\ a_p \lvert n_0, \ldots, n_{p-1}, 0, n_{p+1}, \ldots, n_{N-1} \rangle &= 0 \,.\end{aligned}$$
See here for a derivation and discussion of these
consequences.
Mapping fermions to qubits with transforms
To simulate a system of fermions on a quantum computer, we must choose a representation of the ladder operators on the Hilbert space of the qubits. In other words, we must designate a set of qubit operators (matrices) which satisfy the canonical anticommutation relations. Qubit operators are written in terms of the Pauli matrices $X$, $Y$, and $Z$. In OpenFermion a representation is specified by a transform function which maps fermionic operators (typically instances of FermionOperator) to qubit operators (instances of QubitOperator). In this demo we will discuss the Jordan-Wigner and Bravyi-Kitaev transforms, which are implemented by the functions jordan_wigner and bravyi_kitaev.
The Jordan-Wigner Transform
Under the Jordan-Wigner Transform (JWT), the annihilation operators are mapped to qubit operators as follows:
$$\begin{aligned}
a_p &\mapsto \frac{1}{2} (X_p + \mathrm{i}Y_p) Z_0 \cdots Z_{p - 1} \\
&= (\lvert{0}\rangle\langle{1}\rvert)_p Z_0 \cdots Z_{p - 1} \\
&=: \tilde{a}_p.
\end{aligned}$$
This operator has the following action on a computational basis vector
$\lvert z_0, \ldots, z_{N-1} \rangle$:
$$\begin{aligned}
\tilde{a}_p \lvert z_0, \ldots, z_{p-1}, 1, z_{p+1}, \ldots, z_{N-1} \rangle &=
(-1)^{\sum_{q=0}^{p-1} z_q} \lvert z_0, \ldots, z_{p-1}, 0, z_{p+1}, \ldots, z_{N-1} \rangle \\
\tilde{a}_p \lvert z_0, \ldots, z_{p-1}, 0, z_{p+1}, \ldots, z_{N-1} \rangle &= 0.
\end{aligned}$$
Note that $\lvert n_0, \ldots, n_{N-1} \rangle$ is a basis vector in the Hilbert space of fermions, while $\lvert z_0, \ldots, z_{N-1} \rangle$ is a basis vector in the Hilbert space of qubits. Similarly, in OpenFermion $a_p$ is a FermionOperator while $\tilde{a}_p$ is a QubitOperator.
Let's instantiate some FermionOperators, map them to QubitOperators using the JWT, and check that the resulting operators satisfy the expected relations.
End of explanation
print(jordan_wigner(FermionOperator('99')))
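# Illustrative aside (assumption, not part of the original demo): the number of qubits
# touched by the JWT image of a_p grows linearly with the mode index p. We reuse the
# FermionOperator / jordan_wigner names already in scope above.
for p in [0, 10, 50, 99]:
    op = jordan_wigner(FermionOperator(str(p)))
    print("mode {}: max Pauli weight {}".format(p, max(len(term) for term in op.terms)))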
Explanation: The parity transform
By comparing the action of $\tilde{a}_p$ on $\lvert z_0, \ldots, z_{N-1} \rangle$ in the JWT with the action of $a_p$ on $\lvert n_0, \ldots, n_{N-1} \rangle$ (described in the first section of this demo), we can see that the JWT is associated with a particular mapping of bitstrings $e: \{0, 1\}^N \to \{0, 1\}^N$, namely, the identity map $e(x) = x$. In other words, under the JWT, the fermionic basis vector $\lvert n_0, \ldots, n_{N-1} \rangle$ is represented by the computational basis vector $\lvert z_0, \ldots, z_{N-1} \rangle$, where $z_p = n_p$ for all $p$. We can write this as
$$\lvert x \rangle \mapsto \lvert e(x) \rangle,$$
where the vector on the left is fermionic and the vector on the right is qubit. We call the mapping $e$ an encoder.
There are other transforms which are associated with different encoders. To see why we might be interested in these other transforms, observe that under the JWT, $\tilde{a}_p$ acts not only on qubit $p$ but also on qubits $0, \ldots, p-1$. This means that fermionic operators with low weight can get mapped to qubit operators with high weight, where by weight we mean the number of modes or qubits an operators acts on. There are some disadvantages to having high-weight operators; for instance, they may require more gates to simulate and are more expensive to measure on some near-term hardware platforms. In the worst case, the annihilation operator on the last mode will map to an operator which acts on all the qubits. To emphasize this point let's apply the JWT to the annihilation operator on mode 99:
End of explanation
# Set the number of modes in the system
n_modes = 10
# Define a function to perform the parity transform
def parity(fermion_operator, n_modes):
return binary_code_transform(fermion_operator, parity_code(n_modes))
# Map FermionOperators to QubitOperators using the parity transform
annihilate_2_parity = parity(annihilate_2, n_modes)
create_2_parity = parity(create_2, n_modes)
annihilate_5_parity = parity(annihilate_5, n_modes)
create_5_parity = parity(create_5, n_modes)
num_2_parity = parity(num_2, n_modes)
num_5_parity = parity(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_parity, annihilate_2_parity) == zero
assert anticommutator(annihilate_5_parity, annihilate_5_parity) == zero
assert anticommutator(annihilate_5_parity, create_2_parity) == zero
assert anticommutator(annihilate_5_parity, create_5_parity) == identity
# Check that the occupation number operators commute
assert commutator(num_2_parity, num_5_parity) == zero
# Print some output
print("annihilate_2_parity = \n{}".format(annihilate_2_parity))
print('')
print("create_2_parity = \n{}".format(create_2_parity))
print('')
print("annihilate_5_parity = \n{}".format(annihilate_5_parity))
print('')
print("create_5_parity = \n{}".format(create_5_parity))
print('')
print("num_2_parity = \n{}".format(num_2_parity))
print('')
print("num_5_parity = \n{}".format(num_5_parity))
Explanation: The purpose of the string of Pauli $Z$'s is to introduce the phase factor $(-1)^{\sum_{q=0}^{p-1} n_q}$ when acting on a computational basis state; when $e$ is the identity encoder, the modulo-2 sum $\sum_{q=0}^{p-1} n_q$ is computed as $\sum_{q=0}^{p-1} z_q$, which requires reading $p$ bits and leads to a Pauli $Z$ string with weight $p$. A simple solution to this problem is to consider instead the encoder defined by
$$e(x)_p = \sum_{q=0}^p x_q \quad (\text{mod 2}),$$
which is associated with the mapping of basis vectors
$\lvert n_0, \ldots, n_{N-1} \rangle \mapsto \lvert z_0, \ldots, z_{N-1} \rangle,$
where $z_p = \sum_{q=0}^p n_q$ (again addition is modulo 2). With this encoding, we can compute the sum $\sum_{q=0}^{p-1} n_q$ by reading just one bit because this is the value stored by $z_{p-1}$. The associated transform is called the parity transform because the $p$-th qubit is storing the parity (modulo-2 sum) of modes $0, \ldots, p$. Under the parity transform, annihilation operators are mapped as follows:
$$\begin{aligned}
a_p &\mapsto \frac{1}{2} (X_p Z_{p - 1} + \mathrm{i}Y_p) X_{p + 1} \cdots X_{N} \
&= \frac{1}{4} [(X_p + \mathrm{i} Y_p) (I + Z_{p - 1}) -
(X_p - \mathrm{i} Y_p) (I - Z_{p - 1})]
X_{p + 1} \cdots X_{N} \
&= [(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{0}\rangle\langle{0}\rvert)_{p - 1} -
(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{1}\rangle\langle{1}\rvert)_{p - 1}]
X_{p + 1} \cdots X_{N} \
\end{aligned}$$
The term in brackets in the last line means "if $z_p = n_p$ then annihilate in mode $p$; otherwise, create in mode $p$ and attach a minus sign". The value stored by $z_{p-1}$ contains the information needed to determine whether a minus sign should be attached or not. However, now there is a string of Pauli $X$'s acting on modes $p+1, \ldots, N-1$ and hence using the parity transform also yields operators with high weight. These Pauli $X$'s perform the necessary update to $z_{p+1}, \ldots, z_{N-1}$ which is needed if the value of $n_{p}$ changes. In the worst case, the annihilation operator on the first mode will map to an operator which acts on all the qubits.
Since the parity transform does not offer any advantages over the JWT, OpenFermion does not include a standalone function to perform it. However, there is functionality for defining new transforms by specifying an encoder and decoder pair, also known as a binary code (in our examples the decoder is simply the inverse mapping), and the binary code which defines the parity transform is included in the library as an example. See Lowering qubit requirements using binary codes for a demonstration of this functionality and how it can be used to reduce the qubit resources required for certain applications.
Let's use this functionality to map our previously instantiated FermionOperators to QubitOperators using the parity transform with 10 total modes and check that the resulting operators satisfy the expected relations.
End of explanation
print(parity(annihilate_2, 100))
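# Illustrative aside (assumption, not part of the original demo): under the JWT the same
# operator is unaffected by the total mode count and acts only on qubits 0, 1, 2.
print(jordan_wigner(annihilate_2))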
Explanation: Now let's map one of the FermionOperators again but with the total number of modes set to 100.
End of explanation
# Create a Majorana operator from our existing operators
c_5 = annihilate_5 + create_5
# Set the number of modes (required for the parity transform)
n_modes = 10
# Transform the Majorana operator to a QubitOperator in two different ways
c_5_jw = jordan_wigner(c_5)
c_5_parity = parity(c_5, n_modes)
# Print some output
print("c_5_jw = \n{}".format(c_5_jw))
print('')
print("c_5_parity = \n{}".format(c_5_parity))
Explanation: Note that with the JWT, it is not necessary to specify the total number of modes in the system because $\tilde{a}_p$ only acts on qubits $0, \ldots, p$ and not any higher ones.
The Bravyi-Kitaev transform
The discussion above suggests that we can think of the action of a transformed annihilation operator $\tilde{a}_p$ on a computational basis vector $\lvert z \rangle$ as a 4-step classical algorithm:
1. Check if $n_p = 0$. If so, then output the zero vector. Otherwise,
2. Update the bit stored by $z_p$.
3. Update the rest of the bits $z_q$, $q \neq p$.
4. Multiply by the parity $\sum_{q=0}^{p-1} n_q$.
Under the JWT, Steps 1, 2, and 3 are represented by the operator $(\lvert{0}\rangle\langle{1}\rvert)_p$ and Step 4 is accomplished by the operator $Z_{0} \cdots Z_{p-1}$ (Step 3 actually requires no action).
Under the parity transform, Steps 1, 2, and 4 are represented by the operator
$(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{0}\rangle\langle{0}\rvert)_{p - 1} -
(\lvert{0}\rangle\langle{1}\rvert)_p (\lvert{1}\rangle\langle{1}\rvert)_{p - 1}$ and Step 3 is accomplished by the operator $X_{p+1} \cdots X_{N-1}$.
To obtain a simpler description of these and other transforms (with an aim at generalizing), it is better to put aside the ladder operators and work with an alternative set of $2N$ operators defined by
$$c_p = a_p + a_p^\dagger\,, \qquad d_p = -\mathrm{i} (a_p - a_p^\dagger)\,.$$
These operators are known as Majorana operators. Note that if we describe how Majorana operators should be transformed, then we also know how the annihilation operators should be transformed, since
$$a_p = \frac{1}{2} (c_p + \mathrm{i} d_p).$$
For simplicity, let's consider just the $c_p$; the $d_p$ are treated similarly. The action of $c_p$ on a fermionic basis vector is given by
$$c_p \lvert n_0, \ldots, n_{p-1}, n_p, n_{p+1}, \ldots, n_{N-1} \rangle =
(-1)^{\sum_{q=0}^{p-1} n_q} \lvert n_0, \ldots, n_{p-1}, 1 - n_p, n_{p+1}, \ldots, n_{N-1} \rangle$$
In words, $c_p$ flips the occupation of mode $p$ and multiplies by the ever-present parity factor. If we transform $c_p$ to a qubit operator $\tilde{c}_p$, we should be able to describe the action of $\tilde{c}_p$ on a computational basis vector $\lvert z \rangle$ with a 2-step classical algorithm:
1. Update the string $z$ to a new string $z'$.
2. Multiply by the parity $\sum_{q=0}^{p-1} n_q$.
Step 1 amounts to flipping some bits, so it will be performed by some Pauli $X$'s, and Step 2 will be performed by some Pauli $Z$'s. So $\tilde{c}_p$ should take the form
$$\tilde{c}_p = X_{U(p)} Z_{P(p - 1)},$$
where $U(j)$ is the set of bits that need to be updated upon flipping $n_j$, and $P(j)$ is a set of bits that stores the sum $\sum_{q=0}^{j} n_q$ (let's define $P(-1)$ to be the empty set). Let's see how this looks under the JWT and parity transforms.
End of explanation
c_5_bk = bravyi_kitaev(c_5, n_modes)
print("c_5_bk = \n{}".format(c_5_bk))
Explanation: For the JWT, $U(j) = \{j\}$ and $P(j) = \{0, \ldots, j\}$, whereas for the parity transform, $U(j) = \{j, \ldots, N-1\}$ and $P(j) = \{j\}$. The size of these sets can be as large as $N$, the total number of modes. These sets are determined by the encoding function $e$.
It is possible to pick a clever encoder with the property that these sets have size $O(\log N)$. The corresponding transform will map annihilation operators to qubit operators with weight $O(\log N)$, which is much smaller than the $\Omega(N)$ weight associated with the JWT and parity transforms. This fact was noticed by Bravyi and Kitaev, and later Havlíček and others pointed out that the encoder which achieves this is implemented by a classical data structure called a Fenwick tree. The transforms described in these two papers actually correspond to different variants of the Fenwick tree data structure and give different results when the total number of modes is not a power of 2. OpenFermion implements the one from the first paper as bravyi_kitaev and the one from the second paper as bravyi_kitaev_tree. Generally, the first one (bravyi_kitaev) is preferred because it results in operators with lower weight and is faster to compute.
Let's transform our previously instantiated Majorana operator using the Bravyi-Kitaev transform.
End of explanation
n_modes = 100
# Initialize some Majorana operators
c_17 = FermionOperator('[17] + [17^]')
c_50 = FermionOperator('[50] + [50^]')
c_73 = FermionOperator('[73] + [73^]')
# Map to QubitOperators
c_17_jw = jordan_wigner(c_17)
c_50_jw = jordan_wigner(c_50)
c_73_jw = jordan_wigner(c_73)
c_17_parity = parity(c_17, n_modes)
c_50_parity = parity(c_50, n_modes)
c_73_parity = parity(c_73, n_modes)
c_17_bk = bravyi_kitaev(c_17, n_modes)
c_50_bk = bravyi_kitaev(c_50, n_modes)
c_73_bk = bravyi_kitaev(c_73, n_modes)
# Print some output
print("Jordan-Wigner\n"
"-------------")
print("c_17_jw = \n{}".format(c_17_jw))
print('')
print("c_50_jw = \n{}".format(c_50_jw))
print('')
print("c_73_jw = \n{}".format(c_73_jw))
print('')
print("Parity\n"
"------")
print("c_17_parity = \n{}".format(c_17_parity))
print('')
print("c_50_parity = \n{}".format(c_50_parity))
print('')
print("c_73_parity = \n{}".format(c_73_parity))
print('')
print("Bravyi-Kitaev\n"
"-------------")
print("c_17_bk = \n{}".format(c_17_bk))
print('')
print("c_50_bk = \n{}".format(c_50_bk))
print('')
print("c_73_bk = \n{}".format(c_73_bk))
Explanation: The advantage of the Bravyi-Kitaev transform is not apparent in a system with so few modes. Let's look at a system with 100 modes.
End of explanation
# Set the number of modes in the system
n_modes = 10
# Map FermionOperators to QubitOperators using the Bravyi-Kitaev transform
annihilate_2_bk = bravyi_kitaev(annihilate_2, n_modes)
create_2_bk = bravyi_kitaev(create_2, n_modes)
annihilate_5_bk = bravyi_kitaev(annihilate_5, n_modes)
create_5_bk = bravyi_kitaev(create_5, n_modes)
num_2_bk = bravyi_kitaev(num_2, n_modes)
num_5_bk = bravyi_kitaev(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_bk, annihilate_2_bk) == zero
assert anticommutator(annihilate_5_bk, annihilate_5_bk) == zero
assert anticommutator(annihilate_5_bk, create_2_bk) == zero
assert anticommutator(annihilate_5_bk, create_5_bk) == identity
# Check that the occupation number operators commute
assert commutator(num_2_bk, num_5_bk) == zero
# Print some output
print("annihilate_2_bk = \n{}".format(annihilate_2_bk))
print('')
print("create_2_bk = \n{}".format(create_2_bk))
print('')
print("annihilate_5_bk = \n{}".format(annihilate_5_bk))
print('')
print("create_5_bk = \n{}".format(create_5_bk))
print('')
print("num_2_bk = \n{}".format(num_2_bk))
print('')
print("num_5_bk = \n{}".format(num_5_bk))
Explanation: Now let's go back to a system with 10 modes and check that the Bravyi-Kitaev transformed operators satisfy the expected relations.
End of explanation |
11,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Forward and Backward mode gradients in TFF
<table class="tfo-notebook-buttons" align="left">
<a target="_blank" href="https
Step2: This notebook demonstrates the difference between forward and backward gradient computation
Step3: Consider a simple vector-function in two variables $x$ and $y$
Step4: Backward mode
For a vector $u = [u_1, u_2, u_3]$, backward gradient computes partial derivatives of the dot product $u \cdot f(x, y)$
$
\begin{align}
\frac {\partial (u \cdot f)}{\partial x} &= u_1 \frac{\partial f_1}{\partial x} + u_2 \frac{\partial f_2}{\partial x} + u_3 \frac{\partial f_3}{\partial x} \ \
&= 2 u_1 x + u_3 y\ \
\frac {\partial (u \cdot f)}{\partial y} &= u_1 \frac{\partial f_1}{\partial y} + u_2 \frac{\partial f_2}{\partial y} + u_3 \frac{\partial f_3}{\partial y} \ \
&= 2 u_2 y + u_3 x
\end{align}
$
In Tensorflow, [$u_1$, $u_2$, $u_3$] is by default set to [1, 1, 1].
Setting [$x$, $y$] to [1, 2], backward mode returns the gradients summed up by components
Step5: The user has access to [$u_1$, $u_2$, $u_3$] as well. Setting the values
to [0, 0, 1] leads to the gradient $[\frac{\partial f_3}{\partial x}, \frac{\partial f_3}{\partial y}]$
Step6: Forward mode
TFF provides an opportunity to compute a forward gradient as well.
For a vector $w = [w_1, w_2]$, forward gradient computes differentials for $[f_1, f_2, f_3]$
$
\begin{align}
{\partial f_1} &= w_1 \frac{\partial f_1}{\partial x} + w_2 \frac{\partial f_1}{\partial y} \ \
&= 2 w_1 x \ \
{\partial f_2} &= w_1 \frac{\partial f_2}{\partial x} + w_2 \frac{\partial f_2}{\partial y} \ \
&= 2 w_2 y \ \
{\partial f_3} &= w_1 \frac{\partial f_3}{\partial x} + w_2 \frac{\partial f_3}{\partial y} \ \
&= w_1 y + w_2 x \ \
\end{align}
$
In TFF, [$w_1$, $w_2$] is by default set to [1, 1]. Setting [$x$, $y$] to [1, 2], forward mode returns the differentials by components.
Step7: Remember, Tensorflow is the tool commonly used in Machine Learning. In Machine Learning, the aim is to minimize a scalar loss function that is summed over all training examples, and training needs the gradient of that single scalar with respect to the whole feature/parameter set. That is exactly the computation the backward-mode gradient performs efficiently.
However, let's take the use case where we are valuing a set of options, say ten, against a single spot price $S_0$. We now have ten price functions and we need their gradients against spot $S_0$ (ten deltas).
Using the forward gradients with respect to $S_0$ would give us the ten deltas in a single pass.
Using the backward gradients would result in the sum of the ten deltas, which may not be that useful.
It is useful to note that varying the weights would also give you individual components of the gradients (in other words [1, 0] and [0, 1] as values of [$w_1$, $w_2$], instead of the default [1, 1], similarly for backward. This is, of course, at the expense of more compute. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Upgrade to TensorFlow nightly
!pip install --upgrade tf-nightly
#@title Install TF Quant Finance
!pip install tff-nightly
Explanation: Forward and Backward mode gradients in TFF
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Forward_Backward_Diff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Forward_Backward_Diff.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
End of explanation
#@title Imports { display-mode: "form" }
import tensorflow as tf
import functools
import tf_quant_finance as tff
Explanation: This notebook demonstrates the difference between forward and backward gradient computation
End of explanation
def func(x):
func = tf.stack([x[0]**2, x[1]**2, x[0] * x[1]])
return func
start = tf.constant([1,2], dtype=tf.float64)
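# Illustrative aside (assumption, not part of the original notebook): evaluating the
# function at (x, y) = (1, 2) gives [1., 4., 2.].
print(func(start))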
Explanation: Consider a simple vector-function in two variables $x$ and $y$:
$
\begin{align}
& f = [f_1, f_2, f_3] \
& where \
\end{align}
$
$
\begin{align}
f_1 &= x^2 \
f_2 &= y^2 \
f_3 &= x y \
\end{align}
$
End of explanation
# Note that the output is u d(u.f(x, y))dx and d(u.f(x, y))dy
tff.math.gradients(func, start)
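# Illustrative cross-check (assumption, not part of the original notebook): the same
# summed backward gradient via a plain tf.GradientTape, reducing the vector output to a
# scalar (the default u = [1, 1, 1] corresponds to summing the components).
with tf.GradientTape() as tape:
    tape.watch(start)
    value = tf.reduce_sum(func(start))
print(tape.gradient(value, start))  # expected [4., 5.] at (x, y) = (1, 2)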
Explanation: Backward mode
For a vector $u = [u_1, u_2, u_3]$, backward gradient computes partial derivatives of the dot product $u \cdot f(x, y)$
$
\begin{align}
\frac {\partial (u \cdot f)}{\partial x} &= u_1 \frac{\partial f_1}{\partial x} + u_2 \frac{\partial f_2}{\partial x} + u_3 \frac{\partial f_3}{\partial x} \ \
&= 2 u_1 x + u_3 y\ \
\frac {\partial (u \cdot f)}{\partial y} &= u_1 \frac{\partial f_1}{\partial y} + u_2 \frac{\partial f_2}{\partial y} + u_3 \frac{\partial f_3}{\partial y} \ \
&= 2 u_2 y + u_3 x
\end{align}
$
In Tensorflow, [$u_1$, $u_2$, $u_3$] is by default set to [1, 1, 1].
Setting [$x$, $y$] to [1, 2], backward mode returns the gradients summed up by components
End of explanation
tff.math.gradients(func, start,
output_gradients=tf.constant([0, 0, 1], dtype=tf.float64))
Explanation: The user has access to [$u_1$, $u_2$, $u_3$] as well. Setting the values
to [0, 0, 1] leads to the gradient $[\frac{\partial f_3}{\partial x}, \frac{\partial f_3}{\partial y}]$
End of explanation
tff.math.fwd_gradient(func, start)
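# Illustrative cross-check (assumption, not part of the original notebook): TensorFlow's
# ForwardAccumulator computes the same Jacobian-vector product with tangent w = [1, 1].
with tf.autodiff.ForwardAccumulator(primals=start, tangents=tf.ones_like(start)) as acc:
    value = func(start)
print(acc.jvp(value))  # expected [2., 4., 3.] at (x, y) = (1, 2)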
Explanation: Forward mode
TFF provides an opportunity to compute a forward gradient as well.
For a vector $w = [w_1, w_2]$, forward gradient computes differentials for $[f_1, f_2, f_3]$
$
\begin{align}
{\partial f_1} &= w_1 \frac{\partial f_1}{\partial x} + w_2 \frac{\partial f_1}{\partial y} \ \
&= 2 w_1 x \ \
{\partial f_2} &= w_1 \frac{\partial f_2}{\partial x} + w_2 \frac{\partial f_2}{\partial y} \ \
&= 2 w_2 y \ \
{\partial f_3} &= w_1 \frac{\partial f_3}{\partial x} + w_2 \frac{\partial f_3}{\partial y} \ \
&= w_1 y + w_2 x \ \
\end{align}
$
In TFF, [$w_1$, $w_2$] is by default set to [1, 1]. Setting [$x$, $y$] to [1, 2], forward mode returns the differentials by components.
End of explanation
tff.math.fwd_gradient(func, start,
input_gradients=tf.constant([1.0, 0.0], dtype=tf.float64))
tff.math.fwd_gradient(func, start,
input_gradients=tf.constant([0.0, 0.1], dtype=tf.float64))
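# Illustrative sketch (assumption, not from the original notebook): several payoff-like
# functions of a single spot S0; one forward-mode pass returns every sensitivity at once.
spot = tf.constant([100.0], dtype=tf.float64)
def toy_prices(s):
    return tf.stack([s[0] ** 2, 3.0 * s[0], tf.math.log(s[0])])
print(tff.math.fwd_gradient(toy_prices, spot))  # d(price_i)/dS0 for every i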
Explanation: Remember, Tensorflow is the tool commonly used in Machine Learning. In Machine Learning, the aim is to minimize a scalar loss function that is summed over all training examples, and training needs the gradient of that single scalar with respect to the whole feature/parameter set. That is exactly the computation the backward-mode gradient performs efficiently.
However, let's take the use case where we are valuing a set of options, say ten, against a single spot price $S_0$. We now have ten price functions and we need their gradients against spot $S_0$ (ten deltas).
Using the forward gradients with respect to $S_0$ would give us the ten deltas in a single pass.
Using the backward gradients would result in the sum of the ten deltas, which may not be that useful.
It is useful to note that varying the weights would also give you the individual components of the gradients (in other words, [1, 0] and [0, 1] as values of [$w_1$, $w_2$] instead of the default [1, 1]; similarly for backward). This is, of course, at the expense of more compute.
End of explanation |
11,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Dear professor Denoyer...
Warning
This is an early version of our entry for the Kaggle challenge
It's still very messy and we send it because we forgot that we had to submit our progress step by step...
To summarize our goal, we plan to use a RNN to take advantage of the sequential data
Step1: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Reduced to
10.000
5.000
Step2: Check for outlier values in the labels
Step3: Get rid of Nan value for now
Step4: Forums indicate that a higher than 1m rainfall is probably an error. Which is quite understandable. We filter that out
Step5:
Step6:
Step8: Memento (mauri)
Step9: Submit
Step12: RNN | Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer # get rid of nan
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Dear professor Denoyer...
Warning
This is an early version of our entry for the Kaggle challenge
It's still very messy and we send it because we forgot that we had to submit our progress step by step...
To summarize our goal, we plan to use a RNN to take advantage of the sequential data
End of explanation
%%time
#filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_test_5000.csv"
filename = "data/reduced_train_100000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
#train = train.dropna()
l = float(len(raw["minutes_past"]))
comp = []
for i in raw.columns:
#print(raw"%.03f, %s"%(1-train[i].isnull().sum()/l , i) )
comp.append([1-raw[i].isnull().sum()/l , i])
comp.sort(key=lambda x: x[0], reverse=True)
comp
raw = raw.dropna()
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Reduced to
10.000
5.000
End of explanation
raw.head()
raw["Expected"].describe()
Explanation: Check for outlier values in the labels
End of explanation
#train_clean = train[[not i for i in np.isnan(train["Ref_5x5_10th"])]]
Explanation: Get rid of Nan value for now
End of explanation
raw = raw[raw['Expected'] < 1000]
raw['Expected'].describe()
split = 0.2
train = raw.tail(int(len(raw)*1-split))
test = raw.tail(int(len(raw)*split))
Explanation: Forums indicate that a higher than 1m rainfall is probably an error. Which is quite understandable. We filter that out
End of explanation
#columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
# u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
# u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
# u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
# u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
# u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
# u'Kdp_5x5_50th', u'Kdp_5x5_90th']
#columns = [u'radardist_km', u'Ref', u'Ref_5x5_10th']
columns = [ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
nb_features = len(columns)
data = raw[list(columns)]
data.head(5)
data.head(20)
%%time
#max_padding = 20
docX, docY = [], []
for i in raw.index.unique():
if isinstance(raw.loc[i],pd.core.series.Series):
m = [raw.loc[i].as_matrix()]
#pad = np.pad(m, ((max_padding -len(m), 0),(0,0)), 'constant') # pre-padding
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
#pad = np.pad(m, ((max_padding -len(m), 0),(0,0)), 'constant')
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
#docY.append(train.loc[i][:1]["Expected"].as_matrix)
X = np.array(docX)
y = np.array(docY)
np.shape(X)
XX = [np.array(t).mean(0) for t in X]
np.shape(XX)
XX[0]
global_means = np.nanmean(data,0)
#global_means = data.mean(0).values
a = [
[1,2,np.nan],
[3,4,np.nan],
[2,np.nan,np.nan]
]
n = np.nanmean(a,0)
[np.isnan(i) for i in n]
n
np.count_nonzero(~np.isnan(X[0])) / float(X[0].size)
t = []
for i in X:
t.append(np.count_nonzero(~np.isnan(i)) / float(i.size))
pd.DataFrame(np.array(t)).describe()
XX = []
for i in X:
nm = np.nanmean(i,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(np.array(nm))
XX = [np.array(t).mean(0) for t in X]
split = 0.2
ps = int(len(XX) * (1-split))
X_train = XX[:ps]
y_train = y[:ps]
X_test = XX[ps:]
y_test = y[ps:]
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)
y_train[0]
#%%time
#etreg = etreg.fit(X_train,y_train)
etreg = etreg.fit(XX,y)
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Features: %s\nScore: %s\tMean: %.03f"%(columns, et_score,et_score.mean()))
pred = etreg.predict(X_test)
#pred = len(XX)
for idx,i in enumerate(X_test):
if (np.count_nonzero(~np.isnan(i)) / float(i.size)) < 0.7 :
pred[idx]=0.2
pred[1]
err = (pred-y_test)**2
err.sum()/len(err)
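# Illustrative aside (assumption, not from the original notebook): the competition is
# scored with mean absolute error, so it is worth tracking alongside the squared error.
np.abs(pred - y_test).mean()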
r = random.randrange(len(pred))
print(r)
print(pred[r])
print(y_test[r])
Explanation:
End of explanation
def marshall_palmer(ref, minutes_past):
#print("Estimating rainfall from {0} observations".format(len(minutes_past)))
# how long is each observation valid?
valid_time = np.zeros_like(minutes_past)
valid_time[0] = minutes_past.iloc[0]
for n in xrange(1, len(minutes_past)):
valid_time[n] = minutes_past.iloc[n] - minutes_past.iloc[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
# sum up rainrate * validtime
sum = 0
for dbz, hours in zip(ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
return sum
def simplesum(ref,hour):
hour.sum()
# each unique Id is an hour of data at some gauge
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour['Ref'], hour['minutes_past'])
return est
estimates = raw.groupby(raw.index).apply(myfunc)
estimates.head(20)
err = (estimates-(np.hstack((y_train,y_test))))**2
err.sum()/len(err)
Explanation:
End of explanation
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)
columns = train_clean.columns
columns = ["minutes_past","radardist_km","Ref","Ref_5x5_10th", "Ref_5x5_50th"]
columns = [u'Id', u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th', u'Expected']
columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
labels = train["Expected"].values
features = train[list(columns)].values
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(features)
features_trans = imp.transform(features)
len(features_trans)
split = 0.2
ps = int(len(features_trans) * split)
ftrain = features_trans[:ps]
ltrain = labels[:ps]
ftest = features_trans[ps:]
ltest = labels[ps:]
%%time
etreg.fit(ftrain,ltrain)
def scorer(estimator, X, y):
return (estimator.predict(X[0])-y)**2
%%time
et_score = cross_val_score(etreg, features_trans, labels, cv=3)
print("Features: %s\nScore: %s\tMean: %.03f"%(columns, et_score,et_score.mean()))
r = random.randrange(len(ltrain))
print(r)
print(etreg.predict(ftrain[r]))
print(ltrain[r])
r = random.randrange(len(ltest))
print(r)
print(etreg.predict(ftest[r]))
print(ltest[r])
err = (etreg.predict(ftest)-ltest)**2
err.sum()/len(err)
Explanation: Memento (mauri)
End of explanation
filename = "data/reduced_test_5000.csv"
test = pd.read_csv(filename)
columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
features = test[list(columns)].values
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(features)
features_trans = imp.transform(features)
fall = test[test.columns].values
fall[20]
features_trans[0]
i = 1
pred = 0
while fall[i][0] == 1:
#print(fall[i])
pred+=etreg.predict(features_trans[i])[0]
#print(etreg.predict(features_trans[i])[0])
i+=1
print(i)
fall[-1][0]
%%time
res=[]
i=0
while i<len(fall) and i < 10000:
pred = 0
lenn = 0
curr=fall[i][0]
while i<len(fall) and fall[i][0] == curr:
#print(fall[i])
pred+=etreg.predict(features_trans[i])[0]
#print(etreg.predict(features_trans[i])[0])
i+=1
lenn += 1
res.append((curr,pred/lenn))
#i+=1
#print(i)
len(res)
res[:10]
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
#est = (hour['Id'],random.random())
est = random.random()
return est
def marshall_palmer(ref, minutes_past):
#print("Estimating rainfall from {0} observations".format(len(minutes_past)))
# how long is each observation valid?
valid_time = np.zeros_like(minutes_past)
valid_time[0] = minutes_past.iloc[0]
for n in xrange(1, len(minutes_past)):
valid_time[n] = minutes_past.iloc[n] - minutes_past.iloc[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
# sum up rainrate * validtime
sum = 0
for dbz, hours in zip(ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
return sum
def simplesum(ref,hour):
hour.sum()
# each unique Id is an hour of data at some gauge
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour['Ref'], hour['minutes_past'])
return est
estimates = test.groupby(train.index).apply(myfunc)
estimates.head(20)
estimates = train.groupby(train.index).apply(myfunc)
estimates.head(20)
train["Expected"].head(20)
print(features_trans[0])
print(etreg.predict(features_trans[0]))
def marshall_palmer(data):
res=[]
for n in data:
res.append(etreg.predict(n)[0])
return np.array(res).mean()
def simplesum(ref,hour):
hour.sum()
def myfunc(hour):
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour[train.columns])
return est
estimates = train_clean.groupby(train_clean.index).apply(myfunc)
estimates.head(20)
Explanation: Submit
End of explanation
import pandas as pd
from random import random
flow = (list(range(1,10,1)) + list(range(10,1,-1)))*1000
pdata = pd.DataFrame({"a":flow, "b":flow})
pdata.b = pdata.b.shift(9)
data = pdata.iloc[10:] * random() # some noise
#columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
# u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
# u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
# u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
# u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
# u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
# u'Kdp_5x5_50th', u'Kdp_5x5_90th']
columns = [u'radardist_km', u'Ref', u'Ref_5x5_10th']
nb_features = len(columns)
data = train[list(columns)]
data.head(10)
data.iloc[0].as_matrix()
train.head(5)
train.loc[11]
train.loc[11][:1]["Expected"].as_matrix
#train.index.unique()
def _load_data(data, n_prev = 100):
    """data should be pd.DataFrame()"""
docX, docY = [], []
for i in range(len(data)-n_prev):
docX.append(data.iloc[i:i+n_prev].as_matrix())
docY.append(data.iloc[i+n_prev].as_matrix())
alsX = np.array(docX)
alsY = np.array(docY)
return alsX, alsY
def train_test_split(df, test_size=0.1):
ntrn = round(len(df) * (1 - test_size))
X_train, y_train = _load_data(df.iloc[0:ntrn])
X_test, y_test = _load_data(df.iloc[ntrn:])
return (X_train, y_train), (X_test, y_test)
(X_train, y_train), (X_test, y_test) = train_test_split(data)
np.shape(X_train)
t = np.array([2,1])
t.shape = (1,2)
t.tolist()[0]
np.shape(t)
X_train[:2,:2]
train.index.unique()
max_padding = 20
%%time
docX, docY = [], []
for i in train.index.unique():
if isinstance(train.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
pad = np.pad(m, ((max_padding -len(m), 0),(0,0)), 'constant') # pre-padding
docX.append(pad)
docY.append(float(train.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
pad = np.pad(m, ((max_padding -len(m), 0),(0,0)), 'constant')
docX.append(pad)
docY.append(float(train.loc[i][:1]["Expected"]))
#docY.append(train.loc[i][:1]["Expected"].as_matrix)
XX = np.array(docX)
yy = np.array(docY)
np.shape(XX)
XX[0].mean()
#from keras.preprocessing import sequence
#sequence.pad_sequences(X_train, maxlen=maxlen)
def _load_data(data):
    """data should be pd.DataFrame()"""
docX, docY = [], []
for i in data.index.unique():
#np.pad(tmp, ((0, max_padding -len(tmp) ),(0,0)), 'constant')
m = data.loc[i].as_matrix()
pad = np.pad(m, ((0, max_padding -len(m) ),(0,0)), 'constant')
docX.append(pad)
if isinstance(train.loc[i],pd.core.series.Series):
docY.append(float(train.loc[i]["Expected"]))
else:
docY.append(float(train.loc[i][:1]["Expected"]))
alsX = np.array(docX)
alsY = np.array(docY)
return alsX, alsY
def train_test_split(df, test_size=0.1):
ntrn = round(len(df) * (1 - test_size))
X_train, y_train = _load_data(df.iloc[0:ntrn])
X_test, y_test = _load_data(df.iloc[ntrn:])
return (X_train, y_train), (X_test, y_test)
(X_train, y_train), (X_test, y_test) = train_test_split(train)
len(X_train[0])
train.head()
X_train[0][:10]
yt = []
for i in y_train:
yt.append([i[0]])
yt[0]
X_train.shape
len(fea[0])
len(X_train[0][0])
f = np.array(fea)
f.shape()
#(X_train, y_train), (X_test, y_test) = train_test_split(data) # retrieve data
# and now train the model
# batch_size should be appropriate to your memory size
# number of epochs should be higher for real world problems
model.fit(X_train, yt, batch_size=450, nb_epoch=2, validation_split=0.05)
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
%%time
input_dim = nb_features
out_dim = 1
hidden_dim = 200
model = Sequential()
#Embedding(input_dim, hidden_dim, mask_zero=True)
#model.add(LSTM(hidden_dim, hidden_dim, return_sequences=False))
model.add(LSTM(input_dim, hidden_dim, return_sequences=False))
model.add(Dropout(0.5))
model.add(Dense(hidden_dim, out_dim))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
model.fit(XX, yy, batch_size=10, nb_epoch=10, validation_split=0.1)
test = random.randint(0,len(XX))
print(model.predict(XX[test:test+1])[0][0])
print(yy[test])
Explanation: RNN
End of explanation |
11,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Train Models
Train a logistic regression model with the engineered features
Including LDA-based topic similarity, sentence position, sentence length, and readability metrics, I trained a logistic regression model that can be applied to new sentences to predict whether they should be highlighted or not. I used logistic regression because this is a binary classification problem and because features weights can later be inspected to get an idea of their importance.
I also tested a couple of methods (synthetic oversampling [SMOTE] and undersampling) from the imblearn library to account for the imbalanced dataset (~2% highlighted vs ~98% non-highlighted sentences). These gave similar highlight sensitivity results and slightly more balanced precision and f1-scores than the logistic regression with sklearn's automatic class weight rebalancing.
Finally, I tested a random forest model for classification. This approach led to the non-highlighted samples dominating the prediction (despite balancing class weights in sklearn), such that highlight recall was very low.
Thus, the final model that is applied in the backend of the skimr web app is a logistic regression model.
Step2: Load dictionary and data
Step3: CONVERT list of paragraphs in 'text' column into string containing all text
Step4: DELETE HIGHLIGHTS FROM FULLTEXT SENTENCES
Step5: Save set_tr
Step6: LOAD set_tr
Step7: Create 'dataset' for further analysis (and pickle)
Step8: LOAD LDA vectors for articles
Step9: Convert lda topic tuple output to a vector
Step10: Clean text for LDA
Step11: test if all highlights are in main texts
Step12: Function to calculate position of sentence within article (frac of sentences into text)
Step13: Calculate values for logistic regression features
for each id (corresponds to a highlight and a full-text),
tokenize highlight into sentences
tokenize full-text w/o highlights into non-highlighted sentences
for each sentence in highlight and full-text,
calculate
Step14: Save article ids and highlight-or-not label after analyzing for FRE and position (without length and LDA)
Step15: Save article ids and highlight-or-not label after analyzing for length and LDA
Step16: Put into pandas dataframe to submit to logistic regression
Step17: Describe statistical model with patsy
Step18: Split into train and test sets
Step19: Include data preprocessing to scale all features! (i.e. calculate z-scores)
Step20: save/load model
Step21: Evaluate logistic regression model
Step22: Examine feature weights (feature coefficients)
Step23: ROC curve and evaluation metrics
Step24: Evaluate the model using 10-fold cross-validation
Step25: 10-fold cross-validation gives consistent results (0.61 ± 0.03 highlight recall)
Step26: Inspect distribution of class probabilities
Step27: Construct validation plots -- roc type curve for decision function value (confidence score)
Before computing the ROC curves shown above, I created a separate cumulative density function (CDF) plot by sorting the confidence scores of every sentence in the dataset by rank, then plotting the cumulative highlight recovery going down the ranks. This gave a similar result to the ROC curves, since moving along the x-axis of this CDF plot is similar to adjusting the confidence score threshold.
Step28: Define a function to draw CDF plots
Step29: Test undersampling from imblearn for balancing class weights
Step30: Generate CDF plots (see "Construct validation plots" section above)
Step31: Test SMOTE (synthetic oversampling) from imblearn for balancing class weights
Step32: Generate CDF plots (see "Construct validation plots" section above)
Step33: Conclusion from imblearn tests
I tested applying undersampling and synthetic oversampling (through the imblearn library) to the logistic regression and found that these methods achieved similar highlight recall (sensitivity, TP/(TP+FN)) but with more balanced precision and f1-scores between the highlighted and non-highlighted classes.
TRY RANDOM FOREST with current features
I wanted to see whether a random forest model would be better than the logistic regression at classifying highlights. This was not my preferred method, since random forest models are generally less interpretable than logistic regression.
Here, I found that a random forest model attains a deceptively high overall accuracy, because it predicts almost all sentences to be non-highlighted, allowing the much bigger non-highlighted portion of the samples to overwhelm the prediction. Thus, I decided to stick with the logistic regression model. | Python Code:
import matplotlib.pyplot as plt
import csv
from textblob import TextBlob, Word
import pandas as pd
import sklearn
import pickle
import numpy as np
import scipy
from scipy import spatial
import nltk.data
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC, LinearSVC
from sklearn.metrics import classification_report, f1_score, accuracy_score, confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import learning_curve, GridSearchCV, StratifiedKFold, cross_val_score, train_test_split
sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
from nltk.tokenize import RegexpTokenizer
word_tokenizer = RegexpTokenizer('\s+', gaps=True)
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics.pairwise import cosine_similarity
import imblearn
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE
from collections import Counter
from stop_words import get_stop_words
stop_en = get_stop_words('en')
from nltk.stem.porter import PorterStemmer
p_stemmer = PorterStemmer()
en_words = set(nltk.corpus.words.words())
from gensim import corpora, models
import gensim
import timeit
import re
import string
from string import whitespace, punctuation
from nltk.corpus import stopwords
stopw_en = stopwords.words('english')
all_stopw = set(stopw_en) | set(stop_en)
print(len(all_stopw))
import math
from utils import get_char_count
from utils import get_words
from utils import get_sentences
from utils import count_syllables
from utils import count_complex_words
class Readability:
analyzedVars = {}
def __init__(self, text):
self.analyze_text(text)
def analyze_text(self, text):
words = get_words(text)
char_count = get_char_count(words)
word_count = len(words)
sentence_count = len(get_sentences(text))
syllable_count = count_syllables(words)
complexwords_count = count_complex_words(text)
avg_words_p_sentence = word_count/sentence_count
self.analyzedVars = {
'words': words,
'char_cnt': float(char_count),
'word_cnt': float(word_count),
'sentence_cnt': float(sentence_count),
'syllable_cnt': float(syllable_count),
'complex_word_cnt': float(complexwords_count),
'avg_words_p_sentence': float(avg_words_p_sentence)
}
def ARI(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = 4.71 * (self.analyzedVars['char_cnt'] / self.analyzedVars['word_cnt']) + 0.5 * (self.analyzedVars['word_cnt'] / self.analyzedVars['sentence_cnt']) - 21.43
return score
def FleschReadingEase(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = 206.835 - (1.015 * (self.analyzedVars['avg_words_p_sentence'])) - (84.6 * (self.analyzedVars['syllable_cnt']/ self.analyzedVars['word_cnt']))
return round(score, 4)
def FleschKincaidGradeLevel(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = 0.39 * (self.analyzedVars['avg_words_p_sentence']) + 11.8 * (self.analyzedVars['syllable_cnt']/ self.analyzedVars['word_cnt']) - 15.59
return round(score, 4)
def GunningFogIndex(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = 0.4 * ((self.analyzedVars['avg_words_p_sentence']) + (100 * (self.analyzedVars['complex_word_cnt']/self.analyzedVars['word_cnt'])))
return round(score, 4)
def SMOGIndex(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = (math.sqrt(self.analyzedVars['complex_word_cnt']*(30/self.analyzedVars['sentence_cnt'])) + 3)
return score
def ColemanLiauIndex(self):
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
score = (5.89*(self.analyzedVars['char_cnt']/self.analyzedVars['word_cnt']))-(30*(self.analyzedVars['sentence_cnt']/self.analyzedVars['word_cnt']))-15.8
return round(score, 4)
def LIX(self):
longwords = 0.0
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
for word in self.analyzedVars['words']:
if len(word) >= 7:
longwords += 1.0
score = self.analyzedVars['word_cnt'] / self.analyzedVars['sentence_cnt'] + float(100 * longwords) / self.analyzedVars['word_cnt']
return score
def RIX(self):
longwords = 0.0
score = 0.0
if self.analyzedVars['word_cnt'] > 0.0:
for word in self.analyzedVars['words']:
if len(word) >= 7:
longwords += 1.0
score = longwords / self.analyzedVars['sentence_cnt']
return score
if __name__ == "__main__":
    text = """We are close to wrapping up our 10 week Rails Course. This week we will cover a handful of topics commonly encountered in Rails projects. We then wrap up with part 2 of our Reddit on Rails exercise! By now you should be hard at work on your personal projects. The students in the course just presented in front of the class with some live demos and a brief intro to to the problems their app were solving. Maybe set aside some time this week to show someone your progress, block off 5 minutes and describe what goal you are working towards, the current state of the project (is it almost done, just getting started, needs UI, etc.), and then show them a quick demo of the app. Explain what type of feedback you are looking for (conceptual, design, usability, etc.) and see what they have to say. As we are wrapping up the course you need to be focused on learning as much as you can, but also making sure you have the tools to succeed after the class is over."""
rd = Readability(text)
# testing readability
rd = Readability('We are close to wrapping up our 10 week Rails Course. This week we will cover a handful of topics commonly encountered in Rails projects. We then wrap up with part 2 of our Reddit on Rails exercise! By now you should be hard at work on your personal projects. The students in the course just presented in front of the class with some live demos and a brief intro to to the problems their app were solving. Maybe set aside some time this week to show someone your progress, block off 5 minutes and describe what goal you are working towards, the current state of the project (is it almost done, just getting started, needs UI, etc.), and then show them a quick demo of the app. Explain what type of feedback you are looking for (conceptual, design, usability, etc.) and see what they have to say. As we are wrapping up the course you need to be focused on learning as much as you can, but also making sure you have the tools to succeed after the class is over.')
print(rd.ARI())
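# Illustrative aside (assumption, not part of the original notebook): the other scores
# exposed by the Readability class can be inspected the same way.
print(rd.FleschReadingEase())
print(rd.FleschKincaidGradeLevel())
print(rd.GunningFogIndex())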
Explanation: Train Models
Train a logistic regression model with the engineered features
Including LDA-based topic similarity, sentence position, sentence length, and readability metrics, I trained a logistic regression model that can be applied to new sentences to predict whether they should be highlighted or not. I used logistic regression because this is a binary classification problem and because features weights can later be inspected to get an idea of their importance.
I also tested a couple of methods (synthetic oversampling [SMOTE] and undersampling) from the imblearn library to account for the imbalanced dataset (~2% highlighted vs ~98% non-highlighted sentences). These gave similar highlight sensitivity results and slightly more balanced precision and f1-scores than the logistic regression with sklearn's automatic class weight rebalancing.
Finally, I tested a random forest model for classification. This approach led to the non-highlighted samples dominating the prediction (despite balancing class weights in sklearn), such that highlight recall was very low.
Thus, the final model that is applied in the backend of the skimr web app is a logistic regression model.
End of explanation
# dict_all = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/dict_all_new','rb'))
# data = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/data_pd_new')
# data.head()
Explanation: Load dictionary and data
End of explanation
set_tr = data
n = 0
sent_join = []
for i in set_tr['text']:
sent = str(' '.join(i))
sent_join.append(sent)
Explanation: CONVERT list of paragraphs in 'text' column into string containing all text
End of explanation
n = 0
fulls_noh = []
for i in set_tr['highlights']:
full = sent_join[n]
# print(full)
# print(i)
full_noh = full.replace(i,'.')
# print(full_noh)
fulls_noh.append(full_noh)
n+=1
set_tmp = set_tr
set_tr = pd.DataFrame({'ids':set_tmp['ids'], 'highlights':set_tmp['highlights'], 'text':set_tmp['text'], 'textwohighlight':fulls_noh})
print(len(set_tr['textwohighlight']))
Explanation: DELETE HIGHLIGHTS FROM FULLTEXT SENTENCES
End of explanation
# fset_tr = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/set_tr','wb')
# pickle.dump(set_tr, fset_tr)
Explanation: Save set_tr
End of explanation
set_tr = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/set_tr')
print(set_tr)
Explanation: LOAD set_tr
End of explanation
dataset = pd.DataFrame({'highlights':set_tr['highlights'], 'ids':set_tr['ids'], 'text':set_tr['text'], \
'textwohighlight':fulls_noh, 'textjoined':sent_join})
print(dataset)
dataset.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/dataset_hl_ids_txt_wohl_joined')
Explanation: Create 'dataset' for further analysis (and pickle)
End of explanation
ldamodel = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_10topic20pass','rb'))
print(ldamodel.print_topics( num_topics=10, num_words=5))
all_lda_vecs = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/all_lda_vecs','rb'))
print(all_lda_vecs[0])
commonwords_2 = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/commonwords2','rb'))
# wordlist = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/wordlist','rb')
dictionary = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/lda_dictionary_new2','rb'))
print(dictionary)
Explanation: LOAD LDA vectors for articles
End of explanation
def lda_to_vec(lda_input):
num_topics = 10
vec = [0]*num_topics
for i in lda_input:
col = i[0]
val = i[1]
vec[col] = val
return vec
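# Illustrative usage (assumption, not part of the original notebook): sparse gensim output
# such as [(2, 0.7), (5, 0.3)] becomes a dense 10-dimensional topic vector.
print(lda_to_vec([(2, 0.7), (5, 0.3)]))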
Explanation: Convert lda topic tuple output to a vector
End of explanation
def clean_text(sent):
# remove punctuation
translator = str.maketrans('', '', string.punctuation)
txt2 = re.sub(u'\u2014','',sent) # remove em dashes
txt3 = re.sub(r'\d+', '', txt2) # remove digits
txt4 = txt3.translate(translator) # remove punctuation
# split text into words
tokens = word_tokenizer.tokenize(txt4.lower())
# strip single and double quotes from ends of words
tokens_strip = [i.strip('”“’‘') for i in tokens]
# keep only english words
tokens_en = [i for i in tokens_strip if i in en_words]
# remove nltk/stop_word stop words
nostop_tokens = [i for i in tokens_en if not i in all_stopw]
# strip single and double quotes from ends of words
nostop_strip = [i.strip('”“’‘') for i in nostop_tokens]
# stem words
stemmed = [p_stemmer.stem(i) for i in nostop_strip]
# strip single and double quotes from ends of words
stemmed_strip = [i.strip('”“’‘') for i in stemmed]
# stem words
stemmed2 = [p_stemmer.stem(i) for i in stemmed_strip]
# strip single and double quotes from ends of words
stemmed2_strip = [i.strip('”“’‘') for i in stemmed2]
# remove common words post-stemming
stemmed_nocommon = [i for i in stemmed2_strip if not i in commonwords_2]
return stemmed_nocommon
Explanation: Clean text for LDA
End of explanation
# for h in dataset['highlights']:
# # sentence is tokenized from highlight or full text
# # break text into sentences and get total sents in full text
# full_sents = sent_tokenizer.tokenize(text)
# num_sents = len(full_sents)
# # break text into words and get total words in full text
# full_words = word_tokenizer.tokenize(text)
# num_words = len(full_words)
# try:
# pos = text.index(sentence)
# # total words in full text before highlight position
# b4_words = word_tokenizer.tokenize(text[:pos])
# b4_wlen = len(b4_words)
# # sentences in full text before highlight position
# b4_sents = sent_tokenizer.tokenize(text[:pos])
# b4_slen = len(b4_sents)
# frc_w = b4_wlen / num_words
# frc_s = b4_slen / num_sents
# except ValueError:
# print('\nsentence not in text!\n')
# return frc_w, frc_s
Explanation: test if all highlights are in main texts
End of explanation
def sent_pos(sentence, text, idval):
# sentence is tokenized from highlight or full text
# remove 1-word sentences?
# break text into sentences and get total sents in full text
full_sents = sent_tokenizer.tokenize(text)
num_sents = len(full_sents)
# break text into words and get total words in full text
full_words = word_tokenizer.tokenize(text)
num_words = len(full_words)
# try:
pos = text.find(sentence)
if pos >= 0:
# total words in full text before highlight position
b4_words = word_tokenizer.tokenize(text[:pos])
b4_wlen = len(b4_words)
# sentences in full text before highlight position
b4_sents = sent_tokenizer.tokenize(text[:pos])
b4_slen = len(b4_sents)
frc_w = b4_wlen / num_words
frc_s = b4_slen / num_sents
elif pos < 0:
# print('\nsentence not in text!\n')
print(str(idval) + ' ' + str(sentence))
frc_w = -1
frc_s = -1
# except ValueError:
# print('\nvalueerror: sentence not in text!\n')
return frc_w, frc_s
Explanation: Function to calculate position of sentence within article (frac of sentences into text)
End of explanation
tic = timeit.default_timer()
# print(set_tr.head(10))
n = 0
articleids = []
hlorno = []
all_ARI = []
all_FRE = []
all_FKG = []
all_GFI = []
all_SMG = []
all_CLI = []
all_LIX = []
all_RIX = []
alllens = []
all_ldadists = []
all_wposes = []
all_sposes = []
h_ARI = []
h_FRE = []
h_FKG = []
h_GFI = []
h_SMG = []
h_CLI = []
h_LIX = []
h_RIX = []
hllens = []
h_ldadists = []
h_wposes = []
h_sposes = []
f_ARI = []
f_FRE = []
f_FKG = []
f_GFI = []
f_SMG = []
f_CLI = []
f_LIX = []
f_RIX = []
ftlens = []
f_ldadists = []
f_wposes = []
f_sposes = []
for i, row in dataset.iterrows():
idval = row.ids
if i%10 == 0:
print('analyzing row: '+str(i)+' ')
# Get topic vector for the whole article
lda_art = all_lda_vecs[i]
# lda_art = np.asarray(all_lda_vecs[i])
# print(lda_art)
hlsents = sent_tokenizer.tokenize(row.highlights)
for h in hlsents:
#
# get LDA metric
h_clean = clean_text(h)
h_corpus = dictionary.doc2bow(h_clean)
sent_lda = ldamodel[h_corpus]
vec_lda = lda_to_vec(sent_lda)
h_lda = 1-spatial.distance.cosine(vec_lda, lda_art)
np_vec_lda = np.asarray(vec_lda)
h_ldadists.append(h_lda)
all_ldadists.append(h_lda)
# get fraction position
h_wpos, h_spos = sent_pos(h, row.textjoined, idval)
# if n <= 5:
# print('wordpos: '+str(h_wpos))
# print('sentpos: '+str(h_spos))
h_wposes.append( float(h_wpos) )
h_sposes.append( float(h_spos) )
all_wposes.append( float(h_wpos) )
all_sposes.append( float(h_spos) )
# get length
hlwords = word_tokenizer.tokenize(h)
hllen = len(hlwords)
hllens.append(int(hllen))
alllens.append(int(hllen))
# get readability
h_rd = Readability(h)
h_ARI.append( float(h_rd.ARI()) )
h_FRE.append( float(h_rd.FleschReadingEase()) )
h_FKG.append( float(h_rd.FleschKincaidGradeLevel()) )
h_GFI.append( float(h_rd.GunningFogIndex()) )
h_SMG.append( float(h_rd.SMOGIndex()) )
h_CLI.append( float(h_rd.ColemanLiauIndex()) )
h_LIX.append( float(h_rd.LIX()) )
h_RIX.append( float(h_rd.RIX()) )
all_ARI.append( float(h_rd.ARI()) )
all_FRE.append( float(h_rd.FleschReadingEase()) )
all_FKG.append( float(h_rd.FleschKincaidGradeLevel()) )
all_GFI.append( float(h_rd.GunningFogIndex()) )
all_SMG.append( float(h_rd.SMOGIndex()) )
all_CLI.append( float(h_rd.ColemanLiauIndex()) )
all_LIX.append( float(h_rd.LIX()) )
all_RIX.append( float(h_rd.RIX()) )
# get label and id
articleids.append(int(i))
hlorno.append(1)
# count lengths of non-highlighted sentences
ftsents = sent_tokenizer.tokenize(row.textwohighlight)
for f in ftsents:
# get LDA metric
f_clean = clean_text(f)
f_corpus = dictionary.doc2bow(f_clean)
sent_lda = ldamodel[f_corpus]
vec_lda = lda_to_vec(sent_lda)
f_lda = 1-spatial.distance.cosine(vec_lda, lda_art)
np_vec_lda = np.asarray(vec_lda)
f_ldadists.append(f_lda)
all_ldadists.append(f_lda)
# get fraction position
f_wpos, f_spos = sent_pos(f[:-2], row.textjoined, idval)
# if n <= 5:
# print('wordpos: '+str(f_wpos))
# print('sentpos: '+str(f_spos))
f_wposes.append( float(f_wpos) )
f_sposes.append( float(f_spos) )
all_wposes.append( float(f_wpos) )
all_sposes.append( float(f_spos) )
# get length
ftwords = word_tokenizer.tokenize(f)
ftlen = len(ftwords)
ftlens.append(int(ftlen))
alllens.append(int(ftlen))
# get readability
f_rd = Readability(f)
f_ARI.append( float(f_rd.ARI()) )
f_FRE.append( float(f_rd.FleschReadingEase()) )
f_FKG.append( float(f_rd.FleschKincaidGradeLevel()) )
f_GFI.append( float(f_rd.GunningFogIndex()) )
f_SMG.append( float(f_rd.SMOGIndex()) )
f_CLI.append( float(f_rd.ColemanLiauIndex()) )
f_LIX.append( float(f_rd.LIX()) )
f_RIX.append( float(f_rd.RIX()) )
all_ARI.append( float(f_rd.ARI()) )
all_FRE.append( float(f_rd.FleschReadingEase()) )
all_FKG.append( float(f_rd.FleschKincaidGradeLevel()) )
all_GFI.append( float(f_rd.GunningFogIndex()) )
all_SMG.append( float(f_rd.SMOGIndex()) )
all_CLI.append( float(f_rd.ColemanLiauIndex()) )
all_LIX.append( float(f_rd.LIX()) )
all_RIX.append( float(f_rd.RIX()) )
# get label and id
articleids.append(int(i))
hlorno.append(0)
n += 1
# if n == 5:
# break
# print(len(articleids))
# print(len(hlorno))
# print(len(alllens))
# print(len(all_rds))
# print(len(all_ldadists))
# print(len(hllens))
# print(len(ftlens))
# print(len(h_rds))
# print(len(f_rds))
# print(len(h_ldadists))
# print(len(f_ldadists))
toc = timeit.default_timer()
print(str(toc - tic) + ' seconds elapsed')
# # time required for position + readability analysis:
# # 2985.0007762390014 seconds elapsed
# # 2746.6176740369992 seconds elapsed for just position
# # count number of sentences excluded from position analysis
# print(h_wposes.count(-1))
# print(h_sposes.count(-1))
# print(f_wposes.count(-1))
# print(f_sposes.count(-1))
# print(all_wposes.count(-1))
# print(all_sposes.count(-1))
# time required for length + LDA analysis:
# 398.4295387339953 seconds elapsed
plt.pie([276120,5211], explode=(0,0.2), labels=['non-highlighted (276120)','highlighted (5211)'], shadow=False, startangle=90)
plt.axis('equal')
plt.show()
analyzed_data_h = pd.DataFrame({ \
'h_wposes':h_wposes, \
'h_sposes':h_sposes, \
'h_ARI':h_ARI, \
'h_FRE':h_FRE, \
'h_FKG':h_FKG, \
'h_GFI':h_GFI, \
'h_SMG':h_SMG, \
'h_CLI':h_CLI, \
'h_LIX':h_LIX, \
'h_RIX':h_RIX, \
})
analyzed_data_all = pd.DataFrame({ \
'all_wposes':all_wposes, \
'all_sposes':all_sposes, \
'all_ARI':all_ARI, \
'all_FRE':all_FRE, \
'all_FKG':all_FKG, \
'all_GFI':all_GFI, \
'all_SMG':all_SMG, \
'all_CLI':all_CLI, \
'all_LIX':all_LIX, \
'all_RIX':all_RIX, \
})
analyzed_data_f = pd.DataFrame({ \
'f_wposes':f_wposes, \
'f_sposes':f_sposes, \
'f_ARI':f_ARI, \
'f_FRE':f_FRE, \
'f_FKG':f_FKG, \
'f_GFI':f_GFI, \
'f_SMG':f_SMG, \
'f_CLI':f_CLI, \
'f_LIX':f_LIX, \
'f_RIX':f_RIX, \
})
analyzed_data_h.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/analyzed_data_h')
analyzed_data_f.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/analyzed_data_f')
analyzed_data_all.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/analyzed_data_all')
Explanation: Calculate values for logistic regression features
for each id (corresponds to a highlight and a full-text),
tokenize highlight into sentences
tokenize full-text w/o highlights into non-highlighted sentences
for each sentence in highlight and full-text,
calculate:
sentence length
readability (various)
LDA vector of sentence -> cos similarity to LDA vector of article
put in array:
id sentence length readability (various) LDA similarity
NOTE: Calculated some features at a time by commenting out the appropriate lines below; this is why there are separate pickle files for different sets of feature calculations. Not critical, because all features are combined into one dataframe downstream anyway.
End of explanation
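For completeness, a minimal sketch of how the separately pickled feature frames can be stitched back together downstream (assumes every frame was written in the same sentence order, which is how they are generated above, and that the id/label frames in the next cells were actually saved):
# Sketch: reload the per-feature pickles and combine them column-wise; row order matches
# because every frame was built inside the same loop over sentences.
frames = [
    pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/analyzed_data_all'),
    pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/articleids_w_FRE_pos'),
    pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/hlorno_w_FRE_pos'),
]
combined = pd.concat(frames, axis=1)
print(combined.shape)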
# articleids_w_FRE_pos_list = articleids
# hlorno_w_FRE_pos_list = hlorno
# articleids_w_FRE_pos = pd.DataFrame({'articleids':articleids_w_FRE_pos_list})
# hlorno_w_FRE_pos = pd.DataFrame({'hlorno':hlorno_w_FRE_pos_list})
# articleids_w_FRE_pos.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/articleids_w_FRE_pos')
# hlorno_w_FRE_pos.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/hlorno_w_FRE_pos')
Explanation: Save article ids and highlight-or-not label after analyzing for FRE and position (without length and LDA)
End of explanation
# articleids_w_len_lda_list = articleids
# hlorno_w_len_lda_list = hlorno
# articleids_w_len_lda = pd.DataFrame({'articleids':articleids_w_len_lda_list})
# hlorno_w_len_lda = pd.DataFrame({'hlorno':hlorno_w_len_lda_list})
# articleids_w_len_lda.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/articleids_w_len_lda')
# hlorno_w_len_lda.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/hlorno_w_len_lda')
# print(articleids_w_len_lda_list == articleids_w_FRE_pos_list)
# print(hlorno_w_FRE_pos_list == hlorno_w_len_lda_list)
from statistics import mode
# print(max(hllens))
# print(scipy.stats.mode(hllens))
# print(max(ftlens))
# print(scipy.stats.mode(ftlens))
# print(max(h_rds))
# print(scipy.stats.mode(h_rds))
# print(max(f_rds))
# print(scipy.stats.mode(f_rds))
plt.hist(hllens, bins=50, range=(-10,140))
plt.title("All unique highlighted sentences")
plt.xlabel("Number of words")
plt.ylabel("Frequency")
plt.show()
plt.hist(ftlens, bins=50, range=(-10,140))
plt.title("All unique non-highlighted sentences")
plt.xlabel("Number of words")
plt.ylabel("Frequency")
plt.show()
# print(len(hllens))
# print(len(ftlens))
# print(len(alllens))
# plt.hist(h_ldadists, bins=50)#, normed=1)
# plt.title("All unique highlighted sentences")
# plt.xlabel("LDA distance")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(f_ldadists, bins=50)#, normed=1)
# plt.title("All unique non-highlighted sentences")
# plt.xlabel("LDA distance")
# plt.ylabel("Frequency")
# plt.show()
# print(len(h_ldadists))
# print(len(f_ldadists))
# print(len(all_ldadists))
# plt.hist(h_wposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique highlighted sentences")
# plt.xlabel("Fraction words into full text")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(h_sposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique highlighted sentences")
# plt.xlabel("Fraction sentences into full text")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(f_wposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique non-highlighted sentences")
# plt.xlabel("Fraction words into full text")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(f_sposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique non-highlighted sentences")
# plt.xlabel("Fraction sentences into full text")
# plt.ylabel("Frequency")
# plt.show()
# print(len(h_wposes))
# print(len(h_sposes))
# print(len(f_wposes))
# print(len(f_sposes))
# print(len(all_wposes))
# plt.hist(h_sposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique highlighted sentences")
# plt.xlabel("Fraction sentences into full text")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(f_sposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique non-highlighted sentences")
# plt.xlabel("Fraction sentences into full text")
# plt.ylabel("Frequency")
# plt.show()
# plt.hist(all_sposes, bins=25, range=(0,1), normed=1)
# plt.title("All unique sentences")
# plt.xlabel("Fraction sentences into full text")
# plt.ylabel("Frequency")
# plt.show()
Explanation: Save article ids and highlight-or-not label after analyzing for length and LDA
End of explanation
dataset_submit = pd.DataFrame({ \
'highlightornot':hlorno, \
'length':alllens, \
'LDAdist':all_ldadists, \
'wordPos':all_wposes, \
'sentPos':all_sposes, \
'ARI':all_ARI, \
'FRE':all_FRE, \
'FKG':all_FKG, \
'GFI':all_GFI, \
'SMG':all_SMG, \
'CLI':all_CLI, \
'LIX':all_LIX, \
'RIX':all_RIX, \
})
dataset_submit.to_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/dataset_submit_len_lda_wpos_wpos_readmets')
dataset_submit = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/dataset_submit_len_lda_wpos_wpos_readmets')
print(dataset_submit)
Explanation: Put into pandas dataframe to submit to logistic regression
End of explanation
dataset_submit = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/dataset_submit_len_lda_wpos_wpos_readmets')
# create dataframe
y, X = dmatrices('highlightornot ~ length + LDAdist + wordPos + sentPos + ARI + FRE + FKG + GFI + SMG + CLI + LIX + RIX', \
dataset_submit, return_type="dataframe")
# ytest, Xtest = dmatrices('length ~ length + FRE', set_tr_submit, return_type="dataframe")
# print( Xtest.columns)
# print(ytest)
# flatten y into a 1-D array
y = np.ravel(y)
Explanation: Describe statistical model with patsy
End of explanation
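A quick peek at what patsy produced helps confirm the design matrix before fitting (a sketch using the X and y defined above):
# The design matrix keeps named columns (including the Intercept patsy adds);
# y is now a flat numpy array of 0/1 labels.
print(X.columns.tolist())
print(X.shape, y.shape)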
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
file = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/Xtrain_len_lda_wpos_wpos_readmets','wb')
pickle.dump(X_train,file)
file = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/Xtest_len_lda_wpos_wpos_readmets','wb')
pickle.dump(X_test,file)
file = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/ytrain_len_lda_wpos_wpos_readmets','wb')
pickle.dump(y_train,file)
file = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/ytest_len_lda_wpos_wpos_readmets','wb')
pickle.dump(y_test,file)
Explanation: Split into train and test sets
End of explanation
pipe = make_pipeline(StandardScaler(), LogisticRegression(class_weight='balanced', penalty='l2'))
# ## fit
# pipe.fit(X, y)
# ## predict
# pipe.predict_proba(X)
# ## to get back mean/std
# scaler = pipe.steps[0][1]
# scaler.mean_
# # Out[12]: array([ 0.0313, -0.0334, 0.0145, ..., -0.0247, 0.0191, 0.0439])
# scaler.std_
# # Out[13]: array([ 1. , 1.0553, 0.9805, ..., 1.0033, 1.0097, 0.9884])
# pipe = LogisticRegression(class_weight='balanced', penalty='l2')
pipe.fit(X_train, y_train)
# print(X_train)
print(len(y_train))
print(len(y_test))
print(len(X_train))
print(len(X_test))
Explanation: Include data preprocessing to scale all features! (i.e. calculate z-scores)
End of explanation
# fpipe = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/model_len_lda_wpos_wpos_readmets','wb')
# pickle.dump(pipe, fpipe)
# load model
pipe = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/model_len_lda_wpos_wpos_readmets')
X_train = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/Xtrain_len_lda_wpos_wpos_readmets')
X_test = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/Xtest_len_lda_wpos_wpos_readmets')
y_train = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/ytrain_len_lda_wpos_wpos_readmets')
y_test = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/ytest_len_lda_wpos_wpos_readmets')
Explanation: save/load model
End of explanation
# check the accuracy on the training set
pipe.score(X_train, y_train)
# check the accuracy on the test set
pipe.score(X_test, y_test)
# predict class labels for the test set
predicted = pipe.predict(X_test)
print(predicted)
test = np.nonzero(predicted)
print(len(test[0]))
print(len(predicted))
Explanation: Evaluate logistic regression model
End of explanation
print(np.transpose(pipe.steps[1][1].coef_))
print(X_test.columns)
tmp = X_test.columns.name[2:]
print(tmp)
# [[ 0. ]
# [-0.75450588]
# [ 0.00378554]
# [-0.14955534]
# [ 0.17832368]
# [-0.06008622]
# [ 0.69844673]
# [ 1.1656712 ]
# [ 0.01445301]
# [ 0.10138341]
# [ 0.12576411]
# [ 0.16656827]
# [-0.75734814]]
# Index(['Intercept', 'length', 'LDAdist', 'wordPos', 'sentPos', 'ARI', 'FRE',
# 'FKG', 'GFI', 'SMG', 'CLI', 'LIX', 'RIX'],
# dtype='object')
# WHEN ONLY USE LOGISTIC REGRESSION (NO Z-SCORE PREPROCESSING)
# print(np.transpose(pipe.coef_))
# [[-0.83983031]
# [-0.0366619 ]
# [ 0.01134218]
# [-0.49050534]
# [ 0.59117833]
# [-0.00489886]
# [ 0.01644136]
# [ 0.11783003]
# [ 0.00142764]
# [ 0.0216104 ]
# [ 0.0138162 ]
# [ 0.00623342]
# [-0.09777468]]
# Index(['Intercept', 'length', 'LDAdist', 'wordPos', 'sentPos', 'ARI', 'FRE',
# 'FKG', 'GFI', 'SMG', 'CLI', 'LIX', 'RIX'],
# dtype='object')
Explanation: Examine feature weights (feature coefficients)
End of explanation
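To make the weights easier to read, the coefficients can be paired with their column names and ranked by magnitude (a sketch; uses the fitted pipe and the patsy design-matrix columns from above):
# Pair each standardized-feature coefficient with its name and sort by absolute size.
coef_table = pd.DataFrame({
    'feature': X_test.columns,
    'coefficient': pipe.steps[1][1].coef_.ravel(),
})
print(coef_table.reindex(coef_table.coefficient.abs().sort_values(ascending=False).index))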
# generate class probabilities
probs = pipe.predict_proba(X_test)
print (probs)
# generate evaluation metrics
print( metrics.accuracy_score(y_test, predicted))
print( metrics.roc_auc_score(y_test, probs[:, 1]))
# plot ROC curve for test set
from sklearn.metrics import roc_curve, auc
auc_score = metrics.roc_auc_score(y_test, probs[:, 1])
fpr, tpr, thres = metrics.roc_curve(y_test, probs[:,1])
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='navy',
lw=lw, label='AUC = %0.2f' % auc_score)
plt.plot([0, 1], [0, 1], color='darkorange', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic curve, test set')
plt.legend(loc="lower right")
plt.show()
# Plot ROC curve for training set
probs_tr = pipe.predict_proba(X_train)
print (probs_tr)
auc_score_tr = metrics.roc_auc_score(y_train, probs_tr[:, 1])
fpr_tr, tpr_tr, thres_tr = metrics.roc_curve(y_train, probs_tr[:,1])
#Plot of a ROC curve for a specific class
plt.figure()
lw = 2
plt.plot(fpr_tr, tpr_tr, color='navy',
lw=lw, label='AUC = %0.2f' % auc_score_tr)
plt.plot([0, 1], [0, 1], color='darkorange', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic curve, training set')
plt.legend(loc="lower right")
plt.show()
print( metrics.confusion_matrix(y_test, predicted))
print( metrics.classification_report(y_test, predicted))
predicted_train = pipe.predict(X_train)
print( metrics.classification_report(y_train, predicted_train))
Explanation: ROC curve and evaluation metrics
End of explanation
# scores = cross_val_score(pipe, X, y, scoring='accuracy', cv=10)
recall = cross_val_score(pipe, X, y, cv=10, scoring='recall')
print (recall)
print (recall.mean())
print (recall.std())
# [ 0.59003831 0.61612284 0.61804223 0.68714012 0.62955854 0.60076775
# 0.61420345 0.57005758 0.59884837 0.61036468]
# 0.61351438804
# 0.0292113885642
# WITH ONLY LOGISTIC REGRESSION (NO Z-SCORE PREPROCESSING)
# [ 0.59195402 0.61612284 0.61804223 0.68714012 0.62955854 0.60076775
# 0.61420345 0.57005758 0.59884837 0.61036468]
# 0.613705958921
# 0.0290627055203
Explanation: Evaluate the model using 10-fold cross-validation
End of explanation
plt.matshow(confusion_matrix(y_test, predicted), cmap=plt.cm.binary, interpolation='nearest')
plt.title('confusion matrix')
plt.colorbar()
plt.ylabel('expected label')
plt.xlabel('predicted label')
plt.show()
# normalized confusion matrix values
cm = metrics.confusion_matrix(y_test, predicted)
print(cm)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm)
print(pipe)
# predict class labels for the test set
predicted = pipe.predict(X_test)
print(predicted)
# plt.matshow(confusion_matrix(X_test, predicted), cmap=plt.cm.binary, interpolation='nearest')
# plt.title('confusion matrix')
# plt.colorbar()
# plt.ylabel('expected label')
# plt.xlabel('predicted label')
# plt.show()
Explanation: 10-fold cross-validation gives consistent results (0.61 ± 0.03 highlight recall)
End of explanation
decfxn_tr = pipe.decision_function(X_train)
decfxn_ts = pipe.decision_function(X_test)
# print(decfxn_tr)
# print(decfxn_ts)
# print(min(decfxn_tr))
# print(min(decfxn_ts))
# print(max(decfxn_tr))
# print(max(decfxn_ts))
print(decfxn_tr.argmax())
print(decfxn_ts.argmax())
plt.hist(decfxn_tr, bins=50, range=(-1,1))#, normed=1)
plt.title("Training set")
plt.xlabel("Decision Fxn (confidence value)")
plt.ylabel("Frequency")
plt.show()
plt.hist(decfxn_ts, bins=50, range=(-1,1))#, normed=1)
plt.title("Test set")
plt.xlabel("Decision Fxn (confidence value)")
plt.ylabel("Frequency")
plt.show()
Explanation: Inspect distribution of class probabilities
End of explanation
# sort common dataset by cosine similarity score
# plot cumulative density of highlights relative to cos similarity score
# if curve above 1:1 diagonal, means highlight prediction better than chance
# print(dataset_submit.head(10))
# print(dataset_submit.sort_values('LDAdist').head(10))
# print(dataset_submit.sort_values('LDAdist').tail(10))
# print(dataset_submit.sort_values('LDAdist').head(2000))
from scipy.interpolate import interp1d
#
decfxn_tr
print(len(np.nonzero(y_test)[0]))
print(np.nonzero(y_test)[0])
print(len(np.nonzero(y_train)[0]))
print(np.nonzero(y_train)[0])
cumul = pd.DataFrame({'confidencescore':decfxn_tr, 'highlightornot':y_train})
cumul_sort = cumul.sort_values('confidencescore', ascending=False)
# print(cumul_sort)
total = 0
array = []
for i in cumul_sort['highlightornot']:
total = total + i
array.append(total)
# print(min(decfxn_tr))
print(len(array))
print(total)
print(array[-1])
# rangex = np.arange(1,4150)
line = np.linspace(1, 4150, 225064)
print(line)
# plot CDF curve and chance line
plt.plot(range(1,225065), array, range(1,225065), line)
plt.show()
# plot CDF curve and chance line, normalized
plt.plot(range(1,225065), array/max(array), range(1,225065), line/max(line))
plt.show()
Explanation: Construct validation plots -- roc type curve for decision function value (confidence score)
Before computing the ROC curves shown above, I created a separate cumulative distribution function (CDF) plot by sorting the confidence scores of every sentence in the dataset by rank, then plotting the cumulative highlight recovery going down the ranks. This gave a similar result to the ROC curves, since moving along the x-axis of this CDF plot is similar to adjusting the confidence score threshold.
End of explanation
def plot_cdf(decfxn, ytrain):
cumul = pd.DataFrame({'confidencescore':decfxn, 'highlightornot':ytrain})
cumul_sort = cumul.sort_values('confidencescore', ascending=False)
# print(cumul_sort)
total = 0
array = []
for i in cumul_sort['highlightornot']:
array.append(total)
total = total + i
# print(min(decfxn_tr))
print(len(array))
print(total)
print(array[-1])
line = np.linspace(1, max(array), len(array))
# print(line)
# plot CDF curve and chance line
plt.plot(range(1,len(array)+1), array, range(1,len(array)+1), line)
plt.show()
# plot CDF curve and chance line, normalized
plt.plot(range(1,len(array)+1), array/max(array), range(1,len(array)+1), line/max(line))
plt.show()
normarray = array/max(array)
print(sum(normarray)/(1*len(array)))
plot_cdf(decfxn_ts, y_test)
Explanation: Define a function to draw CDF plots
End of explanation
# print original shape of y from above:
print('Original dataset shape {}'.format(Counter(y)))
undersample = RandomUnderSampler()
X_undersmp, y_undersmp = undersample.fit_sample(X, y)
print('Resampled dataset shape {}'.format(Counter(y_undersmp)))
# Split into train and test sets
X_undersmp_train, X_undersmp_test, y_undersmp_train, y_undersmp_test = \
train_test_split(X_undersmp, y_undersmp, test_size=0.2, random_state=0)
print('Resampled dataset shape {}'.format(Counter(y_undersmp_train)))
print('Resampled dataset shape {}'.format(Counter(y_undersmp_test)))
# Include data preprocessing to scale all features! (i.e. calculate z-scores)
pipe_undersmp = make_pipeline(StandardScaler(), LogisticRegression(class_weight='balanced', penalty='l2'))
# ## fit
# pipe.fit(X, y)
# ## predict
# pipe.predict_proba(X)
# ## to get back mean/std
# scaler = pipe.steps[0][1]
# scaler.mean_
# # Out[12]: array([ 0.0313, -0.0334, 0.0145, ..., -0.0247, 0.0191, 0.0439])
# scaler.std_
# # Out[13]: array([ 1. , 1.0553, 0.9805, ..., 1.0033, 1.0097, 0.9884])
# model = LogisticRegression(class_weight='balanced', penalty='l2')
pipe_undersmp.fit(X_undersmp_train, y_undersmp_train)
# print(X_train)
print(len(y_undersmp_train))
print(len(y_undersmp_test))
print(len(X_undersmp_train))
print(len(X_undersmp_test))
# # save model
# fpipe_undersmp = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/model_undersmp_len_lda_wpos_wpos_readmets','wb')
# pickle.dump(pipe_undersmp, fpipe_undersmp)
# load model
pipe_undersmp = pd.read_pickle('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/model_undersmp_len_lda_wpos_wpos_readmets')
# check the accuracy on the training set
print(pipe_undersmp.score(X_undersmp_train, y_undersmp_train))
# check the accuracy on the test set
print(pipe_undersmp.score(X_undersmp_test, y_undersmp_test))
# predict class labels for the test set
predicted_undersmp = pipe_undersmp.predict(X_undersmp_test)
print(predicted_undersmp)
test = np.nonzero(predicted_undersmp)
print(len(test[0]))
print(len(predicted_undersmp))
# examine the coefficients
print(np.transpose(pipe_undersmp.steps[1][1].coef_))
print(X_test.columns)
# generate class probabilities
probs_undersmp = pipe_undersmp.predict_proba(X_undersmp_test)
print (probs_undersmp)
# generate evaluation metrics
print( metrics.accuracy_score(y_undersmp_test, predicted_undersmp))
print( metrics.roc_auc_score(y_undersmp_test, probs_undersmp[:, 1]))
print( metrics.confusion_matrix(y_undersmp_test, predicted_undersmp))
print( metrics.classification_report(y_undersmp_test, predicted_undersmp))
# evaluate the model using 10-fold cross-validation
# scores = cross_val_score(pipe, X, y, scoring='accuracy', cv=10)
recall_undersmp = cross_val_score(pipe_undersmp, X, y, cv=10, scoring='recall')
print (recall_undersmp)
print (recall_undersmp.mean())
print (recall_undersmp.std())
# convert coefficients to probabilities
coeffs = np.transpose(pipe_undersmp.steps[1][1].coef_)
coeff_odds = []
coeff_probs = []
for i in coeffs:
coeff_odds.append(math.exp(i))
coeff_probs.append(math.exp(i)/(1+math.exp(i)))
print(coeff_odds)
print(coeff_probs)
print(X_test.columns)
# normalized confusion matrix values
cm_undersmp = metrics.confusion_matrix(y_undersmp_test, predicted_undersmp)
print(cm_undersmp)
cm_undersmp = cm_undersmp.astype('float') / cm_undersmp.sum(axis=1)[:, np.newaxis]
print(cm_undersmp)
Explanation: Test undersampling from imblearn for balancing class weights
End of explanation
decfxn_tr_undersmp = pipe_undersmp.decision_function(X_undersmp_train)
decfxn_ts_undersmp = pipe_undersmp.decision_function(X_undersmp_test)
plot_cdf(decfxn_tr_undersmp, y_undersmp_train)
plot_cdf(decfxn_ts_undersmp, y_undersmp_test)
Explanation: Generate CDF plots (see "Construct validation plots" section above)
End of explanation
# print original shape of y from above:
print('Original dataset shape {}'.format(Counter(y)))
smote = SMOTE()
X_smote, y_smote = smote.fit_sample(X, y)
print('Resampled dataset shape {}'.format(Counter(y_smote)))
# Split into train and test sets
X_smote_train, X_smote_test, y_smote_train, y_smote_test = \
train_test_split(X_smote, y_smote, test_size=0.2, random_state=0)
print('Resampled dataset shape {}'.format(Counter(y_smote_train)))
print('Resampled dataset shape {}'.format(Counter(y_smote_test)))
# Include data preprocessing to scale all features! (i.e. calculate z-scores)
pipe_smote = make_pipeline(StandardScaler(), LogisticRegression(class_weight='balanced', penalty='l2'))
pipe_smote.fit(X_smote_train, y_smote_train)
# print(X_train)
print(len(y_smote_train))
print(len(y_smote_test))
print(len(X_smote_train))
print(len(X_smote_test))
# save model
fpipe_smote = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/model_smote_len_lda_wpos_wpos_readmets','wb')
pickle.dump(pipe_smote, fpipe_smote)
# check the accuracy on the training set
print(pipe_smote.score(X_smote_train, y_smote_train))
# check the accuracy on the test set
print(pipe_smote.score(X_smote_test, y_smote_test))
# predict class labels for the test set
predicted_smote = pipe_smote.predict(X_smote_test)
print(predicted_smote)
test = np.nonzero(predicted_smote)
print(len(test[0]))
print(len(predicted_smote))
# examine the coefficients
print(np.transpose(pipe_smote.steps[1][1].coef_))
# print(X_smote_test.columns)
# generate class probabilities
probs_smote = pipe_smote.predict_proba(X_smote_test)
print (probs_smote)
# generate evaluation metrics
print( metrics.accuracy_score(y_smote_test, predicted_smote))
print( metrics.roc_auc_score(y_smote_test, probs_smote[:, 1]))
print( metrics.confusion_matrix(y_smote_test, predicted_smote))
print( metrics.classification_report(y_smote_test, predicted_smote))
# evaluate the model using 10-fold cross-validation
# scores = cross_val_score(pipe, X, y, scoring='accuracy', cv=10)
recall_smote = cross_val_score(pipe_smote, X, y, cv=10, scoring='recall')
print (recall_smote)
print (recall_smote.mean())
print (recall_smote.std())
# normalized confusion matrix values
cm_smote = metrics.confusion_matrix(y_smote_test, predicted_smote)
print(cm_smote)
cm_smote = cm_smote.astype('float') / cm_smote.sum(axis=1)[:, np.newaxis]
print(cm_smote)
Explanation: Test SMOTE (synthetic oversampling) from imblearn for balancing class weights
End of explanation
decfxn_tr_smote = pipe_smote.decision_function(X_smote_train)
decfxn_ts_smote = pipe_smote.decision_function(X_smote_test)
plot_cdf(decfxn_tr_smote, y_smote_train)
plot_cdf(decfxn_ts_smote, y_smote_test)
Explanation: Generate CDF plots (see "Construct validation plots" section above)
End of explanation
from sklearn.ensemble import RandomForestClassifier
# # evaluate the model by splitting into train and test sets
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
modelRF = RandomForestClassifier(class_weight='balanced')
modelRF.fit(X_train, y_train)
# print(X_train)
print(len(y_train))
print(len(y_test))
print(len(X_train))
print(len(X_test))
print(modelRF)
# save model
fmodelRF = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/modelRF_len_lda_wpos_wpos_readmets','wb')
pickle.dump(modelRF, fmodelRF)
# check the accuracy on the training set
print(modelRF.score(X_train, y_train))
# check the accuracy on the test set
print(modelRF.score(X_test, y_test))
# predict class labels for the test set
predictedRF = modelRF.predict(X_test)
print(predictedRF)
test = np.nonzero(predictedRF)
print(len(test[0]))
print(len(predictedRF))
# generate class probabilities
probsRF = modelRF.predict_proba(X_test)
print (probsRF)
# generate evaluation metrics
print( metrics.accuracy_score(y_test, predictedRF))
print( metrics.roc_auc_score(y_test, probsRF[:, 1]))
print( metrics.confusion_matrix(y_test, predictedRF))
print( metrics.classification_report(y_test, predictedRF))
# evaluate the model using 10-fold cross-validation
recallRF = cross_val_score(modelRF, X, y, scoring='recall', cv=10)
print (recallRF)
print (recallRF.mean())
print (recallRF.std())
# Terrible highlight recall!
# [ 0.00383142 0.00191939 0.00767754 0.00191939 0.00575816 0.00959693
# 0.00575816 0.00575816 0.00383877 0.00575816]
# 0.00518160625381
# 0.00227957890199
plt.matshow(confusion_matrix(y_test, predictedRF), cmap=plt.cm.binary, interpolation='nearest')
plt.title('confusion matrix')
plt.colorbar()
plt.ylabel('expected label')
plt.xlabel('predicted label')
plt.show()
Explanation: Conclusion from imblearn tests
I tested applying undersampling and synthetic oversampling (through the imblearn library) to the logistic regression and found that these methods achieved similar highlight recall (sensitivity, TP/(TP+FN)) but with more balanced precision and f1-scores between the highlighted and non-highlighted classes.
TRY RANDOM FOREST with current features
I wanted to see whether a random forest model would be better than the logistic regression at classifying highlights. This was not my preferred method, since random forest models are generally less interpretable than logistic regression.
Here, I found that a random forest model attains a deceptively high overall accuracy, because it predicts almost all sentences to be non-highlighted, allowing the much bigger non-highlighted portion of the samples to overwhelm the prediction. Thus, I decided to stick with the logistic regression model.
End of explanation |
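A per-class recall check makes that conclusion concrete: almost all of the random forest's apparent accuracy comes from the majority, non-highlighted class (a sketch using the fitted modelRF predictions from above):
# recall per class, ordered [non-highlighted, highlighted]
print(metrics.recall_score(y_test, predictedRF, average=None))
# highlight recall only
print(metrics.recall_score(y_test, predictedRF, pos_label=1))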
11,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This code shows an example for using the imported data from a modified .mat file into a artificial neural network and its training
Step1: Importing preprocessing data
Step2: Sorting out data (for plotting purposes)
Step3: Artificial Neural Network (Gridsearch, DO NOT RUN)
Step4: Plotting
Step5: Saving ANN to file through pickle (and using it later) | Python Code:
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from sklearn.metrics import r2_score # in order to test the results
from sklearn.grid_search import GridSearchCV # looking for parameters
import pickle #saving to file
Explanation: This code shows an example for using the imported data from a modified .mat file into a artificial neural network and its training
End of explanation
#this function reads the file
def read_data(archive, rows, columns):
data = open(archive, 'r')
mylist = data.read().split()
data.close()
myarray = np.array(mylist).reshape(( rows, columns)).astype(float)
return myarray
data = read_data('../get_data_example/set.txt',72, 12)
X = data[:, [0, 2, 4, 6, 7, 8, 9, 10, 11]]
#print pre_X.shape, data.shape
y = data[:,1]
#print y.shape
#getting the time vector for plotting purposes
time_stamp = np.zeros(data.shape[0])
for i in xrange(data.shape[0]):
time_stamp[i] = i*(1.0/60.0)
#print X.shape, time_stamp.shape
X = np.hstack((X, time_stamp.reshape((X.shape[0], 1))))
print X.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
t_test = X_test[:,-1]
t_train = X_train[:, -1]
X_train_std = preprocessing.scale(X_train[:,0:-1])
X_test_std = preprocessing.scale(X_test[:, 0:-1])
Explanation: Importing preprocessing data
End of explanation
#Here comes the way to sort out the data according to one the elements of it
test_sorted = np.hstack(
(t_test.reshape(X_test_std.shape[0], 1), X_test_std, y_test.reshape(X_test_std.shape[0], 1)))
test_sorted = test_sorted[np.argsort(test_sorted[:,0])] #modified
train_sorted = np.hstack((t_train.reshape(t_train.shape[0], 1), y_train.reshape(y_train.shape[0], 1) ))
train_sorted = train_sorted[np.argsort(train_sorted[:,0])]
Explanation: Sorting out data (for plotting purposes)
End of explanation
#Grid search, random state =0: same beginning for all
alpha1 = np.linspace(0.001,0.9, 9).tolist()
momentum1 = np.linspace(0.3,0.9, 9).tolist()
params_dist = {"hidden_layer_sizes":[(20, 40), (15, 40), (10,15), (15, 15, 10), (15, 10), (15, 5)],
"activation":['tanh','logistic'],"algorithm":['sgd', 'l-bfgs'], "alpha":alpha1,
"learning_rate":['constant'],"max_iter":[500], "random_state":[0],
"verbose": [False], "warm_start":[False], "momentum":momentum1}
grid = GridSearchCV(MLPRegressor(), param_grid=params_dist)
grid.fit(X_train_std, y_train)
print "Best score:", grid.best_score_
print "Best parameter's set found:\n"
print grid.best_params_
reg = MLPRegressor(warm_start = grid.best_params_['warm_start'], verbose= grid.best_params_['verbose'],
algorithm= grid.best_params_['algorithm'],hidden_layer_sizes=grid.best_params_['hidden_layer_sizes'],
activation= grid.best_params_['activation'], max_iter= grid.best_params_['max_iter'],
random_state= None,alpha= grid.best_params_['alpha'], learning_rate= grid.best_params_['learning_rate'],
momentum= grid.best_params_['momentum'])
reg.fit(X_train_std, y_train)
Explanation: Artificial Neural Network (Gridsearch, DO NOT RUN)
End of explanation
%matplotlib inline
results = reg.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)
plt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label ='Expected')
black_patch = mpatches.Patch(color='black', label ='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs Expected values")
plt.show()
print "Accuracy:", reg.score(X_test_std, y_test)
#print "Accuracy test 2", r2_score(test_sorted[:,-1], results)
Explanation: Plotting
End of explanation
#This prevents the user from losing a previous important result
def save_it(ans):
if ans == "yes":
f = open('data.ann', 'w')
mem = pickle.dumps(grid)
f.write(mem)
f.close()
else:
print "Nothing to save"
save_it("no")
#Loading a successful ANN
f = open('data.ann', 'r')
nw = f.read()
saved_ann = pickle.loads(nw)
print "Just the accuracy:", saved_ann.score(X_test_std, y_test), "\n"
print "Parameters:"
print saved_ann.get_params(), "\n"
print "Loss:", saved_ann.loss_
print "Total of layers:", saved_ann.n_layers_
print "Total of iterations:", saved_ann.n_iter_
#print from previously saved data
%matplotlib inline
results = saved_ann.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r') # ( sorted time, results)
plt.plot(train_sorted[:, 0], train_sorted[:,1], c='b' ) #expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label ='Expected')
black_patch = mpatches.Patch(color='black', label ='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs Expected values (Loaded from file)")
plt.show()
plt.plot(time_stamp, y,'--.', c='r')
plt.xlabel("Time(s)")
plt.ylabel("Angular velocities(rad/s)")
plt.title("Resuts from patient:\n"
" Angular velocities for the right knee")
plt.show()
print "Accuracy:", saved_ann.score(X_test_std, y_test)
#print "Accuracy test 2", r2_score(test_sorted[:,-1], results)
print max(y), saved_ann.predict(X_train_std[y_train.tolist().index(max(y_train)),:].reshape((1,9)))
Explanation: Saving ANN to file through pickle (and using it later)
End of explanation |
11,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras deep neural network
The structure of the network is the following
Step1: Training
Step2: Visualization | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Data generation obtained from http://cs231n.github.io/neural-networks-case-study/
def generate_data(N, K):
D = 2 # Dimensionality
X = np.zeros((N * K, D)) # Data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8') # Class labels
for j in xrange(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 8, (j + 1) * 8, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral, edgecolor='black') # Visualize
plt.xlim([-1,1])
plt.ylim([-1,1])
return X, y
# Example:
generate_data(300, 3);
Explanation: Keras deep neural network
The structure of the network is the following:
INPUT -> FC -> ReLU -> FC -> ReLU -> FC -> OUTPUT -> SOFTMAX LOSS.
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import to_categorical
from keras import regularizers
from keras import optimizers
reg = 0.002
step_size = 0.01
data_per_class = 300 # Number of points per class
num_classes = 4 # Number of classes
X, y = generate_data(data_per_class, num_classes)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) # Visualize
y_cat = to_categorical(y, num_classes)
X_train, X_test, y_train, y_test = train_test_split(X, y_cat, test_size=0.33)
model = Sequential()
model.add(Dense(units=20, input_dim=2, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('relu'))
model.add(Dense(units=10, input_dim=2, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('relu'))
model.add(Dense(units=num_classes, kernel_regularizer=regularizers.l2(reg)))
model.add(Activation('softmax'))
opt = optimizers.Adam(lr=step_size)
model.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=5000, batch_size=X_train.shape[0], verbose=0)
for i in xrange(3001):
model.train_on_batch(X_train, y_train)
if i % 500 == 0:
print "Step %4d. Loss=%.3f, train accuracy=%.5f" % tuple([i] + model.test_on_batch(X_train, y_train))
Explanation: Training
End of explanation
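To double-check that the fitted network matches the FC -> ReLU -> FC -> ReLU -> FC -> softmax structure described above, Keras can print the layer and parameter summary (a sketch; uses the model trained above):
# Layer sizes and trainable parameter counts of the small fully-connected classifier.
model.summary()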
# Plot the resulting classifier on the test data.
h = 0.02
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.argmax(model.predict(np.c_[xx.ravel(), yy.ravel()]), axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral, edgecolor='black')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max());
from IPython.display import SVG
# https://github.com/Theano/Theano/issues/1801#issuecomment-267989843
# sudo pip install pydot
# sudo apt-get install graphviz
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Visualization
End of explanation |
11,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting phase tensors from a ModEM data file on a basemap
In this example we will plot phase tensors from ModEM files. This example is a bit more complex than the previous ones because, unlike them, the basemap plotting functionality is not contained within MTPy. That makes the plot easier to customise, but it may take a bit longer to become familiar with the functionality.
The first step is to import the required modules. We only have one import from MTPy - PlotPTMaps. Then there is some standard matplotlib functionality and, importantly, the basemap module, which creates the coastlines and the nice borders.
Step1: The next step is to create a function that will draw an inset map showing the survey boundaries on Australia.
Step2: We now need to define our file paths for the response and data files
Step3: We can now create the plot! | Python Code:
from mtpy.modeling.modem import PlotPTMaps
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon
from descartes import PolygonPatch
import numpy as np
Explanation: Plotting phase tensors from a ModEM data file on a basemap
In this example we will plot phase tensors from ModEM files. This example is a bit more complex than the previous ones because, unlike them, the basemap plotting functionality is not contained within MTPy. That makes the plot easier to customise, but it may take a bit longer to become familiar with the functionality.
The first step is to import the required modules. We only have one import from MTPy - PlotPTMaps. Then there is some standard matplotlib functionality and, importantly, the basemap module, which creates the coastlines and the nice borders.
End of explanation
# function to draw a bounding box
def drawBBox( minLon, minLat, maxLon, maxLat, bm, **kwargs):
bblons = np.array([minLon, maxLon, maxLon, minLon, minLon])
bblats = np.array([minLat, minLat, maxLat, maxLat, minLat])
x, y = bm( bblons, bblats )
xy = zip(x,y)
poly = Polygon(xy)
bm.ax.add_patch(PolygonPatch(poly, **kwargs))
Explanation: The next step is to create a function that will draw an inset map showing the survey boundaries on Australia.
End of explanation
# define paths
data_fn = r'C:/mtpywin/mtpy/examples/model_files/ModEM_2/ModEM_Data.dat'
resp_fn = r'C:/mtpywin/mtpy/examples/model_files/ModEM_2/Modular_MPI_NLCG_004.dat'
# define extents
minLat = -22.5
maxLat = -18.5
minLon = 135.5
maxLon = 140.5
# define period index to plot
periodIdx = 10
# position of inset axes (bottom,left,width,height)
inset_ax_position = [0.6,0.2,0.3,0.2]
Explanation: We now need to define our file paths for the response and data files
End of explanation
# read in ModEM data to phase tensor object
plotPTM = PlotPTMaps(data_fn=data_fn, resp_fn=resp_fn)
# make a figure
fig, ax = plt.subplots(figsize=(10,10))
# make a basemap
m = Basemap(resolution='c', # c, l, i, h, f or None
ax=ax,
projection='merc',
lat_0=-20.5, lon_0=132, # central lat/lon for projection
llcrnrlon=minLon, llcrnrlat=minLat, urcrnrlon=maxLon, urcrnrlat=maxLat)
# draw lat-lon grids
m.drawparallels(np.linspace(minLat, maxLat, 5), labels=[1,1,0,0], linewidth=0.1)
m.drawmeridians(np.linspace(minLon, maxLon, 5), labels=[0,0,1,1], linewidth=0.1)
## draw shaded topographic relief map
## (need to install pil first before this will work - conda install pil)
# m.shadedrelief()
# plot inset map ==================================================================
insetAx = fig.add_axes(inset_ax_position)
mInset = Basemap(resolution='c', # c, l, i, h, f or None
ax=insetAx,
projection='merc',
lat_0=-20, lon_0=132,
llcrnrlon=110, llcrnrlat=-40, urcrnrlon=155, urcrnrlat=-10)
mInset.fillcontinents(color='lightgray')
mInset.drawstates(color="grey")
drawBBox(minLon, minLat, maxLon, maxLat, mInset, fill='True', facecolor='k')
# plot phase tensors =============================================================
# fetch attribute to color phase tensor ellipses with
cmapAttrib = plotPTM.get_period_attributes(periodIdx, 'phimin', ptarray='data')
sm = cm.ScalarMappable(norm=mpl.colors.Normalize(vmin=np.min(cmapAttrib),
vmax=np.max(cmapAttrib)),
cmap='gist_heat')
# extract color values from colormap
cvals = sm.cmap(sm.norm(cmapAttrib))
plotPTM.plot_on_axes(ax, m, periodIdx=periodIdx, ptarray='data',
cvals=cvals, ellipse_size_factor=2e4,
edgecolor='k')
# show colormap
cbax = fig.add_axes([0.25,0,0.525,.025])
cbar = mpl.colorbar.ColorbarBase(cbax, cmap=sm.cmap,
norm = sm.norm,
orientation='horizontal')
cbar.set_label('Phase tensors coloured by phimin')
plt.savefig('/tmp/a.pdf', dpi=300)
Explanation: We can now create the plot!
End of explanation |
11,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pre-processing
Step1: Read in tweets and add appropriate labels (sarcastic/genuine)
Step2: Remove non-English tweets
Step3: Feature Engineering
ToUser - tweet references another user via @username
Hashtags - Indicates presence of hashtags in the tweet (aside from #sarcasm)
Step4: Write to CSV | Python Code:
import csv
with open("processed_tweets/sarcastic_tweets.csv", 'r') as f:
reader = csv.reader(f)
linenumber = 1
try:
for row in reader:
linenumber += 1
except Exception as e:
print (("Error line %d: %s %s" % (linenumber, str(type(e)), e)))
Explanation: Pre-processing
End of explanation
sarcastic_tweets = pd.read_csv(open('processed_tweets/sarcastic_tweets.csv','r'), encoding='utf-8', engine='c', sep='\n', error_bad_lines=False, header=None)
regular_tweets = pd.read_csv(open('processed_tweets/regular_tweets.csv','r'), encoding='utf-8', engine='c', sep='\n', error_bad_lines=False, header=None)
sarcastic_tweets['type']='sarcastic';
regular_tweets['type']='genuine';
Explanation: Read in tweets and add appropriate labels (sarcastic/genuine)
End of explanation
import langid as li;
li.NORM_PROBS = True;
li.NORM_PROBS == True
li.classify(sarcastic_tweets[0][4])
sarcastic_tweets["English"] = sarcastic_tweets[0].map(lambda x: 1 if li.classify(x)[0]=='en' else 0);
regular_tweets["English"] = regular_tweets[0].map(lambda x: 1 if li.classify(x)[0]=='en' else 0);
print (str((len(sarcastic_tweets[sarcastic_tweets["English"]==1])/len(sarcastic_tweets)) * 100)+ "% known English sarcastic tweets")
print (str((len(regular_tweets[regular_tweets["English"]==1])/len(regular_tweets)) * 100)+ "% known English regular tweets")
sarcastic_tweets_english = sarcastic_tweets[sarcastic_tweets.English == 1]
regular_tweets_english = regular_tweets[regular_tweets.English == 1]
regular_tweets_english
Explanation: Remove non-English tweets
End of explanation
# ToUser
sarcastic_tweets["ToUser"] = sarcastic_tweets[0].map(lambda x: 1 if "@" in x else 0);
regular_tweets["ToUser"] = regular_tweets[0].map(lambda x: 1 if "@" in x else 0);
# Hashtag presence
sarcastic_tweets["Hashtags"] = sarcastic_tweets[0].map(lambda x: 1 if "#" in x else 0);
regular_tweets["Hashtags"] = regular_tweets[0].map(lambda x: 1 if "#" in x else 0);
sarcastic_tweets_english
Explanation: Feature Engineering
ToUser - tweet references another user via @username
Hashtags - Indicates presence of hashtags in the tweet (aside from #sarcasm)
End of explanation
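Note that the simple check above flags any '#', including the #sarcasm tag itself; a stricter variant that ignores #sarcasm could look like this (a sketch with a hypothetical helper, not applied to the dataframes above):
# Flag a tweet only if it carries a hashtag other than #sarcasm (hypothetical helper).
def has_other_hashtag(tweet):
    tags = [w.strip('.,!?').lower() for w in tweet.split() if w.startswith('#')]
    return int(any(t != '#sarcasm' for t in tags))
print(has_other_hashtag('Great weather today #sarcasm'))          # 0
print(has_other_hashtag('Great weather today #sarcasm #monday'))  # 1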
sarcastic_tweets_english.to_csv("processed_tweets/sarcastic_df_eng.csv")
regular_tweets_english.to_csv("processed_tweets/regular_df_eng.csv")
Explanation: Write to CSV
End of explanation |
11,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Occupancy Detection
Create a classification model to determine if a room is occupied or unoccupied based on environmental data.
In class demo on May 5, 2018
Step1: Data Loading
Load data in two ways
Step5: Transformation
Convert datetime into hour of day (numeric)
Label Encode our Class
Transform dictionaries into numpy array
Step6: Fit a Classifier
Step7: Model Management | Python Code:
%matplotlib notebook
import os
import csv
import pickle
import numpy as np
import pandas as pd
from datetime import datetime
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction import DictVectorizer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import train_test_split as tts
Explanation: Occupancy Detection
Create a classification model to determine if a room is occupied or unoccupied based on environmental data.
In class demo on May 5, 2018
End of explanation
DATA = os.path.join("data", "occupancy.csv")
DTFMT = '%Y-%m-%d %H:%M:%S'
def load_raw(path=DATA):
with open(path, 'r') as f:
reader = csv.DictReader(f)
for row in reader:
# Pop target off of features dictionary
target = row.pop('occupancy')
# Convert fields to floats
for field in ('temperature', 'relative humidity', 'C02', 'humidity', 'light'):
row[field] = float(row[field])
# Parse datetime
row['datetime'] = datetime.strptime(row['datetime'], DTFMT)
yield row, target
def load_df(path=DATA):
return pd.read_csv(path)
df = load_df()
df.describe()
Explanation: Data Loading
Load data in two ways: "raw" form as dictionaries to use with the DictVectorizer and as a Pandas DataFrame for data exploration.
End of explanation
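A quick way to see what the "raw" loader yields is to pull a single record from the generator (a sketch; assumes the occupancy CSV is in place):
# One (features, target) pair: numeric fields are floats, the datetime is parsed,
# and the target is the raw occupancy label string.
row, target = next(load_raw())
print(target)
print(row)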
class DateEncode(BaseEstimator, TransformerMixin):
    """Custom transformers extend sklearn.base.BaseEstimator and TransformerMixin
    to add helper methods like fit_transform(). It is up to you to add the
    following methods:
    1. fit(X, y=None)
    2. transform(X)
    This transformer encodes the datetime into hour of day and day of week features.
    """
def fit(self, X, y=None):
        """Expects X to be a list of dictionaries.
        Loops through all dictionaries to find all unique dictionary keys
        whose values are datetimes, in order to "learn" what fields to
        encode date time as.
        For this data, this will only be the "datetime" field, but this
        method is added here as an example of fitting to data.
        """
# NOTE: properties suffixed with an underscore are internal
# attributes that are learned during fit
self.date_columns_ = set([
key
for Xi in X
for key, val in Xi.items()
if isinstance(val, datetime)
])
# NOTE: fit must always return self
return self
def transform(self, X):
        """Expects X to be a list of dictionaries.
        Pops (deletes) the datetime fields discovered during fit
        and replaces them with the following features:
        1. field_hour : the hour of day
        2. field_dow : the day of the week
        Returns a list of dictionaries.
        """
Xprime = []
for Xi in X:
for col in self.date_columns_:
dt = Xi.pop(col)
Xi[col + "_hour"] = dt.hour
Xi[col + "_dow"] = dt.weekday()
Xprime.append(Xi)
return Xprime
# Load Raw Data - data is a list of tuples [(features, target)]
# Extract the features into X and the target into y
data = list(load_raw())
X = [row[0] for row in data]
y = [row[1] for row in data]
# Create feature extraction pipeline
features = Pipeline([
('date_encode', DateEncode()),
('vec', DictVectorizer()),
])
# Fit transfrom the features, which should now be a 2D array
Xp = features.fit_transform(X)
# Label Encode the target, which should now be a 1D vector
label_encoder = LabelEncoder()
yp = label_encoder.fit_transform(y)
# Example of getting the class name back from the encoder
label_encoder.inverse_transform([0,1,1,0,0])
# Always check the shape of X and y makes sense
print("X shape is {} y shape is {}".format(
Xp.shape, yp.shape
))
Explanation: Transformation
Convert datetime into hour of day (numeric)
Label Encode our Class
Transform dictionaries into numpy array
End of explanation
from yellowbrick.classifier import ClassBalance, ConfusionMatrix, ClassificationReport
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
def simple_evaluate_model(model, X=Xp.todense(), y=yp, encoder=label_encoder):
X_train, X_test, y_train, y_test = tts(X, y, train_size=0.80, shuffle=True)
model.fit(X_train, y_train)
y_hat = model.predict(X_test)
print("f1: {}".format(f1_score(y_test, y_hat, average='weighted')))
# Simple Evaluation
clf = GradientBoostingClassifier()
simple_evaluate_model(clf)
# Complete Evaluation
model = Pipeline([
('date_encode', DateEncode()),
('vec', DictVectorizer()),
('clf', GradientBoostingClassifier())
])
cross_val_score(model, X, y, cv=12, scoring='f1_macro').mean()
# Simpler Model
# Simple Evaluation
clf = GradientBoostingClassifier(n_estimators=5)
simple_evaluate_model(clf, Xp.todense(), yp)
cross_val_score(clf, Xp.todense(), yp, cv=12, scoring='f1_macro').mean()
clf = LogisticRegression()
simple_evaluate_model(clf, Xp.todense(), yp)
cross_val_score(clf, Xp.todense(), yp, cv=12, scoring='f1_macro').mean()
clf = GaussianNB()
simple_evaluate_model(clf, Xp.todense(), yp)
cross_val_score(clf, Xp.todense(), yp, cv=12, scoring='f1_macro').mean()
Explanation: Fit a Classifier
End of explanation
def internal_params(estimator):
for attr in dir(estimator):
if attr.endswith("_") and not attr.startswith("_"):
yield attr
def save_model(model, path=None):
if path is None:
path = model.__class__.__name__ + ".pkl"
with open(path, 'wb') as f:
pickle.dump(model, f)
list(internal_params(clf))
#save_model(clf)
Explanation: Model Management
End of explanation |
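A matching loader completes the round trip (a sketch; the path is whatever save_model wrote, e.g. its class-name default):
def load_model(path):
    # Counterpart to save_model: read a pickled estimator back from disk.
    with open(path, 'rb') as f:
        return pickle.load(f)
# clf = load_model("GradientBoostingClassifier.pkl")  # hypothetical file written by save_model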
11,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Colab-only auth
Step2: Config
Step3: Linear Keras model [WORK REQUIRED]
What do the columns do ? Familiarize yourself with these column types.
numeric_col = fc.numeric_column('name')
bucketized_numeric_col = fc.bucketized_column(fc.numeric_column('name'), [0, 2, 10])
indic_of_categ_col = fc.indicator_column(fc.categorical_column_with_identity('name', num_buckets = 24))
indic_of_categ_vocab_col = fc.indicator_column(fc.categorical_column_with_identity('color', vocabulary_list = ['red', 'blue']))
indic_of_crossed_col = fc.indicator_column(fc.crossed_column([categcol1, categcol2], 16*16))
embedding_of_crossed_col = fc.embedding_column(fc.crossed_column([categcol1, categcol2], 16*16), 5)
| column | output vector shape | nb of parameters |
|--------------|---------------------------------|------------------------------|
| numeric_col | [1] | 0 |
| bucketized_numeric_col | [bucket boundaries+1] | 0 |
| indic_of_categ_col | [nb categories] | 0 |
| indic_of_categ_vocab_col | [nb categories] | 0 |
| indic_of_crossed_col | [nb crossed categories] | 0 |
| embedding_of_crossed_col | [nb crossed categories] | crossed categories * embedding size |
Let's start with all the data in as simply as possible | Python Code:
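Before wiring anything up, a quick sanity check of the parameter counts in the table above (a sketch; the 16*16 hash-bucket size and embedding size 5 come from the example columns):
# Only the embedding column adds trainable parameters: one vector per crossed bucket.
crossed_categories = 16 * 16   # hash buckets of the crossed column
embedding_size = 5
print(crossed_categories * embedding_size)  # 1280 parameters; every other column above adds 0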
import os, json, math
import numpy as np
import tensorflow as tf
from tensorflow.python.feature_column import feature_column_v2 as fc # This will change when Keras FeatureColumn is final.
from matplotlib import pyplot as plt
print("Tensorflow version " + tf.__version__)
tf.enable_eager_execution()
#@title display utilities [RUN ME]
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/08_Taxifare_Keras_FeatureColumns_playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Keras Feature Columns are not an officially released feature yet. Some caveats apply: please run this notebook on a GPU Backend. Keras Feature Columns are not comaptible with TPUs yet. Also, you will not be able to export this model to Tensorflow's "saved model" format for serving. The serving layer for feature columns will be added soon.
Imports
End of explanation
# backend identification
IS_COLAB = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
HAS_COLAB_TPU = 'COLAB_TPU_ADDR' in os.environ
# Auth on Colab
if IS_COLAB:
from google.colab import auth
auth.authenticate_user()
# Also propagate the Auth to TPU if available so that it can access your GCS buckets
if IS_COLAB and HAS_COLAB_TPU:
TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
with tf.Session(TF_MASTER) as sess:
with open('/content/adc.json', 'r') as f:
auth_info = json.load(f) # Upload the credentials to TPU.
tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)
print('Using TPU')
# TPU usage flag
USE_TPU = HAS_COLAB_TPU
Explanation: Colab-only auth
End of explanation
DATA_BUCKET = "gs://cloud-training-demos/taxifare/ch4/taxi_preproc/"
TRAIN_DATA_PATTERN = DATA_BUCKET + "train*"
VALID_DATA_PATTERN = DATA_BUCKET + "valid*"
CSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
DEFAULTS = [[0.0], ['null'], [12], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def decode_csv(line):
column_values = tf.decode_csv(line, DEFAULTS)
column_names = CSV_COLUMNS
decoded_line = dict(zip(column_names, column_values)) # create a dictionary {'column_name': value, ...} for each line
return decoded_line
def load_dataset(pattern):
#filenames = tf.gfile.Glob(pattern)
filenames = tf.data.Dataset.list_files(pattern)
#dataset = tf.data.TextLineDataset(filenames)
dataset = filenames.interleave(tf.data.TextLineDataset, cycle_length=16) # interleave so that reading happens from multiple files in parallel
dataset = dataset.map(decode_csv)
return dataset
dataset = load_dataset(TRAIN_DATA_PATTERN)
for n, data in enumerate(dataset):
numpy_data = {k: v.numpy() for k, v in data.items()} # .numpy() works in eager mode
print(numpy_data)
if n>10: break
def add_engineered(features):
# this is how you can do feature engineering in TensorFlow
distance = tf.sqrt((features['pickuplat'] - features['dropofflat'])**2 +
(features['pickuplon'] - features['dropofflon'])**2)
    # euclidean distance is hard for a neural network to emulate
features['euclidean'] = distance
return features
def features_and_labels(features):
features = add_engineered(features)
features.pop('key') # this column not needed
label = features.pop('fare_amount') # this is what we will train for
return features, label
def prepare_dataset(dataset, batch_size, truncate=None, shuffle=True):
dataset = dataset.map(features_and_labels)
if truncate is not None:
dataset = dataset.take(truncate)
dataset = dataset.cache()
if shuffle:
dataset = dataset.shuffle(10000)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)
return dataset
one_item = load_dataset(TRAIN_DATA_PATTERN).map(features_and_labels).take(1).batch(1)
Explanation: Config
End of explanation
NB_BUCKETS = 16
latbuckets = np.linspace(38.0, 42.0, NB_BUCKETS).tolist()
lonbuckets = np.linspace(-76.0, -72.0, NB_BUCKETS).tolist()
# the columns you can play with
# Categorical columns are used as:
# fc.indicator_column(dayofweek)
dayofweek = fc.categorical_column_with_vocabulary_list('dayofweek', vocabulary_list = ['Sun', 'Mon', 'Tues', 'Wed', 'Thu', 'Fri', 'Sat'])
hourofday = fc.categorical_column_with_identity('hourofday', num_buckets = 24)
# Bucketized columns can be used as such:
bucketized_pick_lon = fc.bucketized_column(fc.numeric_column('pickuplon'), lonbuckets)
bucketized_pick_lat = fc.bucketized_column(fc.numeric_column('pickuplat'), latbuckets)
bucketized_drop_lon = fc.bucketized_column(fc.numeric_column('dropofflon'), lonbuckets)
bucketized_drop_lat = fc.bucketized_column(fc.numeric_column('dropofflat'), latbuckets)
# Cross columns are used as
# fc.embedding_column(day_hr, 5)
day_hr = fc.crossed_column([dayofweek, hourofday], 24 * 7)
pickup_cross = fc.crossed_column([bucketized_pick_lat, bucketized_pick_lon], NB_BUCKETS * NB_BUCKETS)
dropoff_cross = fc.crossed_column([bucketized_drop_lat, bucketized_drop_lon], NB_BUCKETS * NB_BUCKETS)
#pickdrop_pair = fc.crossed_column([pickup_cross, dropoff_cross], NB_BUCKETS ** 4 )
columns = [
###
#
# YOUR FEATURE COLUMNS HERE
#
fc.numeric_column('passengers'),
##
]
l = tf.keras.layers
model = tf.keras.Sequential(
[
fc.FeatureLayer(columns),
l.Dense(100, activation='relu'),
l.Dense(64, activation='relu'),
l.Dense(32, activation='relu'),
l.Dense(16, activation='relu'),
l.Dense(1, activation=None), # regression
])
def rmse(y_true, y_pred): # Root Mean Squared Error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def mae(y_true, y_pred): # Mean Absolute Error
return tf.reduce_mean(tf.abs(y_pred - y_true))
model.compile(optimizer=tf.train.AdamOptimizer(), # little bug: in eager mode, 'adam' is not yet accepted, must spell out tf.train.AdamOptimizer()
loss='mean_squared_error',
metrics=[rmse])
# print model layers
model.predict(one_item, steps=1) # little bug: with FeatureLayer, must call the model once on dummy data before .summary can work
model.summary()
EPOCHS = 8
BATCH_SIZE = 512
TRAIN_SIZE = 64*1024 # max is 2,141,023
VALID_SIZE = 4*1024 # max is 2,124,500
# Playground settings: TRAIN_SIZE = 64*1024, VALID_SIZE = 4*1024
# Solution settings: TRAIN_SIZE = 640*1024, VALID_SIZE = 64*1024
# This should reach RMSE = 3.9 (multiple runs may be necessary)
train_dataset = prepare_dataset(load_dataset(TRAIN_DATA_PATTERN), batch_size=BATCH_SIZE, truncate=TRAIN_SIZE)
valid_dataset = prepare_dataset(load_dataset(VALID_DATA_PATTERN), batch_size=BATCH_SIZE, truncate=VALID_SIZE, shuffle=False)
history = model.fit(train_dataset, steps_per_epoch=TRAIN_SIZE//BATCH_SIZE, epochs=EPOCHS, shuffle=True,
validation_data=valid_dataset, validation_steps=VALID_SIZE//BATCH_SIZE)
print(history.history.keys())
display_training_curves(history.history['rmse'], history.history['val_rmse'], 'rmse', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
Explanation: Linear Keras model [WORK REQUIRED]
What do the columns do ? Familiarize yourself with these column types.
numeric_col = fc.numeric_column('name')
bucketized_numeric_col = fc.bucketized_column(fc.numeric_column('name'), [0, 2, 10])
indic_of_categ_col = fc.indicator_column(fc.categorical_column_with_identity('name', num_buckets = 24))
indic_of_categ_vocab_col = fc.indicator_column(fc.categorical_column_with_vocabulary_list('color', vocabulary_list = ['red', 'blue']))
indic_of_crossed_col = fc.indicator_column(fc.crossed_column([categcol1, categcol2], 16*16))
embedding_of_crossed_col = fc.embedding_column(fc.crossed_column([categcol1, categcol2], 16*16), 5)
| column | output vector shape | nb of parameters |
|--------------|---------------------------------|------------------------------|
| numeric_col | [1] | 0 |
| bucketized_numeric_col | [bucket boundaries+1] | 0 |
| indic_of_categ_col | [nb categories] | 0 |
| indic_of_categ_vocab_col | [nb categories] | 0 |
| indic_of_crossed_col | [nb crossed categories] | 0 |
| embedding_of_crossed_col | [nb crossed categories] | crossed categories * embedding size |
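For a quick sanity check of the last row: the day_hr cross used in this notebook has 24*7 = 168 buckets, so wrapping it in an embedding column of size 5 adds 168 * 5 = 840 trainable weights; every other column type in the table is a parameter-free transformation of its input.
print((24 * 7) * 5)  # trainable parameters in fc.embedding_column(day_hr, 5) -> 840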
Let's start with all the data in as simply as possible: numerical columns for numerical values, categorical (one-hot encoded) columns for categorical data like the day of the week or the hour of the day. Try training...
RMSE flat at 8-9 ... not good
Try to replace the numerical latitudes and longitudes by their bucketized versions
RMSE trains to 6 ... progress!
Try to add an engineered feature like 'euclidean' for the distance traveled by the taxi
RMSE trains down to 4-5 ... progress !
The euclidean distance is really hard to emulate for a neural network. Look through the code to see how it was "engineered".
Now add embedded crossed columns for:
hourofday x dayofweek
pickup neighborhood (bucketized pickup lon x bucketized pickup lat)
dropoff neighborhood (bucketized dropoff lon x bucketized dropoff lat)
is this better ?
The big wins were bucketizing the coordinates and adding the euclidean distance. The crossed columns add only a little, and only if you train for longer. Try training on 10x the training and validation data. With crossed columns you should be able to reach RMSE=3.9
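One possible set of feature columns for the solution — a sketch only, reusing the columns defined in the column cell above; the embedding size of 5 is an arbitrary choice:
columns = [
    fc.numeric_column('passengers'),
    fc.numeric_column('euclidean'),          # engineered distance feature
    fc.indicator_column(dayofweek),
    fc.indicator_column(hourofday),
    bucketized_pick_lat, bucketized_pick_lon,
    bucketized_drop_lat, bucketized_drop_lon,
    fc.embedding_column(day_hr, 5),          # hourofday x dayofweek
    fc.embedding_column(pickup_cross, 5),    # pickup neighborhood
    fc.embedding_column(dropoff_cross, 5),   # dropoff neighborhood
]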
End of explanation |
11,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Feature Store
Learning Objective
In this notebook, you will learn how to
Step1: Note
Step2: Set your project ID
Update YOUR-PROJECT-ID with your Project ID. If you don't know your project ID, you may be able to get your project ID using gcloud.
Step3: Otherwise, set your project ID here.
Step4: Prepare for output
Step 1. Create dataset for output
You need a BigQuery dataset to host the output data in us-central1. Input the name of the dataset you want to create and specify the name of the table you want to store the output later. These will be used later in the notebook.
Make sure that the table name does NOT already exist.
Step5: Import libraries and define constants
Step6: Terminology and Concept
Featurestore Data model
Feature Store organizes data with the following 3 important hierarchical concepts
Step7: You can use GetFeaturestore or Featurestores to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
Step8: Create Entity Type
You can specify a monitoring config which will by default be inherited by all Features under this EntityType.
Step9: Create Feature
You can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console
Step10: Search created features
While the ListFeatures method allows you to easily view all features of a single
entity type, the SearchFeatures method searches across all featurestores
and entity types in a given location (such as us-central1). This can help you discover features that were created by someone else.
You can query based on feature properties including feature ID, entity type ID,
and feature description. You can also limit results by filtering on a specific
featurestore, feature value type, and/or labels.
Step11: Now, narrow down the search to features that are of type DOUBLE
Step12: Or, limit the search results to features with specific keywords in their ID and type.
Step13: Import Feature Values
You need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK.
Source Data Format and Layout
As mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity must have an ID; also, each entity can optionally have a timestamp, specifying when the feature values are generated. This notebook uses Avro as an input, located at this public bucket. The Avro schemas are as follows
Step14: Import feature values for Movies
Similarly, import feature values for 'movies' into the featurestore.
Step15: Online serving
The
Online Serving APIs
let you serve feature values for small batches of entities. It's designed for latency-sensitive services, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions.
Read one entity per request
The ReadFeatureValues API is used to read feature values of one entity; hence
its custom HTTP verb is readFeatureValues. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp.
To read feature values, specify the entity ID and features to read. The response
contains a header and an entity_view. Each row of data in the entity_view
contains one feature value, in the same order of features as listed in the response header.
Step16: Read multiple entities per request
To read feature values from multiple entities, use the
StreamingReadFeatureValues API, which is almost identical to the previous
ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
Step17: Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases.
Batch Serving
Batch Serving is used to fetch a large batch of feature values for high-throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare for training examples by calling the BatchReadFeatureValues API.
Use case
The task is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input
Step18: After the LRO finishes, you should be able to see the result from the BigQuery console, in the dataset created earlier.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore | Python Code:
# Setup your dependencies
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Upgrade the specified package to the newest available version
! pip3 install {USER_FLAG} --upgrade git+https://github.com/googleapis/python-aiplatform.git@main-test
Explanation: Using Feature Store
Learning Objective
In this notebook, you will learn how to:
Import your features into Feature Store.
Serve online prediction requests using the imported features.
Access imported features in offline jobs, such as training jobs.
Introduction
In this notebook, you will learn how to use Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
This notebook assumes that you understand basic Google Cloud concepts such as Project, Storage and Vertex AI. Some machine learning knowledge is also helpful but not required.
Dataset
This notebook uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online.
Make sure to enable the Vertex AI, Cloud Storage, and Compute Engine APIs.
Install additional packages
For this notebook, you need the Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Note: Please ignore any incompatibility warnings and errors.
Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart kernel from Kernel -> Restart Kernel, or running the following:
End of explanation
import os
PROJECT_ID = "YOUR-PROJECT-ID"
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
# PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set your project ID
Update YOUR-PROJECT-ID with your Project ID. If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "YOUR-PROJECT-ID" # @param {type:"string"}
# Authenticate your google Cloud account
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
Explanation: Prepare for output
Step 1. Create dataset for output
You need a BigQuery dataset to host the output data in us-central1. Input the name of the dataset you want to create and specify the name of the table you want to store the output later. These will be used later in the notebook.
Make sure that the table name does NOT already exist.
End of explanation
# Copy all required files in your bucket.
# Make sure to replace your bucket name here.
!gsutil cp -r gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/feature_stores/* gs://<Your-bucket-name>
# Other than project ID and featurestore ID and endpoints needs to be set.
# Make sure to replace your bucket name here.
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://<Your-bucket-name>/movie_prediction_toy.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
Explanation: Import libraries and define constants
End of explanation
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
Explanation: Terminology and Concept
Featurestore Data model
Feature Store organizes data with the following 3 important hierarchical concepts:
Featurestore -> EntityType -> Feature
* Featurestore: the place to store your features
* EntityType: under a Featurestore, an EntityType describes an object to be modeled, real one or virtual one.
* Feature: under an EntityType, a feature describes an attribute of the EntityType
In the movie prediction example, you will create a featurestore called movie_prediction. This store has 2 entity types: Users and Movies. The Users entity type has the age, gender, and like genres features. The Movies entity type has the genres and average rating features.
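A minimal sketch of the resulting resource hierarchy, printed with the path helpers on the admin client (the feature path in the last comment is spelled out by hand for illustration):
print(admin_client.featurestore_path(PROJECT_ID, REGION, "movie_prediction"))
# projects/<project>/locations/us-central1/featurestores/movie_prediction
print(admin_client.entity_type_path(PROJECT_ID, REGION, "movie_prediction", "users"))
# .../featurestores/movie_prediction/entityTypes/users
# a feature sits one level deeper, e.g. .../entityTypes/users/features/age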
Create Featurestore and Define Schemas
Create Featurestore
The method to create a featurestore returns a
long-running operation (LRO). An LRO starts an asynchronous job. LROs are returned for other API
methods too, such as updating or deleting a featurestore. Calling
create_lro.result() waits for the LRO to complete.
End of explanation
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
Explanation: You can use GetFeaturestore or Featurestores to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
End of explanation
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
Explanation: Create Entity Type
You can specify a monitoring config which will by default be inherited by all Features under this EntityType.
End of explanation
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
Explanation: Create Feature
You can also set a custom monitoring configuration at the Feature level, and view the resulting properties and metrics in the console.
End of explanation
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
Explanation: Search created features
While the ListFeatures method allows you to easily view all features of a single
entity type, the SearchFeatures method searches across all featurestores
and entity types in a given location (such as us-central1). This can help you discover features that were created by someone else.
You can query based on feature properties including feature ID, entity type ID,
and feature description. You can also limit results by filtering on a specific
featurestore, feature value type, and/or labels.
End of explanation
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
Explanation: Now, narrow down the search to features that are of type DOUBLE
End of explanation
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
Explanation: Or, limit the search results to features with specific keywords in their ID and type.
End of explanation
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# TODO 1a
# Make sure to replace your bucket name here.
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://<Your-bucket-name>/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
Explanation: Import Feature Values
You need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK.
Source Data Format and Layout
As mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity must have an ID; also, each entity can optionally have a timestamp, specifying when the feature values are generated. This notebook uses Avro as an input, located at this public bucket. The Avro schemas are as follows:
For the Users entity:
schema = {
"type": "record",
"name": "User",
"fields": [
{
"name":"user_id",
"type":["null","string"]
},
{
"name":"age",
"type":["null","long"]
},
{
"name":"gender",
"type":["null","string"]
},
{
"name":"liked_genres",
"type":{"type":"array","items":"string"}
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
For the Movies entity
schema = {
"type": "record",
"name": "Movie",
"fields": [
{
"name":"movie_id",
"type":["null","string"]
},
{
"name":"average_rating",
"type":["null","double"]
},
{
"name":"title",
"type":["null","string"]
},
{
"name":"genres",
"type":["null","string"]
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
Import feature values for Users
When importing, specify the following in your request:
Data source format: BigQuery Table/Avro/CSV
Data source URL
Destination: featurestore/entity types/features to be imported
End of explanation
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
# TODO 1b
# Make sure to replace your bucket name here.
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://<Your-bucket-name>/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
Explanation: Import feature values for Movies
Similarly, import feature values for 'movies' into the featurestore.
End of explanation
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
# TODO 2a
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
Explanation: Online serving
The
Online Serving APIs
let you serve feature values for small batches of entities. It's designed for latency-sensitive services, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions.
Read one entity per request
The ReadFeatureValues API is used to read feature values of one entity; hence
its custom HTTP verb is readFeatureValues. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp.
To read feature values, specify the entity ID and features to read. The response
contains a header and an entity_view. Each row of data in the entity_view
contains one feature value, in the same order of features as listed in the response header.
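A small sketch of pairing the two up, assuming the request above is assigned to a variable named response (the field names header.feature_descriptors and entity_view.data are taken from the v1beta1 response message and should be treated as an assumption here):
feature_ids = [d.id for d in response.header.feature_descriptors]
values = [d.value for d in response.entity_view.data]
print(dict(zip(feature_ids, values)))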
End of explanation
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
# TODO 2b
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
Explanation: Read multiple entities per request
To read feature values from multiple entities, use the
StreamingReadFeatureValues API, which is almost identical to the previous
ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
End of explanation
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
# TODO 3a
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
# TODO 3b
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long running operation will poll until the batch read finishes.
batch_serving_lro.result()
Explanation: Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases.
Batch Serving
Batch Serving is used to fetch a large batch of feature values for high-throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare for training examples by calling the BatchReadFeatureValues API.
Use case
The task is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:
Features: you already imported into the featurestore.
Labels: the ground-truth data recording that user X has watched movie Y.
To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the age, gender and liked_genres features from users and
the genres and average_rating features from movies are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all True.
BatchReadFeatureValues API takes Table 1 as
input, joins all required feature values from the featurestore, and returns Table 2 for training.
<h4 align="center">Table 1. Ground-truth Data</h4>
users | movies | timestamp
----- | -------- | --------------------
alice | Cinema Paradiso | 2019-11-01T00:00:00Z
bob | The Shining | 2019-11-15T18:09:43Z
... | ... | ...
<h4 align="center">Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)</h4>
timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating
-------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | -----
2019-11-01T00:00:00Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8
2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5
... | ... | ... | ... | ... | ... | ... | ...
Why timestamp?
Note that there is a timestamp column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency.
For example, the 1st row of Table 2 indicates that user alice watched movie Cinema Paradiso on 2019-11-01T00:00:00Z. The featurestore keeps feature values for all timestamps but fetches feature values only at the given timestamp during batch serving. On 2019-11-01 alice might be 54 years old, but now alice might be 56; featurestore returns age=54 as alice's age, instead of age=56, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres.
Batch Read Feature Values
Assemble the request, which specifies the following info:
Where is the label data, i.e., Table 1.
Which features are read, i.e., the column names in Table 2.
The output is stored in a BigQuery table.
End of explanation
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
Explanation: After the LRO finishes, you should be able to see the result from the BigQuery console, in the dataset created earlier.
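As a quick check from inside the notebook, the same table can also be previewed with the BigQuery client created earlier — just a sketch, the console works equally well:
rows = client.query(
    f"SELECT * FROM `{PROJECT_ID}.{DESTINATION_DATA_SET}.{DESTINATION_TABLE_NAME}` LIMIT 5"
).result()
for row in rows:
    print(dict(row.items()))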
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore:
End of explanation |
11,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MCMC & why 3d matters
This example (although quite artificial) shows that viewing a posterior (ok, I have flat priors) in 3d can be quite useful. While the 2d projection may look quite 'bad', the 3d volume rendering shows that much of the volume is empty, and the posterior is much better defined than it seems in 2d.
Step1: Posterior in 2d
Step2: Posterior in 3d | Python Code:
!pip install emcee corner
!pip show matplotlib
import pylab
import scipy.optimize as op
import emcee
import numpy as np
%matplotlib inline
# our 'blackbox' 3 parameter model which is highly degenerate
def f_model(x, a, b, c):
return x * np.sqrt(a**2 +b**2 + c**2) + a*x**2 + b*x**3
N = 100
a_true, b_true, c_true = -1., 2., 1.5
# our input and output
x = np.random.rand(N)*0.5#+0.5
y = f_model(x, a_true, b_true, c_true)
# + some (known) gaussian noise
error = 0.2
y += np.random.normal(0, error, N)
# and plot our data
pylab.scatter(x, y);
pylab.xlabel("$x$")
pylab.ylabel("$y$")
# our likelihood
def lnlike(theta, x, y, error):
a, b, c = theta
model = f_model(x, a, b, c)
chisq = 0.5*(np.sum((y-model)**2/error**2))
return -chisq
result = op.minimize(lambda *args: -lnlike(*args), [a_true, b_true, c_true], args=(x, y, error))
# find the max likelihood
a_ml, b_ml, c_ml = result["x"]
print("estimates", a_ml, b_ml, c_ml)
print("true values", a_true, b_true, c_true)
result["message"]
# do the mcmc walk
ndim, nwalkers = 3, 100
pos = [result["x"] + np.random.randn(ndim)*0.1 for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnlike, args=(x, y, error))
sampler.run_mcmc(pos, 1500);
samples = sampler.chain[:, 50:, :].reshape((-1, ndim))
Explanation: MCMC & why 3d matters
This example (although quite artificial) shows that viewing a posterior (ok, I have flat priors) in 3d can be quite useful. While the 2d projection may look quite 'bad', the 3d volume rendering shows that much of the volume is empty, and the posterior is much better defined than it seems in 2d.
End of explanation
# plot the 2d pdfs
import corner
fig = corner.corner(samples, labels=["$a$", "$b$", "$c$"],
truths=[a_true, b_true, c_true])
Explanation: Posterior in 2d
End of explanation
import vaex
import scipy.ndimage
import ipyvolume
ds = vaex.from_arrays(a=samples[...,0].copy(), b=samples[...,1].copy(), c=samples[...,2].copy())
# get 2d histogram
v = ds.count(binby=["a", "b", "c"], shape=64)
# smooth it for visual pleasure
v = scipy.ndimage.gaussian_filter(v, 2)
ipyvolume.quickvolshow(v, lighting=True)
Explanation: Posterior in 3d
End of explanation |
11,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Success Prediction
Predict success or fail with information given before project begins
Features
Step2: A. Percentage distribution
Step3: B. Category
Step4: Funding-rate distributions differ by category
C. Video
Step5: The distribution is not normal.
Different distribution, same mean (video vs. no-video samples).
D. Target money
Step6: Target_money and result are correlated
E. Grammar level
Step7: very weak negative correlation
Step8: The distributions are not normal
Same distribution, same mean (between success and fail samples)
F. Num_funding_type
Step9: The distributions are not normal
Different distribution, different mean (between success and fail samples)
G. Month
Step10: same distribution, different mean (between success sample and fail sample)
H. Funding_duration
Step11: different distribution, different mean (between success sample and fail sample)
Label Encoding
Step12: 2. Feature selection
According to the previous project's results, incorrect feature selection leads to high variance in accuracy
So we treat this part carefully
A. Variables Selection
Step13: B. SelectKBest
Univariate feature selection
Univariate feature selection works by selecting the best features based on univariate statistical tests
Removes all but the K-highest scoring features
Scoring Function
f_classif
Step14: when using k=4, (target_money, funding_duration, start_month, has_video) are selected
C. Correlation
Step15: As a result, (target_money, funding_duration, num_funding_type) are highly correlated
According to correlation theory, one variable could be used among (target_money, funding_duration, num_funding_type)
Candidates are target_money, grammar_level, category_label, has_video, start_month
D. Feature Importance (Random Forest)
Ease of use
Relatively good accuracy
Robustness
Step16: target_money, grammar_level, funding_duration, start_month are selected
But it's always changed as model fits
E. Mean decrease accuracy
Directly measure the impact of each feature on accuracy of the model
For unimportant variables, the permutation should have little to no effect on model accuracy
Base model = RandomForest
Step17: (-) score means accuracy increases without a variable
Only target_money is selected
F. RFE (Recursive Feature Elimination)
Repeatedly construct a model and choose either the best or worst performing feature
Base model = RandomForest
Step18: G. # Result
Kbest = target_money, funding_duration, start_month, has_video
Correlation = target_money, grammar_level, start_month, has_video, category_label
Feature Importance = target_money, grammar_level, funding_duration, start_month
Mean decrease accuracy = target_money
Recursive Feature Elimination = target_money, grammar_level, funding_duration, start_month, category_label
Rank
Step19: A. SVC
Step20: As features reduce, score increases
B. GaussianNB
Step21: C. Decision Tree
Step22: D. Random Forest
Step23: E. KNN
Step25: F. Model comparison
Step26: Decision Tree, Random Forest, KNN are selected
3. Grid Search & Score
Decision Tree
Step27: A. Decision Tree
x_features = 'target_money', 'start_month', 'grammar_level', 'funding_duration'
Step28: B. Random Forest
Step29: C. KNN
Step30: D. Fluctuation as a parameter change
Random Forest, KNN
KNN parameter = n_neighbors
Random Forest parameter = n_estimators
Step31: 4. Optimum models
A. Classification Report
Step32: 5. Result
Step33: Overall, performance of KNN is the best (Accuracy | Python Code:
#load_data (plus the imports used throughout this notebook)
import pandas as pd, numpy as np, seaborn as sns, matplotlib.pyplot as plt
import scipy as sp, scipy.stats
cf_df = pd.read_excel('cf_df.xlsx')
#check feature number
def check_number(feature):
"feature : 'str'
count = cf_df[feature].value_counts()
return print(count)
# success rate
print('overall success'),
print('=================='),
success_percentage = cf_df['end_with_success'].value_counts()[1] / cf_df['end_with_success'].count()
print(round(success_percentage*100, 2),'%' )
Explanation: Success Prediction
Predict success or fail with information given before project begins
Features : target_money, grammar_level, has_video, funding_type_count, funding_type_1, funding_type_2, funding_type_3, funding_duration
※ Estimation of grammar_level
number of errors / number of tokens
tested by naver grammar tester (https://github.com/ssut/py-hanspell)
grammar level is supposed to represent the writing ability of creators
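A rough sketch of how such a grammar_level could be computed with py-hanspell — the spell_checker.check call and its errors field are assumptions about that package, not code taken from this notebook:
from hanspell import spell_checker
def estimate_grammar_level(text):
    tokens = text.split()
    n_errors = spell_checker.check(text).errors   # number of detected errors (assumed field)
    return n_errors / max(len(tokens), 1)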
Outline
Check the insight from previous project (https://github.com/surprisoh/crowdfunding_prediction)
Distribution test
Feature selection
Model Selection (classification)
Grid Search & Scores
Optimum model selection
Result & Insight
1. Distribution Test
End of explanation
# percentage distribution
#log scailing
plt.figure(figsize=(8, 6))
sns.kdeplot(cf_df['percentage'].apply(lambda x: np.log(x)));
plt.legend(fontsize = 15);
plt.xticks(fontsize=15);
plt.xlabel('Log Percentage', fontsize=15);
plt.ylabel('Distribution', fontsize=15);
Explanation: A. Percentage distribution
End of explanation
check_number('category')
# percentage boxplot by categories
figure = plt.figure(figsize=(15,8))
sns.boxplot(x = cf_df['category'], y = cf_df['percentage']);
plt.xticks(rotation = 'vertical');
plt.ylim(-30, 500);
plt.xticks(fontsize=15);
plt.xlabel('category', fontsize=15);
plt.ylabel('percentage', fontsize=15);
# success rate by categories
print('Success rate'),
print('=================='),
for i in cf_df['category'].unique():
success_percentage = (len(cf_df.loc[cf_df['category'] == i][cf_df['end_with_success'] == True]) / \
len(cf_df.loc[cf_df['category'] == i]))*100
print("{category} :".format(category = i), round(success_percentage, 2),'%')
print('=================='),
print('Dance has the highest rate')
# category percentage distribution
figure = plt.figure(figsize=(10,7))
for i in cf_df['category'].unique():
sns.kdeplot(cf_df.loc[cf_df['category'] == '{i}'.format(i=i)]['percentage'], label = '{i}'.format(i=i))
plt.xlim(-100, 400);
plt.xticks(fontsize=15);
plt.yticks(fontsize=15);
plt.xlabel('Percentage', fontsize=15)
plt.ylabel('Distribution', fontsize=15)
plt.legend(fontsize = 15);
# distribution test between whole distribution and each category distribution
# K-S : Kolmogorov Smirnov test
for i in cf_df['category'].unique()[:-1]:
all_data = cf_df['funding_rate']
category_data = cf_df.loc[cf_df['category'] == i]['funding_rate']
# output values (p-value < 0.05)
if round(sp.stats.ks_2samp(all_data, category_data)[1], 4) < 0.05:
print('[all_sample vs {category_i}]'.format(category_i = i)),
print(' K-S statistic :', round(sp.stats.ks_2samp(all_data, category_data)[0], 4))
print(' p-value :', round(sp.stats.ks_2samp(all_data, category_data)[1], 4))
Explanation: B. Category
End of explanation
# video by category
print('video'),
print('=================='),
for i in cf_df['category'].unique():
video = (len(cf_df.loc[cf_df['category'] == i][cf_df['has_video'] == True]) / \
len(cf_df.loc[cf_df['category'] == i]))*100
print("{category} :".format(category = i), round(video, 2),'%')
overall = len(cf_df.loc[cf_df['has_video'] == True]) / len(cf_df)
print('overall :', round(overall, 2)*100, '%')
# number 0f video
check_number('has_video')
# percentage distribution by video status
figure = plt.figure(figsize=(10,8))
sns.kdeplot(cf_df[cf_df['has_video'] == True]['percentage'], label= 'video')
sns.kdeplot(cf_df[cf_df['has_video'] == False]['percentage'], label= 'non-video')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('Percentage')
plt.ylabel('Distribution')
plt.xlim(-100, 400)
plt.legend(fontsize = 15)
# normality test
video_percent = cf_df[cf_df['has_video'] == True]['percentage']
no_video_percent = cf_df[cf_df['has_video'] == False]['percentage']
print('[video vs no_video]'),
print('Shapiro test statistics(video) :', sp.stats.shapiro(video_percent)[0],
' ','Shapiro test p-value(video) :', sp.stats.shapiro(video_percent)[1])
print('Shapiro test statistics(no_video) :', sp.stats.shapiro(no_video_percent)[0],
' ','Shapiro test p-value(no_video) :', sp.stats.shapiro(no_video_percent)[1])
print(sp.stats.ks_2samp(video_percent, no_video_percent))
print(sp.stats.mannwhitneyu(video_percent, no_video_percent))
Explanation: Funding-rate distributions differ by category
C. Video
End of explanation
# target_money, current_money
figure = plt.figure(figsize=(10,8))
sns.regplot(x ='target_money', y ='current_money', data = cf_df, fit_reg=False);
plt.xlim(-1000, 10000000);
plt.ylim(-2000, 10000000);
plt.xticks(fontsize=15);
plt.yticks(fontsize=15);
plt.xlabel('target_money', fontsize = 15);
plt.ylabel('current_money', fontsize = 15);
#Pearson Correlation
target_current_corr = sp.stats.pearsonr(cf_df['target_money'].tolist(), cf_df['current_money'].tolist())
print('Pearson correlation :', round(target_current_corr[0], 4))
Explanation: The distribution is not normal.
Different distribution, same mean (video vs. no-video samples).
D. Target money
End of explanation
figure = plt.figure(figsize=(10,8));
sns.regplot(cf_df['grammar_level'], cf_df['current_money'], color='g');
plt.ylim(-1000, 10000000);
plt.xticks(fontsize=15);
plt.yticks(fontsize=15);
plt.xlabel('grammar_level', fontsize = 15);
plt.ylabel('current_money', fontsize = 15);
#Pearson Correlation
grammar_current_corr = sp.stats.pearsonr(cf_df['grammar_level'].tolist(), cf_df['current_money'].tolist())
print('Pearson correlation :', round(grammar_current_corr[0], 4))
Explanation: Target_money and result are correlated
E. Grammar level
End of explanation
# distribution
figure = plt.figure(figsize=(8,6))
sns.kdeplot(cf_df[cf_df['end_with_success'] == True]['grammar_level'], label= 'success');
sns.kdeplot(cf_df[cf_df['end_with_success'] == False]['grammar_level'], label= 'fail',
c='r', linestyle='--');
plt.xticks(fontsize=10);
plt.yticks(fontsize=10);
plt.xlabel('Grammar_level');
plt.ylabel('Distribution');
plt.xlim(-0.1, 0.3);
plt.legend(fontsize = 15);
success_grammar = cf_df[cf_df['end_with_success'] == True]['grammar_level']
fail_grammar = cf_df[cf_df['end_with_success'] == False]['grammar_level']
print('Shapiro test statistics(success_grammar) :', round(sp.stats.shapiro(success_grammar)[0], 4),
' ','Shapiro test p-value(success_grammar) :', round(sp.stats.shapiro(success_grammar)[1], 4)),
print('Shapiro test statistics(fail_grammar) :', round(sp.stats.shapiro(fail_grammar)[0], 4),
' ','Shapiro test p-value(fail_grammar) :', round(sp.stats.shapiro(fail_grammar)[1], 4))
print(sp.stats.ks_2samp(success_grammar, fail_grammar))
print(sp.stats.mannwhitneyu(success_grammar, fail_grammar))
Explanation: very weak negative correlation
End of explanation
figure = plt.figure(figsize=(8,6))
sns.kdeplot(cf_df[cf_df['end_with_success'] == True]['num_funding_type'], label= 'success');
sns.kdeplot(cf_df[cf_df['end_with_success'] == False]['num_funding_type'], label= 'fail',
c='r', linestyle='--');
plt.xticks(fontsize=10);
plt.yticks(fontsize=10);
plt.xlabel('Num_funding_type');
plt.ylabel('Distribution');
#plt.xlim(-0.1, 0.3);
plt.legend(fontsize = 15);
success_count = cf_df[cf_df['end_with_success'] == True]['num_funding_type']
fail_count = cf_df[cf_df['end_with_success'] == False]['num_funding_type']
print('Shapiro test statistics(success_count) :', round(sp.stats.shapiro(success_count)[0], 4),
' ','Shapiro test p-value(success_count) :', round(sp.stats.shapiro(success_count)[1], 4)),
print('Shapiro test statistics(fail_count) :', round(sp.stats.shapiro(fail_count)[0], 4),
' ','Shapiro test p-value(fail_count) :', round(sp.stats.shapiro(fail_count)[1], 4))
print(sp.stats.ks_2samp(success_count, fail_count))
print(sp.stats.mannwhitneyu(success_count, fail_count))
Explanation: The distributions are not normal
Same distribution, same mean (between success and fail samples)
F. Num_funding_type
End of explanation
check_number('start_month')
plt.figure(figsize=(8,6));
sns.kdeplot(cf_df.loc[cf_df['end_with_success'] ==True]['start_month'], label = 'success');
sns.kdeplot(cf_df.loc[cf_df['end_with_success'] ==False]['start_month'], label = 'fail');
plt.xticks(range(1, 12), fontsize=15);
plt.yticks(fontsize=15);
plt.xlabel('Month', fontsize=15);
plt.ylabel('Distribution', fontsize = 15);
plt.legend(fontsize = 15);
# Ks_2sampResult : Kolmogorov-Smirnov test
# Ttest_indResult : 2 sample T-test
success_month = cf_df.loc[cf_df['end_with_success'] ==True]['start_month']
fail_month = cf_df.loc[cf_df['end_with_success'] ==False]['start_month']
print(sp.stats.ks_2samp(success_month, fail_month))
print(sp.stats.ttest_ind(success_month, fail_month))
Explanation: The distributions are not normal
Different distribution, different mean (between success and fail samples)
G. Month
End of explanation
plt.figure(figsize=(10,8));
sns.kdeplot(cf_df.loc[cf_df['end_with_success'] ==True]['funding_duration'], label = 'success');
sns.kdeplot(cf_df.loc[cf_df['end_with_success'] ==False]['funding_duration'], label = 'fail');
plt.yticks(fontsize=15);
plt.xlabel('Funding_duration', fontsize=15);
plt.ylabel('Distribution', fontsize = 15);
plt.legend(fontsize = 15);
# Ks_2sampResult : Kolmogorov-Smirnov test
# Ttest_indResult : 2 sample T-test
success_duration = cf_df.loc[cf_df['end_with_success'] ==True]['funding_duration']
fail_duration = cf_df.loc[cf_df['end_with_success'] ==False]['funding_duration']
print(sp.stats.ks_2samp(success_duration, fail_duration))
print(sp.stats.ttest_ind(success_duration, fail_duration))
Explanation: same distribution, different mean (between success sample and fail sample)
H. Funding_duration
End of explanation
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
cf_df['category_label'] = le.fit_transform(cf_df['category'])
category_class = le.classes_
category_class
Explanation: different distribution, different mean (between success sample and fail sample)
Label Encoding
End of explanation
# all features
x = pd.DataFrame([cf_df['category_label'], cf_df['target_money'],
cf_df['has_video'], cf_df['grammar_level'],
cf_df['start_month'], cf_df['funding_duration'],
cf_df['num_funding_type']]).T
#no month, duration, num_funding_type
x_grammar = pd.DataFrame([cf_df['category_label'], cf_df['target_money'],
cf_df['has_video'], cf_df['grammar_level']]).T
#no month, duration, num_funding_type, grammar
x_no_grammar = pd.DataFrame([cf_df['category_label'], cf_df['target_money'],
cf_df['has_video']]).T
y = cf_df['end_with_success']
Explanation: 2. Feature selection
According to the previous project's results, incorrect feature selection leads to high variance in accuracy
So we treat this part carefully
A. Variables Selection
End of explanation
from sklearn.feature_selection import SelectKBest, f_classif
x_kbest = SelectKBest(f_classif, k=4).fit_transform(x, y)
x_kbest
Explanation: B. SelectKBest
Univariate feature selection
Univariate feature selection works by selecting the best features based on univariate statistical tests
Removes all but the K-highest scoring features
Scoring Function
f_classif : Compute the ANOVA F-value for the provided sample.
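To see which columns were kept (rather than only the transformed array), the fitted selector exposes a boolean mask — a short sketch:
selector = SelectKBest(f_classif, k=4).fit(x, y)
print(x.columns[selector.get_support()])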
End of explanation
x_categories = pd.DataFrame([x['category_label'], x['has_video'], x['start_month']]).T
x_nominal = pd.DataFrame([x['target_money'], x['grammar_level'], x['funding_duration'],
x['num_funding_type']]).T
category_corr = x_categories.corr(method='spearman')
print(category_corr);
sns.heatmap(category_corr);
plt.xticks(fontsize = 15);
plt.yticks(fontsize = 15);
nominal_corr = x_nominal.corr(method='pearson')
print(nominal_corr)
sns.heatmap(nominal_corr);
plt.xticks(fontsize = 15, rotation='vertical');
plt.yticks(fontsize = 15);
Explanation: when using k=4, (target_money, funding_duration, start_month, has_video) are selected
C. Correlation
End of explanation
from sklearn.ensemble import RandomForestClassifier
re = RandomForestClassifier()
re.fit(x, y)
feature_importance = pd.DataFrame()
for i in np.arange(len(x.columns)):
if re.feature_importances_[i] >= 0.1:
fi = pd.DataFrame([x.columns[i], re.feature_importances_[i]],
index = ['features', 'importance']).T
feature_importance = feature_importance.append(fi)
feature_importance.index = np.arange(len(feature_importance))
feature_importance
Explanation: As a result, (target_money, funding_duration, num_funding_type) are highly correlated
According to correlation theory, one variable could be used among (target_money, funding_duration, num_funding_type)
Candidates are target_money, grammar_level, category_label, has_video, start_month
D. Feature Importance (Random Forest)
Ease of use
Relatively good accuracy
Robustness
End of explanation
from sklearn.cross_validation import ShuffleSplit
from collections import defaultdict
re_1 = RandomForestClassifier(n_estimators=20)
scores = defaultdict(list)
names = x.columns
#crossvalidate the scores on a number of different random splits of the data
for train_idx, test_idx in ShuffleSplit(len(x), 20, .3):
X_train, X_test = x.ix[train_idx], x.ix[test_idx]
Y_train, Y_test = y.ix[train_idx], y.ix[test_idx]
r = re.fit(X_train, Y_train)
acc = re.score(X_test, Y_test)
for i in x.columns:
X_drop_train = X_train.drop([i], axis=1)
X_drop_test = X_test.drop([i], axis=1)
re_1.fit(X_drop_train, Y_train)
shuff_acc = round(re_1.score(X_drop_test, Y_test), 4)
scores[i].append(round((acc-shuff_acc)/acc, 4))
print ("Features sorted by their score:"),
print (pd.DataFrame([sorted([(feat, round(np.mean(score), 4)) for feat, score in scores.items()],
reverse=True)], index = ['score']).T)
Explanation: target_money, grammar_level, funding_duration, start_month are selected
But the selection changes every time the model is refit
E. Mean decrease accuracy
Directly measure the impact of each feature on accuracy of the model
For unimportant variables, the permutation should have little to no effect on model accuracy
Base model = RandomForest
End of explanation
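Note that the loop above drops each feature entirely; the classic mean-decrease-accuracy procedure instead permutes (shuffles) one column at a time so the model shape stays fixed. A rough sketch of the permutation variant, reusing the last train/test split and the fitted forest from the loop above:
rng = np.random.RandomState(0)
re.fit(X_train, Y_train)
base_acc = re.score(X_test, Y_test)
for col in X_test.columns:
    X_perm = X_test.copy()
    X_perm[col] = rng.permutation(X_perm[col].values)  # shuffle just this column
    print(col, round(base_acc - re.score(X_perm, Y_test), 4))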
from sklearn.feature_selection import RFE
names = x.columns
rfe = RFE(re)
rfe.fit(x, y)
print ("Features sorted by their rank"),
pd.DataFrame([sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names))],
index=['Ranking']).T
Explanation: A negative score means accuracy increases when that variable is removed
Only target_money is selected
F. RFE (Recursive Feature Elimination)
Repeatedly constructs a model and removes the worst-performing feature at each step
Base model = RandomForest
End of explanation
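By default RFE keeps half of the features; to request an explicit subset and read it off directly, n_features_to_select and the support_ mask can be used. A small sketch with the same estimator and data:
rfe3 = RFE(re, n_features_to_select=3)
rfe3.fit(x, y)
print(x.columns[rfe3.support_].tolist())  # the 3 features RFE keeps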
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import StratifiedKFold
# using features
x_4 = pd.DataFrame([cf_df['target_money'], cf_df['start_month'],
cf_df['grammar_level'], cf_df['funding_duration']]).T
x_5 = pd.DataFrame([cf_df['target_money'], cf_df['start_month'],
cf_df['grammar_level'], cf_df['funding_duration'],
cf_df['category_label']]).T
success_percentage = y.value_counts()[1] / len(y)
print('Base success rate:' ,round(success_percentage * 100, 2),'%')
Explanation: G. Result
Kbest = target_money, funding_duration, start_month, has_video
Correlation = target_money, grammar_level, start_month, has_video, category_label
Feature Importance = target_money, grammar_level, funding_duration, start_month
Mean decrease accuracy = target_money
Recursive Feature Elimination = target_money, grammar_level, funding_duration, start_month, category_label
Rank : target_money > start_month > grammar_level > funding_duration > category_label = has_video
'num_funding_type' may be unimportant
Final Selection
4 features : target_money, start_month, grammar_level, funding_duration
5 features : target_money, start_month, grammar_level, funding_duration, category_label(or has_video)
3. Model Selection (Classification)
SVC
Gaussian Naive Bayes
Decision Tree
Random Forest
Select the models that show better performance than the base success rate (62.34%)
End of explanation
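The following cells score each classifier in its own cell; an equivalent, more compact pattern keeps the estimators in a dict and loops over the three feature sets — shown here only as an alternative sketch using the same objects:
models = {'SVC (rbf)': SVC(kernel='rbf'),
          'GaussianNB': GaussianNB(),
          'DecisionTree': DecisionTreeClassifier(criterion='entropy'),
          'RandomForest': RandomForestClassifier(n_estimators=20)}
for name, model in models.items():
    print(name,
          'x_4:', round(cross_val_score(model, x_4, y).mean(), 4),
          'x_5:', round(cross_val_score(model, x_5, y).mean(), 4),
          'x:', round(cross_val_score(model, x, y).mean(), 4))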
svc_rbf = SVC(kernel = 'rbf')
#SVC kernel : rbf
print('7 features :', cross_val_score(svc_rbf, x, y).mean()),
print('=======================================================')
print('5 features : ', cross_val_score(svc_rbf, x_5, y).mean()),
print('======================================================='),
print('4 features : ', cross_val_score(svc_rbf, x_4, y).mean())
Explanation: A. SVC
End of explanation
gnb = GaussianNB()
# GNB
print('7 features :', cross_val_score(gnb, x, y).mean()),
print('=======================================================')
print('5 features : ', cross_val_score(gnb, x_5, y).mean()),
print('======================================================='),
print('4 features : ', cross_val_score(gnb, x_4, y).mean())
Explanation: As the number of features decreases, the score increases
B. GaussianNB
End of explanation
dt = DecisionTreeClassifier(criterion = 'entropy')
# DecisionTree
print('7 features :', cross_val_score(dt, x, y).mean()),
print('=======================================================')
print('5 features : ', cross_val_score(dt, x_5, y).mean()),
print('======================================================='),
print('4 features : ', cross_val_score(dt, x_4, y).mean())
Explanation: C. Decision Tree
End of explanation
rf = RandomForestClassifier(n_estimators=20)
# Random Forest
print('7 features :', cross_val_score(rf, x, y).mean()),
print('=======================================================')
print('5 features : ', cross_val_score(rf, x_5, y).mean()),
print('======================================================='),
print('4 features : ', cross_val_score(rf, x_4, y).mean())
Explanation: D. Random Forest
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
print('7 features :', cross_val_score(knn, x, y).mean()),
print('=======================================================')
print('5 features : ', cross_val_score(knn, x_5, y).mean()),
print('======================================================='),
print('4 features : ', cross_val_score(knn, x_4, y).mean())
Explanation: E. KNN
End of explanation
#StratifiedKFold : proper method for binary classes
sfkfold = StratifiedKFold(y, n_folds = 10)
# 3 scores: helper to collect cross-val means for the three feature sets (x_4, x_5, x)
def scores(model, x_1, x_2, x_3, y, cv):
scores = [cross_val_score(model, x_1, y, cv=cv).mean(),
cross_val_score(model, x_2, y, cv=cv).mean(),
cross_val_score(model, x_3, y, cv=cv).mean()]
return scores
svc_score = scores(svc_rbf, x_4, x_5, x, y, sfkfold)
dt_score = scores(dt,x_4, x_5, x, y, sfkfold)
rf_score = scores(rf, x_4, x_5, x, y, sfkfold)
knn_score = scores(knn, x_4, x_5, x, y, sfkfold)
gnb_score = scores(gnb, x_4, x_5, x, y, sfkfold)
figure = plt.figure(figsize=(10,7));
x_ticks = [len(x_4.columns), len(x_5.columns), len(x.columns)]
plt.plot(x_ticks, svc_score, 'o--', c='r', label = 'SVC');
plt.plot(x_ticks, dt_score, 'o--', c='y', label = 'Decision Tree');
plt.plot(x_ticks, rf_score, 'o--', c='g', label = 'Random Forest');
plt.plot(x_ticks, gnb_score, 'o--', c='c', label = 'GaussianNB');
plt.plot(x_ticks, knn_score, 'o--', c='m', label = 'KNN');
plt.legend(loc = 'lower right');
plt.xlabel('Parameter', fontsize=15);
plt.axhline(y = success_percentage, ls = '--', c='b');
plt.ylabel('Accuracy', fontsize=15);
plt.ylim(0.55, 0.83);
plt.xlim(3.5, 7.5);
Explanation: F. Model comparison
End of explanation
from sklearn.grid_search import GridSearchCV
# Gridsearch report function
from operator import itemgetter
def report(grid_scores, n_top=3):
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
Explanation: Decision Tree, Random Forest, KNN are selected
3. Grid Search & Score
Decision Tree : Accuracy fluctuates as the selected features change
Parameter tuning, Feature selection
Random Forest : Accuracy fluctuates as the selected features change
Parameter tuning, Feature selection
KNN : Accuracy fluctuates as the selected features change
Parameter tuning
End of explanation
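The report helper above reads grid_scores_, which only exists in older scikit-learn releases; on versions that expose cv_results_ instead, the same summary can be built as follows (a sketch, independent of the sklearn version used in the rest of the notebook):
def report_cv_results(cv_results, n_top=3):
    frame = pd.DataFrame(cv_results).sort_values('rank_test_score').head(n_top)
    for _, row in frame.iterrows():
        print("Rank {}: mean {:.3f} (std {:.3f})".format(
            int(row['rank_test_score']), row['mean_test_score'], row['std_test_score']))
        print("Parameters: {}".format(row['params']))
        print("")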
#Decision Tree parameters
param_grid = {"max_depth": [3, 5, None],
"max_features": ['auto','sqrt', None],
"min_samples_split": np.arange(1, 5),
"min_samples_leaf": [1, 3, 5, 10]}
# run grid search
grid_search = GridSearchCV(dt, param_grid=param_grid)
grid_search.fit(x_4, y)
report(grid_search.grid_scores_)
Explanation: A. Decision Tree
x_features = 'target_money', 'start_month', 'grammar_level', 'funding_duration'
End of explanation
# Random Forest parameters
param_grid = {"min_samples_leaf": [1, 2, 3],
"n_estimators" : np.arange(2, 50),
"criterion" : ["gini", "entropy"],
"bootstrap" : [True, False]}
# run grid search
grid_search = GridSearchCV(rf, param_grid=param_grid)
grid_search.fit(x_4, y)
report(grid_search.grid_scores_)
Explanation: B. Random Forest
End of explanation
# KNN parameters
param_grid = {"n_neighbors": np.arange(1, 20)}
# run grid search
grid_search = GridSearchCV(knn, param_grid=param_grid)
grid_search.fit(x_4, y)
report(grid_search.grid_scores_)
Explanation: C. KNN
End of explanation
# Random Forest, KNN
x_model = []
rf_model = []
knn_model = []
figure = plt.figure(figsize=(10,7))
for i in range(2, 100):
knn_score = cross_val_score(KNeighborsClassifier(n_neighbors=i), x_4, y, cv=sfkfold).mean()
rf_score = cross_val_score(RandomForestClassifier(n_estimators=i), x_4, y, cv=sfkfold).mean()
x_model.append(i)
rf_model.append(rf_score)
knn_model.append(knn_score)
plt.plot(x_model, rf_model, 'o--', c='r', label = 'Random Forest')
plt.plot(x_model, knn_model, 'o--', c='g', label = 'KNN')
plt.xlabel('Parameter', fontsize=15)
plt.axhline(y = success_percentage, ls = '--', c='b', label = 'Base success rate')
plt.ylabel('Accuracy', fontsize=15)
plt.legend()
print('Random Forest max_accuracy :', round(max(rf_model)*100, 2), '%'),
print('KNN max_accuracy :', round(max(knn_model)*100, 2), '%')
Explanation: D. Fluctuation as a parameter changes
Random Forest, KNN
KNN parameter = n_neighbors
Random Forest parameter = n_estimators
End of explanation
from sklearn.metrics import classification_report
from sklearn.metrics import auc
# optimum models
best_knn = KNeighborsClassifier(n_neighbors=5)
best_dt = DecisionTreeClassifier(max_depth= None, min_samples_leaf= 10, min_samples_split= 1, max_features= None)
best_rf = RandomForestClassifier(n_estimators=42, bootstrap=True, min_samples_leaf= 3, criterion ='gini')
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)
best_knn.fit(x_train, y_train);
best_rf.fit(x_train, y_train);
best_dt.fit(x_train, y_train);
print('Success : True'),
print('Fail : False')
print('')
print("[KNN Classification_report]"),
print(classification_report(y_test, best_knn.predict(x_test))),
print("===================================================="),
print("[Random Forest Classification_report]"),
print(classification_report(y_test, best_rf.predict(x_test))),
print("===================================================="),
print("[Decision Tree Classification_report]"),
print(classification_report(y_test, best_dt.predict(x_test)))
Explanation: 4. Optimum models
A. Classification Report
End of explanation
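Precision and recall per class are summarised above; a confusion matrix makes the raw error counts explicit. A short sketch for the fitted KNN on the same test split (the fail/success labels assume the boolean end_with_success target used throughout):
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, best_knn.predict(x_test))
print(pd.DataFrame(cm,
                   index=['actual fail', 'actual success'],
                   columns=['pred fail', 'pred success']))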
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)
# plot
plt.figure(figsize=(18,4))
plt.plot(y, 'bs-', markersize=10, alpha=0.6, label="Actual result")
plt.plot(best_knn.predict(x), 'ro-', markersize=10, alpha=0.6, label="Predicted result")
plt.legend()
plt.xlim(45, 145);
plt.ylim(-0.1, 1.1);
print('Actual result vs predicted result')
Explanation: 5. Result
End of explanation
x_no_grammars = []
y_no_grammars = []
x_grammars = []
y_grammars = []
x_no_grammar = pd.DataFrame([cf_df['category_label'], cf_df['target_money'],
cf_df['has_video']]).T
x_grammar = pd.DataFrame([cf_df['category_label'], cf_df['target_money'],
cf_df['has_video'], cf_df['grammar_level']]).T
figure = plt.figure(figsize=(10, 7))
legend =['with_grammar', 'without_grammar']
for i in range(1, 100):
rf_grammar = RandomForestClassifier(n_estimators=i)
grammar_score = cross_val_score(rf_grammar, x_grammar, y, cv=10).mean()
x_grammars.append(i)
y_grammars.append(grammar_score)
for i in range(1, 100):
rf_no_grammar = RandomForestClassifier(n_estimators=i)
no_grammar_score = cross_val_score(rf_no_grammar, x_no_grammar, y, cv=10).mean()
x_no_grammars.append(i)
y_no_grammars.append(no_grammar_score)
plt.plot(x_grammars, y_grammars, 'o--', c='r', label = 'with grammar')
plt.plot(x_no_grammars, y_no_grammars, 'o--', c = 'y', label = 'with no grammar')
plt.legend(legend, loc=5, fontsize=15)
plt.xlabel('n_estimator', fontsize=15)
plt.ylabel('accuracy', fontsize=15)
print('max_accuracy(with_grammar_level) :', round(max(y_grammars)*100, 2), '%'),
print('max_accuracy(no_grammar_level) :', round(max(y_no_grammars)*100, 2), '%')
Explanation: Overall, the performance of KNN is the best (Accuracy : 0.78, Recall : 0.79)
We have to interpret the result carefully, because this report is based on few features (4) and samples (2000+), and all samples were gathered over 5 years (2011~). It may be affected by overfitting and time effects.
But still, KNN performs amazingly well, even though KNN is easy to use and its optimal value of k is highly data-dependent
Insight
Grammar level is estimated from the project description (in the previous project, it was estimated from comments)
Grammar level and funding duration have an effect in this case
I expect grammar level represents the project creator's ability or serves as an index of user annoyance
Number of funding types (range of choice) doesn't really matter (some samples only have 2 types, but others have over 5 types)
The key factor for success is still the quality of the project
To predict perfectly, the crowdfunding company would need to build a measurable index of project quality
Appendix : with grammar vs with no grammar
End of explanation |
11,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling rent prices for the SF Bay Area
This notebook will develop a predictive model for rent prices in the Bay Area using rental listings from Craigslist.
Data
Step1: Load and prepare data
Step3: What kind of features do we have?
Step4: Inspecting the columns, some are obviously not useful or redundant. E.g., 'median_income' is just the median income for the entire region.
Define the columns we'll use...
Step5: Missing data?
Step6: About half of rows have bathrooms missing. Turns out this is because bathrooms was only added to the scraper at the end of December. Is it better to exclude the bathrooms feature and use all the data? Or better to include the bathrooms feature and only use data collected beginning in January? I tried it both ways and the model is more accurate (and more interesting) when we include the bathrooms feature. Plus, all the data we collect from now on will have bathrooms.
Variable distributions
Look at distributions to get a better sense of what the variables mean and to look for outliers.
Step8: Create some additional features that might help
(Second iteration)
In the first iteration of models, there appeared to be an interaction effect with # bedrooms and # bathrooms; i.e., the number of bathrooms per bedroom seems to matter. E.g., a property with 4 bedrooms is probably not very desirable if it only has 1 bathroom, but might be much more desirable if it has 4 bedrooms and 3 bathrooms. So let's try adding a feature for bathrooms per bedroom. (Turns out this improved the linear model but not GB.. as expected.)
Step9: It also appears the relationship between rent and sqft is not linear. Let's add a sqft2 term
Step10: Linear model
Try a linear model first, at the very least to use as a baseline.
Step11: These coefficients aren't very useful without the standard errors. If I were interested in building a model for interpretation, I would use the statsmodels package, which has more functionality for statistical tests.
Step12: Scores for full dataset w/o bathrooms feature
- Mean squared error
Step14: Does not look like overfitting. (If there were, we'd probably have a much smaller error on the training set and a larger error on the test set.)
We do have a problem with collinearity, and if we were using this model for interpretation we'd have to do better feature selection...
Gradient boosting
Step15: Tune the parameters for GB
Step16: Best RMSE when using no bathrooms
Step17: Using $/sqft as target variable
What if we use price per sqft as the y variable? Makes more sense conceptually.
Step18: The downside is that the distribution is not as close to normal. It's not that bad, though.
Linear model with $/sqft as target
Step19: Since the rent_sqft has a different scale than lnrent, we can't directly compare the RMSE, but we can compare variance score. This variance (0.66) is a bit lower than the linear model before (0.69), which is what we'd expect when we take the most important covariate and put it into the target variable. Still not a bad score here. Not great either, though.
Added sqft as feature, with rent_sqft still as target
Step20: Gradient boosting with $/sqft as target
Step21: Without parameter tuning, error is about the same as linear model. RMSE
Step22: without sqft as covariate
- RMSE with 500 estimators
Step23: Looks like bathrooms per bedroom might be a good feature to add. Tried this; it made the linear model better but the gradient boosting worse. Maybe because the GB already takes interaction effects into account.
Error analysis
plot errors
map errors by long/lat
Step24: I can't find a clear spatial pattern with the errors. Can try with real map...
Write to database for use in rent-predictor app | Python Code:
import numpy as np
import pandas as pd
import os
import math
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
DATA_DIR = os.path.join('..','data','urbansim')
Explanation: Modeling rent prices for the SF Bay Area
This notebook will develop a predictive model for rent prices in the Bay Area using rental listings from Craigslist.
Data:
The dataset used here is Craigslist listings merged with Census and point of interest data. It includes all unique listings in the 'apartments/housing' section of Craigslist from 11/13/2016 - 3/17/2017.
Features include:
- property characteristics (rent, sq ft, # bedrooms, # bathrooms, coordinates)
- neighborhood characteristics at the Census block and block group levels (e.g., med hh income, race)
- jobs accessibility (e.g., jobs within a given radius)
- date listed on craigslist
The data has been cleaned and filtered, but features have not yet been carefully selected.
Goal:
Develop a model to predict rent as accurately as possible.
End of explanation
# this particular dataset is a subset of all the features available
infile = 'ba_block_small.csv'
df = pd.read_csv(os.path.join(DATA_DIR,infile))
df = df.ix[:,1:] # exclude first column, which was empty
print(df.shape)
df.head()
Explanation: Load and prepare data
End of explanation
# Let's look at some summary stats for key features
# property characteristics
df[['rent','sqft','rent_sqft','bedrooms','bathrooms']].describe()
# census features
census_features = ['bgpop', 'bgacres', 'bgjobs', 'bgmedkids', 'bgmedhhs',
'bgmedinc', 'proprent', 'propwhite','propblack', 'propasian', 'pumahhden', 'prop1per', 'prop2per', 'bgmedagehd',
'pct1per', 'pct2per', 'pctrent','pctblack', 'pctwhite', 'pctasian', 'bgpopden', 'bgjobden']
df[census_features].describe()
# accessibility features
access_features = ['lowinc1500m', 'highinc1500m', 'lnjobs5000m','lnjobs30km', 'lnpop400m', 'lnpop800m', 'lnjobs800m','lntcpuw3000m',
'pumajobden', 'lnjobs40km', 'lnret3000m', 'lnfire3000m', 'lnserv3000m','highlowinc1500m']
df[access_features].describe()
def log_var(x):
    """Return log of x, but NaN if zero."""
if x==0:
return np.nan
else:
return np.log(x)
# add ln rent
df['lnrent'] = df.rent.apply(log_var)
#df.columns
df.rent_sqft.mean()
df.sqft.mean()*.35/df.rent.mean()
Explanation: What kind of features do we have?
End of explanation
cols_to_use = ['rent','rent_sqft','lnrent', 'sqft','bedrooms','bathrooms','longitude', 'latitude','bgpop','bgjobs', 'bgmedkids', 'bgmedhhs','bgmedinc', 'proprent', 'lowinc1500m', 'highinc1500m', 'lnjobs5000m',
'lnjobs30km', 'lnpop400m', 'lnpop800m', 'lnjobs800m', 'propwhite','propblack', 'propasian', 'pumahhden', 'lnbasic3000m', 'lntcpuw3000m',
'pumajobden', 'lnjobs40km', 'lnret3000m', 'lnfire3000m', 'lnserv3000m','prop1per', 'prop2per', 'bgmedagehd', 'puma1', 'puma2', 'puma3',
'puma4', 'northsf', 'pct1per', 'pct2per', 'pctrent','pctblack', 'pctwhite', 'pctasian', 'y17jan', 'y17feb', 'y17mar',
'bgpopden', 'bgjobden', 'highlowinc1500m']
x_cols = ['sqft','bedrooms','bathrooms','longitude', 'latitude','bgpop','bgjobs', 'bgmedkids', 'bgmedhhs',
'bgmedinc', 'proprent', 'lowinc1500m', 'highinc1500m', 'lnjobs5000m','lnjobs30km', 'lnpop400m', 'lnpop800m', 'lnjobs800m', 'propwhite',
'propblack', 'propasian', 'pumahhden', 'lnbasic3000m', 'lntcpuw3000m','pumajobden', 'lnjobs40km', 'lnret3000m', 'lnfire3000m', 'lnserv3000m',
'prop1per', 'prop2per', 'bgmedagehd', 'puma1', 'puma2', 'puma3','puma4', 'northsf', 'pct1per', 'pct2per', 'pctrent',
'pctblack', 'pctwhite', 'pctasian', 'y17jan', 'y17feb', 'y17mar','bgpopden', 'bgjobden', 'highlowinc1500m']
y_col = 'lnrent'
print(len(x_cols))
Explanation: Inspecting the columns, some are obviously not useful or redundant. E.g., 'median_income' is just the median income for the entire region.
Define the columns we'll use...
End of explanation
df = df[cols_to_use]
print('total rows:',len(df))
df_notnull = df.dropna(how='any')
print('excluding NAs:',len(df_notnull))
df = df_notnull
Explanation: Missing data?
End of explanation
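A quick way to see which columns drive the row loss is a per-column null count; a one-line sketch (meaningful only if run before the df = df_notnull reassignment above):
# columns with missing values, largest first
print(df[cols_to_use].isnull().sum().sort_values(ascending=False).head(10))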
plot_rows = math.ceil(len(cols_to_use)/2)
f, axes = plt.subplots(plot_rows,2, figsize=(8,35))
sns.despine(left=True)
for i,col in enumerate(cols_to_use):
row_position = math.floor(i/2)
col_position = i%2
sns.distplot(df_notnull[col], ax=axes[row_position, col_position],kde=False)
axes[row_position, col_position].set_title('{}'.format(col))
plt.tight_layout()
plt.show()
sns.distplot(df['lnrent'])
# $/sqft as a function of sqft
#plt.scatter(df.sqft, df.rent_sqft)
#plt.ylabel('rent per sq ft')
#plt.xlabel('sqft')
#plt.show()
Explanation: About half of rows have bathrooms missing. Turns out this is because bathrooms was only added to the scraper at the end of December. Is it better to exclude the bathrooms feature and use all the data? Or better to include the bathrooms feature and only use data collected beginning in January? I tried it both ways and the model is more accurate (and more interesting) when we include the bathrooms feature. Plus, all the data we collect from now on will have bathrooms.
Variable distributions
Look at distributions to get a better sense of what the variables mean and to look for outliers.
End of explanation
def bath_var(row):
    """Make bathrooms/bedroom variable."""
    # Avoid 0 in the denominator. When bedrooms = 0, it's probably a studio, so for practical purposes br = 1
if row['bedrooms']==0:
br = 1
else:
br = row['bedrooms']
return row['bathrooms']/br
# add a variable bath_bed (bathrooms per bedroom)
df['bath_bed'] = df.apply(bath_var, axis=1)
cols_to_use.append('bath_bed')
x_cols.append('bath_bed')
Explanation: Create some additional features that might help
(Second iteration)
In the first iteration of models, there appeared to be an interaction effect with # bedrooms and # bathrooms; i.e., the number of bathrooms per bedroom seems to matter. E.g., a property with 4 bedrooms is probably not very desirable if it only has 1 bathroom, but might be much more desirable if it has 4 bedrooms and 3 bathrooms. So let's try adding a feature for bathrooms per bedroom. (Turns out this improved the linear model but not GB.. as expected.)
End of explanation
df['sqft2'] = df['sqft']**2
cols_to_use.append('sqft2')
x_cols.append('sqft2')
Explanation: It also appears the relationship between rent and sqft is not linear. Let's add a sqft2 term
End of explanation
from sklearn import linear_model, model_selection
print('target variable:',y_col)
X_train, X_test, y_train, y_test = model_selection.train_test_split(df[x_cols],df[y_col], test_size = .3, random_state = 201)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# Intercept
print('Intercept:', regr.intercept_)
# The coefficients
print('Coefficients:')
pd.Series(regr.coef_, index=x_cols)
Explanation: Linear model
Try a linear model first, at the very least to use as a baseline.
End of explanation
from sklearn.metrics import r2_score
# See mean square error, using test data
print("Mean squared error: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2))
print("RMSE:", np.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))
# Explained variance score: 1 is perfect prediction.
print('Variance score: %.2f' % regr.score(X_test, y_test))
# R2
print('R2:',r2_score(y_test, regr.predict(X_test)))
Explanation: These coefficients aren't very useful without the standard errors. If I were interested in building a model for interpretation, I would use the statsmodels package, which has more functionality for statistical tests.
End of explanation
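For reference, a minimal statsmodels version of the same fit, which reports standard errors, t-statistics and p-values — a sketch only, assuming statsmodels is installed and reusing the training split above:
import statsmodels.api as sm
X_sm = sm.add_constant(X_train)   # statsmodels does not add an intercept automatically
ols_fit = sm.OLS(y_train, X_sm).fit()
print(ols_fit.summary())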
# It's a good idea to look at the residuals to make sure we don't have any gross violations of OLS
# Plot predicted values vs. observed
plt.scatter(regr.predict(X_train),y_train, color='blue',s=1, alpha=.5)
plt.show()
# plot residuals vs predicted values
plt.scatter(regr.predict(X_train), regr.predict(X_train)- y_train, color='blue',s=1, alpha=.5)
plt.scatter(regr.predict(X_test), regr.predict(X_test)- y_test, color='green',s=1, alpha=.5)
plt.show()
print("Training set. Mean squared error: %.5f" % np.mean((regr.predict(X_train) - y_train) ** 2), '| Variance score: %.5f' % regr.score(X_train, y_train))
print("Test set. Mean squared error: %.5f" % np.mean((regr.predict(X_test) - y_test) ** 2), '| Variance score: %.5f' % regr.score(X_test, y_test))
Explanation: Scores for full dataset w/o bathrooms feature
- Mean squared error: 0.03
- RMSE: 0.186277629605
- Variance score: 0.68
Scores w/ bathrooms feature, dropping missing values - slightly better
Mean squared error: 0.03
RMSE: 0.181372180936
Variance score: 0.69
Scores w/ bath_bed and sqft^2 features added
Mean squared error: 0.03
RMSE: 0.175778312162
Variance score: 0.71
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV
def RMSE(y_actual, y_predicted):
return np.sqrt(mean_squared_error(y_actual, y_predicted))
def cross_val_gb(X,y,cv_method='kfold',k=5, **params):
    """Estimate gradient boosting regressor using cross validation.

    Args:
        X (DataFrame): features data
        y (Series): target data
        cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
        k (int): number of folds (default=5)
        **params: keyword arguments for regressor

    Returns:
        float: mean error (RMSE) across all training/test sets.
    """
if cv_method == 'kfold':
kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility.
E = np.ones(k) # this array will hold the errors.
i=0
for train, test in kf.split(X, y):
train_data_x = X.iloc[train]
train_data_y = y.iloc[train]
test_data_x = X.iloc[test]
test_data_y = y.iloc[test]
# n_estimators is number of trees to build.
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(train_data_x,train_data_y)
predict_y=grad_boost.predict(test_data_x)
E[i] = RMSE(test_data_y, predict_y)
i+=1
return np.mean(E)
df_X = df[x_cols]
df_y = df[y_col]
Explanation: Does not look like overfitting. (If there were, we'd probably have a much smaller error on the training set and a larger error on the test set.)
We do have a problem with collinearity, and if we were using this model for interpretation we'd have to do better feature selection...
Gradient boosting
End of explanation
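One quick way to quantify that collinearity is the variance inflation factor; a sketch using statsmodels (an extra dependency, not used elsewhere in this notebook) on the current feature frame:
from statsmodels.stats.outliers_influence import variance_inflation_factor
import statsmodels.api as sm
X_vif = sm.add_constant(df[x_cols])
vif = pd.Series([variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])],
                index=X_vif.columns)
print(vif.sort_values(ascending=False).head(10))  # large VIFs flag features that are nearly linear combinations of the others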
params = {'n_estimators':100,
'learning_rate':0.1,
'max_depth':1,
'min_samples_leaf':4
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(df_X,df_y)
cross_val_gb(df_X,df_y, **params)
param_grid = {'learning_rate':[.5 ,.1, .05],
'max_depth':[2,4,6,12],
'min_samples_leaf': [5,9,17],
'max_features': [1, .3, .1]
}
est= GradientBoostingRegressor(n_estimators = 500)
gs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)
print(gs_cv.best_params_)
print(gs_cv.best_score_)
param_grid = {'learning_rate':[5,.2,.1],
'max_depth':[6,8,12],
'min_samples_leaf': [17,25],
'max_features': [.3]
}
#est= GradientBoostingRegressor(n_estimators = 100)
#gs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)
print(gs_cv.best_params_)
print(gs_cv.best_score_)
# best parameters
params = {'n_estimators':500,
'learning_rate':0.1,
'max_depth':4,
'min_samples_leaf':9,
'max_features':.3
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(df_X,df_y)
cross_val_gb(df_X,df_y, **params, k=3)
Explanation: Tune the parameters for GB
End of explanation
# plot the importances
gb_o = pd.DataFrame({'features':x_cols,'importance':grad_boost.feature_importances_})
gb_o= gb_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(gb_o)), gb_o.features,rotation=45)
plt.plot(range(len(gb_o)),gb_o.importance,"o")
plt.title('Feature importances')
plt.show()
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
#for i,col in enumerate(df_X.columns):
# print(i,col)
features = [0,1,2,3,4, 13, 46,5,11]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (10,8))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
features = [(0,2),(1,2),(3,4),(13,2)]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (9,6))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
Explanation: Best RMSE when using no bathrooms: 0.1308920
Best RMSE with bathrooms feature: 0.132212
End of explanation
y_col = 'rent_sqft'
df_y = df[y_col]
x_cols = cols_to_use[3:]
df_X = df[x_cols]
sns.distplot(df['rent_sqft'])
Explanation: Using $/sqft as target variable
What if we use price per sqft as the y variable? Makes more sense conceptually.
End of explanation
X_train, X_test, y_train, y_test = model_selection.train_test_split(df[x_cols],df[y_col], test_size = .3, random_state = 201)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# Intercept
#print('Intercept:', regr.intercept_)
# The coefficients
#print('Coefficients:')
#pd.Series(regr.coef_, index=x_cols)
# See mean square error, using test data
print("Mean squared error: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2))
print("RMSE:", np.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))
# Explained variance score: 1 is perfect prediction.
print('Variance score: %.2f' % regr.score(X_test, y_test))
print('R2:',r2_score(y_test, regr.predict(X_test)))
Explanation: The downside is that the distribution is not as close to normal. It's not that bad, though.
Linear model with $/sqft as target
End of explanation
# Plot predicted values vs. observed
plt.scatter(regr.predict(X_train),y_train, color='blue',s=1, alpha=.5)
plt.show()
# plot residuals vs predicted values
plt.scatter(regr.predict(X_train), regr.predict(X_train)- y_train, color='blue',s=1, alpha=.5)
plt.scatter(regr.predict(X_test), regr.predict(X_test)- y_test, color='green',s=1, alpha=.5)
plt.show()
Explanation: Since the rent_sqft has a different scale than lnrent, we can't directly compare the RMSE, but we can compare variance score. This variance (0.66) is a bit lower than the linear model before (0.69), which is what we'd expect when we take the most important covariate and put it into the target variable. Still not a bad score here. Not great either, though.
Added sqft as feature, with rent_sqft still as target:
- Mean squared error: 0.32
- RMSE: 0.56438373549
- Variance score: 0.69
Added bath/bedrooms feature
Mean squared error: 0.31
RMSE: 0.559876928818
Variance score: 0.70
Added sqft^2 feature
Mean squared error: 0.28
RMSE: 0.526323190891
Variance score: 0.73
End of explanation
# because GB can model nonlinear relationships and interaction effects, we don't need sqft^2 or bed_bath as features.
x_cols = ['sqft','bedrooms','bathrooms','longitude', 'latitude','bgpop','bgjobs', 'bgmedkids', 'bgmedhhs',
'bgmedinc', 'proprent', 'lowinc1500m', 'highinc1500m', 'lnjobs5000m','lnjobs30km', 'lnpop400m', 'lnpop800m', 'lnjobs800m', 'propwhite',
'propblack', 'propasian', 'pumahhden', 'lnbasic3000m', 'lntcpuw3000m','pumajobden', 'lnjobs40km', 'lnret3000m', 'lnfire3000m', 'lnserv3000m',
'prop1per', 'prop2per', 'bgmedagehd', 'puma1', 'puma2', 'puma3','puma4', 'northsf', 'pct1per', 'pct2per', 'pctrent',
'pctblack', 'pctwhite', 'pctasian', 'y17jan', 'y17feb', 'y17mar','bgpopden', 'bgjobden', 'highlowinc1500m']
df_X = df[x_cols]
y_col = 'rent_sqft'
df_y = df[y_col]
df_X.shape
params = {'n_estimators':100,
'learning_rate':0.1,
'max_depth':1,
'min_samples_leaf':4
}
#grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
#grad_boost.fit(df_X,df_y)
#cross_val_gb(df_X,df_y, **params)
Explanation: Gradient boosting with $/sqft as target
End of explanation
# find the optimal parameters
param_grid = {'learning_rate':[.5 ,.1, .05],
'max_depth':[2,4,6,12],
'min_samples_leaf': [5,9,17],
'max_features': [1, .3, .1]
}
#est= GradientBoostingRegressor(n_estimators = 500)
#gs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)
print(gs_cv.best_params_)
print(gs_cv.best_score_)
# run with best paramters
params = {'n_estimators':500,
'learning_rate':0.05,
'max_depth':2,
'min_samples_leaf':5,
'max_features':.3
}
#grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
#grad_boost.fit(df_X,df_y)
#cross_val_gb(df_X,df_y, **params, k=3)
# best parameters, using 1000 estimators
params = {'n_estimators':1000,
'learning_rate':0.05,
'max_depth':2,
'min_samples_leaf':5,
'max_features':.3
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(df_X,df_y)
cross_val_gb(df_X,df_y, **params, k=3)
import pickle
# save as pickle for web app
fname = 'fitted_gb.p'
with open(os.path.join(DATA_DIR,fname), 'wb') as pfile:
pickle.dump(grad_boost, pfile)
# to load the pickled object:
fname = 'fitted_gb.p'
with open(os.path.join(DATA_DIR,fname), 'rb') as pfile:
grad_boost2 = pickle.load(pfile)
# save median/mean feature values to use as defaults in the web app
df_means = df_X.mean()
fname = 'data_averages.p'
with open(os.path.join(DATA_DIR, fname), 'wb') as pfile:
pickle.dump(df_means, pfile)
# to load the pickled object:
fname = 'data_averages.p'
with open(os.path.join(DATA_DIR,fname), 'rb') as pfile:
df_means2 = pickle.load(pfile)
print(df_means2.head())
# testing prediction
y_predicted = grad_boost2.predict(df_means2)
print(y_predicted)
print(df_y.mean())
Explanation: Without parameter tuning, error is about the same as linear model. RMSE: 0.5947
Adding sqft as covariate : RMSE: 0.5258
Tune the parameters
End of explanation
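In the web app, a handful of these defaults would presumably be overridden by user inputs before predicting. A sketch with obviously made-up values (the sqft, bedrooms, bathrooms and coordinates below are hypothetical; everything else stays at the dataset mean):
user_listing = df_means2.copy()
user_listing['sqft'] = 850          # hypothetical user inputs
user_listing['bedrooms'] = 2
user_listing['bathrooms'] = 1
user_listing['longitude'] = -122.42
user_listing['latitude'] = 37.77
row = pd.DataFrame([user_listing])  # one-row frame so predict() gets 2-D input
rent_per_sqft = grad_boost2.predict(row)[0]
print(rent_per_sqft * user_listing['sqft'])  # rough monthly rent estimate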
# plot the importances
gb_o = pd.DataFrame({'features':x_cols,'importance':grad_boost.feature_importances_})
gb_o= gb_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(gb_o)), gb_o.features,rotation=45)
plt.plot(range(len(gb_o)),gb_o.importance,"o")
plt.title('Feature importances')
plt.show()
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
# choose features for partial dependent plots
#for i,col in enumerate(df_X.columns):
# print(i,col)
features = [0,1,2,3,4,8,14,16,12,13,24]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (10,10))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
features = [(0,1),(0,2),(1,2),(0,3),(0,9),(3,4)]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (9,6))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
Explanation: without sqft as covariate
- RMSE with 500 estimators: 0.4708
- RMSE with 1000 estimators: 0.4658
- linear model: 0.5965
with sqft as covariate
- RMSE with 500 estimators: 0.3503
- RMSE with 1000 estimators: 0.3508
- linear model: 0.5644
End of explanation
y_pred = grad_boost.predict(df_X)
# plot predicted vs. actual
plt.scatter(y_pred,df_y, color='blue',s=1, alpha=.5)
plt.show()
# plot errors vs. predicted
plt.scatter(y_pred, y_pred-df_y, color='blue',s=1,alpha=.5 )
plt.show()
sns.distplot(y_pred-df_y)
from matplotlib.colors import Normalize
# map the errors.
x = df_X['longitude']
y = df_X['latitude']
z = y_pred-df_y
norm = Normalize(vmin=-1,vmax=1) # zoom in on middle of error range
plt.figure(figsize=(6,9))
plt.scatter(x,y, c=z, cmap='jet',s=8,alpha=.5,edgecolors='face', norm=norm)
#plt.xlim(-122.6,-122)
#plt.ylim(37.6,38)
plt.colorbar()
plt.show()
# it's hard to see a pattern in the above, but looks like errors are larger in the central city areas.
# only show large errors, to make them more clear.
# map the errors.
err_cutoff = .8
print('# obs where abs(error)>{}:'.format(err_cutoff),len(z[abs(z)>err_cutoff]))
z_large_err = z[abs(z)>err_cutoff]
x_large_err = x.ix[z_large_err.index,]
y_large_err = y.ix[z_large_err.index,]
plt.figure(figsize=(6,9))
plt.scatter(x_large_err,y_large_err, c=z_large_err, cmap='jet',s=8,alpha=.5,edgecolors='face')
plt.colorbar()
plt.show()
plt.figure(figsize=(6,6))
plt.scatter(x_large_err,y_large_err, c=z_large_err, cmap='jet',s=8,alpha=.5,edgecolors='face')
plt.xlim(-122.5,-122.2)
plt.ylim(37.7,37.9)
plt.colorbar()
plt.show()
Explanation: Looks like bathrooms per bedroom might be a good feature to add. Tried this; it made the linear model better but the gradient boosting worse. Maybe because the GB already takes interaction effects into account.
Error analysis
plot errors
map errors by long/lat
End of explanation
# this particular dataset is a subset of all the features available
infile = 'ba_block_small.csv'
df = pd.read_csv(os.path.join(DATA_DIR,infile), dtype={'fips_block':str})
df = df.ix[:,1:] # exclude first column, which was empty
print(df.shape)
df.head()
fips_list = df['fips_block'].unique()
len(fips_list)
cols_for_db = ['pid','fips_block','latitude','longitude']
df[cols_for_db].to_csv('/Users/lisarayle/rent_predictor/local_setup_files/data/ba_data_temp.csv', index=False)
# map the errors.
x = df['longitude']
y = df['latitude']
plt.figure(figsize=(6,9))
plt.scatter(x,y,s=1,alpha=.5)
#plt.xlim(-122.6,-122)
#plt.ylim(37.6,38)
plt.show()
Explanation: I can't find a clear spatial pattern with the errors. Can try with real map...
Write to database for use in rent-predictor app
End of explanation |
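The cell above writes a CSV for the app's setup scripts; if the target is an actual database, pandas can push the same columns directly. A sketch using SQLite, with a hypothetical file and table name:
import sqlite3
conn = sqlite3.connect('rent_predictor.db')      # hypothetical database file
df[cols_for_db].to_sql('ba_blocks', conn, if_exists='replace', index=False)
conn.close()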
11,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 23
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: Code from the previous chapter
Step2: In the previous chapter we developed a model of the flight of a
baseball, including gravity and a simple version of drag, but neglecting spin, Magnus force, and the dependence of the coefficient of drag on velocity.
In this chapter we apply that model to an optimization problem.
The Manny Ramirez problem
Manny Ramirez is a former member of the Boston Red Sox (an American
baseball team) who was notorious for his relaxed attitude and taste for practical jokes. Our objective in this chapter is to solve the following Manny-inspired problem
Step3: range_func makes a new System object with the given value of
angle. Then it calls run_solve_ivp and
returns the final value of x from the results.
We can call range_func directly like this
Step4: And we can sweep a sequence of angles like this
Step5: Here's what the results look like.
Step6: It looks like the optimal angle is near 40°.
We can find the optimal angle more precisely and more efficiently using maximize_scalar, like this
Step7: The first parameter is the function we want to maximize. The second is
the range of values we want to search; in this case, it's the range of
angles from 0° to 90°.
The return value from maximize is an object that contains the
results, including x, which is the angle that yielded the highest
range, and fun, which is the value of range_func when it's evaluated at x, that is, range when the baseball is launched at the optimal angle.
Step8: For these parameters, the optimal angle is about 41°, which yields a
range of 100 m.
Summary
If you enjoy this exercise, you might be interested in this paper
Step9: Next, write a function called height_func that takes a launch angle, simulates the flight of a baseball, and returns the height of the baseball when it reaches the wall.
Test your function with the initial conditions.
Step10: Now use maximize_scalar to find the optimal angle. Is it higher or lower than the angle that maximizes range?
Step11: Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 37 feet (11.3 m), given that it's launched at the optimal angle. And that's a job for root_scalar.
Write an error function that takes a velocity and a System object as parameters. It should use maximize_scalar to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11.3 meters.
Step12: Test your error function before you call root_scalar.
Step13: Then use root_scalar to find the answer to the problem, the minimum velocity that gets the ball out of the park.
Step14: And just to check, run error_func with the value you found. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 23
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from modsim import Params
params = Params(
x = 0, # m
y = 1, # m
angle = 45, # degree
velocity = 40, # m / s
mass = 145e-3, # kg
diameter = 73e-3, # m
C_d = 0.33, # dimensionless
rho = 1.2, # kg/m**3
g = 9.8, # m/s**2
)
from modsim import State, System, pol2cart
from numpy import pi, deg2rad
def make_system(params):
# convert angle to degrees
theta = deg2rad(params.angle)
# compute x and y components of velocity
vx, vy = pol2cart(theta, params.velocity)
# make the initial state
init = State(x=params.x, y=params.y, vx=vx, vy=vy)
# compute the frontal area
area = pi * (params.diameter/2)**2
return System(init = init,
mass = params.mass,
area = area,
C_d = params.C_d,
rho = params.rho,
g = params.g,
t_end=10)
from modsim import vector_mag, vector_hat
def drag_force(V, system):
rho, C_d, area = system.rho, system.C_d, system.area
mag = rho * vector_mag(V)**2 * C_d * area / 2
direction = -vector_hat(V)
f_drag = mag * direction
return f_drag
from modsim import Vector
def slope_func(t, state, system):
x, y, vx, vy = state
mass, g = system.mass, system.g
V = Vector(vx, vy)
a_drag = drag_force(V, system) / mass
a_grav = Vector(0, -g)
A = a_grav + a_drag
return V.x, V.y, A.x, A.y
def event_func(t, state, system):
x, y, vx, vy = state
return y
Explanation: Code from the previous chapter
End of explanation
from modsim import run_solve_ivp
def range_func(angle, params):
params = params.set(angle=angle)
system = make_system(params)
results, details = run_solve_ivp(system, slope_func,
events=event_func)
x_dist = results.iloc[-1].x
print(angle, x_dist)
return x_dist
Explanation: In the previous chapter we developed a model of the flight of a
baseball, including gravity and a simple version of drag, but neglecting spin, Magnus force, and the dependence of the coefficient of drag on velocity.
In this chapter we apply that model to an optimization problem.
The Manny Ramirez problem
Manny Ramirez is a former member of the Boston Red Sox (an American
baseball team) who was notorious for his relaxed attitude and taste for practical jokes. Our objective in this chapter is to solve the following Manny-inspired problem:
What is the minimum effort required to hit a home run in Fenway Park?
Fenway Park is a baseball stadium in Boston, Massachusetts. One of its
most famous features is the "Green Monster", which is a wall in left
field that is unusually close to home plate, only 310 feet away. To
compensate for the short distance, the wall is unusually high, at 37
feet (see http://modsimpy.com/wally).
We want to find the minimum velocity at which a ball can leave home
plate and still go over the Green Monster. We'll proceed in the
following steps:
For a given velocity, we'll find the optimal launch angle, that is, the angle the ball should leave home plate to maximize its height when it reaches the wall.
Then we'll find the minimal velocity that clears the wall, given
that it has the optimal launch angle.
Finding the range
Suppose we want to find the launch angle that maximizes range, that is, the distance the ball travels in the air before landing. We'll use a function in the ModSim library, maximize, which takes a function and finds its maximum.
The function we pass to maximize should take launch angle in degrees, simulate the flight of a ball launched at that angle, and return the distance the ball travels along the $x$ axis.
End of explanation
range_func(45, params)
Explanation: range_func makes a new System object with the given value of
angle. Then it calls run_solve_ivp and
returns the final value of x from the results.
We can call range_func directly like this:
End of explanation
from modsim import linspace, SweepSeries
angles = linspace(20, 80, 21)
sweep = SweepSeries()
for angle in angles:
x_dist = range_func(angle, params)
sweep[angle] = x_dist
Explanation: And we can sweep a sequence of angles like this:
End of explanation
from modsim import decorate
sweep.plot()
decorate(xlabel='Launch angle (degree)',
ylabel='Range (meter)')
Explanation: Here's what the results look like.
End of explanation
from modsim import maximize_scalar
res = maximize_scalar(range_func, [0, 90], params)
res.message
Explanation: It looks like the optimal angle is near 40°.
We can find the optimal angle more precisely and more efficiently using maximize_scalar, like this:
End of explanation
res.x, res.fun
Explanation: The first parameter is the function we want to maximize. The second is
the range of values we want to search; in this case, it's the range of
angles from 0° to 90°.
The return value from maximize is an object that contains the
results, including x, which is the angle that yielded the highest
range, and fun, which is the value of range_func when it's evaluated at x, that is, range when the baseball is launched at the optimal angle.
End of explanation
# Solution
def event_func(t, state, system):
x, y, vx, vy = state
return x - 94.5
# Solution
system = make_system(params)
event_func(0, system.init, system)
Explanation: For these parameters, the optimal angle is about 41°, which yields a
range of 100 m.
Summary
If you enjoy this exercise, you might be interested in this paper: "How to hit home runs: Optimum baseball bat swing parameters for maximum range trajectories", by Sawicki, Hubbard, and Stronge, at
http://modsimpy.com/runs.
Exercise
Exercise: Let's finish off the Manny Ramirez problem:
What is the minimum effort required to hit a home run in Fenway Park?
Although the problem asks for a minimum, it is not an optimization problem. Rather, we want to solve for the initial velocity that just barely gets the ball to the top of the wall, given that it is launched at the optimal angle.
And we have to be careful about what we mean by "optimal". For this problem, we don't want the longest range, we want the maximum height at the point where it reaches the wall.
If you are ready to solve the problem on your own, go ahead. Otherwise I will walk you through the process with an outline and some starter code.
As a first step, write an event_func that stops the simulation when the ball reaches the wall at 310 feet (94.5 m).
Test your function with the initial conditions.
End of explanation
# Solution
def height_func(angle, params):
params = params.set(angle=angle)
system = make_system(params)
results, details = run_solve_ivp(system, slope_func,
events=event_func)
height = results.iloc[-1].y
return height
# Solution
height_func(40, params)
Explanation: Next, write a function called height_func that takes a launch angle, simulates the flight of a baseball, and returns the height of the baseball when it reaches the wall.
Test your function with the initial conditions.
End of explanation
# Solution
bounds = [0, 90]
res = maximize_scalar(height_func, bounds, params)
res.message
# Solution
res.x, res.fun
Explanation: Now use maximize_scalar to find the optimal angle. Is it higher or lower than the angle that maximizes range?
End of explanation
# Solution
def error_func(velocity, params):
print(velocity)
params = params.set(velocity=velocity)
bounds = [0, 90]
res = maximize_scalar(height_func, bounds, params)
return res.fun - 11.3
Explanation: Even though we are finding the "minimum" velocity, we are not really solving a minimization problem. Rather, we want to find the velocity that makes the height at the wall exactly 37 feet (11.3 m), given that it's launched at the optimal angle. And that's a job for root_scalar.
Write an error function that takes a velocity and a System object as parameters. It should use maximize_scalar to find the highest possible height of the ball at the wall, for the given velocity. Then it should return the difference between that optimal height and 11.3 meters.
End of explanation
# Solution
error_func(40, params)
Explanation: Test your error function before you call root_scalar.
End of explanation
# Solution
from scipy.optimize import root_scalar
bracket = [30, 50]
res = root_scalar(error_func, params, bracket=bracket)
# Solution
res
# Solution
min_velocity = res.root
min_velocity
Explanation: Then use root_scalar to find the answer to the problem, the minimum velocity that gets the ball out of the park.
End of explanation
# Solution
error_func(min_velocity, params)
Explanation: And just to check, run error_func with the value you found.
End of explanation |
11,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CM360 Segmentology
CM360 funnel analysis using Census data.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter CM360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the StarThinker Assets Group to access the following assets
Copy CM360 Segmentology Sample. Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Join.
Or give these intructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute CM360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: CM360 Segmentology
CM360 funnel analysis using Census data.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'account':'',
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Authorization used for writing data.
'recipe_name':'', # Name of report, not needed if ID used.
'date_range':'LAST_365_DAYS', # Timeframe to run report for.
'recipe_slug':'', # Name of Google BigQuery dataset to create.
'advertisers':[], # Comma delimited list of CM360 advertiser ids.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter CM360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the StarThinker Assets Group to access the following assets
Copy CM360 Segmentology Sample. Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Join.
Or give these intructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
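For orientation only, here is what a filled-in FIELDS dict might look like; every ID below is a made-up placeholder, not a real account or advertiser:
EXAMPLE_FIELDS = {
    'account': '1234567',                 # CM360 account id, as a string
    'auth_read': 'user',
    'auth_write': 'service',
    'recipe_name': 'Acme Segmentology',   # used to name the CM360 report
    'date_range': 'LAST_365_DAYS',
    'recipe_slug': 'acme_cm360',          # prefix for the BigQuery dataset
    'advertisers': [1111111, 2222222],    # CM360 advertiser ids
}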
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'description':'Create a dataset for bigquery tables.',
'hour':[
4
],
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'function':'Pearson Significance Test',
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'google_api':{
'auth':'user',
'api':'dfareporting',
'version':'v3.4',
'function':'accounts.get',
'kwargs':{
'id':{'field':{'name':'account','kind':'integer','order':5,'default':'','description':'Campaign Manager Account ID'}},
'fields':'id,name'
},
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_Account'
}
}
}
},
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'filters':{
'advertiser':{
'values':{'field':{'name':'advertisers','kind':'integer_list','order':6,'default':[],'description':'Comma delimited list of CM360 advertiser ids.'}}
}
},
'account':{'field':{'name':'account','kind':'string','order':5,'default':'','description':'Campaign Manager Account ID'}},
'body':{
'name':{'field':{'name':'recipe_name','kind':'string','suffix':' Segmentology','description':'The report name.','default':''}},
'criteria':{
'dateRange':{
'kind':'dfareporting#dateRange',
'relativeDateRange':{'field':{'name':'date_range','kind':'choice','order':3,'default':'LAST_365_DAYS','choices':['LAST_7_DAYS','LAST_14_DAYS','LAST_30_DAYS','LAST_365_DAYS','LAST_60_DAYS','LAST_7_DAYS','LAST_90_DAYS','LAST_24_MONTHS','MONTH_TO_DATE','PREVIOUS_MONTH','PREVIOUS_QUARTER','PREVIOUS_WEEK','PREVIOUS_YEAR','QUARTER_TO_DATE','WEEK_TO_DATE','YEAR_TO_DATE'],'description':'Timeframe to run report for.'}}
},
'dimensions':[
{
'kind':'dfareporting#sortedDimension',
'name':'advertiserId'
},
{
'kind':'dfareporting#sortedDimension',
'name':'advertiser'
},
{
'kind':'dfareporting#sortedDimension',
'name':'zipCode'
}
],
'metricNames':[
'impressions',
'clicks',
'totalConversions'
]
},
'type':'STANDARD',
'delivery':{
'emailOwner':False
},
'format':'CSV'
}
}
}
},
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'account':{'field':{'name':'account','kind':'string','default':''}},
'name':{'field':{'name':'recipe_name','kind':'string','order':3,'suffix':' Segmentology','default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_KPI',
'header':True
}
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'from':{
'query':'SELECT Id AS Partner_Id, Name AS Partner, Advertiser_Id, Advertiser, Zip_Postal_Code AS Zip, SAFE_DIVIDE(Impressions, SUM(Impressions) OVER(PARTITION BY Advertiser_Id)) AS Impression, SAFE_DIVIDE(Clicks, Impressions) AS Click, SAFE_DIVIDE(Total_Conversions, Impressions) AS Conversion, Impressions AS Impressions FROM `{dataset}.CM360_KPI` CROSS JOIN `{dataset}.CM360_Account` ',
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be created in BigQuery.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be written in BigQuery.'}},
'view':'CM360_KPI_Normalized'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'normalize':{
'census_geography':'zip_codes',
'census_year':'2018',
'census_span':'5yr'
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'correlate':{
'join':'Zip',
'pass':[
'Partner_Id',
'Partner',
'Advertiser_Id',
'Advertiser'
],
'sum':[
'Impressions'
],
'correlate':[
'Impression',
'Click',
'Conversion'
],
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_KPI_Normalized',
'significance':80
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute CM360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
11,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Sending Different Values Based On Client Data
Consider the case where we have some server-placed list from which we want to send a few elements to each client based on some client-placed data. For example, a list of strings on the server, and on the clients, a comma-separated list of indices to download. We can implement that as follows
Step3: Then we can simulate our computation by providing the server-placed list of strings as well as string data for each client
Step4: Sending A Randomized Element To Each Client
Alternatively, it may be useful to send a random portion of the server data to each client. We can implement that by first generating a random key on each client and then following a similar selection process to the one used above
Step5: Since our broadcast_random_element function doesn't take in any client-placed data, we have to configure the TFF Simulation Runtime with a default number of clients to use
Step6: Then we can simulate the selection. We can change default_num_clients above and the list of strings below to generate different results, or simply re-run the computation to generate different random outputs. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import tensorflow as tf
import tensorflow_federated as tff
tff.backends.native.set_local_python_execution_context()
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/federated_select"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/federated_select.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/federated_select.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/federated_select.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Sending Different Data To Particular Clients With tff.federated_select
This tutorial demonstrates how to implement custom federated algorithms in TFF that require sending different data to different clients. You may already be familiar with tff.federated_broadcast which sends a single server-placed value to all clients. This tutorial focuses on cases where different parts of a server-based value are sent to different clients. This may be useful for dividing up parts of a model across different clients in order to avoid sending the whole model to any single client.
Let's get started by importing both tensorflow and tensorflow_federated.
End of explanation
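For contrast with what follows, here is a minimal sketch (not part of the original tutorial) of tff.federated_broadcast, which sends one identical server-placed value to every client; the rest of the tutorial uses tff.federated_select to send different slices instead.
@tff.federated_computation(tff.type_at_server(tf.string))
def broadcast_same_value(value_at_server):
  # Every client receives the same string; federated_select (below) sends per-client slices.
  return tff.federated_broadcast(value_at_server)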
list_of_strings_type = tff.TensorType(tf.string, [None])
# We only ever send exactly two values to each client. The number of keys per
# client must be a fixed number across all clients.
number_of_keys_per_client = 2
keys_type = tff.TensorType(tf.int32, [number_of_keys_per_client])
get_size = tff.tf_computation(lambda x: tf.size(x))
select_fn = tff.tf_computation(lambda val, index: tf.gather(val, index))
client_data_type = tf.string
# A function from our client data to the indices of the values we'd like to
# select from the server.
@tff.tf_computation(client_data_type)
@tff.check_returns_type(keys_type)
def keys_for_client(client_string):
  # We assume our client data is a single string consisting of exactly two
# comma-separated integers indicating which values to grab from the server.
split = tf.strings.split([client_string], sep=',')[0]
return tf.strings.to_number([split[0], split[1]], tf.int32)
@tff.tf_computation(tff.SequenceType(tf.string))
@tff.check_returns_type(tf.string)
def concatenate(values):
def reduce_fn(acc, item):
return tf.cond(tf.math.equal(acc, ''),
lambda: item,
lambda: tf.strings.join([acc, item], ','))
return values.reduce('', reduce_fn)
@tff.federated_computation(tff.type_at_server(list_of_strings_type), tff.type_at_clients(client_data_type))
def broadcast_based_on_client_data(list_of_strings_at_server, client_data):
keys_at_clients = tff.federated_map(keys_for_client, client_data)
max_key = tff.federated_map(get_size, list_of_strings_at_server)
values_at_clients = tff.federated_select(keys_at_clients, max_key, list_of_strings_at_server, select_fn)
value_at_clients = tff.federated_map(concatenate, values_at_clients)
return value_at_clients
Explanation: Sending Different Values Based On Client Data
Consider the case where we have some server-placed list from which we want to send a few elements to each client based on some client-placed data. For example, a list of strings on the server, and on the clients, a comma-separated list of indices to download. We can implement that as follows:
End of explanation
client_data = ['0,1', '1,2', '2,0']
broadcast_based_on_client_data(['a', 'b', 'c'], client_data)
Explanation: Then we can simulate our computation by providing the server-placed list of strings as well as string data for each client:
End of explanation
@tff.tf_computation(tf.int32)
@tff.check_returns_type(tff.TensorType(tf.int32, [1]))
def get_random_key(max_key):
return tf.random.uniform(shape=[1], minval=0, maxval=max_key, dtype=tf.int32)
list_of_strings_type = tff.TensorType(tf.string, [None])
get_size = tff.tf_computation(lambda x: tf.size(x))
select_fn = tff.tf_computation(lambda val, index: tf.gather(val, index))
@tff.tf_computation(tff.SequenceType(tf.string))
@tff.check_returns_type(tf.string)
def get_last_element(sequence):
return sequence.reduce('', lambda _initial_state, val: val)
@tff.federated_computation(tff.type_at_server(list_of_strings_type))
def broadcast_random_element(list_of_strings_at_server):
max_key_at_server = tff.federated_map(get_size, list_of_strings_at_server)
max_key_at_clients = tff.federated_broadcast(max_key_at_server)
key_at_clients = tff.federated_map(get_random_key, max_key_at_clients)
random_string_sequence_at_clients = tff.federated_select(
key_at_clients, max_key_at_server, list_of_strings_at_server, select_fn)
# Even though we only passed in a single key, `federated_select` returns a
# sequence for each client. We only care about the last (and only) element.
random_string_at_clients = tff.federated_map(get_last_element, random_string_sequence_at_clients)
return random_string_at_clients
Explanation: Sending A Randomized Element To Each Client
Alternatively, it may be useful to send a random portion of the server data to each client. We can implement that by first generating a random key on each client and then following a similar selection process to the one used above:
End of explanation
tff.backends.native.set_local_python_execution_context(default_num_clients=3)
Explanation: Since our broadcast_random_element function doesn't take in any client-placed data, we have to configure the TFF Simulation Runtime with a default number of clients to use:
End of explanation
broadcast_random_element(tf.convert_to_tensor(['foo', 'bar', 'baz']))
Explanation: Then we can simulate the selection. We can change default_num_clients above and the list of strings below to generate different results, or simply re-run the computation to generate different random outputs.
End of explanation |
11,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 9 - Dataset preprocessing
Before we utilize machine learning algorithms we must first prepare our dataset. This can often take a significant amount of time and can have a large impact on the performance of our models.
We will be looking at four different types of data
Step1: Tabular data
Missing data
Normalization
Categorical data
Missing data
There are a number of ways to handle missing data
Step2: Normalization
Many machine learning algorithms expect features to have similar distributions and scales.
A classic example is gradient descent, if features are on different scales some weights will update faster than others because the feature values scale the weight updates.
There are two common approaches to normalization
Step3: Categorical data
Categorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.
Continuous variables can be converted to categorical variables by applying a threshold.
Step4: Exercises
Substitute missing values in x with the column mean and add an additional column to indicate when missing values have been substituted. The isnull method on the pandas dataframe may be useful.
Convert x to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.
Convert x['C'] into a categorical variable using a threshold of 0.125
Step6: Image data
Depending on the type of task being performed there are a variety of steps we may want to take in working with images
Step7: Text
When working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.
The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.
We can extend this to look at not just individual words but also bigrams and trigrams.
Step8: Exercises
Choose one of the histogram processing methods and apply it to the page example.
Take patches for the page example used above at different scales (10, 20 and 40 pixels). The resulting patches should be rescaled to have the same size.
Change the vectorization approach to ignore very common words such as 'the' and 'a'. These are known as stop words. Reading the documentation should help.
Change the vectorization approach to consider both single words and sequences of 2 words. Reading the documentation should help. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: Week 9 - Dataset preprocessing
Before we utilize machine learning algorithms we must first prepare our dataset. This can often take a significant amount of time and can have a large impact on the performance of our models.
We will be looking at four different types of data:
Tabular data
Image data
Text
Tabular data
We will look at three different steps we may need to take when handling tabular data:
Missing data
Normalization
Categorical data
Image data
Image data can present a number of issues that we must address to maximize performance:
Histogram normalization
Windows
Pyramids (for detection at different scales)
Centering
Text
Text can present a number of issues, mainly due to the number of words that can be found in our features. There are a number of ways we can convert from text to usable features:
Bag of words
Parsing
End of explanation
from sklearn import linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
clf = linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
import pandas as pd
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns =['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
Explanation: Tabular data
Missing data
Normalization
Categorical data
Missing data
There are a number of ways to handle missing data:
Drop all records with a value missing
Substitute all missing values with an average value
Substitute all missing values with some placeholder value, i.e. 0, 1e9, -1e9, etc
Predict missing values based on other attributes
Add additional feature indicating when a value is missing
If the machine learning model will be used with new data it is important to consider the possibility of receiving records with values missing that we have not observed previously in the training dataset.
The simplest approach is to remove any records that have missing data. Unfortunately missing values are often not randomly distributed through a dataset and removing them can introduce bias.
An alternative approach is to substitute the missing values. This can be done with the mean of the feature across all records, or the value can be predicted from the other features in the dataset. Placeholder values can also be used with decision trees but do not work as well for most other algorithms.
Finally, missing values can themselves be useful features. Adding an additional feature indicating when a value is missing is often used to include this information.
End of explanation
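As a sketch of the substitution and indicator-column approaches listed above, recent scikit-learn releases bundle both in SimpleImputer (this assumes sklearn.impute is available; it is not used elsewhere in this notebook).
from sklearn.impute import SimpleImputer
# Mean substitution plus extra boolean columns flagging where values were missing
imputer = SimpleImputer(strategy='mean', add_indicator=True)
x_imputed = imputer.fit_transform(x)
print(x_imputed)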
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
from sklearn import preprocessing
scaling = preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
Explanation: Normalization
Many machine learning algorithms expect features to have similar distributions and scales.
A classic example is gradient descent, if features are on different scales some weights will update faster than others because the feature values scale the weight updates.
There are two common approaches to normalization:
Z-score standardization
Min-max scaling
Z-score standardization
Z-score standardization rescales values so that they have a mean of zero and a standard deviation of 1. Specifically we perform the following transformation:
$$z = \frac{x - \mu}{\sigma}$$
Min-max scaling
An alternative is min-max scaling that transforms data into the range of 0 to 1. Specifically:
$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
Min-max scaling is less commonly used but can be useful for image data and in some neural networks.
End of explanation
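A z-score counterpart to the MinMaxScaler call above (a brief sketch; StandardScaler implements the z-score formula directly).
z_scaler = preprocessing.StandardScaler().fit(x_filled)
print(z_scaler.transform(x_filled))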
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
Explanation: Categorical data
Categorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.
Continuous variables can be converted to categorical variables by applying a threshold.
End of explanation
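The same one-hot encoding can be written as a single pandas call (a sketch; get_dummies replaces column E with one indicator column per category).
x_dummies = pd.get_dummies(x, columns=['E'])
print(x_dummies)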
x, x.isnull()
x['B_isnull'] = x['B'].isnull()
x
# z-scale the columns directly (avoids relying on the _74 output-history reference)
x_scaled = (x[['A', 'B', 'C', 'D', 'E']] - x[['A', 'B', 'C', 'D', 'E']].mean()) / \
    x[['A', 'B', 'C', 'D', 'E']].std()
x_scaled.mean(), x_scaled.std()
x['C_cat'] = x['C'] > 0.125
x
Explanation: Exercises
Substitute missing values in x with the column mean and add an additional column to indicate when missing values have been substituted. The isnull method on the pandas dataframe may be useful.
Convert x to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.
Convert x['C'] into a categorical variable using a threshold of 0.125
End of explanation
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
    """Plot an image along with its histogram and cumulative histogram."""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
ax_img.set_adjustable('box-forced')
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2,4), dtype=object)
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from sklearn import datasets
digits = datasets.load_digits()
#print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
Explanation: Image data
Depending on the type of task being performed there are a variety of steps we may want to take in working with images:
Histogram normalization
Windows and pyramids (for detection at different scales)
Centering
Occasionally the camera used to generate an image will use 10 to 14 bits while a 16-bit file format is used. In this situation all the pixel intensities fall in the lower part of the range. Rescaling to the full range (or to 0-1) can be useful.
Further processing can be done to alter the histogram of the image.
When looking for particular features in an image a sliding window can be used to check different locations. This can be combined with an image pyramid to detect features at different scales. This is often needed when objects can be at different distances from the camera.
If objects are sparsely distributed in an image a faster approach than using sliding windows is to identify objects with a simple threshold and then test only the bounding boxes containing objects. Before running these through a model centering based on intensity can be a useful approach. Small offsets, rotations and skewing can be used to generate additional training data.
End of explanation
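As a sketch of the image pyramid idea mentioned above, scikit-image can generate progressively downscaled copies of an image for multi-scale detection.
from skimage.transform import pyramid_gaussian
pyramid = list(pyramid_gaussian(data.page(), downscale=2, max_layer=3))
print([level.shape for level in pyramid])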
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
Explanation: Text
When working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.
The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.
We can extend this to look at not just individual words but also bigrams and trigrams.
End of explanation
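The counting and tf-idf weighting steps above can also be combined in one estimator (a sketch using TfidfVectorizer from the same module).
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vect = TfidfVectorizer()
X_train_tfidf_direct = tfidf_vect.fit_transform(twenty_train.data)
print(X_train_tfidf_direct.shape)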
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
from skimage import exposure
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
plt.imshow(img_adapteq, cmap=plt.cm.gray)
plt.show()
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from skimage.transform import rescale
im_small = rescale(img, 0.5)
patches = image.extract_patches_2d(im_small, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
count_vect = CountVectorizer(stop_words='english', ngram_range=(1,2))
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
Explanation: Exercises
Choose one of the histogram processing methods and apply it to the page example.
Take patches for the page example used above at different scales (10, 20 and 40 pixels). The resulting patches should be rescaled to have the same size.
Change the vectorization approach to ignore very common words such as 'the' and 'a'. These are known as stop words. Reading the documentation should help.
Change the vectorization approach to consider both single words and sequences of 2 words. Reading the documentation should help.
End of explanation |
11,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Image As Greyscale
Step2: Save Image | Python Code:
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
Explanation: Title: Save Images
Slug: save_images
Summary: How to save images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
# Load image as grayscale
image = cv2.imread('images/plane.jpg', cv2.IMREAD_GRAYSCALE)
# Show image
plt.imshow(image, cmap='gray'), plt.axis("off")
plt.show()
Explanation: Load Image As Greyscale
End of explanation
# Save image
cv2.imwrite('images/plane_new.jpg', image)
Explanation: Save Image
End of explanation |
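A small optional extension (a sketch): imwrite accepts format-specific encoder parameters, for example the JPEG quality.
# Save image with an explicit JPEG quality setting (0-100)
cv2.imwrite('images/plane_new.jpg', image, [int(cv2.IMWRITE_JPEG_QUALITY), 95])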
11,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Modeling Amarigna
_ Simple topic classifying LSTM model to test if it is possible to identify topics in Amharic text _
Step25: A small sample dataset to train and test the model
Step26: Preparing the data for the model
* Tokenizing the text - Identifying unique words, creating a dictionary and counting their frequency in the list of documents (texts) in the training data.
* One-hot encoding the labels (topics)
* Splitting the data into train and test(validation) sets
Step27: Model definition and training | Python Code:
from sklearn.datasets import fetch_20newsgroups
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import keras
from keras.layers import Embedding, Dense, LSTM, GRU
from keras.models import Sequential
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit
Explanation: Topic Modeling Amarigna
_Simple topic-classifying LSTM model to test if it is possible to identify topics in Amharic text_
End of explanation
wikis = [
በፈረንሳይ አገር ሃይማኖትን በግብረ ሰዶማዊ ስዕል መስደብ የተፈቀደ ነው። ግብረ ሰዶምን መስደብ ግን ክልክል ነው። ለአባቱ ፍሬድ ትራምፕ ከአምስት ልጆች መሃል አራተኛው ልጃቸው ነበር።,
ኢትዮጵያ ተፈጥሮ ያደላት ሀገር ናት። ከአፍሪካ ትላልቅ ተራራዎች እንዲሁም ከዓለም ከባህር ጠለል በታች በጣም ጥልቅ ከሆኑ ቦታዎች አንዳንዶቹ ይገኙባታል።,
ሶፍ ዑመር ከአፍሪካ ዋሻዎች ትልቁ ሲሆን ፣ዳሎል ከዓለም በጣም ሙቅ ቦታዎች አንዱ ነው። ወደ ሰማንኒያ የሚቆጠሩ ብሔሮችና ብሔረሰቦች ዛሬ በኢትዮጵያ ይገኛሉ። ከእነዚህም ኦሮሞና አማራ በብዛት ትልቆቹ ናቸው።,
ኢትዮጵያ በኣክሱም ሓውልት፣ ከአንድ ድንጋይ ተፈልፍለው በተሰሩ ቤተ-ክርስትያኖቹዋ እና በኦሎምፒክ የወርቅ ሜዳልያ አሸናፊ አትሌቶቹዋ ትታወቃለች። ,
የቡና ፍሬ ለመጀመሪያ ጊዜ የተገኘው በኢትዮጵያ ሲሆን ሀገሪቱዋ በቡናና ማር አምራችነት በአፍሪካ ቅድሚያ ይዛለች።,
ኦሮሞ በኢትዮጵያ፣ በኬንያና፣ በሶማሊያ የሚኖር ማህበረሰብ ነዉ። ኦሮሞ ማለት በገዳ ስርኣተ መንገስት ስር ይተዳደር የነበረ በራሱ የሚኮራ ህዘብ ነው፡በ ገዳ መንግስት ስር የ አገር መሪ በየ ፰(ስምንት) አመት,
የሚቀይር ሲሆን በተለያዩ የ ኦሮሚያ ክልሎች ንጉሳት እንደነበሩም ታሪክ ይነግረናል። በኦሮሚያ ክልሎች ከነበሩት ንጉሳት መካከል የታወቁት አባ ጂፋር ናቸው።,
ኦሮሚያ በ አንድሺ ስምንት መቶ ክፍለዘመን ማለቂያ ላይ በ ንጉስ ሚኒሊክ አማካኝነት ከ አቢሲኒያ ጋር ተቀላቀላ ኢትዮጵያ ስትመሰረት፣የ ቀዳሚ ሃገሩ ህዘብ ብዙ ችግር እና ጭቆና አሳልፏል። የ ኦሮሞን ብዛት አሰመልክቶ,
፤ሃይል እንዳይኖረው በሚለው ስጋት የቀድሞ መንግስታት የህዝቡን መብት ሳያከበሩ ወይ ባህሉን ሳይይደገፉ ገዝተዋል. ለዛም ነው ብዙ ኦሮምያዊ ህዘብ ከሌሎች የሚወዳችው ህዝቦችህ ተለይቶ መገንጠልን የመረጠው።,
ብዙ የጀርመን ሰዎች በዓለም ዙሪያ ስመ ጥሩ ናቸው። ይህም ደራሲዎች ያኮብ ግሪምና ወንድሙ ቭልሄልም ግሪም፣ ባለቅኔው ዮሐን ቩልፍጋንግ ቮን ጌጠ፣ የፕሮቴስታንት ንቅናቄ መሪ ማርቲን ሉጠር፣ ፈላስፋዎች ካንት፣
ኒሺና ሄገል፣ ሳይንቲስቱ አልቤርት አይንስታይን፣ ፈጠራ አፍላቂዎች ዳይምለር፣ ዲዝልና ካርል ቤንዝ፣ የሙዚቃ ቃኚዎች ዮሐን ሴባስትያን ባክ፣ ሉድቪግ ቫን ቤትሆቨን፣ ብራምዝ፣ ስትራውስ፣ ቫግነርና ብዙ ሌሎች ይከትታል።,
እጅግ ቁም ነገር የሆነ ጠቃሚ ፈጠራ ማሳተሚያ፤ ዮሐንስ ጉተንቤርግ በሚባል ሰው በ1431 ዓ.ም. ተጀመረ። ስለዚህ ተጓዦች ከውጭ አገር ሲመልሱ በአውሮፓ ያለው ሰው ሁሉ እርግጡን በቶሎ ያውቀው ነበር። አሁን,
ጀርመን «ዶይቸ ቨለ» በሚባል ራዲዮን ጣቢያ ላይ ዜና በእንግሊዝኛ ያሠራጫል። የጀርመን ሕዝብ ባማካኝ ከአውሮፓ ሁሉ ቴሌቪዥንን የሚወድዱ ሲሆኑ ፺ ከመቶ ሰዎች ወይም ሳተላይት ወይም ገመድ ቴሌቪዥን አላቸው,
ጀርመን አንድ ይፋዊ ቋንቋ ብቻ አለው እርሱም ጀርመንኛ ሲሆን ከዚህ ውስጥ ብዙ ልዩ ልዩ የጀርመንኛ ቀበሌኞች በአገሩ ይገኛሉ። ለአንዳንድ ሰዎች ጀርመን «የገጣሚዎችና የአሳቢዎች አገር» በመባል ታውቋል። በዓመታት ላይ በሥነ ጽሑፍ፣ በሥነ ጥበብ፣ በፍልስፍና፣ በሙዚቃ፣ በሲኒማ፣ ,
ናልድ ትራምፕ ከኒው ዮርክ ከአምስቱ ቀጠናዎች አንዱ በሆነው በክዊንስ በእ.ኤ.አ. ጁን 14 1946 ተወለደ። ለእናቱ ሜሪ አን እና ለአባቱ ፍሬድ ትራምፕ ከአምስት ልጆች መሃል አራተኛው ልጃቸው ነበር። ,
እናቱ የተወለደችው በስኮትላንድ ሉዊስ ኤንድ ሃሪስ ደሴት ላይ ቶንግ በተባለው ስፍራ ነው። በእ.ኤ.አ. 1930 በ18 ዓመቷ ዩናይትድ ስቴትስን ጎበኘች እናም ከፍሬድ ትራምፕ ጋር ተገናኘች። በእ.ኤ.አ. 1936 ትዳር ይዘው ,
በጃማይካ ኢስቴትስ ክዊንስ መኖር ጀመሩ። በዚህም ስፍራ ፍሬድ ትራምፕ ታላቅ የሪልኢስቴት ገንቢ ሆኖ ነበር። ዶናልድ ትራምፕ፥ ሮበርት የተባለ አንድ ወንድም፣ ሜሪአን እና ኤሊዛቤት የተባሉ ሁለት እህቶች አሉት።,
ፍሬድ ጁኒየር የተባለ ወንድሙ ደግሞ ከአልኮል ሱስ ጋር በተያያዘ ምክንያት ሕይወቱ አልፏል ፤ ይህም ከአልኮሆል መጠጥ እና ከትምባሆ እንዲታቀብ እንዳደረገውም ዶናልድ ትራምፕ ይናገራል
]
nb_words = 10000
max_seq_len = 1000
wlabs = [0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 0, 0, 0, 0]
validx = [
በእ.ኤ.አ. ጁን 16 2015 ላይ ደግሞ ለፕሬዚደንትነት እንደሚወዳደር አሳወቀ። ይህን ጊዜ ግን የሪፐብሊካን ፓርቲን በመወከል ነው። በስደት፣ በነፃ ገበያ እና በጦር ጣልቃ ገብነት ላይ ባለው ተቃውሞ ምክንያት ታዋቂ ሆኗል። በእነኚህ አነጋጋሪ አስተያየቶቹ,
(እ.ኤ.አ. ጁን 14 ቀን 1946 ተወለደ) አሜሪካዊ ነጋዴ ፣ ፖለቲከኛ ፣ በቴሌቪዥን ፕሮግራሞቹ ታዋቂ እና 45ኛው የዩናይትድ ስቴትስ ኦፍ አሜሪካ ፕሬዚደንት ነው። ሥልጣኑንም እ.ኤ.አ. በጃኑዌሪ 20 ቀን 2017 ተረክቧል።,
የኬልቶች ከተማ መጀመርያ «ሉኮቶኪያ» ተብሎ በስትራቦን ተመዘገበ። ፕቶሎመይ ደግሞ ከተማውን «ለውኮተኪያ» አለው። ዩሊዩስ ቄሳር አገሩን ሲይዘው ሥፍራውን በሮማይስጥ «ሉቴቲያ» አለው። የኖረበት ጎሣ ፓሪሲ ስለ ተባሉ፣ የከተማው ስም በሙሉ «ሉቴቲያ ፓሪሶሩም» («የፓሪሲ ሉቴቲያ») ተባለ።,
ባቫሪያ ፣ በሙሉ ስሙ ነጻ የባቫሪያ አስተዳደር (ጀርመንኛ፦ Freistaat Bayern /ፍሪሽታት ባየርን/) ደቡብ ምስራቅ ጀርመን ውስጥ የሚገኝ ክፍለ ሃገር ነው። 70,548 ስኩየር ኪ/ሜትር ስፋት ሲኖረው፣ ከማናቸውም የጀርመን ክፍላተ ሃገሮች የበለጠ የቆዳ ስፋት አለው። ይህ ግዛት የጀርመንን አጠቃላይ ስፋት አንድ አምስተኛ (20%) ይሸፍናል።,
ከኖርስ ራይን ዌስትፋሊያ ክፍለሃገር ቀጥሎ ባቫሪያ ብዙውን የጀርመን ህዝብ ይይዛል። (12.5 ሚሊየን)። ሙኒክ የባቫሪያ ዋና ከተማ ነው።,
የታሪክ ፀሀፊ የሆነው እንደ ዶናልድ ሰቨን ጎበና በደቡብ በኩል የተደረገው የማስፋፋት ስራ ማለትም ኦሮምኛ ተናጋሪውን ህዝብ ወደ ሚኒሊክ ሀሳብ የተዋሀደው በራስ ጎበና ነበር ይህ እንዲ እንዳለ በኢትዮጵያ ታዋቂ የሆኑት የኦሮሞ አስተዳደር ሹማምንት ወታደሮችም እረድተውት ነበር። በተጨማሪም የኦሮሞ ህዝብ ደቡብ ሲዳማን እና የጉራጌን ህዝብ ወታደር ድል ነስተዋል።
]
validy = [ 0, 0, 0, 3, 3, 1]
X = wikis + validx
y = wlabs + validy
Explanation: A small sample dataset to train and test the model
End of explanation
tokenizer = Tokenizer(num_words=nb_words)
tokenizer.fit_on_texts(X)
sequences = tokenizer.texts_to_sequences(X)
word_index = tokenizer.word_index
ydata = keras.utils.to_categorical(y)
input_data = pad_sequences(sequences, maxlen=max_seq_len)
Xtrain, Xvalid, ytrain, yvalid = train_test_split(input_data, ydata, test_size=0.4)
Explanation: Preparing the data for the model
* Tokenizing the text - Identifying unique words, creating a dictionary and counting their frequency in the list of documents (texts) in the training data.
* One-hot encoding the labels (topics)
* Splitting the data into train and test(validation) sets
End of explanation
embedding_vector_length = 64
model = Sequential()
model.add(Embedding(len(word_index)+1, embedding_vector_length, input_length=max_seq_len, embeddings_initializer='glorot_normal',
embeddings_regularizer=keras.regularizers.l2(0.01)))
model.add(LSTM(80, dropout=0.25))
model.add(Dense(4, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(Xtrain, ytrain, validation_data=(Xvalid, yvalid), epochs=75, batch_size=32)
Explanation: Model definition and training
End of explanation |
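A possible follow-up, not in the original notebook: compare predicted and true topic ids on the held-out split by taking the argmax of the softmax outputs.
predicted_topics = model.predict(Xvalid).argmax(axis=-1)
true_topics = yvalid.argmax(axis=-1)
print(predicted_topics, true_topics)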
11,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features
Step1: First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the assembly and then assign it to the root universe.
Step9: We now must create a geometry that is assigned a root universe and export it to XML.
Step10: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step11: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
Step12: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with the fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. In particular, the following are the multi-group cross section MGXS subclasses that are mapped to string codes accepted by the Library class
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. We will use a "cell" domain type here to compute cross sections in each of the cells in the fuel assembly geometry.
Note
Step16: We can easily instruct the Library to compute multi-group cross sections on a nuclide-by-nuclide basis with the boolean Library.by_nuclide property. By default, by_nuclide is set to False, but we will set it to True here.
Step17: Lastly, we use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain and nuclide.
Step18: The tallies can now be export to a "tallies.xml" input file for OpenMC.
NOTE
Step19: In addition, we instantiate a fission rate mesh tally to compare with OpenMOC.
Step20: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
Step21: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step22: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
The Library supports a rich API to automate a variety of tasks, including multi-group cross section data retrieval and storage. We will highlight a few of these features here. First, the Library.get_mgxs(...) method allows one to extract an MGXS object from the Library for a particular domain and cross section type. The following cell illustrates how one may extract the NuFissionXS object for the fuel cell.
Note
Step23: The NuFissionXS object supports all of the methods described previously in the openmc.mgxs tutorials, such as Pandas DataFrames
Step24: Similarly, we can use the MGXS.print_xs(...) method to view a string representation of the multi-group cross section data.
Step25: One can export the entire Library to HDF5 with the Library.build_hdf5_store(...) method as follows
Step26: The HDF5 store will contain the numerical multi-group cross section data indexed by domain, nuclide and cross section type. Some data workflows may be optimized by storing and retrieving binary representations of the MGXS objects in the Library. This feature is supported through the Library.dump_to_file(...) and Library.load_from_file(...) routines which use Python's pickle module. This is illustrated as follows.
Step27: The Library class may be used to leverage the energy condensation features supported by the MGXS class. In particular, one can use the Library.get_condensed_library(...) with a coarse group structure which is a subset of the original "fine" group structure as shown below.
Step28: Verification with OpenMOC
Of course it is always a good idea to verify that one's cross sections are accurate. We can easily do so here with the deterministic transport code OpenMOC. We first construct an equivalent OpenMOC geometry.
Step29: Now, we can inject the multi-group cross sections into the equivalent fuel assembly OpenMOC geometry. The openmoc.materialize module supports the loading of Library objects from OpenMC as illustrated below.
Step30: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
Step31: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
Step32: There is a non-trivial bias between the eigenvalues computed by OpenMC and OpenMOC. One can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias
Step33: Next, we extract OpenMOC's volume-averaged fission rates into a 2D 17x17 NumPy array.
Step34: Now we can easily use Matplotlib to visualize the fission rates from OpenMC and OpenMOC side-by-side. | Python Code:
import math
import pickle
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
from openmc.openmoc_compatible import get_openmoc_geometry
import openmoc
import openmoc.process
from openmoc.materialize import load_openmc_mgxs_lib
%matplotlib inline
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation, manipulation and storage of MGXS with openmc.mgxs.Library
Validation of multi-group cross sections with OpenMOC
Steady-state pin-by-pin fission rates comparison between OpenMC and OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:,:] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the assembly and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 10,000 particles.
End of explanation
# Instantiate a Plot
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
plot.to_ipython_image()
Explanation: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
Explanation: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
# Initialize a 2-group MGXS Library for OpenMOC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with the fuel assembly geometry.
End of explanation
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['nu-transport', 'nu-fission', 'fission', 'nu-scatter matrix', 'chi']
Explanation: Now, we must specify to the Library which types of cross sections to compute. In particular, the following are the multi-group cross section MGXS subclasses that are mapped to string codes accepted by the Library class:
TotalXS ("total")
TransportXS ("transport" or "nu-transport with nu set to True)
AbsorptionXS ("absorption")
CaptureXS ("capture")
FissionXS ("fission" or "nu-fission" with nu set to True)
KappaFissionXS ("kappa-fission")
ScatterXS ("scatter" or "nu-scatter" with nu set to True)
ScatterMatrixXS ("scatter matrix" or "nu-scatter matrix" with nu set to True)
Chi ("chi")
ChiPrompt ("chi prompt")
InverseVelocity ("inverse-velocity")
PromptNuFissionXS ("prompt-nu-fission")
DelayedNuFissionXS ("delayed-nu-fission")
ChiDelayed ("chi-delayed")
Beta ("beta")
In this case, let's create the multi-group cross sections needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we will define "nu-transport", "nu-fission", '"fission", "nu-scatter matrix" and "chi" cross sections for our Library.
Note: A variety of different approximate transport-corrected total multi-group cross sections (and corresponding scattering matrices) can be found in the literature. At the present time, the openmc.mgxs module only supports the "P0" transport correction. This correction can be turned on and off through the boolean Library.correction property which may take values of "P0" (default) or None.
End of explanation
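Building on the note above, the transport correction is controlled through the library's correction property; a sketch of disabling it (not done in this notebook) would be:
# 'P0' is the default; setting None turns the transport correction off.
# mgxs_lib.correction = None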
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = 'cell'
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_material_cells().values()
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. We will use a "cell" domain type here to compute cross sections in each of the cells in the fuel assembly geometry.
Note: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell or universe) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property. In our case, we wish to compute multi-group cross sections in each and every cell since they will be needed in our downstream OpenMOC calculation on the identical combinatorial geometry mesh.
End of explanation
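As a sketch of the note above (not used here), the library could be restricted to a subset of domains instead of all material cells:
# e.g. compute multi-group cross sections only in the fuel cell
# mgxs_lib.domains = [fuel_cell]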
# Compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = True
Explanation: We can easily instruct the Library to compute multi-group cross sections on a nuclide-by-nuclide basis with the boolean Library.by_nuclide property. By default, by_nuclide is set to False, but we will set it to True here.
End of explanation
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
Explanation: Lastly, we use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain and nuclide.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
Explanation: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as $O(N)$ for $N$ tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
# Instantiate a tally Mesh
mesh = openmc.RegularMesh(mesh_id=1)
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission', 'nu-fission']
# Add tally to collection
tallies_file.append(tally)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
Explanation: In addition, we instantiate a fission rate mesh tally to compare with OpenMOC.
End of explanation
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Retrieve the NuFissionXS object for the fuel cell from the library
fuel_mgxs = mgxs_lib.get_mgxs(fuel_cell, 'nu-fission')
Explanation: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
The Library supports a rich API to automate a variety of tasks, including multi-group cross section data retrieval and storage. We will highlight a few of these features here. First, the Library.get_mgxs(...) method allows one to extract an MGXS object from the Library for a particular domain and cross section type. The following cell illustrates how one may extract the NuFissionXS object for the fuel cell.
Note: The MGXS.get_mgxs(...) method will accept either the domain or the integer domain ID of interest.
End of explanation
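As noted above, the same lookup also accepts the integer domain ID (a brief sketch).
fuel_mgxs_by_id = mgxs_lib.get_mgxs(fuel_cell.id, 'nu-fission')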
df = fuel_mgxs.get_pandas_dataframe()
df
Explanation: The NuFissionXS object supports all of the methods described previously in the openmc.mgxs tutorials, such as Pandas DataFrames:
Note that since so few histories were simulated, we should expect a few division-by-error errors as some tallies have not yet scored any results.
End of explanation
fuel_mgxs.print_xs()
Explanation: Similarly, we can use the MGXS.print_xs(...) method to view a string representation of the multi-group cross section data.
End of explanation
# Store the cross section data in an "mgxs/mgxs.h5" HDF5 binary file
mgxs_lib.build_hdf5_store(filename='mgxs.h5', directory='mgxs')
Explanation: One can export the entire Library to HDF5 with the Library.build_hdf5_store(...) method as follows:
End of explanation
# Store a Library and its MGXS objects in a pickled binary file "mgxs/mgxs.pkl"
mgxs_lib.dump_to_file(filename='mgxs', directory='mgxs')
# Instantiate a new MGXS Library from the pickled binary file "mgxs/mgxs.pkl"
mgxs_lib = openmc.mgxs.Library.load_from_file(filename='mgxs', directory='mgxs')
Explanation: The HDF5 store will contain the numerical multi-group cross section data indexed by domain, nuclide and cross section type. Some data workflows may be optimized by storing and retrieving binary representations of the MGXS objects in the Library. This feature is supported through the Library.dump_to_file(...) and Library.load_from_file(...) routines which use Python's pickle module. This is illustrated as follows.
End of explanation
# Create a 1-group structure
coarse_groups = openmc.mgxs.EnergyGroups(group_edges=[0., 20.0e6])
# Create a new MGXS Library on the coarse 1-group structure
coarse_mgxs_lib = mgxs_lib.get_condensed_library(coarse_groups)
# Retrieve the NuFissionXS object for the fuel cell from the 1-group library
coarse_fuel_mgxs = coarse_mgxs_lib.get_mgxs(fuel_cell, 'nu-fission')
# Show the Pandas DataFrame for the 1-group MGXS
coarse_fuel_mgxs.get_pandas_dataframe()
Explanation: The Library class may be used to leverage the energy condensation features supported by the MGXS class. In particular, one can use the Library.get_condensed_library(...) with a coarse group structure which is a subset of the original "fine" group structure as shown below.
End of explanation
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(mgxs_lib.geometry)
Explanation: Verification with OpenMOC
Of course it is always a good idea to verify that one's cross sections are accurate. We can easily do so here with the deterministic transport code OpenMOC. We first construct an equivalent OpenMOC geometry.
End of explanation
# Load the library into the OpenMOC geometry
materials = load_openmc_mgxs_lib(mgxs_lib, openmoc_geometry)
Explanation: Now, we can inject the multi-group cross sections into the equivalent fuel assembly OpenMOC geometry. The openmoc.materialize module supports the loading of Library objects from OpenMC as illustrated below.
End of explanation
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=32, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.nominal_value
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
# Get the OpenMC fission rate mesh tally data
mesh_tally = sp.get_tally(name='mesh tally')
openmc_fission_rates = mesh_tally.get_values(scores=['nu-fission'])
# Reshape array to 2D for plotting
openmc_fission_rates.shape = (17,17)
# Normalize to the average pin power
openmc_fission_rates /= np.mean(openmc_fission_rates[openmc_fission_rates > 0.])
Explanation: There is a non-trivial bias between the eigenvalues computed by OpenMC and OpenMOC. One can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Flux and Pin Power Visualizations
We will conclude this tutorial by illustrating how to visualize the fission rates computed by OpenMOC and OpenMC. First, we extract volume-integrated fission rates from OpenMC's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
# Create OpenMOC Mesh on which to tally fission rates
openmoc_mesh = openmoc.process.Mesh()
openmoc_mesh.dimension = np.array(mesh.dimension)
openmoc_mesh.lower_left = np.array(mesh.lower_left)
openmoc_mesh.upper_right = np.array(mesh.upper_right)
openmoc_mesh.width = openmoc_mesh.upper_right - openmoc_mesh.lower_left
openmoc_mesh.width /= openmoc_mesh.dimension
# Tally OpenMOC fission rates on the Mesh
openmoc_fission_rates = openmoc_mesh.tally_fission_rates(solver)
openmoc_fission_rates = np.squeeze(openmoc_fission_rates)
openmoc_fission_rates = np.fliplr(openmoc_fission_rates)
# Normalize to the average pin fission rate
openmoc_fission_rates /= np.mean(openmoc_fission_rates[openmoc_fission_rates > 0.])
Explanation: Next, we extract OpenMOC's volume-averaged fission rates into a 2D 17x17 NumPy array.
End of explanation
# Ignore zero fission rates in guide tubes with Matplotlib color scheme
openmc_fission_rates[openmc_fission_rates == 0] = np.nan
openmoc_fission_rates[openmoc_fission_rates == 0] = np.nan
# Plot OpenMC's fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(openmc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMC Fission Rates')
# Plot OpenMOC's fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(openmoc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMOC Fission Rates')
Explanation: Now we can easily use Matplotlib to visualize the fission rates from OpenMC and OpenMOC side-by-side.
End of explanation |
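As an optional follow-up that is not part of the original notebook, the agreement can also be summarized numerically, for example as the maximum relative pin-by-pin difference between the two normalized fission-rate maps (the NaN guide-tube positions are ignored by the nan-aware reduction):
# Maximum relative pin-by-pin difference between OpenMOC and OpenMC fission rates
rel_diff = np.nanmax(np.abs(openmoc_fission_rates - openmc_fission_rates) / openmc_fission_rates)
print('max relative fission rate difference: {0:1.3e}'.format(rel_diff))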
11,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Comparisons
Comparing the DDM to other cognitive models
Step1: DDM vs Signal Detection Theory
Comparing DDM to Signal Detection - does d' correlate with DDM parameters?
Step2: d' distributions
Pilot
Step3: Controls
Step4: Patients
Step5: Drift rate / d'
Step6: SS vs US
Step7: SS vs CS
Step8: SS vs CP
Step9: Low d' comparisons
Compare ddm drift rate only with low d'
Ratcliff, R. (2014). Measuring psychometric functions with the diffusion model. Journal of Experimental Psychology | Python Code:
# Environment setup
%matplotlib inline
%cd /lang_dec
# Imports
import warnings; warnings.filterwarnings('ignore')
import hddm
import math
import scipy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import bayesian_bootstrap.bootstrap as bootstrap
from utils import model_tools, signal_detection
# Import pilot models
pilot_data = hddm.load_csv('/lang_dec/data/pilot_clean.csv')
pilot_model = hddm.HDDM(pilot_data, depends_on={'v': 'stim'}, bias=True)
pilot_model.load_db(dbname='language_decision/models/pilot', db='txt')
#pilot_model_threshold = hddm.HDDM(pilot_data, depends_on={'v': 'stim', 'a': 'stim'})
#pilot_model_threshold.load_db(dbname='language_decision/models/pilot_threshold', db='txt')
# Import control models
controls_data = hddm.load_csv('/lang_dec/data/controls_clean.csv')
controls_model = hddm.HDDM(controls_data, depends_on={'v': 'stim'}, bias=True)
controls_model.load_db(dbname='language_decision/models/controls', db='txt')
#controls_model_threshold = hddm.HDDM(controls_data, depends_on={'v': 'stim', 'a': 'stim'}, bias=True)
#controls_model_threshold.load_db(dbname='language_decision/models/controls_threshold', db='txt')
# Import patient models
patients_data = hddm.load_csv('/lang_dec/data/patients_clean.csv')
patients_model = hddm.HDDM(patients_data, depends_on={'v': 'stim'}, bias=True)
patients_model.load_db(dbname='language_decision/models/patients', db='txt')
#patients_model_threshold = hddm.HDDM(patients_data, depends_on={'v': 'stim', 'a': 'stim'})
#patients_model_threshold.load_db(dbname='language_decision/models/patients_threshold', db='txt')
Explanation: Model Comparisons
Comparing the DDM to other cognitive models
End of explanation
def get_d_primes(dataset, stim1, stim2, include_id=False):
d_primes = dict()
subject_ids = set(dataset.subj_idx)
for subject_id in subject_ids:
stim1_data = dataset.loc[
dataset['subj_idx'] == subject_id].loc[
dataset['stim'] == str(stim1)]
stim1_trials = len(stim1_data)
hits = len(stim1_data.loc[
dataset['response'] == 1.0])
stim2_data = dataset.loc[
dataset['subj_idx'] == subject_id].loc[
dataset['stim'] == str(stim2)]
stim2_trials = len(stim2_data)
fas = len(stim2_data.loc[
dataset['response'] == 0.0])
if not stim1_trials or not stim2_trials:
d_primes[subject_id] = None # N/A placeholder value
continue
d_prime = signal_detection.signal_detection(
n_stim1=stim1_trials,
n_stim2=stim2_trials,
hits=hits,
false_alarms=fas)['d_prime']
d_primes[subject_id] = d_prime
if not include_id:
return list(d_primes.values())
return d_primes
Explanation: DDM vs Signal Detection Theory
Comparing DDM to Signal Detection - does d' correlate with DDM parameters?
End of explanation
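For reference, d' is presumably computed here as the classic signal-detection sensitivity index, Z(hit rate) - Z(false-alarm rate). A minimal sketch of that computation, assuming the project's signal_detection helper follows the standard definition (the clamping of extreme rates is an assumption on my part), could look like this:
from scipy.stats import norm

def d_prime_sketch(hits, false_alarms, n_stim1, n_stim2):
    # Nudge rates away from 0 and 1 so the z-transform stays finite (assumed correction)
    hit_rate = min(max(hits / n_stim1, 0.5 / n_stim1), 1 - 0.5 / n_stim1)
    fa_rate = min(max(false_alarms / n_stim2, 0.5 / n_stim2), 1 - 0.5 / n_stim2)
    # d' = Z(hit rate) - Z(false-alarm rate)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)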
plt.hist(get_d_primes(pilot_data, 'SS', 'US'))
plt.hist(get_d_primes(pilot_data, 'SS', 'CS'))
plt.hist(get_d_primes(pilot_data, 'SS', 'CP'))
Explanation: d' distributions
Pilot
End of explanation
plt.hist(get_d_primes(controls_data, 'SS', 'US'))
plt.hist(get_d_primes(controls_data, 'SS', 'CS'))
plt.hist(get_d_primes(controls_data, 'SS', 'CP'))
Explanation: Controls
End of explanation
plt.hist(get_d_primes(patients_data, 'SS', 'US'))
plt.hist(get_d_primes(patients_data, 'SS', 'CS'))
plt.hist(list(filter(None, get_d_primes(patients_data, 'SS', 'CP'))))
Explanation: Patients
End of explanation
def match_dprime_to_driftrate(dataset, model, stim1, stim2):
subject_ids = set(dataset.subj_idx)
d_primes = get_d_primes(dataset, stim1, stim2, include_id=True)
for subject_id in subject_ids:
try:
d_prime = d_primes[subject_id]
v_stim1 = model.values['v_subj(' + stim1 + ').' + str(subject_id)]
v_stim2 = model.values['v_subj(' + stim2 + ').' + str(subject_id)]
v_diff = abs(v_stim2 - v_stim1)
yield (d_prime, v_diff)
except:
continue
Explanation: Drift rate / d'
End of explanation
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'US')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
Explanation: SS vs US
End of explanation
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
Explanation: SS vs CS
End of explanation
dprime_driftrate = np.array([*match_dprime_to_driftrate(pilot_data, pilot_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(controls_data, controls_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
Explanation: SS vs CP
End of explanation
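Note that bayesian_bootstrap is imported above but not used in these cells. As a rough, purely illustrative alternative using only numpy and scipy, the uncertainty around one of these Spearman correlations can be sketched with an ordinary resampling loop (x and y here are the patients SS-vs-CP arrays computed just above):
def bootstrap_spearman(xs, ys, n_boot=2000, seed=0):
    # Resample (d', drift-rate difference) pairs with replacement and recompute Spearman's rho
    rng = np.random.RandomState(seed)
    n = len(xs)
    rhos = [scipy.stats.spearmanr(xs[idx], ys[idx])[0]
            for idx in (rng.randint(0, n, n) for _ in range(n_boot))]
    return np.percentile(rhos, [2.5, 97.5])

# Approximate 95% interval for the last correlation shown above (patients, SS vs CP)
bootstrap_spearman(x, y)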
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data,
patients_model, 'SS', 'CS')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
dprime_driftrate = np.array([*match_dprime_to_driftrate(patients_data,
patients_model, 'SS', 'CP')])
x = dprime_driftrate[:,0]
y = dprime_driftrate[:,1]
plt.scatter(x, y)
scipy.stats.spearmanr(x, y)
Explanation: Low d' comparisons
Compare ddm drift rate only with low d'
Ratcliff, R. (2014). Measuring psychometric functions with the diffusion model. Journal of Experimental Psychology: Human Perception and Performance, 40(2), 870-888.
http://dx.doi.org/10.1037/a0034954
Patients are the best candidates for this (SSvsCS, SSvsCP)
End of explanation |
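The cells above rerun the full-sample correlations; a minimal sketch of what an explicit low-d' restriction might look like is given below. The d' < 1.0 cutoff is an arbitrary illustrative choice, not a value from the original analysis:
# Keep only (d', drift-rate difference) pairs from low-sensitivity patients
low_pairs = [(d, v) for d, v in match_dprime_to_driftrate(patients_data, patients_model, 'SS', 'CP')
             if d is not None and d < 1.0]  # arbitrary cutoff, purely illustrative
low = np.array(low_pairs, dtype=float)
plt.scatter(low[:, 0], low[:, 1])
scipy.stats.spearmanr(low[:, 0], low[:, 1])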
11,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conversion of Objax models to Tensorflow
This tutorial demonstrates how to convert models from Objax to Tensorflow and then export them in the SavedModel format.
The SavedModel format can be read and served by the Tensorflow serving infrastructure or by custom user code written in C++. Export to Tensorflow therefore lets users run experiments in Objax and then serve the resulting models in production using Tensorflow infrastructure.
Installation and Imports
First of all, let's install Objax and import all necessary python modules.
Step1: Setup Objax model
Let's make a model in Objax and create a prediction operation that we will later convert to Tensorflow.
In this tutorial we use a randomly initialized model, so we don't need to wait for training to finish. However, the conversion to Tensorflow would be the same if we trained the model first.
Step2: Now, let's generate a few examples and run prediction operation on them
Step3: Convert a model to Tensorflow
We use the Objax2Tf object to convert an Objax module into a tf.Module.
Internally, Objax2Tf makes a copy of all Objax variables used by the provided module and converts the __call__ method of that module
into a Tensorflow function.
Step4: After the module is converted, we can run it and compare results between Objax and Tensorflow. The results are numerically very close; however, they are not exactly the same due to implementation differences between JAX and Tensorflow.
Step5: Export Tensorflow model as SavedModel
Converting an Objax model to Tensorflow allows us to export it as a Tensorflow SavedModel.
A detailed discussion of the SavedModel format is out of scope for this tutorial, so we only provide an example showing how to save and load a SavedModel. For more details about SavedModel, please refer to the following Tensorflow documentation
Step6: Then let's use the tf.saved_model.save API to save our Tensorflow model.
Since Objax2Tf is a subclass of tf.Module, instances of the Objax2Tf class can be used directly with the tf.saved_model.save API
Step7: Now we can list the content of model_dir and see files and subdirectories of SavedModel
Step8: Loading exported SavedModel
We can load the SavedModel as a new Tensorflow object, loaded_tf_model.
Step9: Then we can run inference using the loaded Tensorflow model loaded_tf_model and compare its results with the model predict_op_tf that was converted from Objax
# install the latest version of Objax from github
%pip --quiet install git+https://github.com/google/objax.git
import math
import random
import tempfile
import numpy as np
import tensorflow as tf
import objax
from objax.zoo.wide_resnet import WideResNet
Explanation: Conversion of Objax models to Tensorflow
This tutorial demonstrates how to convert models from Objax to Tensorflow and then export them in the SavedModel format.
The SavedModel format can be read and served by the Tensorflow serving infrastructure or by custom user code written in C++. Export to Tensorflow therefore lets users run experiments in Objax and then serve the resulting models in production using Tensorflow infrastructure.
Installation and Imports
First of all, let's install Objax and import all necessary python modules.
End of explanation
# Model
model = WideResNet(nin=3, nclass=10, depth=4, width=1)
# Prediction operation
@objax.Function.with_vars(model.vars())
def predict_op(x):
return objax.functional.softmax(model(x, training=False))
predict_op = objax.Jit(predict_op)
Explanation: Setup Objax model
Let's make a model in Objax and create a prediction operation that we will later convert to Tensorflow.
In this tutorial we use a randomly initialized model, so we don't need to wait for training to finish. However, the conversion to Tensorflow would be the same if we trained the model first.
End of explanation
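Purely for orientation, and not needed for the conversion itself, a minimal training-step sketch in Objax (loosely following the official Objax examples; the loss choice and learning-rate handling are assumptions on my part) could look like the following:
# Hypothetical training setup - illustration only, loosely based on the official Objax examples
opt = objax.optimizer.Adam(model.vars())

@objax.Function.with_vars(model.vars())
def loss_fn(x, label):
    logits = model(x, training=True)
    return objax.functional.loss.cross_entropy_logits_sparse(logits, label).mean()

grad_values = objax.GradValues(loss_fn, model.vars())

@objax.Function.with_vars(model.vars() + opt.vars())
def train_op(x, label, lr):
    grads, loss_value = grad_values(x, label)
    opt(lr, grads)
    return loss_value

train_op = objax.Jit(train_op)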
input_shape = (4, 3, 32, 32)
x1 = np.random.uniform(size=input_shape)
y1 = predict_op(x1)
print('y1:\n', y1)
x2 = np.random.uniform(size=input_shape)
y2 = predict_op(x2)
print('y2:\n', y2)
Explanation: Now, let's generate a few examples and run prediction operation on them:
End of explanation
predict_op_tf = objax.util.Objax2Tf(predict_op)
print('isinstance(predict_op_tf, tf.Module) =', isinstance(predict_op_tf, tf.Module))
print('Number of variables: ', len(predict_op_tf.variables))
Explanation: Convert a model to Tensorflow
We use the Objax2Tf object to convert an Objax module into a tf.Module.
Internally, Objax2Tf makes a copy of all Objax variables used by the provided module and converts the __call__ method of that module
into a Tensorflow function.
End of explanation
y1_tf = predict_op_tf(x1)
print('max(abs(y1_tf - y1)) =', np.amax(np.abs(y1_tf - y1)))
y2_tf = predict_op_tf(x2)
print('max(abs(y2_tf - y2)) =', np.amax(np.abs(y2_tf - y2)))
Explanation: After the module is converted, we can run it and compare results between Objax and Tensorflow. The results are numerically very close; however, they are not exactly the same due to implementation differences between JAX and Tensorflow.
End of explanation
model_dir = tempfile.mkdtemp()
%ls -al $model_dir
Explanation: Export Tensorflow model as SavedModel
Converting an Objax model to Tensorflow allows us to export it as a Tensorflow SavedModel.
A detailed discussion of the SavedModel format is out of scope for this tutorial, so we only provide an example showing how to save and load a SavedModel. For more details about SavedModel, please refer to the following Tensorflow documentation:
Using the SavedModel format guide
tf.saved_model.save API call
tf.saved_model.load API call
Saving model as SavedModel
First of all, let's create a new empty directory where the model will be saved:
End of explanation
tf.saved_model.save(
predict_op_tf,
model_dir,
signatures=predict_op_tf.__call__.get_concrete_function(
tf.TensorSpec(input_shape, tf.float32)))
Explanation: Then let's use the tf.saved_model.save API to save our Tensorflow model.
Since Objax2Tf is a subclass of tf.Module, instances of the Objax2Tf class can be used directly with the tf.saved_model.save API:
End of explanation
%ls -al $model_dir
Explanation: Now we can list the content of model_dir and see files and subdirectories of SavedModel:
End of explanation
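As an optional aside that is not part of the original tutorial, the saved_model_cli tool shipped with Tensorflow can inspect the exported signatures from the command line; in a notebook this can be invoked roughly as follows:
# Optional: inspect the exported SavedModel signatures with Tensorflow's CLI tool
!saved_model_cli show --dir $model_dir --all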
loaded_tf_model = tf.saved_model.load(model_dir)
print('Exported signatures: ', loaded_tf_model.signatures)
Explanation: Loading exported SavedModel
We can load the SavedModel as a new Tensorflow object, loaded_tf_model.
End of explanation
loaded_predict_op_tf = loaded_tf_model.signatures['serving_default']
y1_loaded_tf = loaded_predict_op_tf(tf.cast(x1, tf.float32))['output_0']
print('max(abs(y1_loaded_tf - y1_tf)) =', np.amax(np.abs(y1_loaded_tf - y1_tf)))
y2_loaded_tf = loaded_predict_op_tf(tf.cast(x2, tf.float32))['output_0']
print('max(abs(y2_loaded_tf - y2_tf)) =', np.amax(np.abs(y2_loaded_tf - y2_tf)))
Explanation: Then we can run inference using the loaded Tensorflow model loaded_tf_model and compare its results with the model predict_op_tf that was converted from Objax:
End of explanation |
11,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for tracers (i.e. is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for momentum (i.e. is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
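For a BOOLEAN property such as this one, the completed cell passes True or False; the value below is only an illustration.
# Hypothetical example only -- set according to the model being documented.
DOC.set_value(True)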
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
11,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1
Step1: Configure GCP environment settings
Update the PROJECT_ID variable to reflect the ID of the Google Cloud project you are using to implement this solution.
Step2: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step3: Explore the sample data
Use visualizations to explore the data in the vw_item_groups view that you created in the 00_prep_bq_and_datastore.ipynb notebook.
Import libraries for data visualization
Step4: Count the number of songs that occur in at least 15 groups
Step5: Count the number of playlists that have between 2 and 100 items
Step6: Count the number of records with valid songs and playlists
Step7: Show the playlist size distribution
Step8: Show the song occurrence distribution
Step9: Compute song PMI data
You run the sp_ComputePMI stored procedure to compute song PMI data. This PMI data is what you'll use to train the matrix factorization model in the next section.
This stored procedure accepts the following parameters
Step10: View the song PMI data
Step11: Train the BigQuery ML matrix factorization model
You run the sp_TrainItemMatchingModel stored procedure to train the item_matching_model matrix factorization model on the song PMI data. The model builds a feedback matrix, which in turn is used to calculate item embeddings for the songs. For more information about how this process works, see Understanding item embeddings.
This stored procedure accepts the dimensions parameter, which provides the value for the NUM_FACTORS parameter of the CREATE MODEL statement. The NUM_FACTORS parameter lets you set the number of latent factors to use in the model. Higher values for this parameter can increase model performance, but will also increase the time needed to train the model. Using the default dimensions value of 50, the model takes around 120 minutes to train.
Run the sp_TrainItemMatchingModel stored procedure
After the item_matching_model model is created successfully, you can use the BigQuery console to investigate the loss through the training iterations, and also see the final evaluation metrics.
Step12: Explore the trained embeddings | Python Code:
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
from google.cloud import bigquery
Explanation: Part 1: Learn item embeddings based on song co-occurrence
This notebook is the first of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Explore the sample playlist data.
Compute Pointwise mutual information (PMI) that represents the co-occurence of songs on playlists.
Train a matrix factorization model using BigQuery ML to learn item embeddings based on the PMI data.
Explore the learned embeddings.
Before starting this notebook, you must run the 00_prep_bq_procedures notebook to complete the solution prerequisites.
After completing this notebook, run the 02_export_bqml_mf_embeddings notebook to process the item embedding data.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
Import libraries
End of explanation
PROJECT_ID = "yourProject" # Change to your project.
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the PROJECT_ID variable to reflect the ID of the Google Cloud project you are using to implement this solution.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Explore the sample data
Use visualizations to explore the data in the vw_item_groups view that you created in the 00_prep_bq_and_datastore.ipynb notebook.
Import libraries for data visualization:
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE TABLE recommendations.valid_items
AS
SELECT
item_Id,
COUNT(group_Id) AS item_frequency
FROM recommendations.vw_item_groups
GROUP BY item_Id
HAVING item_frequency >= 15;
SELECT COUNT(*) item_count FROM recommendations.valid_items;
Explanation: Count the number of songs that occur in at least 15 groups:
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE TABLE recommendations.valid_groups
AS
SELECT
group_Id,
COUNT(item_Id) AS group_size
FROM recommendations.vw_item_groups
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
GROUP BY group_Id
HAVING group_size BETWEEN 2 AND 100;
SELECT COUNT(*) group_count FROM recommendations.valid_groups;
Explanation: Count the number of playlists that have between 2 and 100 items:
End of explanation
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) record_count
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups);
Explanation: Count the number of records with valid songs and playlists:
End of explanation
%%bigquery size_distribution --project $PROJECT_ID
WITH group_sizes
AS
(
SELECT
group_Id,
ML.BUCKETIZE(
COUNT(item_Id), [10, 20, 30, 40, 50, 101])
AS group_size
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups)
GROUP BY group_Id
)
SELECT
CASE
WHEN group_size = 'bin_1' THEN '[1 - 10]'
WHEN group_size = 'bin_2' THEN '[10 - 20]'
WHEN group_size = 'bin_3' THEN '[20 - 30]'
WHEN group_size = 'bin_4' THEN '[30 - 40]'
WHEN group_size = 'bin_5' THEN '[40 - 50]'
ELSE '[50 - 100]'
END AS group_size,
CASE
WHEN group_size = 'bin_1' THEN 1
WHEN group_size = 'bin_2' THEN 2
WHEN group_size = 'bin_3' THEN 3
WHEN group_size = 'bin_4' THEN 4
WHEN group_size = 'bin_5' THEN 5
ELSE 6
END AS bucket_Id,
COUNT(group_Id) group_count
FROM group_sizes
GROUP BY group_size, bucket_Id
ORDER BY bucket_Id
plt.figure(figsize=(20, 5))
q = sns.barplot(x="group_size", y="group_count", data=size_distribution)
Explanation: Show the playlist size distribution:
End of explanation
%%bigquery occurrence_distribution --project $PROJECT_ID
WITH item_frequency
AS
(
SELECT
Item_Id,
ML.BUCKETIZE(
COUNT(group_Id)
, [15, 30, 50, 100, 200, 300, 400]) AS group_count
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups)
GROUP BY Item_Id
)
SELECT
CASE
WHEN group_count = 'bin_1' THEN '[15 - 30]'
WHEN group_count = 'bin_2' THEN '[30 - 50]'
WHEN group_count = 'bin_3' THEN '[50 - 100]'
WHEN group_count = 'bin_4' THEN '[100 - 200]'
WHEN group_count = 'bin_5' THEN '[200 - 300]'
WHEN group_count = 'bin_6' THEN '[300 - 400]'
ELSE '[400+]'
END AS group_count,
CASE
WHEN group_count = 'bin_1' THEN 1
WHEN group_count = 'bin_2' THEN 2
WHEN group_count = 'bin_3' THEN 3
WHEN group_count = 'bin_4' THEN 4
WHEN group_count = 'bin_5' THEN 5
WHEN group_count = 'bin_6' THEN 6
ELSE 7
END AS bucket_Id,
COUNT(Item_Id) item_count
FROM item_frequency
GROUP BY group_count, bucket_Id
ORDER BY bucket_Id
plt.figure(figsize=(20, 5))
q = sns.barplot(x="group_count", y="item_count", data=occurrence_distribution)
%%bigquery --project $PROJECT_ID
DROP TABLE IF EXISTS recommendations.valid_items;
%%bigquery --project $PROJECT_ID
DROP TABLE IF EXISTS recommendations.valid_groups;
Explanation: Show the song occurrence distribution:
End of explanation
%%bigquery --project $PROJECT_ID
DECLARE min_item_frequency INT64;
DECLARE max_group_size INT64;
SET min_item_frequency = 15;
SET max_group_size = 100;
CALL recommendations.sp_ComputePMI(min_item_frequency, max_group_size);
Explanation: Compute song PMI data
You run the sp_ComputePMI stored procedure to compute song PMI data. This PMI data is what you'll use to train the matrix factorization model in the next section.
This stored procedure accepts the following parameters:
min_item_frequency — Sets the minimum number of times that a song must appear on playlists.
max_group_size — Sets the maximum number of songs that a playlist can contain.
These parameters are used together to select records where the song occurs on a number of playlists equal to or greater than the min_item_frequency value and the playlist contains a number of songs between 2 and the max_group_size value. These are the records that get processed to make the training dataset.
The stored procedure works as follows:
Creates a valid_item_groups table and populates it with records from the vw_item_groups view that meet the following criteria:
The song occurs on a number of playlists equal to or greater than the
min_item_frequency value
The playlist contains a number of songs between 2 and the max_group_size
value.
Creates the item_cooc table and populates it with co-occurrence data that
identifies pairs of songs that occur on the same playlist. It does this by:
Self-joining the valid_item_groups table on the group_id column.
Setting the cooc column to 1.
Summing the cooc column for the item1_Id and item2_Id columns.
Creates an item_frequency table and populates it with data that identifies
how many playlists each song occurs in.
Recreates the item_cooc table to include the following record sets:
The item1_Id, item2_Id, and cooc data from the original item_cooc
table. The PMI values calculated from these song pairs let the solution
calculate the embeddings for the rows in the feedback matrix.
<img src="figures/feedback-matrix-rows.png" alt="Embedding matrix that shows the matrix rows calculated by this step." style="width: 400px;"/>
The same data as in the previous bullet, but with the item1_Id data
written to the item2_Id column and the item2_Id data written to the
item1_Id column. This data provides the mirror values of the initial
entities in the feedback matrix. The PMI values calculated from these
song pairs let the solution calculate the embeddings for the columns in
the feedback matrix.
<img src="figures/feedback-matrix-columns.png" alt="Embedding matrix that shows the matrix columns calculated by this step." style="width: 400px;"/>
The data from the item_frequency table. The item_Id data is written
to both the item1_Id and item2_Id columns and the frequency data is
written to the cooc column. This data provides the diagonal entries of
the feedback matrix. The PMI values calculated from these song pairs let
the solution calculate the embeddings for the diagonals in the feedback
matrix.
<img src="figures/feedback-matrix-diagonals.png" alt="Embedding matrix that shows the matrix diagonals calculated by this step." style="width: 400px;"/>
Computes the PMI for item pairs in the item_cooc table, then recreates the
item_cooc table to include this data in the pmi column.
Run the sp_ComputePMI stored procedure
End of explanation
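To make the PMI step concrete, the sketch below shows the standard pointwise mutual information definition in plain Python; it is only an illustration, and the exact normalisation used inside sp_ComputePMI may differ.
from math import log

def pmi(cooc, freq1, freq2, total):
    # PMI(i, j) = log( P(i, j) / (P(i) * P(j)) ) = log( cooc * total / (freq1 * freq2) )
    return log(cooc * total / (freq1 * freq2))

# Hypothetical counts for a single song pair:
print(pmi(cooc=120, freq1=400, freq2=350, total=2000000))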
%%bigquery --project $PROJECT_ID
SELECT
a.item1_Id,
a.item2_Id,
b.frequency AS freq1,
c.frequency AS freq2,
a.cooc,
a.pmi,
a.cooc * a.pmi AS score
FROM recommendations.item_cooc a
JOIN recommendations.item_frequency b
ON a.item1_Id = b.item_Id
JOIN recommendations.item_frequency c
ON a.item2_Id = c.item_Id
WHERE a.item1_Id != a.item2_Id
ORDER BY score DESC
LIMIT 10;
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) records_count
FROM recommendations.item_cooc
Explanation: View the song PMI data
End of explanation
%%bigquery --project $PROJECT_ID
DECLARE dimensions INT64 DEFAULT 50;
CALL recommendations.sp_TrainItemMatchingModel(dimensions)
Explanation: Train the BigQuery ML matrix factorization model
You run the sp_TrainItemMatchingModel stored procedure to train the item_matching_model matrix factorization model on the song PMI data. The model builds a feedback matrix, which in turn is used to calculate item embeddings for the songs. For more information about how this process works, see Understanding item embeddings.
This stored procedure accepts the dimensions parameter, which provides the value for the NUM_FACTORS parameter of the CREATE MODEL statement. The NUM_FACTORS parameter lets you set the number of latent factors to use in the model. Higher values for this parameter can increase model performance, but will also increase the time needed to train the model. Using the default dimensions value of 50, the model takes around 120 minutes to train.
Run the sp_TrainItemMatchingModel stored procedure
After the item_matching_model model is created successfully, you can use the BigQuery console to investigate the loss through the training iterations, and also see the final evaluation metrics.
End of explanation
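As a conceptual aside (this is not how BigQuery ML implements it internally), the dimensions value is the width of the two factor matrices whose product approximates the PMI feedback matrix:
# Conceptual numpy sketch of what NUM_FACTORS controls; the sizes are made up.
import numpy as np

num_items, dimensions = 1000, 50
rng = np.random.default_rng(0)
U = rng.normal(size=(num_items, dimensions))  # row (input) embeddings
V = rng.normal(size=(num_items, dimensions))  # column (output) embeddings
approx_feedback = U @ V.T                     # low-rank approximation of the PMI matrix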
%%bigquery song_embeddings --project $PROJECT_ID
SELECT
feature,
processed_input,
factor_weights,
intercept
FROM
ML.WEIGHTS(MODEL recommendations.item_matching_model)
WHERE
feature IN ('2114406',
'2114402',
'2120788',
'2120786',
'1086322',
'3129954',
'53448',
'887688',
'562487',
'833391',
'1098069',
'910683',
'1579481',
'2675403',
'2954929',
'625169')
songs = {
"2114406": "Metallica: Nothing Else Matters",
"2114402": "Metallica: The Unforgiven",
"2120788": "Limp Bizkit: My Way",
"2120786": "Limp Bizkit: My Generation",
"1086322": "Jacques Brel: Ne Me Quitte Pas",
"3129954": "Édith Piaf: Non, Je Ne Regrette Rien",
"53448": "France Gall: Ella, Elle l'a",
"887688": "Enrique Iglesias: Tired Of Being Sorry",
"562487": "Shakira: Hips Don't Lie",
"833391": "Ricky Martin: Livin' la Vida Loca",
"1098069": "Snoop Dogg: Drop It Like It's Hot",
"910683": "2Pac: California Love",
"1579481": "Dr. Dre: The Next Episode",
"2675403": "Eminem: Lose Yourself",
"2954929": "Black Sabbath: Iron Man",
"625169": "Black Sabbath: Paranoid",
}
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
def process_results(results):
items = list(results["feature"].unique())
item_embeddings = dict()
for item in items:
emebedding = [0.0] * 100
embedding_pair = results[results["feature"] == item]
for _, row in embedding_pair.iterrows():
factor_weights = list(row["factor_weights"])
for _, element in enumerate(factor_weights):
emebedding[element["factor"] - 1] += element["weight"]
item_embeddings[item] = emebedding
return item_embeddings
item_embeddings = process_results(song_embeddings)
item_ids = list(item_embeddings.keys())
for idx1 in range(0, len(item_ids) - 1):
item1_Id = item_ids[idx1]
title1 = songs[item1_Id]
print(title1)
print("==================")
embedding1 = np.array(item_embeddings[item1_Id])
similar_items = []
for idx2 in range(len(item_ids)):
item2_Id = item_ids[idx2]
title2 = songs[item2_Id]
embedding2 = np.array(item_embeddings[item2_Id])
similarity = round(cosine_similarity([embedding1], [embedding2])[0][0], 5)
similar_items.append((title2, similarity))
similar_items = sorted(similar_items, key=lambda item: item[1], reverse=True)
for element in similar_items[1:]:
print(f"- {element[0]}' = {element[1]}")
print()
Explanation: Explore the trained embeddings
End of explanation |
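A more vectorised variant of the pairwise loop above is sketched here; it reuses the item_embeddings and item_ids objects already built and should produce the same similarities.
emb_matrix = np.array([item_embeddings[i] for i in item_ids])
sim_matrix = cosine_similarity(emb_matrix)  # (n_items, n_items) pairwise similarities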
11,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this micro-course, you'll learn all about pandas, the most popular Python library for data analysis.
Along the way, you'll complete several hands-on exercises with real-world data. We recommend that you work on the exercises while reading the corresponding tutorials.
To start the first exercise, please click here.
In this tutorial, you will learn how to create your own data, along with how to work with data that already exists.
Getting started
To use pandas, you'll typically start with the following line of code.
Step1: Creating data
There are two core objects in pandas
Step2: In this example, the "0, No" entry has the value of 131. The "0, Yes" entry has a value of 50, and so on.
DataFrame entries are not limited to integers. For instance, here's a DataFrame whose values are strings
Step3: We are using the pd.DataFrame() constructor to generate these DataFrame objects. The syntax for declaring a new one is a dictionary whose keys are the column names (Bob and Sue in this example), and whose values are a list of entries. This is the standard way of constructing a new DataFrame, and the one you are most likely to encounter.
The dictionary-list constructor assigns values to the column labels, but just uses an ascending count from 0 (0, 1, 2, 3, ...) for the row labels. Sometimes this is OK, but oftentimes we will want to assign these labels ourselves.
The list of row labels used in a DataFrame is known as an Index. We can assign values to it by using an index parameter in our constructor
Step4: Series
A Series, by contrast, is a sequence of data values. If a DataFrame is a table, a Series is a list. And in fact you can create one with nothing more than a list
Step5: A Series is, in essence, a single column of a DataFrame. So you can assign row labels to the Series the same way as before, using an index parameter. However, a Series does not have a column name, it only has one overall name
Step6: The Series and the DataFrame are intimately related. It's helpful to think of a DataFrame as actually being just a bunch of Series "glued together". We'll see more of this in the next section of this tutorial.
Reading data files
Being able to create a DataFrame or Series by hand is handy. But, most of the time, we won't actually be creating our own data by hand. Instead, we'll be working with data that already exists.
Data can be stored in any of a number of different forms and formats. By far the most basic of these is the humble CSV file. When you open a CSV file you get something that looks like this
Step7: We can use the shape attribute to check how large the resulting DataFrame is
Step8: So our new DataFrame has 130,000 records split across 14 different columns. That's almost 2 million entries!
We can examine the contents of the resultant DataFrame using the head() command, which grabs the first five rows
Step9: The pd.read_csv() function is well-endowed, with over 30 optional parameters you can specify. For example, you can see in this dataset that the CSV file has a built-in index, which pandas did not pick up on automatically. To make pandas use that column for the index (instead of creating a new one from scratch), we can specify an index_col. | Python Code:
import pandas as pd
Explanation: Introduction
In this micro-course, you'll learn all about pandas, the most popular Python library for data analysis.
Along the way, you'll complete several hands-on exercises with real-world data. We recommend that you work on the exercises while reading the corresponding tutorials.
To start the first exercise, please click here.
In this tutorial, you will learn how to create your own data, along with how to work with data that already exists.
Getting started
To use pandas, you'll typically start with the following line of code.
End of explanation
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
Explanation: Creating data
There are two core objects in pandas: the DataFrame and the Series.
DataFrame
A DataFrame is a table. It contains an array of individual entries, each of which has a certain value. Each entry corresponds to a row (or record) and a column.
For example, consider the following simple DataFrame:
End of explanation
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']})
Explanation: In this example, the "0, No" entry has the value of 131. The "0, Yes" entry has a value of 50, and so on.
DataFrame entries are not limited to integers. For instance, here's a DataFrame whose values are strings:
End of explanation
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'],
'Sue': ['Pretty good.', 'Bland.']},
index=['Product A', 'Product B'])
Explanation: We are using the pd.DataFrame() constructor to generate these DataFrame objects. The syntax for declaring a new one is a dictionary whose keys are the column names (Bob and Sue in this example), and whose values are a list of entries. This is the standard way of constructing a new DataFrame, and the one you are most likely to encounter.
The dictionary-list constructor assigns values to the column labels, but just uses an ascending count from 0 (0, 1, 2, 3, ...) for the row labels. Sometimes this is OK, but oftentimes we will want to assign these labels ourselves.
The list of row labels used in a DataFrame is known as an Index. We can assign values to it by using an index parameter in our constructor:
End of explanation
pd.Series([1, 2, 3, 4, 5])
Explanation: Series
A Series, by contrast, is a sequence of data values. If a DataFrame is a table, a Series is a list. And in fact you can create one with nothing more than a list:
End of explanation
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
Explanation: A Series is, in essence, a single column of a DataFrame. So you can assign row labels to the Series the same way as before, using an index parameter. However, a Series does not have a column name, it only has one overall name:
End of explanation
wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv")
Explanation: The Series and the DataFrame are intimately related. It's helpful to think of a DataFrame as actually being just a bunch of Series "glued together". We'll see more of this in the next section of this tutorial.
Reading data files
Being able to create a DataFrame or Series by hand is handy. But, most of the time, we won't actually be creating our own data by hand. Instead, we'll be working with data that already exists.
Data can be stored in any of a number of different forms and formats. By far the most basic of these is the humble CSV file. When you open a CSV file you get something that looks like this:
Product A,Product B,Product C,
30,21,9,
35,34,1,
41,11,11
So a CSV file is a table of values separated by commas. Hence the name: "Comma-Separated Values", or CSV.
Let's now set aside our toy datasets and see what a real dataset looks like when we read it into a DataFrame. We'll use the pd.read_csv() function to read the data into a DataFrame. This goes thusly:
End of explanation
wine_reviews.shape
Explanation: We can use the shape attribute to check how large the resulting DataFrame is:
End of explanation
wine_reviews.head()
Explanation: So our new DataFrame has 130,000 records split across 14 different columns. That's almost 2 million entries!
We can examine the contents of the resultant DataFrame using the head() command, which grabs the first five rows:
End of explanation
wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
wine_reviews.head()
Explanation: The pd.read_csv() function is well-endowed, with over 30 optional parameters you can specify. For example, you can see in this dataset that the CSV file has a built-in index, which pandas did not pick up on automatically. To make pandas use that column for the index (instead of creating a new one from scratch), we can specify an index_col.
End of explanation |
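As one more optional illustration of those parameters, nrows limits how many records are read, which can be handy while experimenting with a large file; the 1,000-row cutoff below is arbitrary.
sample_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv",
                             index_col=0, nrows=1000)
sample_reviews.shape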
11,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sampler statistics
When checking for convergence or when debugging a badly behaving
sampler, it is often helpful to take a closer look at what the
sampler is doing. For this purpose some samplers export
statistics for each generated sample.
Step1: As a minimal example we sample from a standard normal distribution
Step2: NUTS provides the following statistics
Step3: mean_tree_accept
Step4: The get_sampler_stats method provides more control over which values should be returned, and it also works if the name of the statistic is the same as the name of one of the variables. We can use the chains option, to control values from which chain should be returned, or we can set combine=False to get the values for the individual chains
Step5: Find the index of all diverging transitions
Step6: It is often useful to compare the overall distribution of the
energy levels with the change of energy between successive samples.
Ideally, they should be very similar
Step7: If the overall distribution of energy levels has longer tails, the efficiency of the sampler will deteriorate quickly.
Multiple samplers
If multiple samplers are used for the same model (e.g. for continuous and discrete variables), the exported values are merged or stacked along a new axis.
Step8: Both samplers export accept, so we get one acceptance probability for each sampler | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
import pymc3 as pm
%matplotlib inline
Explanation: Sampler statistics
When checking for convergence or when debugging a badly behaving
sampler, it is often helpful to take a closer look at what the
sampler is doing. For this purpose some samplers export
statistics for each generated sample.
End of explanation
model = pm.Model()
with model:
mu1 = pm.Normal("mu1", mu=0, sd=1, shape=10)
with model:
step = pm.NUTS()
trace = pm.sample(2000, tune=1000, init=None, step=step, njobs=2)
Explanation: As a minimal example we sample from a standard normal distribution:
End of explanation
trace.stat_names
Explanation: NUTS provides the following statistics:
End of explanation
plt.plot(trace['step_size_bar'])
Explanation: mean_tree_accept: The mean acceptance probability for the tree that generated this sample. The mean of these values across all samples but the burn-in should be approximately target_accept (the default for this is 0.8).
diverging: Whether the trajectory for this sample diverged. If there are many diverging samples, this usually indicates that a region of the posterior has high curvature. Reparametrization can often help, but you can also try to increase target_accept to something like 0.9 or 0.95.
energy: The energy at the point in phase-space where the sample was accepted. This can be used to identify posteriors with problematically long tails. See below for an example.
energy_error: The difference in energy between the start and the end of the trajectory. For a perfect integrator this would always be zero.
max_energy_error: The maximum difference in energy along the whole trajectory.
depth: The depth of the tree that was used to generate this sample
tree_size: The number of leaves of the sampling tree when the sample was accepted. This is usually a bit less than $2 ^ \text{depth}$. If the tree size is large, the sampler is using a lot of leapfrog steps to find the next sample. This can happen, for example, if there are strong correlations in the posterior, if the posterior has long tails, if there are regions of high curvature ("funnels"), or if the variance estimates in the mass matrix are inaccurate. Reparametrization of the model or estimating the posterior variances from past samples might help.
tune: This is True if step size adaptation was turned on when this sample was generated.
step_size: The step size used for this sample.
step_size_bar: The current best known step-size. After the tuning samples, the step size is set to this value. This should converge during tuning.
If the name of the statistic does not clash with the name of one of the variables, we can use indexing to get the values. The values for the chains will be concatenated.
We can see that the step sizes converged after the 1000 tuning samples for both chains to about the same value. The first 2000 values are from chain 1, the second 2000 from chain 2.
End of explanation
sizes1, sizes2 = trace.get_sampler_stats('depth', combine=False)
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, sharey=True)
ax1.plot(sizes1)
ax2.plot(sizes2)
accept = trace.get_sampler_stats('mean_tree_accept', burn=1000)
sb.distplot(accept, kde=False)
accept.mean()
Explanation: The get_sampler_stats method provides more control over which values should be returned, and it also works if the name of the statistic is the same as the name of one of the variables. We can use the chains option to control which chain's values are returned, or we can set combine=False to get the values for the individual chains:
End of explanation
trace['diverging'].nonzero()
Explanation: Find the index of all diverging transitions:
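As a quick follow-up (a minimal sketch built only on the diverging statistic used above), it is often more useful to summarize how many transitions diverged and what fraction of all samples they represent:
# minimal sketch: summarize the divergences rather than just listing their indices
n_divergent = trace['diverging'].sum()
print('Number of diverging transitions: {}'.format(n_divergent))
print('Fraction of diverging transitions: {:.4f}'.format(trace['diverging'].mean()))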
End of explanation
energy = trace['energy']
energy_diff = np.diff(energy)
sb.distplot(energy - energy.mean(), label='energy')
sb.distplot(energy_diff, label='energy diff')
plt.legend()
Explanation: It is often useful to compare the overall distribution of the
energy levels with the change of energy between successive samples.
Ideally, they should be very similar:
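One way to condense this comparison into a single number is the estimator often called the Bayesian fraction of missing information (E-BFMI): the ratio of the variance of the energy changes to the variance of the energy itself. Below is a hedged sketch of the manual computation (depending on your PyMC3 version, a helper such as pm.bfmi may also be available); values much smaller than 1 suggest the energy distribution is not being explored efficiently:
# minimal sketch: an E-BFMI-style summary of the energy comparison above
bfmi = np.square(np.diff(energy)).mean() / np.var(energy)
bfmi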
End of explanation
model = pm.Model()
with model:
mu1 = pm.Bernoulli("mu1", p=0.8)
mu2 = pm.Normal("mu2", mu=0, sd=1, shape=10)
with model:
step1 = pm.BinaryMetropolis([mu1])
step2 = pm.Metropolis([mu2])
trace = pm.sample(10000, init=None, step=[step1, step2], njobs=2, tune=1000)
trace.stat_names
Explanation: If the overall distribution of energy levels has longer tails, the efficiency of the sampler will deteriorate quickly.
Multiple samplers
If multiple samplers are used for the same model (e.g. for continuous and discrete variables), the exported values are merged or stacked along a new axis.
End of explanation
trace.get_sampler_stats('accept')
Explanation: Both samplers export accept, so we get one acceptance probability for each sampler:
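Because the values are stacked along a new axis, the result is a 2-D array with one column per step method. Here is a minimal sketch of pulling the two apart (the column order is assumed to follow the order in which the step methods were passed to pm.sample):
# minimal sketch: split the stacked acceptance statistic per step method
accept = trace.get_sampler_stats('accept')
accept_step1 = accept[:, 0]  # assumed: BinaryMetropolis on mu1
accept_step2 = accept[:, 1]  # assumed: Metropolis on mu2
accept_step1.mean(), accept_step2.mean()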
End of explanation |
11,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The first two consecutive numbers to have two distinct prime factors are
Step1: Define an $n$-wise iterator over an iterator, inspired by the implementation of pairwise in the Itertools recipes. It get $n$ iterators for the iterable, advances the $i$th iterator by $i$, and zips them back together.
Step2: <!-- TEASER_END -->
See some example calls below.
Step4: Now we can define our function, which (lazily) evaluates the prime factors of all integers greater than 1 and iterates over them $n$-wise (can be thought of as a sliding window of size $n$.) We use len to count the number of distinct factors (since the factors are returned as a Counter) and return the window if the numer of distinct prime factors of all numbers in the window is equal to $m$.
Step5: We could modify the loop in the method to return the first number from the window, rather than the prime factors of all the numbers in the window, but this makes our implementation unecessarily long and messy. Also, the prime factorizations actually yield more information, and while we could return both, we can also just uniquely determine the multiple from the prime factors.
Step6: Define the prod function, which is analogous to Python's built-in sum function.
Step7: By convention, $x^0 = 1$ for all $x$ and $0! = 1$
Step8: Evaluate $2^3 \cdot 3$
Step9: Now we can get the actual number, rather than the prime factorization. | Python Code:
%load_ext autoreload
%autoreload 2
from common.utils import prime_factors
from itertools import count, tee
from six.moves import map, reduce, zip
Explanation: The first two consecutive numbers to have two distinct prime factors are:
$$
14 = 2 × 7 \
15 = 3 × 5
$$
The first three consecutive numbers to have three distinct prime factors are:
$$
644 = 2² × 7 × 23 \
645 = 3 × 5 × 43 \
646 = 2 × 17 × 19.
$$
Find the first four consecutive integers to have four distinct prime factors. What is the first of these numbers?
Version 1: Fun times with Itertools
First let's load up the prime factorization function we implemented earlier. Recall that this function returns a Counter with the prime factor as the key, and its exponent as the value.
End of explanation
def nwise(iterable, n=2):
iters = tee(iterable, n)
for i, it in enumerate(iters):
for _ in range(i):
next(it, None)
return zip(*iters)
Explanation: Define an $n$-wise iterator over an iterable, inspired by the implementation of pairwise in the Itertools recipes. It gets $n$ independent iterators for the iterable, advances the $i$th iterator by $i$ steps, and zips them back together.
End of explanation
list(nwise(range(10), 9))
list(nwise(range(10), 4))
Explanation: <!-- TEASER_END -->
See some example calls below.
End of explanation
def consecutive_distinct_factors(n, m):
    """The first consecutive n numbers
    to have m distinct prime factors"""
for factors in nwise(map(prime_factors, count(2)), n):
if all(map(lambda x: len(x) == m, factors)):
return factors
consecutive_distinct_factors(2, 2)
Explanation: Now we can define our function, which (lazily) evaluates the prime factors of all integers greater than 1 and iterates over them $n$-wise (this can be thought of as a sliding window of size $n$). We use len to count the number of distinct factors (since the factors are returned as a Counter) and return the window if the number of distinct prime factors of all numbers in the window is equal to $m$.
End of explanation
# recall Python's built-in pow function
pow(3, 3)
c = prime_factors(24); c
# recall that map can be applied
# to a variable number of lists,
# i.e. map(func, (a0, a1, ...), (b0, b1, ...)) -> [func(a0, b0), func(a1, b1), ...]
list(map(pow, c.keys(), c.values()))
Explanation: We could modify the loop in the method to return the first number from the window, rather than the prime factors of all the numbers in the window, but this makes our implementation unnecessarily long and messy. Also, the prime factorizations actually yield more information, and while we could return both, we can also just uniquely determine the multiple from the prime factors.
End of explanation
prod = lambda xs: reduce(lambda x, y: x*y, xs, 1)
Explanation: Define the prod function, which is analogous to Python's built-in sum function.
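As a hedged aside (not part of the original solution), Python 3.8+ ships an equivalent in the standard library, math.prod, which also returns 1 for an empty iterable:
# minimal sketch: the standard-library equivalent on Python 3.8+
from math import prod as std_prod
std_prod([2, 3, 4])  # 24, matching our reduce-based prod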
End of explanation
prod([])
# calculate 6!
prod(range(1, 6+1))
prod(map(pow, c.keys(), c.values()))
multiple = lambda factors: prod(map(pow, factors.keys(), factors.values()))
Explanation: By convention, $x^0 = 1$ for all $x$ and $0! = 1$
End of explanation
multiple(c)
Explanation: Evaluate $2^3 \cdot 3$
End of explanation
list(map(multiple, consecutive_distinct_factors(2, 2)))
list(map(multiple, consecutive_distinct_factors(3, 3)))
list(map(multiple, consecutive_distinct_factors(4, 4)))
Explanation: Now we can get the actual number, rather than the prime factorization.
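Since the puzzle asks for the first of the consecutive integers, a final one-liner (a small sketch built only from the functions defined above) extracts just that leading value:
# minimal sketch: the first of the four consecutive integers with four distinct prime factors
min(map(multiple, consecutive_distinct_factors(4, 4)))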
End of explanation |
11,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Reduction in Python
Erik Tollerud (STScI)
In this notebook we will walk through several of the basic steps required to do data reduction using Python and Astropy. This notebook is focused on "practical" (you decide if that is a code word for "lazy") application of the existing ecosystem of Python packages. That is, it is not a thorough guide to the nitty-gritty of how all these stages are implemented. For that, see other lectures in this session.
Installation/Requirements
This notebook requires the following Python packages that do not come by default with anaconda
Step1: We also do a few standard imports here python packages we know we will need.
Step2: Getting the data
We need to first get the image data we are looking at for this notebook. This is actual data from an observing run by the author using the Palomar 200" Hale Telescope, using the Large Format Camera (LFC) instrument.
The cell below will do just this from the web (or you can try downloading from here
Step3: The above creates a directory called "python_imred_data" - lets examine it
Step4: Exercise
Look at the observing_log.csv file - it's an excerpt from the log. Now look at the file sizes above. What patterns do you see? Can you tell why? Discuss with your neighbor. (Hint
Step5: This shows that this file is a relatively simple image - a single "HDU" (Header + Data Unit). Lets take a look at what that HDU contains
Step6: The header contains all the metadata about the image, while the data stores "counts" from the CCD (in the form of 16-bit integers). Seems sensible enough. Now lets try plotting up those counts.
Step7: Hmm, not very useful, as it turns out.
In fact, astronomical data tend to have dynamic ranges that require a bit more care when visualizing. To assist in this, the astropy.visualization packages has some helper utilities for re-scaling an image look more interpretable (to learn more about these see the astropy.visualization docs).
Step9: Well that looks better. It's now clear this is an astronomical image. However, it is not a very good looking one. It is full of various kinds of artifacts that need removing before any science can be done. We will address how to correct these in the rest of this notebook.
Exercise
Try playing with the parameters in the ImageNormalize class. See if you can get the image to look clearer. The image stretching part of the docs are your friend here!
To simplify this plotting task in the future, we will make a helperfunction that takes in an image and scales it to some settings that seem to look promising. Feel free to adjust this function to your preferences base on your results in the exercise, though!
Step10: CCDData / astropy.nddata
For the rest of this notebook, instead of using astropy.io.fits, we will use a somewhat higher-level view that is intended specifically for CCD (or CCD-like) images. Appropriately enough, it is named CCDData. It can be found in astropy.nddata, a sub-package storing data structures for multidimensional astronomical data sets. Lets open the same file we were just look at with CCDData.
Step11: Looks to be the same file... But not a few differences
Step12: CCDData has several other features like the ability to store (and propogate) uncertainties, or to mark certain pixels as defective, but we will not use that here, as we are focused on getting something out quickly. So lets just try visualizing it using our function from above
Step13: Looks just the same, so looks like it accomplished the same aim but with less dependence on the FITS format. So we will continue on with this data structure.
Overscan and Bias
Our first set of corrections center around removing the "bias" of the image - usually due to the voltage offset necessary to make a CCD work - along with associated pixel-by-pixel offsets due to the electronics.
Understanding Bias and Overscan areas
Examine the observing_log.csv file. It shows that the first several images are called "bias". That seems like a good place to start! Lets try examining one - both the image itself and its header.
Step14: Several things to note here. First, it says it has an exposure time of zero, as expected for a bias. There is also a big glowing strip on the left that is a few hunderd counts. This is probably an example of "amplifier glow", where the readout electronics emit light (or electronic noise) that is picked up by the CCD as it reads out. Forrunately, this pattern, along with the other patterns evident in this image, are mostly static, so they can be removed by simply subtracting the bias.
Another thing that might not be obvious to the untrained eye is that the image dimensions are sllightly larger than a power-of-two. This is not the size of the actual detector. Another hint is in the header in the section called BIASSEC
Step15: Now it is clear that the overscan region is quite different from the rest of the image.
Subtracting overscan
Fortunately, subtracting the overscan region is fairly trivial with the ccdproc package. It has a function to do just this task. Take a look at the docs for the function by executing the cell below.
Step16: Of particular note is the fits_section keyword. This seems like it might be useful because the header already includes a FITS-style "BIASSEC" keyword. Lets try using that
Step17: D'Oh! That didn't work. Why not? The error message, while a bit cryptic, provides a critical clue
Step18: While the image at first glance looks the same, looking at the color bar now reveals that it is centered around 0 rather than ~1100. So we have sucessfully subtracted the row-by-row bias using the overscan region.
However, the image still includes the overscan region. To remove that, we need to trim the image. The FITS header keyword DATASEC is typically used for this purpose, and we can use that here to trim off the overscan section
Step19: Looking closely, though we see another oddity
Step21: It appears as though there is a second overscan region along the columns. We could conceivably choose to use that overscan region as well, but we should be suspicious of this given that it is not part of the BIASSEC mentioned in the header, and is included in DATASEC. So it seems like it might be safer to just trim this region off and trust in the bias to remove the column-wise variations.
We know we need to apply this correction to every image, so to make the operations above repeatable easily, we move them into a single function to perform the overscan correction, which we will call later in the notebook
Step22: Exercise
Both bias images and overscan regions contain read noise (since they are read with the same electronics as everything else). See if you can determine the read noise (in counts) of this chip using the images we've examined thus far. Compare to the LFC spec page and see if you get the same answer as they do (see the table in section 3 - this image is from CCD #0).
Combining and applying a "master bias"
The overscan is good at removing the bias along its scan direction, but will do nothing about pixel-specific bias variations or readout phenomena that are constant per-readout but spatially varying (like the "glow" on the left edge of the image that we saw above). To correct that we will use a set of bias frames and subtract them from the images. Before we can do this, however, we need to combine the biases together into a "master bias". Otherwise we may simply be adding more noise than we remove, as a single bias exposure has the same noise as a single science exposure.
Look at the observing_log.csv file. It shows you which files are biases. The code below conveniently grabs them all into a single python list
Step23: Now we both load these images into memory and apply the overscan correction in a single step.
Step24: Now that we have all the bias images loaded and overscan subtracted, we can combine them together into a single combined (or "master") bias. We use the ccdproc.Combiner class to do median combining (although it supports several other combining algorithms)
Step25: Not too much exciting there to the eye... but it should have better noise properties than any of the biases by themselves. Now lets do the final step in the bias process and try subtracting the bias from the science image. Lets look at the docstring for ccdproc.subtract_bias
Step26: Looks pretty straightforward to use! Just remember that we have to overscan-correct the science image before we apply the bias
Step27: Hark! The bias level is gone, as is the glowing strip on the left. Two artifacts down. It is now also quite evident that there is some kind of interesting astronomical object to the center-left...
Exercise
Opinions vary on the best way to combine biases. Try comparing the statistics of a bias made using the above procedure and one using a simple average (which you should be able to do with bias_combiner). Which one is better? Discuss with your neighbor when you might like one over the other.
Flats
The final image correction we will apply is a flat. Dividing by a flat removes pixel-by-pixel sensitivity variations, as well as some imperfections in instrument optics, dust on filters, etc. Different flats are taken with each filter, so lets look in the observing_log.csv to see which images are flats for the g-band filter
Step28: Now we load these files like we did the biases, applying both the overscan and bias corrections
Step29: Inspecting this bias shows that it has far more counts than the science image, which is good because that means we have I higher S/N than the science. It clearly shows the cross-shaped imprint in the middle that is also present in the science image, as well as vignetting near the bottom).
Now we combine the flats like we did with the biases to get a single flat to apply to science images.
Step30: Now we can do the final correction to our science image
Step31: Note that this includes, by default, automatically normalizing the flat so that the resulting image is close to the true count rate, so there's no need to manually re-normalize the flat (unless you want to apply it many times and don't want to re-compute the normalization). Lets see what happens when we apply the flat to our science image
Step32: Now we're in business! This now looks like an astronomical image of a galaxy and various objects around it, aside from some residual amp glow (discussed below in the "Advanced Exercise").
Exercise
We still haven't gotten quite rid of the glow. It seems there is a slight residual glow in the strip on the left, and a quite prominent glow to the lower-left.
The lower-left glow is apparently time-dependent - that is, it is absent in the biases (0 sec exposure), weak in the flats (70 sec exposure), but much stronger in the science images (300 sec). This means it is probably due to amplifier electronics that emit a continuous glow even when not reading out. That's an annoying thing about semiconductors
Step33: Now we can subtract that from the reduced science image. Note that the units have to match
Step34: Doesn't look too different from the last one by eye... which is expected because it's just a shift in the 0-level. But now seems like a good time to zoom in on that galaxy in the center-left
Step35: Aha! It's a Local Group galaxy, because otherwise you wouldn't be able to see individual stars like that. Lets see if we can identify it. Lets see where the telescope was pointed
Step36: Knowing that this location in the image is pretty close to the center, we can try using NED (the NASA/IPAC Extragalactic Database) to identify the object. Go to NED's web site and click the "near position" search. Enter in the coordinates, and hit search. You'll see plenty of objects, but only one that's actually a Local Group galaxy.
For comparison purposes, lets also look at this field in the Sloan Digital Sky Survey. Go to the SDSS Sky Navigate Tool, and enter in the same coordinates as above. You'll land right on a field that should look like our target galaxy. This means we can use the SDSS as a comparison to see if our photometric measurements make any sense. Keep that window open for reference later on.
For now, we note two objects that are more compact than the Local Group galaxy and therefore more amenable to a comparison with SDSS
Step37: Note the "SECPIX1"/"SECPIX2" keywords, which give the number of arcseconds per pixel. While this is relatively straightforward, we e can use astropy.units machinery in a way that makes it foolproof (i.e., keeps you from getting pixel/arcsec or arcsec/pixel confused)
Step38: Now we can use this to define a photutils CircularAperture object. These objects also require positions, so we'll pick the positions of the two objects we identified above
Step39: Conveniently, these apertures can even plot themselves
Step40: And indeed we see they are centered on the right objects.
Now we simply perform photometry on these objects using photutils.aperture_photometry
Step41: While it has many options, the defaults are probably sufficient for our needs right now
Step42: The results given are fluxes in units of counts, which we can convert to "instrumental" magnitudes and put into the table
Step43: While the value of these magnitudes is instrument-specific, the difference of these magnitudes should match the difference between any other instrument, including the SDSS. Find the two objects in the navigate view. When you click on one of them in the window, you can choose "explore" on the right, and then the "PhotoObj" link on the left sidebar in the explore window that comes up. Record "fiberMag_g" row in that table. Repeat for the other galaxy, and compare the difference that you compute below.
Step44: You should get the SDSS and your computed difference to be within ~0.001 mags if all went well in the previous steps. Guess we're on the right track!
Exercise
The astroquery package contains a sub-package astroquery.sdss that can be used to programatically search the SDSS for objects. See if you can use that to automate the process we followed above to compare SDSS and our image.
Note
Step45: The thresholding algorithm here is straightforward and fairly efficient interpretation, but it requires a threshold above which "isolated" pixels are considered separate objects. photutils provides automated ways to do this, but for now we will do it manually
Step46: Simply eye-balling this histogram reveals that the background fluctuations are at the level of ~20 counts. So if we want a 3-sigma threshold, we use 60. We also, somewhat arbitrary, require at last 5 pixels for a source to be included
Step47: The resulting object also has some utilities to help plot itself
Step48: We can clearly see our two objects from before standing out as distinct sources. Lets have a closer look
Step49: Now lets try to figure out which of the many objects in the source table are ours, and try making measurements of these objects directly from these threshold maps
Step50: Now sanity-check that they are actually the correct objects
Step51: Now that we've identified them, we can see all the information the photutils.source_properties computes. The one thing it does not do is instrumental magnitudes, so we add that manually. | Python Code:
import ccdproc
ccdproc.__version__
import photutils
photutils.__version__
Explanation: Image Reduction in Python
Erik Tollerud (STScI)
In this notebook we will walk through several of the basic steps required to do data reduction using Python and Astropy. This notebook is focused on "practical" (you decide if that is a code word for "lazy") application of the existing ecosystem of Python packages. That is, it is not a thorough guide to the nitty-gritty of how all these stages are implemented. For that, see other lectures in this session.
Installation/Requirements
This notebook requires the following Python packages that do not come by default with anaconda:
CCDProc (>= v1.3)
Photutils (>= v0.4)
Both of these should be available on the astropy (or conda-forge) channels. So you can get them with conda by doing conda install -c astropy ccdproc photutils.
If for some reason this doesn't work, pip install <packagename> should also work for either package.
Regardless of how you install them, you may need to restart your notebook kernel to recognize the packages are present. Run the cells below to ensure you have the right versions.
End of explanation
from glob import glob
import numpy as np
from astropy import units as u
%matplotlib inline
from matplotlib import pyplot as plt
Explanation: We also do a few standard imports here: Python packages we know we will need.
End of explanation
!wget http://www.stsci.edu/\~etollerud/python_imred_data.tar
!tar xf python_imred_data.tar
Explanation: Getting the data
We first need to get the image data we will be looking at in this notebook. This is actual data from an observing run by the author using the Palomar 200" Hale Telescope with the Large Format Camera (LFC) instrument.
The cell below will do just this from the web (or you can try downloading from here: https://northwestern.box.com/s/4mact3c5xu9wcveofek55if8j1mxptqd), and un-tar it locally to create a directory with the image files. This is an ~200MB download, so might take a bit. If the wifi has gotten bad, try asking a neighbor or your instructor if they have it on a key drive or similar.
End of explanation
ls -lh python_imred_data
Explanation: The above creates a directory called "python_imred_data" - lets examine it:
End of explanation
from astropy.io import fits
data_g = fits.open('python_imred_data/ccd.037.0.fits.gz')
data_g
Explanation: Exercise
Look at the observing_log.csv file - it's an excerpt from the log. Now look at the file sizes above. What patterns do you see? Can you tell why? Discuss with your neighbor. (Hint: the ".gz" at the end is significant here.)
You might find it useful to take a quick look at some of the images with a FITS viewer like ds9 to do this. Feel free to come back to this after looking over some of the files if you don't have an external program.
Loading the data into Python
To have the "lowest-level" view of a fits file, you can use the astropy.io.fits package. It is a direct view into a fits file, which means you have a lot of control of how you look at the file, but because FITS files can store more than just an individual image dataset, it requires some understanding of FITS files. Here we take a quick look at one of the "science" images using this interface.
Quick look with astropy.io.fits
End of explanation
data_g[0].header
data_g[0].data
Explanation: This shows that this file is a relatively simple image - a single "HDU" (Header + Data Unit). Lets take a look at what that HDU contains:
End of explanation
plt.imshow(data_g[0].data)
Explanation: The header contains all the metadata about the image, while the data stores "counts" from the CCD (in the form of 16-bit integers). Seems sensible enough. Now lets try plotting up those counts.
End of explanation
from astropy import visualization as aviz
image = data_g[0].data
norm = aviz.ImageNormalize(image,
interval=aviz.PercentileInterval(95),
stretch=aviz.LogStretch())
fig, ax = plt.subplots(1,1, figsize=(6,10))
aim = ax.imshow(image, norm=norm, origin='lower')
plt.colorbar(aim)
Explanation: Hmm, not very useful, as it turns out.
In fact, astronomical data tend to have dynamic ranges that require a bit more care when visualizing. To assist in this, the astropy.visualization package has some helper utilities for re-scaling an image to look more interpretable (to learn more about these, see the astropy.visualization docs).
End of explanation
def show_image(image, percl=99, percu=None, figsize=(6, 10)):
    """Show an image in matplotlib with some basic astronomically-appropriate stretching.
    Parameters
    ----------
    image
        The image to show
    percl : number
        The percentile for the lower edge of the stretch (or both edges if ``percu`` is None)
    percu : number or None
        The percentile for the upper edge of the stretch (or None to use ``percl`` for both)
    figsize : 2-tuple
        The size of the matplotlib figure in inches
    """
if percu is None:
percu = percl
percl = 100-percl
norm = aviz.ImageNormalize(image, interval=aviz.AsymmetricPercentileInterval(percl, percu),
stretch=aviz.LogStretch())
fig, ax = plt.subplots(1,1, figsize=figsize)
plt.colorbar(ax.imshow(image, norm=norm, origin='lower'))
Explanation: Well that looks better. It's now clear this is an astronomical image. However, it is not a very good looking one. It is full of various kinds of artifacts that need removing before any science can be done. We will address how to correct these in the rest of this notebook.
Exercise
Try playing with the parameters in the ImageNormalize class. See if you can get the image to look clearer. The image stretching part of the docs are your friend here!
To simplify this plotting task in the future, we will make a helper function that takes in an image and scales it to some settings that seem to look promising. Feel free to adjust this function to your preferences based on your results in the exercise, though!
End of explanation
from astropy.nddata import CCDData
ccddata_g = CCDData.read('python_imred_data/ccd.037.0.fits.gz', unit=u.count)
ccddata_g
Explanation: CCDData / astropy.nddata
For the rest of this notebook, instead of using astropy.io.fits, we will use a somewhat higher-level view that is intended specifically for CCD (or CCD-like) images. Appropriately enough, it is named CCDData. It can be found in astropy.nddata, a sub-package storing data structures for multidimensional astronomical data sets. Let's open the same file we were just looking at with CCDData.
End of explanation
ccddata_g.meta # ccddata_g.header is the exact same thing
ccddata_g.data
Explanation: Looks to be the same file... But note a few differences: it's immediately an image, with no need to do [0]. Also note that you had to specify a unit. Some FITS files come with their units specified (allowing files of, e.g., calibrated images to know what their units are), but this one is raw data, so we had to specify the unit.
Note also that the CCDData object knows all the header information, and has a copy of the data just as in the astropy.io.fits interface:
End of explanation
show_image(ccddata_g, 95)
Explanation: CCDData has several other features like the ability to store (and propogate) uncertainties, or to mark certain pixels as defective, but we will not use that here, as we are focused on getting something out quickly. So lets just try visualizing it using our function from above:
End of explanation
im1 = CCDData.read('python_imred_data/ccd.001.0.fits.gz', unit=u.count)
show_image(im1)
im1.header
Explanation: Looks just the same, so looks like it accomplished the same aim but with less dependence on the FITS format. So we will continue on with this data structure.
Overscan and Bias
Our first set of corrections center around removing the "bias" of the image - usually due to the voltage offset necessary to make a CCD work - along with associated pixel-by-pixel offsets due to the electronics.
Understanding Bias and Overscan areas
Examine the observing_log.csv file. It shows that the first several images are called "bias". That seems like a good place to start! Lets try examining one - both the image itself and its header.
End of explanation
show_image(ccddata_g)
plt.xlim(2000,2080)
plt.ylim(0,150);
Explanation: Several things to note here. First, it says it has an exposure time of zero, as expected for a bias. There is also a big glowing strip on the left that is a few hundred counts. This is probably an example of "amplifier glow", where the readout electronics emit light (or electronic noise) that is picked up by the CCD as it reads out. Fortunately, this pattern, along with the other patterns evident in this image, is mostly static, so it can be removed by simply subtracting the bias.
Another thing that might not be obvious to the untrained eye is that the image dimensions are slightly larger than a power-of-two. This is not the size of the actual detector. Another hint is in the header in the section called BIASSEC: this reveals the section of the chip that is used for "overscan" - the readout process is repeated several times past the end of the chip to characterize the average bias for that row. Let's take a closer look at the overscan area. Note that we do this in one of the science exposures because it's not visible in the bias (which is in some sense entirely overscan):
End of explanation
ccdproc.subtract_overscan?
Explanation: Now it is clear that the overscan region is quite different from the rest of the image.
Subtracting overscan
Fortunately, subtracting the overscan region is fairly trivial with the ccdproc package. It has a function to do just this task. Take a look at the docs for the function by executing the cell below.
End of explanation
ccdproc.subtract_overscan(im1, fits_section=im1.header['BIASSEC'])
Explanation: Of particular note is the fits_section keyword. This seems like it might be useful because the header already includes a FITS-style "BIASSEC" keyword. Lets try using that:
End of explanation
im1.header['BIASSEC']
subed = ccdproc.subtract_overscan(im1, fits_section='[2049:2080,:]', overscan_axis=1)
show_image(subed)
subed.shape
Explanation: D'Oh! That didn't work. Why not? The error message, while a bit cryptic, provides a critical clue: the overscan region found is one pixel shorter than the actual image. This is because the "BIASSEC" keyword in this image doesn't follow quite the convention expected for the second (vertical) dimension. So we will just manually fill in a fits_section with the correct overscan region.
End of explanation
trimmed = ccdproc.trim_image(subed, fits_section=subed.meta['DATASEC'])
show_image(trimmed)
trimmed.shape
Explanation: While the image at first glance looks the same, looking at the color bar now reveals that it is centered around 0 rather than ~1100. So we have successfully subtracted the row-by-row bias using the overscan region.
However, the image still includes the overscan region. To remove that, we need to trim the image. The FITS header keyword DATASEC is typically used for this purpose, and we can use that here to trim off the overscan section:
End of explanation
show_image(ccddata_g)
plt.xlim(1000,1050)
plt.ylim(4000,4130);
Explanation: Looking closely, though we see another oddity: even with the trimming done above, it is still not the dimensions the LFC specs say the CCD should be (2048 × 4096 pixels). The vertical direction still has excess pixels. Lets look at these in the science image:
End of explanation
def overscan_correct(image):
    """Subtract the row-wise overscan and trim the non-data regions.
    Parameters
    ----------
    image : CCDData object
        The image to apply the corrections to
    Returns
    -------
    CCDData object
        the overscan-corrected image
    """
subed = ccdproc.subtract_overscan(image, fits_section='[2049:2080,:]', overscan_axis=1)
trimmed = ccdproc.trim_image(subed, fits_section='[1:2048,1:4096]')
return trimmed
Explanation: It appears as though there is a second overscan region along the columns. We could conceivably choose to use that overscan region as well, but we should be suspicious of this given that it is not part of the BIASSEC mentioned in the header, and is included in DATASEC. So it seems like it might be safer to just trim this region off and trust in the bias to remove the column-wise variations.
We know we need to apply this correction to every image, so to make the operations above repeatable easily, we move them into a single function to perform the overscan correction, which we will call later in the notebook:
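As a quick hedged check of the helper (the expected shape below is an assumption based on the DATASEC discussed above), we can run it on the science exposure we have been using and confirm the trimmed dimensions:
# minimal sketch: apply the helper to the science frame and check the trimmed shape
# (expected to be (4096, 2048) after removing both overscan regions)
overscan_correct(ccddata_g).shape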
End of explanation
biasfns = glob('python_imred_data/ccd.00?.0.fits.gz')
biasfns
Explanation: Exercise
Both bias images and overscan regions contain read noise (since they are read with the same electronics as everything else). See if you can determine the read noise (in counts) of this chip using the images we've examined thus far. Compare to the LFC spec page and see if you get the same answer as they do (see the table in section 3 - this image is from CCD #0).
Combining and applying a "master bias"
The overscan is good at removing the bias along its scan direction, but will do nothing about pixel-specific bias variations or readout phenomena that are constant per-readout but spatially varying (like the "glow" on the left edge of the image that we saw above). To correct that we will use a set of bias frames and subtract them from the images. Before we can do this, however, we need to combine the biases together into a "master bias". Otherwise we may simply be adding more noise than we remove, as a single bias exposure has the same noise as a single science exposure.
Look at the observing_log.csv file. It shows you which files are biases. The code below conveniently grabs them all into a single python list:
End of explanation
biases = [overscan_correct(CCDData.read(fn, unit=u.count)) for fn in biasfns]
# The above cell uses Python's "list comprehensions", which are faster and more
# compact than a regular for-loop. But if you have not seen these before,
# it's useful to know they are exactly equivalent to this:
# biases = []
# for fn in biasfns:
# im = overscan_correct(CCDData.read(fn, unit=u.count))
# biases.append(im)
Explanation: Now we both load these images into memory and apply the overscan correction in a single step.
End of explanation
bias_combiner = ccdproc.Combiner(biases)
master_bias = bias_combiner.median_combine()
show_image(master_bias)
Explanation: Now that we have all the bias images loaded and overscan subtracted, we can combine them together into a single combined (or "master") bias. We use the ccdproc.Combiner class to do median combining (although it supports several other combining algorithms):
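For reference, here is a hedged sketch of one alternative the Combiner supports (sigma-clip outliers first, then average-combine); the median-combined master_bias above remains the one used for the rest of this notebook:
# a sketch of an alternative combination, not used further below
clipped_combiner = ccdproc.Combiner(biases)
clipped_combiner.sigma_clipping(low_thresh=3, high_thresh=3)
master_bias_avg = clipped_combiner.average_combine()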
End of explanation
ccdproc.subtract_bias?
Explanation: Not too much exciting there to the eye... but it should have better noise properties than any of the biases by themselves. Now lets do the final step in the bias process and try subtracting the bias from the science image. Lets look at the docstring for ccdproc.subtract_bias
End of explanation
ccddata_g_corr = overscan_correct(ccddata_g)
ccd_data_g_unbiased = ccdproc.subtract_bias(ccddata_g_corr, master_bias)
show_image(ccd_data_g_unbiased, 10, 99.8)
Explanation: Looks pretty straightforward to use! Just remember that we have to overscan-correct the science image before we apply the bias:
End of explanation
flat_g_fns = glob('python_imred_data/ccd.01[4-6].0.fits.gz')
flat_g_fns
Explanation: Hark! The bias level is gone, as is the glowing strip on the left. Two artifacts down. It is now also quite evident that there is some kind of interesting astronomical object to the center-left...
Exercise
Opinions vary on the best way to combine biases. Try comparing the statistics of a bias made using the above procedure and one using a simple average (which you should be able to do with bias_combiner). Which one is better? Discuss with your neighbor when you might like one over the other.
Flats
The final image correction we will apply is a flat. Dividing by a flat removes pixel-by-pixel sensitivity variations, as well as some imperfections in instrument optics, dust on filters, etc. Different flats are taken with each filter, so lets look in the observing_log.csv to see which images are flats for the g-band filter:
End of explanation
# These steps could all be one single list comprehension. However,
# breaking them into several lines makes it much clearer which steps
# are being applied. In practice the performance difference is
# ~microseconds, far less than the actual execution time for even a
# single image
flats_g = [CCDData.read(fn, unit=u.count) for fn in flat_g_fns]
flats_g = [overscan_correct(flat) for flat in flats_g]
flats_g = [ccdproc.subtract_bias(flat, master_bias) for flat in flats_g]
show_image(flats_g[0], 90)
Explanation: Now we load these files like we did the biases, applying both the overscan and bias corrections:
End of explanation
flat_g_combiner = ccdproc.Combiner(flats_g)
# feel free to choose a different combine algorithm if you developed a preference in the last exercise
combined_flat_g = flat_g_combiner.median_combine()
show_image(combined_flat_g, 90)
Explanation: Inspecting this flat shows that it has far more counts than the science image, which is good because that means it has a higher S/N than the science exposure. It clearly shows the cross-shaped imprint in the middle that is also present in the science image, as well as the vignetting near the bottom.
Now we combine the flats like we did with the biases to get a single flat to apply to science images.
End of explanation
ccdproc.flat_correct?
Explanation: Now we can do the final correction to our science image: dividing by the flat. ccdproc provides a function to do that, too:
End of explanation
ccd_data_g_flattened = ccdproc.flat_correct(ccd_data_g_unbiased, combined_flat_g)
show_image(ccd_data_g_flattened, 10, 99.5)
Explanation: Note that this includes, by default, automatically normalizing the flat so that the resulting image is close to the true count rate, so there's no need to manually re-normalize the flat (unless you want to apply it many times and don't want to re-compute the normalization). Lets see what happens when we apply the flat to our science image:
End of explanation
from astropy.stats import SigmaClip
bkg_estimator = photutils.ModeEstimatorBackground(sigma_clip=SigmaClip(sigma=3.))
# Note: for some versions of numpy you may need to do ``ccd_data_g_flattened.data`` in the line below
bkg_val = bkg_estimator.calc_background(ccd_data_g_flattened)
bkg_val
Explanation: Now we're in business! This now looks like an astronomical image of a galaxy and various objects around it, aside from some residual amp glow (discussed below in the "Advanced Exercise").
Exercise
We still haven't gotten quite rid of the glow. It seems there is a slight residual glow in the strip on the left, and a quite prominent glow to the lower-left.
The lower-left glow is apparently time-dependent - that is, it is absent in the biases (0 sec exposure), weak in the flats (70 sec exposure), but much stronger in the science images (300 sec). This means it is probably due to amplifier electronics that emit a continuous glow even when not reading out. That's an annoying thing about semiconductors: while they are great light absorbers but also great emmitters! Oh well, at least in means our electricity bills are getting cheaper...
In any event, this means the only way to correct for this glow is to use a "dark" in place of a bias. A dark is an exposure of the same time as the target exposure, but with the camera shutter closed. This exposure should then capture the full amplifier glow for a given exposure time. You may have noticed the data files included a darks directory. If you look there you'll see several images, including dark exposures of times appropriate for our images. They have overlapping exposure numbers and no log, because they were taken on a different night as the science data, but around the same time. So you will have to do some sleuthing to figure out which ones to use.
Once you've figured this out, try applying the darks to the images in the same way as the biases and see if you can get rid of the remaining glow. If you get this working, you can use those images instead of the ones derived above for the "Photometry" section.
Advanced Exercise
Due to time constraints, the above discussion has said little about uncertainties. But there is enough information in the images above to compute running per-pixel uncertainties. See if you can do this, and attach them to the final file as the ccd_data_g_flattened.uncertainty attribute (see the nddata and CCDData docs for the details of how to store the type of uncertainty).
Photometry
After the above reductions, opinions begin to diverge wildly on the best way to reduce data. Many, many papers have been written on the right way to do photometry for various conditions or classes of objects. It is an area both of active research and active code development. It is also the subject of many of the lectures this week.
Hence, this final section is not meant to be in any way complete, but rather meant to demonstrate a few ways you might do certain basic photometric measurements in Python. For this purpose, we will rely heavily on photutils, the main Astropy package for doing general photometry.
Background Estimation
Before any photometric measurements can be made of any object, the background flux must be subtracted. In some images the background is variable enough that fairly complex models are required. In other cases, this is done locally as part of the photometering, although that can be problematic in crowded fields. But for many purposes estimating a single background for the whole image is sufficient, and it is that case we will consider here.
Photutils has several background-estimation algorithms available. Here we will use an algorithm meant to estimate the mode of a distribution relatively quickly in the presence of outliers (i.e., the background in a typical astronomical image that's not too crowded with sources):
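As a hedged cross-check (not required for the reduction), a plain sigma-clipped median of the flattened image should land close to the same background value:
# minimal sketch: compare against a simple sigma-clipped median background estimate
from astropy.stats import sigma_clipped_stats
mean_bkg, median_bkg, std_bkg = sigma_clipped_stats(ccd_data_g_flattened.data, sigma=3.0)
median_bkg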
End of explanation
ccd_data_g_bkgsub = ccd_data_g_flattened.subtract(bkg_val*u.count)
show_image(ccd_data_g_bkgsub)
Explanation: Now we can subtract that from the reduced science image. Note that the units have to match:
End of explanation
show_image(ccd_data_g_bkgsub, 12, 99.9, figsize=(12, 10))
plt.xlim(0, 1000)
plt.ylim(2200, 3300)
Explanation: Doesn't look too different from the last one by eye... which is expected because it's just a shift in the 0-level. But now seems like a good time to zoom in on that galaxy in the center-left:
Finding Notable Objects
End of explanation
ccddata_g.header['RA'], ccddata_g.header['DEC']
Explanation: Aha! It's a Local Group galaxy, because otherwise you wouldn't be able to see individual stars like that. Let's see if we can identify it, starting with where the telescope was pointed:
End of explanation
ccddata_g.header
Explanation: Knowing that this location in the image is pretty close to the center, we can try using NED (the NASA/IPAC Extragalactic Database) to identify the object. Go to NED's web site and click the "near position" search. Enter in the coordinates, and hit search. You'll see plenty of objects, but only one that's actually a Local Group galaxy.
For comparison purposes, lets also look at this field in the Sloan Digital Sky Survey. Go to the SDSS Sky Navigate Tool, and enter in the same coordinates as above. You'll land right on a field that should look like our target galaxy. This means we can use the SDSS as a comparison to see if our photometric measurements make any sense. Keep that window open for reference later on.
For now, we note two objects that are more compact than the Local Group galaxy and therefore more amenable to a comparison with SDSS: the two background galaxies that are directly to the right of the Local Group object.
Aperture Photometry
The simplest form of photometry is simply drawing apertures (often circular) around an object and counting the flux inside that aperture. Since this process is so straightforward, we will us it as a sanity check for comparing our image to the SDSS.
First, we need to pick an aperture. SDSS provides several aperture photometry measurements, but the easiest to find turns out to be 3" diameter apertures (available on SDSS as "FIBERMAG"). We need to compute how many pixels in our image correspond to a 3" aperture. Let's choose to trust the FITS headers, which give a plate scale.
End of explanation
scale_eq = u.pixel_scale(ccddata_g.header['SECPIX1']*u.arcsec/u.pixel)
fibermag_ap_diam = (3*u.arcsec).to(u.pixel, scale_eq)
fibermag_ap_diam
Explanation: Note the "SECPIX1"/"SECPIX2" keywords, which give the number of arcseconds per pixel. While this is relatively straightforward, we e can use astropy.units machinery in a way that makes it foolproof (i.e., keeps you from getting pixel/arcsec or arcsec/pixel confused):
End of explanation
positions = [(736., 2601.5), (743., 2872.)]
apertures = photutils.CircularAperture(positions, r=fibermag_ap_diam.value/2)
Explanation: Now we can use this to define a photutils CircularAperture object. These objects also require positions, so we'll pick the positions of the two objects we identified above:
End of explanation
show_image(ccd_data_g_bkgsub, 12, 99.9, figsize=(6, 10))
apertures.plot(color='red')
plt.xlim(600, 800)
plt.ylim(2530, 2920)
Explanation: Conveniently, these apertures can even plot themselves:
End of explanation
photutils.aperture_photometry?
Explanation: And indeed we see they are centered on the right objects.
Now we simply perform photometry on these objects using photutils.aperture_photometry
End of explanation
apphot = photutils.aperture_photometry(ccd_data_g_bkgsub, apertures)
apphot
Explanation: While it has many options, the defaults are probably sufficient for our needs right now:
End of explanation
apphot['aperture_mags'] = u.Magnitude(apphot['aperture_sum'])
apphot
Explanation: The results given are fluxes in units of counts, which we can convert to "instrumental" magnitudes and put into the table:
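For clarity, u.Magnitude here corresponds to the usual instrumental magnitude, m = -2.5 log10(flux); a minimal sketch of the equivalent manual computation:
# minimal sketch: the same instrumental magnitudes computed by hand
-2.5 * np.log10(np.asarray(apphot['aperture_sum']))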
End of explanation
apphot['aperture_mags'][1] - apphot['aperture_mags'][0]
Explanation: While the value of these magnitudes is instrument-specific, the difference between them should match the difference measured by any other instrument, including the SDSS. Find the two objects in the navigate view. When you click on one of them in the window, you can choose "explore" on the right, and then the "PhotoObj" link on the left sidebar in the explore window that comes up. Record the "fiberMag_g" row in that table. Repeat for the other galaxy, and compare the difference with the one computed below.
End of explanation
photutils.detect_sources?
Explanation: You should get the SDSS and your computed difference to be within ~0.001 mags if all went well in the previous steps. Guess we're on the right track!
Exercise
The astroquery package contains a sub-package astroquery.sdss that can be used to programmatically search the SDSS for objects. See if you can use that to automate the process we followed above to compare SDSS and our image.
Note: you may need to install astroquery the same way as you did ccdproc or photutils (see the top of this notebook).
To go one step further, try using the SDSS to calibrate (at least, roughly) our measurements. This will require identifying matching objects in the field (ideally fairly bright stars) and using them to compute the instrumental-to-$g$-band offset. See if you get magnitudes to match the SDSS on other objects.
Source Detection using Thresholding
As with photometry, source-finding is a complex subject. Here we overview a straightforward algorithm that photutils provides to find heterogeneous objects in an image. Note that this is not optimal if you are only looking for stars. photutils provides several star-finders that are better-suited for that problem (but are not covered further here).
Have a look at the options to the relevant photutils function:
End of explanation
plt.hist(ccd_data_g_bkgsub.data.flat, histtype='step',
bins=100, range=(-100, 200))
plt.xlim(-100, 200)
plt.tight_layout()
Explanation: The thresholding algorithm here is straightforward and fairly efficient, but it requires a threshold above which "isolated" pixels are considered separate objects. photutils provides automated ways to do this, but for now we will do it manually:
End of explanation
ccd_data_g_bkgsub.shape
# as above, for some numpy versions you might need a `.data` to get this to work
srcs = photutils.detect_sources(ccd_data_g_bkgsub, 60, 5)
Explanation: Simply eye-balling this histogram reveals that the background fluctuations are at the level of ~20 counts. So if we want a 3-sigma threshold, we use 60. We also, somewhat arbitrarily, require at least 5 pixels for a source to be included:
End of explanation
plt.figure(figsize=(8, 16))
plt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')
Explanation: The resulting object also has some utilities to help plot itself:
End of explanation
plt.figure(figsize=(6, 10))
plt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')
apertures.plot(color='red')
plt.xlim(600, 800)
plt.ylim(2530, 2920)
Explanation: We can clearly see our two objects from before standing out as distinct sources. Lets have a closer look:
End of explanation
# if you had to do `.data` above, you'll need to add it here too to get all the cells below to work
src_props = photutils.source_properties(ccd_data_g_bkgsub, srcs)
srcid = np.array(src_props.id)
x = src_props.xcentroid.value
y = src_props.ycentroid.value
src0_id = srcid[np.argmin(np.hypot(x-positions[0][0] , y-positions[0][1]))]
src1_id = srcid[np.argmin(np.hypot(x-positions[1][0] , y-positions[1][1]))]
msk = np.in1d(srcid, [src0_id, src1_id])
x[msk], y[msk]
Explanation: Now lets try to figure out which of the many objects in the source table are ours, and try making measurements of these objects directly from these threshold maps:
End of explanation
plt.figure(figsize=(8, 16))
plt.imshow(srcs, cmap=srcs.cmap('#222222'), origin='lower')
plt.scatter(x[msk], y[msk], c='r', s=150, marker='*')
plt.xlim(600, 800)
plt.ylim(330+2200, 720+2200)
Explanation: Now sanity-check that they are actually the correct objects:
End of explanation
src_tab = src_props.to_table()
if src_tab['source_sum'].unit == u.count:
src_tab['source_mags'] = u.Magnitude(src_tab['source_sum'])
else:
# this is the case for if you had to do the ``.data`` work-around in the ``source_properties`` call
src_tab['source_mags'] = u.Magnitude(src_tab['source_sum']*u.count)
src_tab[msk]
Explanation: Now that we've identified them, we can see all the information the photutils.source_properties computes. The one thing it does not do is instrumental magnitudes, so we add that manually.
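As a closing hedged check (assuming the two masked rows come out in the same order as in the aperture table), the magnitude difference of our two galaxies can be recomputed from the segmentation photometry and compared with the aperture-based value from earlier:
# minimal sketch: relative magnitude of the two galaxies from the segmentation photometry
seg_mags = src_tab['source_mags'][msk]
seg_mags[1] - seg_mags[0]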
End of explanation |
11,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic orientation to ticdat, pandas and developing engines for Opalytics
One of the advantages of Python is that it has "batteries included". That is to say, there is a rich set of libraries available for installation. Of course, with such a large collection of libraries to choose from, it's natural to wonder how different libraries relate to each other, and which to choose for a given situation.
This notebook addresses the ticdat and pandas libraries. It is a good starting point if you are a pythonic and pandonic programmer who wishes to develop Opalytics-ready data science engines as quickly as possible.
ticdat was developed to promote modular solve engine development. It facilitates the pattern under which a solve function publishes its input and output data formats.
Specifically, a solve engine creates two TicDatFactory objects. One defines the input schema and the other the output schema. Although you are encouraged to add as many data integrity rules as possible to these objects (particularly the input object), you only need to specify the table and field names, and to organize the fields into primary key fields and data fields.
For example, in the diet example, the dietmodel.py file has the following lines.
Step1: Here, the dataFactory object defines an input schema. This schema has three tables (categories, foods, and nutritionQuantities). The categories table is indexed by a single field (name) and has two data fields (minNutrition and maxNutrition). The nutritionQuantities table is indexed by two fields (food and category) and has one data field (qty).
Any code wishing to run the solve function can learn what type of data object to pass as input by examining the dataFactory object. The dietcsvdata.py, dietstaticdata.py and dietxls.py scripts demonstrate this pattern by sourcing data from a sub-directory of csv files, a static data instance, and an xls file, respectively. Were Opalytics to deploy dietmodel, it would perform work roughly analogous to that performed by these three files, except Opalytics would source the input data from the Opalytics Cloud Platform.
Let's examine what a TicDat object created by dataFactory looks like. To do this, we're going to pull in some sample testing data hard coded in the ticdat testing code.
Step2: dietData is a TicDat object. It is an instance of the schema defined by dataFactory. By default, it stores its data in a "dict of dicts" format.
Step3: However, since you are pandonic, you might prefer to have a copy of this data in pandas format. This is easy to do.
Step4: Note that these aren't "raw" DataFrame objects. Intead, ticdat has inferred sensible indexes for you from the primary key field designations in dataFactory. The nutritionQuantities table has a MultiIndex and the foods and categories table each have a simple index.
By default, copy_to_pandas will drop the columns that are used to populate the index, unless doing so would result in a DataFrame with no columns at all. However, if you wish for no columns to be dropped under any circumstances, you can use the optional drop_pk_columns argument. This is illustrated below.
Step5: Let's review.
dataFactory describes the input schema
The solve function doesn't know where its input data is coming from. It only knows that it will conform to the schema defined by dataFactory. (All of my examples include at least one assert statement double checking this assumption).
By default, the input tables will be in the default "dict of dicts" format. However, it's easy to create a copy of the data which creates a DataFrame for each table.
This summarizes how a solve function can specify its input data and reformat this data as needed. Let's now examine how solve will return data.
The following code specifies a return schema.
Step6: This schema has three tables (parameters, buyFood, consumeNutrition). The parameters table has no primary key fields at all, and just a single data field. (It is assumed that this table will have at most one record). The buyFood table is indexed by the food field, and has a single data field indicating how much of that food is to be consumed. consumeNutrition is similar, except it defines the quantity consumed for each nutrition type.
(As an aside, only the buyFood table is really needed. The total cost and the quantities of nutrition consumed for each nutrition type can be inferred from the consumption of food and the input data. However, it often makes good sense for the solve routine to compute mathematically redundant tables purely for reporting purposes).
How can the solve code return an object of this type? The easiest way is to create an empty TicDat object, and populate it row by row. This is particularly easy for this schema because all the tables have but one data field. (We're going to skip populating the parameters table because "no primary key" tables are a little different).
Step7: ticdat overrides __setitem__ for single data field tables so as to create the following.
Step8: Here are a couple of other, equivalent ways to populate these seven records.
Step9: But wait! You're pandonic! Fair enough. Here are a few ways to initialize a TicDat object with Series and DataFrame objects.
First, lets make two DataFrames for the two output tables.
Step10: As you can see, these DataFrames are consistent with the format expected by solutionFactory.
Step11: As a result, they can be used to create a solutionFactory.TicDat object. Just pass the DataFrame objects as the correct named arguments when creating the TicDat.
Step12: But wait! There's even more. Because the data tables here have but a single data field, they can accept properly formatted Series objects as well. | Python Code:
from ticdat import TicDatFactory, freeze_me
dataFactory = TicDatFactory (
categories = [["name"],["minNutrition", "maxNutrition"]],
foods = [["name"],["cost"]],
nutritionQuantities = [["food", "category"], ["qty"]])
Explanation: Basic orientation to ticdat, pandas and developing engines for Opalytics
One of the advantages of Python is that it has "batteries included". That is to say, there is a rich set of libraries available for installation. Of course, with such a large collection of libraries to choose from, it's natural to wonder how different libraries relate to each other, and which to choose for a given situation.
This notebook addresses the ticdat and pandas libraries. It is a good starting point if you are a pythonic and pandonic programmer who wishes to develop Opalytics-ready data science engines as quickly as possible.
ticdat was developed to promote modular solve engine development. It facilitates the pattern under which a solve function publishes its input and output data formats.
Specifically, a solve engine creates two TicDatFactory objects. One defines the input schema and the other the output schema. Although you are encouraged to add as many data integrity rules as possible to these objects (particularly the input object), you only need to specify the table and field names, and to organize the fields into primary key fields and data fields.
For example, in the diet example, the dietmodel.py file has the following lines.
End of explanation
import ticdat.testing.ticdattestutils as tictest
_tmp = tictest.dietData()
dietData = dataFactory.TicDat(categories = _tmp.categories, foods = _tmp.foods,
nutritionQuantities = _tmp.nutritionQuantities)
Explanation: Here, the dataFactory object defines an input schema. This schema has three tables (categories, foods, and nutritionQuantities). The categories table is indexed by a single field (name) and has two data fields (minNutrition and maxNutrition). The nutritionQuantities table is indexed by two fields (food and category) and has one data field (qty).
Any code wishing to run the solve function can learn what type of data object to pass as input by examining the dataFactory object. The dietcsvdata.py, dietstaticdata.py and dietxls.py scripts demonstrate this pattern by sourcing data from a sub-directory of csv files, a static data instance, and an xls file, respectively. Were Opalytics to deploy dietmodel, it would perform work roughly analogous to that performed by these three files, except Opalytics would source the input data from the Opalytics Cloud Platform.
Let's examine what a TicDat object created by dataFactory looks like. To do this, we're going to pull in some sample testing data hard coded in the ticdat testing code.
End of explanation
dietData.categories
dietData.nutritionQuantities
Explanation: dietData is a TicDat object. It is an instance of the schema defined by dataFactory. By default, it stores its data in a "dict of dicts" format.
End of explanation
panDiet = dataFactory.copy_to_pandas(dietData)
panDiet.categories
panDiet.nutritionQuantities
Explanation: However, since you are pandonic, you might prefer to have a copy of this data in pandas format. This is easy to do.
End of explanation
panDietNoDrop = dataFactory.copy_to_pandas(dietData, drop_pk_columns=False)
panDietNoDrop.categories
Explanation: Note that these aren't "raw" DataFrame objects. Instead, ticdat has inferred sensible indexes for you from the primary key field designations in dataFactory. The nutritionQuantities table has a MultiIndex and the foods and categories tables each have a simple index.
By default, copy_to_pandas will drop the columns that are used to populate the index, unless doing so would result in a DataFrame with no columns at all. However, if you wish for no columns to be dropped under any circumstances, you can use the optional drop_pk_columns argument. This is illustrated below.
End of explanation
solutionFactory = TicDatFactory(
parameters = [[],["totalCost"]],
buyFood = [["food"],["qty"]],
consumeNutrition = [["category"],["qty"]])
Explanation: Let's review.
dataFactory describes the input schema
The solve function doesn't know where its input data is coming from. It only knows that it will conform to the schema defined by dataFactory. (All of my examples include at least one assert statement double checking this assumption).
By default, the input tables will be in the default "dict of dicts" format. However, it's easy to create a copy of the data which creates a DataFrame for each table.
This summarizes how a solve function can specify its input data and reformat this data as needed. Let's now examine how solve will return data.
The following code specifies a return schema.
End of explanation
soln = solutionFactory.TicDat()
soln.buyFood["hamburger"] = 0.6045138888888888
soln.buyFood["ice cream"] = 2.591319444444
soln.buyFood["milk"] = 6.9701388888
soln.consumeNutrition["calories"]= 1800.0
soln.consumeNutrition["fat"]=59.0559
soln.consumeNutrition["protein"]=91.
soln.consumeNutrition["sodium"]=1779.
Explanation: This schema has three tables (parameters, buyFood, consumeNutrition). The parameters table has no primary key fields at all, and just a single data field. (It is assumed that this table will have at most one record). The buyFood table is indexed by the food field, and has a single data field indicating how much of that food is to be consumed. consumeNutrition is similar, except it defines the quantity consumed for each nutrition type.
(As an aside, only the buyFood table is really needed. The total cost and the quantities of nutrition consumed for each nutrition type can be inferred from the consumption of food and the input data. However, it often makes good sense for the solve routine to compute mathematically redundant tables purely for reporting purposes).
How can the solve code return an object of this type? The easiest way is to create an empty TicDat object, and populate it row by row. This is particularly easy for this schema because all the tables have but one data field. (We're going to skip populating the parameters table because "no primary key" tables are a little different).
End of explanation
soln.buyFood
soln.consumeNutrition
Explanation: ticdat overrides __setitem__ for single data field tables so as to create the following.
End of explanation
soln = solutionFactory.TicDat()
soln.buyFood["hamburger"]["qty"] = 0.6045138888888888
soln.buyFood["ice cream"]["qty"] = 2.591319444444
soln.buyFood["milk"]["qty"] = 6.9701388888
soln.consumeNutrition["calories"]["qty"] = 1800.0
soln.consumeNutrition["fat"]["qty"] = 59.0559
soln.consumeNutrition["protein"]["qty"] = 91.
soln.consumeNutrition["sodium"]["qty"] = 1779.
soln = solutionFactory.TicDat()
soln.buyFood["hamburger"] = {"qty" : 0.6045138888888888}
soln.buyFood["ice cream"] = {"qty" : 2.591319444444}
soln.buyFood["milk"] = {"qty" : 6.9701388888}
soln.consumeNutrition["calories"] = {"qty" : 1800.0}
soln.consumeNutrition["fat"] = {"qty" : 59.0559}
soln.consumeNutrition["protein"] = {"qty" : 91.}
soln.consumeNutrition["sodium"] = {"qty" : 1779.}
Explanation: Here are a couple of other, equivalent ways to populate these seven records.
End of explanation
from pandas import Series, DataFrame
buyDf = DataFrame({"food":['hamburger', 'ice cream', 'milk'],
"qty":[0.6045138888888888, 2.591319444444, 6.9701388888]}).set_index("food")
consumeDf = DataFrame({"category" : ["calories", "fat", "protein", "sodium"],
"qty": [1800.0, 59.0559, 91., 1779.]}).set_index("category")
Explanation: But wait! You're pandonic! Fair enough. Here are a few ways to initialize a TicDat object with Series and DataFrame objects.
First, let's make two DataFrames for the two output tables.
End of explanation
buyDf
consumeDf
Explanation: As you can see, these DataFrames are consistent with the format expected by solutionFactory.
End of explanation
soln = solutionFactory.TicDat(buyFood = buyDf, consumeNutrition = consumeDf)
soln.buyFood
Explanation: As a result, they can be used to create a solutionFactory.TicDat object. Just pass the DataFrame objects as the correct named arguments when creating the TicDat.
End of explanation
buyS = buyDf.qty
consumeS = consumeDf.qty
assert isinstance(buyS, Series) and isinstance(consumeS, Series)
soln = solutionFactory.TicDat(buyFood = buyS, consumeNutrition = consumeS)
soln.consumeNutrition
Explanation: But wait! There's even more. Because the data tables here have but a single data field, they can accept properly formatted Series objects as well.
End of explanation |
11,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discretizing the Schrödinger equation
Author
Step1: Code that controls the simulations
Step2: Routines that initialize and generate the simulation
Step3: Routines that connect the controls in the first cell to the simulation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
import ipywidgets as widgets
from IPython.display import display
L = 200
dx = 2.
buttonrunsim=widgets.Button(description="Simulate")
outwdt = widgets.Output()
centropaquete = widgets.FloatSlider(value=-3*L/4,min=-L,max=L,step=1,
description='centro del paquete',
ensure_option=True,
disabled=False)
anchopaquete = widgets.FloatSlider(value=L/8,min=1,max=L,step=1,
description='ancho del paquete',
ensure_option=True,
disabled=False)
valuek0 = widgets.FloatSlider(value=0,min=0,max=6.28/dx,step=.1,
description='k0',
ensure_option=True,
disabled=False)
valueV0 = widgets.FloatSlider(value=0,min=-4,max=4,step=.1,
description='V0',
ensure_option=True,
disabled=False)
potencial = widgets.Dropdown(options=["Cuadrado","Rampa","Cuadrático","Gaussiano"],
description="Forma del Potencial",
ensure_option=True,
disabled=False)
valueW = widgets.FloatSlider(value=1.,min=1.,max=300.,step=.5,
description='Alcance potencial',
ensure_option=True,
disabled=False)
items = [[widgets.Label("Paquete"),centropaquete,anchopaquete,valuek0,
widgets.Label("Potencial"),valueW,valueV0,potencial,buttonrunsim],[outwdt]]
widgets.HBox([widgets.VBox(it) for it in items])
Explanation: Discretizing the Schrödinger equation
Author: Juan Mauricio Matera (Facultad de Ciencias Exactas, UNLP)
August 12, 2020
The time-dependent Schrödinger equation in one dimension,
$$
{\bf i}\hbar \frac{d}{dt}\psi(x,t) = -\frac{\hbar^2}{2m}\nabla^2 \psi(x,t) + V(x) \psi(x,t)
$$
determines $\psi(x,t)$ from the initial value $\psi(x,t_0)$. Although analytic solutions exist in a few special cases, here we will study its numerical solutions via discretization. A first step
is to make the equation dimensionless:
$$
\frac{2 m (\Delta x)^2}{\hbar}\frac{d}{dt}\psi(x,t) = {\bf i}(\Delta x)^2\nabla^2 \psi(x,t) + (\frac{2 m (\Delta x)^2}{\hbar} V(x)) \psi(x,t)
$$
or
$$
\frac{d}{d \tilde{t}}\tilde{\psi}(\tilde{x},\tilde{t}) =
{\bf i}\nabla^2 \tilde{\psi}(\tilde{x},\tilde{t}) + \tilde{V}(\tilde{x})\tilde{\psi}(\tilde{x},\tilde{t})
$$
with
\begin{eqnarray}
\tilde{t}&=& \frac{2 m (\Delta x)^2}{\hbar}t\\
\tilde{x}&=& x/\Delta x\\
\tilde{V}(\tilde{x})&=& \frac{2 m \Delta x^2}{\hbar^2}V(\tilde{x} \Delta x)\\
\tilde{\psi}(\tilde{x},\tilde{t})&=&\psi(\tilde{x}\Delta x,\tilde{t} \Delta t)
\end{eqnarray}
and with $\Delta x$ chosen small enough that, during the evolution,
$$
\nabla^2\psi(x,t) \approx \frac{\psi(x+\Delta x,t)+\psi(x-\Delta x,t)-2\psi(x,t)}{(\Delta x)^2}
$$
Then, instead of working in the continuum, we can solve the problem on a discrete set of points separated by a distance $\Delta x$. Moreover, instead of working on the whole real line, we will consider points inside an interval $(-L,L)$ and impose periodic boundary conditions, so that
$$
\psi(x+2L,t)=\psi(x,t)\;\;\mbox{and}\;\;V(x+2L)=V(x)
$$
so the problem reduces to studying the time evolution of the wave function at $n=\frac{2 L}{\Delta x}$ points.
Finally, we will assume that the initial state is of the form
$$
\psi(x,t_0)=\sqrt{\rho(x,t_0)}e^{{\bf i}k_0 x}
$$
with $\rho(x,t_0)=\frac{e^{-\frac{(x-x_0)^2}{2 a^2}}}{\sqrt{2\pi a}}$ on the interval considered, such that $-L<x_0<L$, $a+|x_0|\ll L$ and $k<\frac{2\pi}{\Delta x}$.
Simulation
End of explanation
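Before the controls below, a quick self-contained check (added here, not in the original notebook) that the finite-difference Laplacian above behaves as expected: for psi = sin(k x) the discrete second derivative should be close to -k**2 * psi. The _test names are chosen so as not to clobber the notebook's own dx and xs.
import numpy as np
dx_test = 0.01
x_test = np.arange(0.0, 2.0 * np.pi, dx_test)
k_test = 3.0
psi_test = np.sin(k_test * x_test)
lap = (np.roll(psi_test, 1) + np.roll(psi_test, -1) - 2.0 * psi_test) / dx_test**2
# maximum deviation from -k^2 * psi, away from the wrap-around endpoints
print(np.max(np.abs(lap[10:-10] + k_test**2 * psi_test[10:-10])))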
# To set parameters by hand
#### Discretization parameters
dx = 2.
dt = dx**2 * .1
L = 200
#### Initial state
a = 20.
x0 = -L/3
k0 = 10./a
#### Potential
U0= -1.
w = 10.
Explanation: Code that controls the simulations
End of explanation
############ Inicializar #############
def init_state(L=L,dx=dx,a=a,x0=x0,k0=k0):
global rho0
global psi
global xs
xs = np.linspace(-L,L,int(2*L/dx))
rho0 = np.array([np.exp(-(x-x0)**2/(2*a**2)) for x in xs])
rho0[0] = 0
rho0[-1] = .5*rho0[-2]
rho0[1] = .5*rho0[2]
rho0 = rho0/sum(rho0)/dx
psi0 = rho0**.5 * np.exp(1j*k0*(xs-x0))
psi = psi0
return psi
def evolve(psi,xs,pot):
global dt,dx
U = dx**2*pot(xs)
cc = dt/dx**2
deltapsi = ((1.+U)*psi-.5*(np.roll(psi,1)+np.roll(psi,-1)))
psi[:] = psi[:] - cc * 1j * deltapsi[:]
# Corrección a segundo orden
deltapsi = ((1.+U)*deltapsi-.5*(np.roll(deltapsi,1)+np.roll(deltapsi,-1)))
psi[:] = psi[:] - (.5*(cc)**2) * deltapsi[:]
psi[:] = psi[:] /np.sqrt(sum(np.abs(psi)**2))/dx**.5
return psi
def Ugaussiano(xs):
return U0*np.exp(-xs**2/(.5*w**2))
def Uescalon(xs):
return np.array([U0 if abs(x)<.5*w else 0 for x in xs])
def Uarmonico(xs):
return np.array([U0*(1-4*(x/w)**2) if abs(x)<.5 * w else 0 for x in xs])
def Urampadoble(xs):
return np.array([U0*(1-2*abs(x)/w) if abs(x)<.5 * w else 0 for x in xs])
def make_animation_new(ts=50, progress=None):
global psi
global xs
global U0,k0,w,x0,a,U
init_state(a=a,k0=k0,x0=x0)
rho0 = abs(psi)**2
fig1 = plt.figure()
plt.xlim(-L,L)
plt.ylim(-abs(U0)-k0**2-.5,.5+abs(U0)+k0**2)
plt.plot(xs,U(xs),ls="-.")
density,= plt.plot(xs,rho0/max(rho0))
def update_graphic(t,density):
if progress is not None:
progress.value = t
for i in range(100):
evolve(psi,xs,U)
rho = abs(psi)**2
density.set_data(xs,50*(U0+.1)*rho+.5*k0**2)
return density,
line_ani = animation.FuncAnimation(fig1, update_graphic, ts, fargs=(density,),
interval=100, blit=True)
plt.close()
return HTML(line_ani.to_jshtml())
Explanation: Routines that initialize and generate the simulation
End of explanation
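As a usage sketch (an addition, not in the original notebook): the routines above can also be driven without the widgets, using only the functions and module-level parameters already defined in this notebook.
# Headless example: evolve the packet for a while and check the norm.
U = Uescalon
psi = init_state()
for _ in range(1000): evolve(psi, xs, U)
print(np.sum(np.abs(psi)**2) * dx)  # evolve() renormalizes, so this should stay ~1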
def on_button_start_sim(b):
global U0, w, k0, a, x0, U
U0 = valueV0.value
w = valueW.value
k0 = valuek0.value
a = anchopaquete.value
x0 = centropaquete.value
if potencial.value == "Gaussiano":
U = Ugaussiano
elif potencial.value == "Cuadrático":
U = Uarmonico
elif potencial.value == "Cuadrado":
U = Uescalon
elif potencial.value == "Rampa":
U = Urampadoble
else:
print("No encontrado")
return
outwdt.clear_output()
progress = widgets.FloatProgress(value=0,min=0,max=50,step=1,description='Simulating:',
bar_style='info',orientation='horizontal')
with outwdt:
print([a,x0,k0,U0,w])
display(progress)
result = make_animation_new(progress=progress)
outwdt.clear_output()
display(result)
buttonrunsim.on_click(on_button_start_sim)
Explanation: Routines that connect the controls in the first cell to the simulation
End of explanation |
11,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
As of PHOEBE 2.1, PHOEBE uses autofig as an intermediate layer providing high-end functionality on top of matplotlib.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: This first line is only necessary for ipython notebooks - it allows the plots to be shown on this page instead of in interactive mode
Step2: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step3: And we'll attach some dummy datasets. See Datasets for more details.
Step4: And run the forward models. See Computing Observables for more details.
Step5: Showing and Saving
NOTE
Step6: Any call to plot returns 2 objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
Step7: Time (highlight and uncover)
The built-in plot method also provides convenience options to either highlight the interpolated point for a given time, or only show the dataset up to a given time.
Highlight
The highlight option is enabled by default so long as a time (or times) is passed to plot. It simply adds an extra marker at the sent time - interpolating in the synthetic model if necessary.
Step8: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
Step9: To disable highlighting, simply send highlight=False
Step10: Uncover
Uncover shows the observations or synthetic model up to the provided time and is disabled by default, even when a time is provided, but is enabled simply by providing uncover=True. There are no additional options available for uncover.
Step11: Selecting Datasets
In addition to filtering and calling plot on the resulting ParameterSet, plot can accept a twig or filter on any of the available parameter tags.
For this reason, any of the following give identical results
Step12: Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
Step13: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
Step14: For more information on each of the available arrays, see the relevant tutorial on that dataset method
Step15: Units
Likewise, each array that is plotted is automatically plotted in its default units. To override these defaults, simply provide the unit (as a string or as an astropy units object) for a given axis.
Step16: WARNING
Step17: Axes Limits
Axes limits are determined by the data automatically. To set custom axes limits, either use matplotlib methods on the returned axes objects, or pass limits as a list or tuple.
Step18: Errorbars
In the cases of observational data, errorbars can be added by passing the name of the column.
Step19: To disable the errorbars, simply set yerror=None.
Step20: Colors
Colors of points and lines, by default, cycle according to matplotlib's color policy. To manually set the color, simply pass a matplotlib recognized color to the 'c' keyword.
Step21: In addition, you can point to an array in the dataset to use as color.
Step22: Choosing colors works slightly differently for meshes (ie you can set fc for facecolor and ec for edgecolor). For more details, see the tutorial on the MESH dataset.
Colormaps
The colormap is determined automatically based on the parameter used for coloring (ie RVs will be a red-blue colormap). To override this, pass a matplotlib recognized colormap to the cmap keyword.
Step23: Adding a Colorbar
To add a colorbar (or sizebar, etc), send draw_sidebars=True to the plot call.
Step24: Labels and Legends
To add a legend, include legend=True.
For details on placement and formatting of the legend see matplotlib's documentation.
Step25: The legend labels are generated automatically, but can be overridden by passing a string to the label keyword.
Step26: To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend
Step27: Other Plotting Options
Valid plotting options that are directly passed to matplotlib include
Step28: 3D Axes
To plot in 3d, simply pass projection='3d' to the plot call. To override the defaults for the z-direction, pass a twig or array just as you would for x or y. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
As of PHOEBE 2.1, PHOEBE uses autofig as an intermediate layer providing high-end functionality on top of matplotlib.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
Explanation: This first line is only necessary for ipython notebooks - it allows the plots to be shown on this page instead of in interactive mode
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.8
b['ecc'] = 0.1
b['irrad_method'] = 'none'
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('orb', times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
b.set_value('incl@orbit', 90)
b.run_compute(model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(model='run_with_incl_80')
Explanation: And run the forward models. See Computing Observables for more details.
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Showing and Saving
NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:
call b.show or b.savefig after calling plot
use the returned autofig and matplotlib figures however you'd like
pass show=True to the plot method.
pass save='myfilename' to the plot method. (same as calling plt.savefig('myfilename'))
Default Plots
To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
By calling the plot method on the bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
End of explanation
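For instance, outside a notebook one of the saving options listed above might look something like the following sketch (illustrative only; 'default_plot.png' is just a placeholder filename, and it relies on the b.show/b.savefig and save= options named in the text above).
afig, mplfig = b.plot()
b.savefig('default_plot.png')  # or, per the list above: b.plot(save='default_plot.png')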
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True)
Explanation: Any call to plot returns 2 objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, show=True)
Explanation: Time (highlight and uncover)
The built-in plot method also provides convenience options to either highlight the interpolated point for a given time, or only show the dataset up to a given time.
Highlight
The highlight option is enabled by default so long as a time (or times) is passed to plot. It simply adds an extra marker at the sent time - interpolating in the synthetic model if necessary.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight_marker='s', highlight_color='g', highlight_ms=20, show=True)
Explanation: To change the style of the "highlighted" points, you can pass matplotlib recognized markers, colors, and markersizes to the highlight_marker, highlight_color, and highlight_ms keywords, respectively.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(time=1.0, highlight=False, show=True)
Explanation: To disable highlighting, simply send highlight=False
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0.5, uncover=True, show=True)
Explanation: Uncover
Uncover shows the observations or synthetic model up to the provided time and is disabled by default, even when a time is provided, but is enabled simply by providing uncover=True. There are no additional options available for uncover.
End of explanation
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(show=True)
afig, mplfig = b.plot(component='primary', kind='orb', model='run_with_incl_80', show=True)
afig, mplfig = b.plot('primary@orb@run_with_incl_80', show=True)
Explanation: Selecting Datasets
In addition to filtering and calling plot on the resulting ParameterSet, plot can accept a twig or filter on any of the available parameter tags.
For this reason, any of the following give identical results:
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', y='vus', show=True)
Explanation: Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
End of explanation
b['orb01@primary@run_with_incl_80'].qualifiers
Explanation: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
End of explanation
afig, mplfig = b['lc01@dataset'].plot(x='phases', z=0, show=True)
Explanation: For more information on each of the available arrays, see the relevant tutorial on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
Selecting Phase
And to plot in phase we just send x='phases' or x='phases:binary'.
Setting x='phases' will use the ephemeris from the top-level of the hierarchy
(as if you called b.get_ephemeris()), whereas passing a string after the colon,
will use the ephemeris of that component.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xunit='AU', yunit='AU', show=True)
Explanation: Units
Likewise, each array that is plotted is automatically plotted in its default units. To override these defaults, simply provide the unit (as a string or as an astropy units object) for a given axis.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlabel='X POS', ylabel='Z POS', show=True)
Explanation: WARNING: when plotting two arrays with the same dimensions, PHOEBE attempts to set the aspect ratio to equal, but overriding to use two different units can produce undesired results. This may be fixed in the future, but for now can be avoided by using consistent units for the x and y axes when they have the same dimensions.
Axes Labels
Axes labels are automatically generated from the qualifier of the array and the plotted units. To override these defaults, simply pass a string for the label of a given axis.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(xlim=(-2,2), show=True)
Explanation: Axes Limits
Axes limits are determined by the data automatically. To set custom axes limits, either use matplotlib methods on the returned axes objects, or pass limits as a list or tuple.
End of explanation
afig, mplfig = b['lc01@dataset'].plot(yerror='sigmas', show=True)
Explanation: Errorbars
In the cases of observational data, errorbars can be added by passing the name of the column.
End of explanation
afig, mplfig = b['lc01@dataset'].plot(yerror=None, show=True)
Explanation: To disable the errorbars, simply set yerror=None.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(c='r', show=True)
Explanation: Colors
Colors of points and lines, by default, cycle according to matplotlib's color policy. To manually set the color, simply pass a matplotlib recognized color to the 'c' keyword.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', show=True)
Explanation: In addition, you can point to an array in the dataset to use as color.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', cmap='spring', show=True)
Explanation: Choosing colors works slightly differently for meshes (ie you can set fc for facecolor and ec for edgecolor). For more details, see the tutorial on the MESH dataset.
Colormaps
The colormap is determined automatically based on the parameter used for coloring (ie RVs will be a red-blue colormap). To override this, pass a matplotlib recognized colormap to the cmap keyword.
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(x='times', c='vws', draw_sidebars=True, show=True)
Explanation: Adding a Colorbar
To add a colorbar (or sizebar, etc), send draw_sidebars=True to the plot call.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True)
Explanation: Labels and Legends
To add a legend, include legend=True.
For details on placement and formatting of the legend see matplotlib's documentation.
End of explanation
afig, mplfig = b['primary@orb@run_with_incl_80'].plot(label='primary')
afig, mplfig = b['secondary@orb@run_with_incl_80'].plot(label='secondary', legend=True, show=True)
Explanation: The legend labels are generated automatically, but can be overridden by passing a string to the label keyword.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(show=True, legend=True, legend_kwargs={'loc': 'center', 'facecolor': 'r'})
Explanation: To override the position or styling of the legend, you can pass valid options to legend_kwargs which will be passed on to plt.legend
End of explanation
afig, mplfig = b['orb01@primary@run_with_incl_80'].plot(linestyle=':', s=0.1, show=True)
Explanation: Other Plotting Options
Valid plotting options that are directly passed to matplotlib include:
- linestyle
- marker
Note that sizes (markersize, linewidth) should be handled by passing the size to 's' and attempting to set markersize or linewidth directly will raise an error. See also the autofig documention on size scales.
End of explanation
afig, mplfig = b['orb@run_with_incl_80'].plot(time=0, projection='3d', show=True)
Explanation: 3D Axes
To plot in 3d, simply pass projection='3d' to the plot call. To override the defaults for the z-direction, pass a twig or array just as you would for x or y.
End of explanation |
11,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create DataFrame
Step2: Fit The Label Encoder
Step3: View The Labels
Step4: Transform Categories Into Integers
Step5: Transform Integers Into Categories | Python Code:
# Import required packages
from sklearn import preprocessing
import pandas as pd
Explanation: Title: Convert Pandas Categorical Data For Scikit-Learn
Slug: convert_pandas_categorical_column_into_integers_for_scikit-learn
Summary: Convert Pandas Categorical Column Into Integers For Scikit-Learn
Date: 2016-11-30 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
raw_data = {'patient': [1, 1, 1, 2, 2],
'obs': [1, 2, 3, 1, 2],
'treatment': [0, 1, 0, 1, 0],
'score': ['strong', 'weak', 'normal', 'weak', 'strong']}
df = pd.DataFrame(raw_data, columns = ['patient', 'obs', 'treatment', 'score'])
Explanation: Create DataFrame
End of explanation
# Create a label (category) encoder object
le = preprocessing.LabelEncoder()
# Fit the encoder to the pandas column
le.fit(df['score'])
Explanation: Fit The Label Encoder
End of explanation
# View the labels (if you want)
list(le.classes_)
Explanation: View The Labels
End of explanation
# Apply the fitted encoder to the pandas column
le.transform(df['score'])
Explanation: Transform Categories Into Integers
End of explanation
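A possible variation (added here, not part of the original post): since 'score' arguably has a natural order, an explicit mapping keeps that order, whereas LabelEncoder simply sorts the labels alphabetically. The 'score_encoded' column name is just illustrative.
# Illustrative alternative: hand-specified ordinal encoding of the score column
score_map = {'weak': 0, 'normal': 1, 'strong': 2}
df['score_encoded'] = df['score'].map(score_map)
df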
# Convert some integers into their category names
list(le.inverse_transform([2, 2, 1]))
Explanation: Transform Integers Into Categories
End of explanation |
11,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nexa Wall Street Columns Raw Data, Low Resolution vs High Resolution, NData
Here we compare how well the LDA classifier works for both low resolution and high resolution classification when we change the number of letters it can actually use.
Step1: Load the data
Step2: Calculate scalability with Ndata
Inclusive policy
Main parameters
Step3: Do the calculation for low resolution
Step4: Do the calculation for high resolution
Step5: Plot scores as function of Ndata
Step6: Plot them by number of letters used
Step7: Exclusive Policy
Main parameters
Step8: Extract the data
Step9: Do the calculation for low resolution
Step10: Do the calculation for high resolution
Step11: Plot the scores as a function of Ndata
Step12: Plot the scores as a function of letters | Python Code:
import numpy as np
from sklearn import cross_validation
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
import h5py
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import sys
sys.path.append("../")
from aux.raw_images_columns_functions import extract_column_data, extract_letters_to_columns
Explanation: Nexa Wall Street Columns Raw Data, Low Resolution vs High Resolution, NData
Here we compare how well the LDA classifier works for both low resolution and high resolution classification when we change the number of letters it can actually use.
End of explanation
# Load low resolution signal
signal_location_low = '../data/wall_street_data_spaces.hdf5'
with h5py.File(signal_location_low, 'r') as f:
dset = f['signal']
signals_low = np.empty(dset.shape, np.float)
dset.read_direct(signals_low)
# Load high resolution signal
signal_location_high = '../data/wall_street_data_30.hdf5'
with h5py.File(signal_location_high, 'r') as f:
dset = f['signal']
signals_high = np.empty(dset.shape, np.float)
dset.read_direct(signals_high)
# Load the letters
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters_spaces.npy'
letters_sequence = np.load(text_directory)
Explanation: Load the data
End of explanation
MaxNletters = 2500
shift = 1 # Predict within (0) or next letter (1)
policy = 'inclusive' # The type of the policy fo the letter covering
Nside_low = signals_low.shape[1]
max_lag_low = 5
Nside_high = signals_high.shape[1]
max_lag_high = 15
# Low resolution
data_low = extract_column_data(MaxNletters, Nside_low, max_lag_low, signals_low, policy=policy)
letters_low = extract_letters_to_columns(MaxNletters, Nside_low, max_lag_low,
letters_sequence, policy=policy, shift=shift)
# High resolution
data_high = extract_column_data(MaxNletters, Nside_high, max_lag_high, signals_high, policy=policy)
letters_high = extract_letters_to_columns(MaxNletters, Nside_high, max_lag_high,
letters_sequence, policy=policy, shift=shift)
# Now let's do classification for different number of data
print('Policy', policy)
MaxN_lowdata = letters_low.size
MaxN_high_data = letters_high.size
print('Ndata for the low resolution', MaxN_lowdata)
print('Ndata for the high resolution', MaxN_high_data)
Explanation: Calculate scalability with Ndata
Inclusive policy
Main parameters
End of explanation
Ndata_array = np.arange(500, 24500, 500)
score_low = []
for Ndata_class in Ndata_array:
# First we get the classification for low resolution
X = data_low[:Ndata_class, ...].reshape(Ndata_class, Nside_low * max_lag_low)
y = letters_low[:Ndata_class, ...]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = LDA()
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100
score_low.append(score)
Explanation: Do the calculation for low resolution
End of explanation
Ndata_array = np.arange(500, 24500, 500)
score_high = []
for Ndata_class in Ndata_array:
# Now we get the classification for high resolution
X = data_high[:Ndata_class, ...].reshape(Ndata_class, Nside_high * max_lag_high)
y = letters_high[:Ndata_class, ...]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = LDA()
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100
score_high.append(score)
Explanation: Do the calculation for high resolution
End of explanation
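The same scoring loop is repeated for each resolution and each policy; as an illustrative refactor (not in the original notebook) it could be wrapped once and reused. It only relies on the imports already made above (cross_validation and LDA).
# Hypothetical helper factoring out the repeated LDA scoring loop.
def lda_scores(data, letters, n_side, max_lag, ndata_array):
    scores = []
    for n in ndata_array:
        X = data[:n, ...].reshape(n, n_side * max_lag)
        y = letters[:n, ...]
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
        clf = LDA()
        clf.fit(X_train, y_train)
        scores.append(clf.score(X_test, y_test) * 100)
    return scores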
sns.set(font_scale=2)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(Ndata_array, score_high, 'o-', label='high resolution', lw=3, markersize=10)
ax.plot(Ndata_array, score_low, 'o-', label='low resolution', lw=3, markersize=10)
ax.legend()
ax.set_ylim(0, 105)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Ndata')
ax.set_title('Accuracy vs Number of Data for High Resolution (next letter - inclusive policy)')
Explanation: Plot scores as function of Ndata
End of explanation
sns.set(font_scale=2)
fig = plt.figure(figsize=(16, 12))
# Low resolution
ax1 = fig.add_subplot(211)
Nletters_array = Ndata_array / Nside_low
ax1.plot(Nletters_array, score_low, 'o-', lw=3, markersize=10)
ax1.set_ylim(0, 105)
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Nletters')
# High resolution
ax2 = fig.add_subplot(212)
Nletters_array = Ndata_array / Nside_high
ax2.plot(Nletters_array, score_high, 'o-', lw=3, markersize=10)
ax2.set_ylim(0, 105)
ax2.set_ylabel('Accuracy')
ax2.set_xlabel('Nletters')
Explanation: Plot them by number of letters used
End of explanation
MaxNletters = 2500
shift = 1 # Predict within (0) or next letter (1)
policy = 'exclusive' # The type of the policy fo the letter covering
Nside_low = signals_low.shape[1]
max_lag_low = 5
Nside_high = signals_high.shape[1]
max_lag_high = 15
Explanation: Exclusive Policy
Main parameters
End of explanation
# Low resolution
data_low = extract_column_data(MaxNletters, Nside_low, max_lag_low, signals_low, policy=policy)
letters_low = extract_letters_to_columns(MaxNletters, Nside_low, max_lag_low,
letters_sequence, policy=policy, shift=shift)
# High resolution
data_high = extract_column_data(MaxNletters, Nside_high, max_lag_high, signals_high, policy=policy)
letters_high = extract_letters_to_columns(MaxNletters, Nside_high, max_lag_high,
letters_sequence, policy=policy, shift=shift)
# Now let's do classification for different number of data
print('Policy', policy)
MaxN_lowdata = letters_low.size
MaxN_high_data = letters_high.size
print('Ndata for the low resolution', MaxN_lowdata)
print('Ndata for the high resolution', MaxN_high_data)
Explanation: Extract the data
End of explanation
Ndata_array = np.arange(500, 24500, 500)
score_low = []
for Ndata_class in Ndata_array:
# First we get the classification for low resolution
X = data_low[:Ndata_class, ...].reshape(Ndata_class, Nside_low * max_lag_low)
y = letters_low[:Ndata_class, ...]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = LDA()
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100
score_low.append(score)
Explanation: Do the calculation for low resolution
End of explanation
Ndata_array = np.arange(500, 24500, 500)
score_high = []
for Ndata_class in Ndata_array:
# Now we get the classification for high resolution
X = data_high[:Ndata_class, ...].reshape(Ndata_class, Nside_high * max_lag_high)
y = letters_high[:Ndata_class, ...]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = LDA()
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100
score_high.append(score)
Explanation: Do the calculation for high resolution
End of explanation
sns.set(font_scale=2)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(Ndata_array, score_high, 'o-', label='high resolution', lw=3, markersize=10)
ax.plot(Ndata_array, score_low, 'o-', label='low resolution', lw=3, markersize=10)
ax.legend()
ax.set_ylim(0, 105)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Ndata')
ax.set_title('Accuracy vs Number of Data for High Resolution (next letter - exclusive policy)')
Explanation: Plot the scores as a function of Ndata
End of explanation
sns.set(font_scale=2)
fig = plt.figure(figsize=(16, 12))
# Low resolution
ax1 = fig.add_subplot(211)
Nletters_array = Ndata_array / (Nside_low + max_lag_low + 1)
ax1.plot(Nletters_array, score_low, 'o-', lw=3, markersize=10)
ax1.set_ylim(0, 105)
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Nletters')
# High resolution
ax2 = fig.add_subplot(212)
Nletters_array = Ndata_array / (Nside_high + max_lag_high + 1)
ax2.plot(Nletters_array, score_high, 'o-', lw=3, markersize=10)
ax2.set_ylim(0, 105)
ax2.set_ylabel('Accuracy')
ax2.set_xlabel('Nletters')
Explanation: Plot the scores as a function of letters
End of explanation |
11,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
Pandas Website
This tutorial pulls from the Pandas website and the Handson-ML tutorial
Step1: Index is defaulted to start at 0
Step2: Convert dictionary to series, and set the key as the index and the value as the data point
Step3: Series Attributes
Step4: Series Methods
Step5: Dataframes
Dataframes are a 2-dimensional labeled data structure with columns of potentially different types
Step6: Transpose
Step7: Dataframe Attributes
Step8: Dataframe Methods
Step9: Selecting specific rows, columns, or cells
Rows
Referencing a named index
loc is used for label-based indexing
Step10: Referencing a row number
iloc is used for position-based indexing
Step11: Columns
Referencing a named index
Step12: Referencing a row number
Step13: Specific Cells
Step14: Slicing
Step15: people["age"] = 2016 - people["birthyear"] # adds a new column "age"
people["over 30"] = people["age"] > 30 # adds another column "over 30"
birthyears = people.pop("birthyear")
del people["children"]
people
Step16: Auxiliary Methods
Plotting
Step17: Descriptive Statistics | Python Code:
v = pd.Series(np.random.randn(5))
v
Explanation: Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
Pandas Website
This tutorial pulls from the Pandas website and the Handson-ML tutorial: https://github.com/ageron/handson-ml and the Pandas's documentation tutorial
Basic Data Structures: Series and Objects
See: https://pandas.pydata.org/pandas-docs/stable/dsintro.html
Series are a 1D array or a single column vector
End of explanation
index = [1,2,3,4,5]
letters = ['a', 'b', 'c', 'd', 'e']
v = pd.Series(letters, index=index)
v
type(v)
d = {'a' : 0., 'b' : 1., 'c' : 2.} # dictionary object
d
d_v = pd.Series(d)
d_v
Explanation: Index is defaulted to start at 0
End of explanation
d_v['b'] # by index value
d_v[2] # by row position
pd.Series(d, index=['b', 'c', 'd', 'a'])
pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
temperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5]
s7 = pd.Series(temperatures, name="Temperature")
s7.plot()
plt.show()
dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H')
dates
temp_series = pd.Series(temperatures, dates)
temp_series # where dates is the index
temp_series.plot(kind="bar")
plt.grid(True)
plt.show()
temp_series.plot(kind="line")
plt.show()
Explanation: Convert dictionary to series, and set the key as the index and the value as the data point
End of explanation
temp_series
temp_series.shape
temp_series.size
temp_series.dtype
temp_series.hasnans # Does the series have NaN values?
temp_series.values
Explanation: Series Attributes
End of explanation
ones = np.ones(temp_series.size)
temp_series.add(ones)
def square(value):
return value * value
temp_series.apply(square) # apply method over all cells in a Series object
temp_series.at_time('17:30')
temp_series.between_time(start_time='17:30', end_time='19:30')
temp_series.describe()
temp_series.head(3)
for item in temp_series.items():
print("Time: {}, Value: {}".format(item[0], item[1]))
temp_series.mode()
temp_series.value_counts()
temp_series.sort_values()
temp_series.sort_index()
temp_series.to_dict()
temp_series.sample(frac=.25, random_state=42) # return 25% of set
Explanation: Series Methods
End of explanation
people_dict = {
"weight": pd.Series([68, 83, 112], index=["alice", "bob", "charles"]),
"birthyear": pd.Series([1984, 1985, 1992], index=["bob", "alice", "charles"], name="year"),
"children": pd.Series([0, 3], index=["charles", "bob"]),
"hobby": pd.Series(["Biking", "Dancing"], index=["alice", "bob"]),
}
people = pd.DataFrame(people_dict)
people
Explanation: Dataframes
A DataFrame is a 2-dimensional labeled data structure with columns of potentially different types
End of explanation
people.T
people
Explanation: Transpose
End of explanation
people.head(3)
people.tail(2)
people.T
people.shape
Explanation: Dataframe Attributes
End of explanation
people.corr()
import seaborn as sns
import matplotlib
%matplotlib inline
sns.heatmap(people.corr())
Explanation: Dataframe Methods
End of explanation
people.loc['charles']
Explanation: Selecting specific rows, columns, or cells
Rows
Referencing a named index
loc is used for label-based indexing
End of explanation
people.iloc[2,]
people.iloc[2:,]
people['charles'] # error
Explanation: Referencing a row number
iloc is used for position-based indexing
End of explanation
people[['weight']]
people.loc[:,'weight']
Explanation: Columns
Referencing a named index
End of explanation
people.iloc[:,3] # 0-index based
people.ix[3]
people.iloc[:,3]['alice'] # 0-index based
Explanation: Referencing a row number
End of explanation
people
people.iloc[1,0] # Unnamed index, column
people.loc['bob', 'birthyear'] # Named index, column
people.iloc[:,0]['bob'] # Named index, unnamed column
people.loc['bob', :][0] # Named index, unnamed column
people.iloc[1,:]['birthyear'] # Unnamed index, named column
people.loc[:,'birthyear'][1] # Unnamed index, named column
people.iloc[:,0][1] # Unnamed index, unnamed column
people.loc['bob', :]['birthyear'] # Named index, named column
Explanation: Specific Cells
End of explanation
people.iloc[1:3] # return slice of rows, from 2-3
people[people["birthyear"] < 1990]
Explanation: Slicing
End of explanation
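Two more selection patterns that fit here (illustrative additions, not in the original tutorial): label-based slices on loc include both endpoints, and a boolean mask can be combined with a column selection.
people.loc["alice":"bob"]  # label-based row slice (both endpoints included)
people.loc[people["birthyear"] < 1990, ["weight", "hobby"]]  # boolean mask plus column selection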
birthyears
people["pets"] = pd.Series({"bob": 0, "charles": 5, "eugene":1}) # alice is missing, eugene is ignored
people
people.insert(1, "height", [172, 181, 185])
people
Explanation: people["age"] = 2016 - people["birthyear"] # adds a new column "age"
people["over 30"] = people["age"] > 30 # adds another column "over 30"
birthyears = people.pop("birthyear")
del people["children"]
people
End of explanation
people.plot(kind = "scatter", x = "height", y = "weight", s=[40, 120, 200])
plt.show()
people.assign(
body_mass_index = people["weight"] / (people["height"] / 100) ** 2,
has_pets = people["pets"] > 0
)
# Let's look at people again,
people
people.info()
Explanation: Auxiliary Methods
Plotting
End of explanation
people.describe(include='all')
people.height.min()
people.children.max()
Explanation: Descriptive Statistics
End of explanation |
11,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Use Dictionary Comprehension | Python Code:
Officers = {'Michael Mulligan': 'Red Army',
'Steven Johnson': 'Blue Army',
'Jessica Billars': 'Green Army',
'Sodoni Dogla': 'Purple Army',
'Chris Jefferson': 'Orange Army'}
Officers
Explanation: Title: Iterating Over Dictionary Keys
Slug: iterating_over_dictionary_keys_python
Summary: Iterating Over Python Dictionary Keys
Date: 2016-09-06 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Create A Dictionary
End of explanation
# Display all dictionary entries where the key doesn't start with 'Chris'
{keys : Officers[keys] for keys in Officers if not keys.startswith('Chris')}
Explanation: Use Dictionary Comprehension
End of explanation |
11,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 4
Step1: Pandas' data frame has some helpful methods for seeing which values are null
Step2: Side note
Step3: So we can quickly summarize the number of missing values for each feature.
How do we deal with these missing values before passing the data into a model? There are a few options.
Filtering out missing data
One option is to simply remove columns or rows with missing data. The dropna method has several options for choosing when to filter it out
Step4: Ultimately we'll want a dataset with no missing values. Short of removing every row or column that has a missing value, how else can we massage the data?
Note
Step5: Other options to choose from are 'median' and 'most_frequent'.
The estimator API
Note that Imputer objects are similar to the models we used for supervized learning in chapter 3
Step6: Hand mapping ordinal data
Only we know what the meaning of the class labels for an ordinal variable like 'size' is, so we can map it out and apply it
Step7: Mapping nominal features
Two options
Step8: One hot encoding
Mapping nominal data with more than two possible values to numerical values is not a great idea, as it tells our model that e.g 'blue' is of greater value than 'green' (assuming we mapped red, green, blue to 0, 1, 2).
The way to work around this is to transform the feature into multiple binary features using one hot encoding
Step9: Loading in the wine dataset
Step10: note that it doesn't have headers, let's fix that
Step11: Splitting into test / training
Step12: Scaling parameters using standardization
Step13: Selecting meaningful features
L1 Penalty for sparser feature set
L1 regularization results in sparser weight vectors than L2, and can be helpful in pairing down the model if overfitting is detected via test performance being worse than training performance.
Step14: Note that with l1 penalty, some of the features drop out more quickly than others, e.g 'magnesium' is down to zero as C gets to 10^4.
Assessing with the help of Random Forests | Python Code:
import pandas as pd
from io import StringIO
csv_data = '''A,B,C,D
1.0,2.0,3.0,4.0
5.0,6.0,,8.0
10.0,11.0,12.0,'''
df = pd.read_csv(StringIO(csv_data))
df
Explanation: Chapter 4: building good training sets
Handling missing values
Let's start by constructing a simple dataset with some missing values.
End of explanation
df.isnull()
df.isnull().sum()
Explanation: Pandas' data frame has some helpful methods for seeing which values are null:
End of explanation
df.values
Explanation: Side note: while our data is in a data frame, we can always get a numpy array out if we'd like to:
End of explanation
df.dropna()
df.dropna(axis=1)
df.dropna(subset=['C'])
Explanation: So we can quickly summarize the number of missing values for each feature.
How do we deal with these missing values before passing the data into a model? There are a few options.
Filtering out missing data
One option is to simply remove columns or rows with missing data. The dropna method has several options for choosing when to filter it out:
End of explanation
from sklearn.preprocessing import Imputer
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(df)
imputed_data = imr.transform(df.values)
imputed_data
Explanation: Ultimately we'll want a dataset with no missing values. Short of removing every row or column that has a missing value, how else can we massage the data?
Note: we could do something like remove all rows where the majority of variables are missing using df.dropna(thresh=3) and then correct the rows with only one or two missing items using another mechanism so as not to throw out too much of our dataset.
Imputing missing values
The most common way to correct missing data without removing the associated rows is to replace it with the mean value for that variable. pandas makes this easy:
End of explanation
df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'price', 'classlabel']
df
Explanation: Other options to choose from are 'median' and 'most_frequent'.
The estimator API
Note that Imputer objects are similar to the models we used for supervised learning in chapter 3: there's a training step and an application step, and the application step can generalize to new data. This means we could massage the data based on means used in our training data set and then use that same massaging applied to the test data set without having to re-fit on the training set.
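As a minimal sketch of that fit/transform split (hypothetical, not part of the original notebook), the imputation means are learned once on the training data and then reused on unseen rows:
from sklearn.preprocessing import Imputer
import numpy as np
X_train = np.array([[1.0, 2.0], [np.nan, 6.0], [10.0, np.nan]])
X_new = np.array([[np.nan, 3.0]])
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr.fit(X_train)             # learn the column means from the training set only
print(imr.transform(X_new))  # apply those same means to new data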
Mapping categorical data to numbers
Categorical data, including nominal and ordinal variables, need to be mapped to numbers before we can fit them using models.
End of explanation
size_mapping = {'XL': 3,
'L': 2,
'M': 1}
df['size'] = df['size'].map(size_mapping)
df
Explanation: Hand mapping ordinal data
Only we know the meaning of the class labels for an ordinal variable like 'size', so we can map it out by hand and apply it
End of explanation
import numpy as np
class_mapping = {label: idx for idx, label in enumerate(np.unique(df['classlabel']))}
class_mapping
df['classlabel'] = df['classlabel'].map(class_mapping)
df
from sklearn.preprocessing import LabelEncoder
class_le = LabelEncoder()
y = class_le.fit_transform(df['classlabel'].values)
y
Explanation: Mapping nominal features
Two options: the first by hand, the second using a built in helper:
End of explanation
pd.get_dummies(df, columns=['color'])
Explanation: One hot encoding
Mapping nominal data with more than two possible values to numerical values is not a great idea, as it tells our model that e.g 'blue' is of greater value than 'green' (assuming we mapped red, green, blue to 0, 1, 2).
The way to work around this is to transform the feature into multiple binary features using one hot encoding
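For completeness, here is a sketch of the same idea with scikit-learn's OneHotEncoder (this assumes the 'color' column is label-encoded to integers first); pd.get_dummies above is usually the more convenient route:
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
color_codes = LabelEncoder().fit_transform(df['color'].values).reshape(-1, 1)
ohe = OneHotEncoder()
print(ohe.fit_transform(color_codes).toarray())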
End of explanation
df_wine = pd.read_csv('wine.data', header=None)
df_wine.head(3)
Explanation: Loading in the wine dataset
End of explanation
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',
'Proline']
df_wine.head()
Explanation: note that it doesn't have headers, let's fix that
End of explanation
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: Splitting into test / training
End of explanation
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
Explanation: Scaling parameters using standardization
End of explanation
from sklearn.linear_model import LogisticRegression
for penalty in ['l2', 'l1']:
print("with penalty {}".format(penalty))
lr = LogisticRegression(penalty=penalty, C=0.1)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy:', lr.score(X_test_std, y_test))
print('Coefficients: {}'.format(lr.coef_[0]))
print('')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
for penalty in ['l2', 'l1']:
print("With penalty {}".format(penalty))
fig = plt.figure()
ax = plt.subplot(111)
colors = ['blue', 'green', 'red', 'cyan',
'magenta', 'yellow', 'black',
'pink', 'lightgreen', 'lightblue',
'gray', 'indigo', 'orange']
weights, params = [], []
for c in np.arange(-4, 6):
lr = LogisticRegression(penalty=penalty, C=10**c, random_state=0)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10**c)
weights = np.array(weights)
for column, color in zip(range(weights.shape[1]), colors):
plt.plot(params, weights[:, column],
label=df_wine.columns[column + 1],
color=color)
plt.axhline(0, color='black', linestyle='--', linewidth=3)
plt.xlim([10**(-5), 10**5])
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.xscale('log')
plt.legend(loc='upper left')
ax.legend(loc='upper center',
bbox_to_anchor=(1.38, 1.03),
ncol=1, fancybox=True)
plt.show()
Explanation: Selecting meaningful features
L1 Penalty for sparser feature set
L1 regularization results in sparser weight vectors than L2, and can be helpful in paring down the model if overfitting is detected via test performance being worse than training performance.
End of explanation
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000,
random_state=0,
n_jobs=-1)
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]),
importances[indices],
color='lightblue',
align='center')
plt.xticks(range(X_train.shape[1]),
feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
#plt.savefig('./random_forest.png', dpi=300)
plt.show()
Explanation: Note that with l1 penalty, some of the features drop out more quickly than others, e.g 'magnesium' is down to zero as C gets to 10^4.
Assessing with the help of Random Forests
End of explanation |
11,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: PWC-Net-small model training (with cyclical learning rate schedule)
In this notebook we
Step2: TODO
Step3: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
Step4: Configure the training
Step5: Train the model | Python Code:
"""
pwcnet_train.ipynb
PWC-Net model training.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Tensorboard:
[win] tensorboard --logdir=E:\\repos\\tf-optflow\\tfoptflow\\pwcnet-sm-6-2-cyclic-chairsthingsmix
[ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-sm-6-2-cyclic-chairsthingsmix
"""
from __future__ import absolute_import, division, print_function
import sys
from copy import deepcopy
from dataset_base import _DEFAULT_DS_TRAIN_OPTIONS
from dataset_flyingchairs import FlyingChairsDataset
from dataset_flyingthings3d import FlyingThings3DHalfResDataset
from dataset_mixer import MixedDataset
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TRAIN_OPTIONS
Explanation: PWC-Net-small model training (with cyclical learning rate schedule)
In this notebook we:
- Use a small model (no dense or residual connections), 6 level pyramid, upsample level 2 by 4 as the final flow prediction
- Train the PWC-Net-small model on a mix of the FlyingChairs and FlyingThings3DHalfRes dataset using a Cyclic<sub>short</sub> schedule of our own
- The Cyclic<sub>short</sub> schedule oscillates between 5e-04 and 1e-05 for 200,000 steps
Below, look for TODO references and customize this notebook based on your own needs.
Reference
[2018a]<a name="2018a"></a> Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [arXiv] [web] [PyTorch (Official)] [Caffe (Official)]
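For intuition only, here is a small self-contained sketch of a triangular cyclic learning rate oscillating between those two bounds (an illustrative assumption, not code from model_pwcnet):
import numpy as np
def cyclic_lr(step, base_lr=1e-05, max_lr=5e-04, stepsize=20000):
    # Triangular schedule: rises from base_lr to max_lr over `stepsize` steps, then falls back
    cycle = np.floor(1 + step / (2.0 * stepsize))
    x = np.abs(step / float(stepsize) - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * np.maximum(0.0, 1 - x)
print([cyclic_lr(s) for s in (0, 10000, 20000, 30000, 40000)])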
End of explanation
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
_DATASET_ROOT = 'E:/datasets/'
else:
_DATASET_ROOT = '/media/EDrive/datasets/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'
# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training
# Set controller device and devices
# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']
# Here, we use a dual-GPU setup, as shown below
gpu_devices = ['/device:GPU:0', '/device:GPU:1']
controller = '/device:CPU:0'
# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)
# Batch size
batch_size = 8
Explanation: TODO: Set this first!
End of explanation
# TODO: You MUST set the batch size based on the capabilities of your GPU(s)
# Load train dataset
ds_opts = deepcopy(_DEFAULT_DS_TRAIN_OPTIONS)
ds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them
ds_opts['aug_type'] = 'heavy' # Apply all supported augmentations
ds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)
ds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size
ds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)
ds_opts['type'] = 'into_future'
ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
ds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)
# Display dataset configuration
ds.print_config()
Explanation: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
End of explanation
# Start from the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TRAIN_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_dir'] = './pwcnet-sm-6-2-cyclic-chairsthingsmix/'
nn_opts['batch_size'] = ds_opts['batch_size']
nn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]
nn_opts['y_shape'] = [ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 2]
nn_opts['use_tf_data'] = True # Use tf.data reader
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller
# Use the PWC-Net-small model in quarter-resolution mode
nn_opts['use_dense_cx'] = False
nn_opts['use_res_cx'] = False
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2
# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.
# Below,we adjust the schedule to the size of the batch and the number of GPUs.
nn_opts['lr_policy'] = 'cyclic'
nn_opts['cyclic_lr_max'] = 5e-04 # Anything higher will generate NaNs
nn_opts['cyclic_lr_base'] = 1e-05
nn_opts['cyclic_lr_stepsize'] = 20000
nn_opts['max_steps'] = 200000
# Below,we adjust the schedule to the size of the batch and our number of GPUs (2).
nn_opts['max_steps'] = int(nn_opts['max_steps'] * 8 / ds_opts['batch_size'])
nn_opts['cyclic_lr_stepsize'] = int(nn_opts['cyclic_lr_stepsize'] * 8 / ds_opts['batch_size'])
# Instantiate the model and display the model configuration
nn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)
nn.print_config()
Explanation: Configure the training
End of explanation
# Train the model
nn.train()
Explanation: Train the model
End of explanation |
11,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wtp. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bids function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices.
The willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded.
New in version 8
A time step hack, now we don't need a predefined dictionary/array to tell us what month it is
added reserves
restructured the running sequence of the market
got rid of some "clean" functions
Microeconomic Foundations
The market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happens as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
Step1: classes buyers and sellers
Below we are constructing the buyers and sellers in classes.
Step2: Construct the market
For the market two classes are made. The market itself, which controls the buyers and the sellers, and the book. The market has a book where the results of the clearing procedure are stored.
Step3: Observer
The observer holds the clock and collects data. In this setup it tells the market another tick has past and it is time to act. The market will instruct the other agents. The observer initializes the model, thereby making real objects out of the classes defined above.
Step4: Example Market
In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
Step5: run the model
To run the model we create the observer. The observer creates all the other objects and runs the model.
Step6: Operations Research Formulation
The market can also be formulated as a very simple linear program or linear complementarity problem. It is clearer and easier to implement this market clearing mechanism with agents. One merit of the agent-based approach is that we don't need linear or linearizable supply and demand function.
The auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand.
An alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero sum game. Furthermore the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks optimal solution.
Next Steps
A possible addition of this model would be to have a weekly varying demand of customers, for instance caused by the use of natural gas as a heating agent. This would require the bids and asks to be time varying, and for the market to be run over successive time periods. A second addition would be to create transport costs, or enable intermediate goods to be produced. This would need a more elaborate market operator. Another possible addition would be to add a profit maximizing broker. This may require adding belief, fictitious play, or message passing.
The object-orientation of the models will probably need to be further rationalized. Right now the market requires very particular ordering of calls to function correctly.
Step7: Time of last run
Time and date of the last run of this notebook file | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import random as rnd
import pandas as pd
import numpy as np
import time
import datetime
import calendar
# fix what is missing with the datetime/time/calendar package
def add_months(sourcedate,months):
month = sourcedate.month - 1 + months
year = int(sourcedate.year + month / 12 )
month = month % 12 + 1
day = min(sourcedate.day,calendar.monthrange(year, month)[1])
return datetime.date(year,month,day)
Explanation: Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wtp. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bids function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices.
The willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded.
New in version 8
A time step hack, now we don't need a predefined dictionary/array to tell us what month it is
added reserves
restructured the running sequence of the market
got rid of some "clean" functions
Microeconomic Foundations
The market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happens as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
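As a stand-alone illustration of that clearing rule, here is a sketch that mirrors the logic used later in Market.get_clearing_price, written with plain numpy arrays instead of agent objects:
import numpy as np
def clearing_price(bids, asks):
    b = np.sort(np.asarray(bids))[::-1]   # highest willingness to pay first
    s = np.sort(np.asarray(asks))         # lowest willingness to accept first
    n = min(len(b), len(s))
    k = int((b[:n] > s[:n]).sum())        # number of profitable swaps
    return (b[k - 1], k) if k > 0 else (None, 0)
print(clearing_price([20, 18, 15, 12], [11, 14, 16, 19]))  # -> (18, 2)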
Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
End of explanation
# measure how long it takes to run the script
startit = time.time()
dtstartit = datetime.datetime.now()
class Seller():
def __init__(self, name):
self.name = name
self.wta = []
self.step = 0
self.prod = 2000
self.lb_price = 10
self.ub_price = 20
self.reserve = 500000
#multiple market idea, also ga away from market
self.subscr_market = {}
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
n = self.prod
l = self.lb_price
u = self.ub_price
wta = []
for i in range(n):
p = rnd.uniform(l, u)
wta.append(p)
self.wta = wta
def get_name(self):
return self.name
def get_asks(self):
return self.wta
def clear_wta(self):
self.wta = []
def extract(self, cur_extraction):
if self.reserve > 0:
self.reserve = self.reserve - cur_extraction
else:
self.prod = 0
class Buyer():
def __init__(self, name):
self.name = name
self.wtp = []
self.step = 0
self.base_demand = 0
self.max_demand = 0
self.lb_price = 10
self.ub_price = 20
# the supplier has n quantities that they can buy
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self):
n = int(self.consumption(self.step))
l = self.lb_price
u = self.ub_price
wtp = []
for i in range(n):
p = rnd.uniform(l, u)
wtp.append(p)
self.wtp = wtp
# gets a little to obvious
def get_name(self):
return self.name
# return list of willingness to pay
def get_bids(self):
return self.wtp
# is this neccesary?
def clear_wtp(self):
self.wtp = []
def consumption(self, x):
# make it initialise to seller
b = self.base_demand
m = self.max_demand
y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))
return(y)
Explanation: classes buyers and sellers
Below we are constructing the buyers and sellers in classes.
End of explanation
# the book is an object of the market used for the clearing procedure
class Book():
def __init__(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
def set_asks(self,seller_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for seller in seller_list:
seller_name = seller.get_name()
seller_price = seller.get_asks()
for price in seller_price:
self.ledger=self.ledger.append({"role":"seller","name":seller_name,"price":price,"cleared":"in process"},
ignore_index=True)
def set_bids(self,buyer_list):
# ask each seller their name
# ask each seller their willingness
# for each willingness append the data frame
for buyer in buyer_list:
buyer_name = buyer.get_name()
buyer_price = buyer.get_bids()
for price in buyer_price:
self.ledger=self.ledger.append({"role":"buyer","name":buyer_name,"price":price,"cleared":"in process"},
ignore_index=True)
def update_ledger(self,ledger):
self.ledger = ledger
def get_ledger(self):
return self.ledger
def clean_ledger(self):
self.ledger = pd.DataFrame(columns = ("role","name","price","cleared"))
class Market():
def __init__(self):
self.count = 0
self.last_price = ''
self.book = Book()
self.b = []
self.s = []
self.buyer_list = []
self.seller_list = []
self.buyer_dict = {}
self.seller_dict = {}
self.ledger = ''
def update_seller(self):
for i in self.seller_dict:
self.seller_dict[i].step += 1
self.seller_dict[i].set_quantity()
def update_buyer(self):
for i in self.buyer_dict:
self.buyer_dict[i].step += 1
self.buyer_dict[i].set_quantity()
def add_buyer(self,buyer):
self.b.append(buyer)
self.buyer_list.append(buyer)
def add_seller(self,seller):
self.s.append(seller)
self.seller_list.append(seller)
def set_book(self):
self.book.set_bids(self.buyer_list)
self.book.set_asks(self.seller_list)
def get_ledger(self):
self.ledger = self.book.get_ledger()
return self.ledger
def get_bids(self):
# this is a data frame
ledger = self.book.get_ledger()
rows= ledger.loc[ledger['role'] == 'buyer']
# this is a series
prices=rows['price']
# this is a list
bids = prices.tolist()
return bids
def get_asks(self):
# this is a data frame
ledger = self.book.get_ledger()
rows = ledger.loc[ledger['role'] == 'seller']
# this is a series
prices=rows['price']
# this is a list
asks = prices.tolist()
return asks
# return the price at which the market clears
# this fails because there are more buyers then sellers
def get_clearing_price(self):
# buyer makes a bid starting with the buyer which wants it most
b = self.get_bids()
s = self.get_asks()
# highest to lowest
self.b=sorted(b, reverse=True)
# lowest to highest
self.s=sorted(s, reverse=False)
# find out whether there are more buyers or sellers
# then drop the excess buyers or sellers; they won't compete
n = len(b)
m = len(s)
# there are more sellers than buyers
# drop off the highest priced sellers
if (m > n):
s = s[0:n]
matcher = n
# There are more buyers than sellers
# drop off the lowest bidding buyers
else:
b = b[0:m]
matcher = m
# It's possible that not all items sold actually clear the market here
count = 0
for i in range(matcher):
if (self.b[i] > self.s[i]):
count +=1
self.last_price = self.b[i]
# copy count to market
self.count = count
return self.last_price
# TODO: Annotate the ledger
def annotate_ledger(self,clearing_price):
ledger = self.book.get_ledger()
for index, row in ledger.iterrows():
if (row['role'] == 'seller'):
if (row['price'] < clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
else:
if (row['price'] > clearing_price):
ledger.loc[index,'cleared'] = 'True'
else:
ledger.loc[index,'cleared'] = 'False'
self.book.update_ledger(ledger)
def get_units_cleared(self):
return self.count
def clean_ledger(self):
self.ledger = ''
self.book.clean_ledger()
def run_it(self):
self.pre_clearing_operation()
self.clearing_operation()
self.after_clearing_operation()
#pre clearing empty out the last run and start
# clean ledger is kind of sloppy, rewrite functions to overide the ledger
def pre_clearing_operation(self):
self.clean_ledger()
self.update_buyer()
self.update_seller()
def clearing_operation(self):
self.set_book()
clearing_price = self.get_clearing_price()
self.annotate_ledger(clearing_price)
def after_clearing_operation(self):
for i in self.seller_dict:
name = self.seller_dict[i].name
cur_extract = len(self.book.ledger[(self.book.ledger.cleared == 'True') &
(self.book.ledger.name == name)])
self.seller_dict[i].extract(cur_extract)
Explanation: Construct the market
For the market two classes are made. The market itself, which controls the buyers and the sellers, and the book. The market has a book where the results of the clearing procedure are stored.
End of explanation
class Observer():
def __init__(self, x, y, z):
self.init_buyer = x
self.init_seller = y
self.maxrun = z
self.hist_book = []
self.buyer_dict = {}
self.seller_dict = {}
self.timetick = 0
self.gas_market = ''
self.reserve = []
def set_buyer(self, buyer_info):
for name in buyer_info:
self.buyer_dict[name] = Buyer('%s' % name)
self.buyer_dict[name].base_demand = buyer_info[name]['b']
self.buyer_dict[name].max_demand = buyer_info[name]['m']
def set_seller(self, seller_info):
for name in seller_info:
self.seller_dict[name] = Seller('%s' % name)
self.seller_dict[name].prod = seller_info[name][0]
def get_reserve(self):
reserve = []
for name in self.seller_dict:
reserve.append(self.seller_dict[name].reserve)
return reserve
def set_market(self):
self.gas_market = Market()
#add suplliers and buyers to this market
for supplier in self.seller_dict.values():
self.gas_market.add_seller(supplier)
for buyer in self.buyer_dict.values():
self.gas_market.add_buyer(buyer)
self.gas_market.seller_dict = self.seller_dict
self.gas_market.buyer_dict = self.buyer_dict
def run_it(self):
# Timing
# time initialising
startit_init = time.time()
#initialise, setting up all the agents
first_run = True
if first_run:
self.set_buyer(self.init_buyer)
self.set_seller(self.init_seller)
self.set_market()
first_run=False
# time init stop
stopit_init = time.time() - startit_init
print('%s : init' % stopit_init)
for period in range(self.maxrun):
# time the period
startit_period = time.time()
self.timetick += 1
print('#######################################')
period_now = add_months(period_null, self.timetick-1)
print(period_now.strftime('%Y-%b'))
# real action on the market
self.gas_market.run_it()
# data collection
p_clearing = self.gas_market.last_price
q_sold = self.gas_market.count
self.reserve.append([period_now.strftime('%Y-%b'),*self.get_reserve()])
# recording the step_info
# since this operation can take quite a while, print after every operation
period_time = time.time() - startit_period
print('%s : period time' % period_time)
self.hist_book.append([period_now.strftime('%Y-%b'), p_clearing, q_sold])
Explanation: Observer
The observer holds the clock and collects data. In this setup it tells the market another tick has past and it is time to act. The market will instruct the other agents. The observer initializes the model, thereby making real objects out of the classes defined above.
End of explanation
# Show some real consumption data, for more data see folder data analytics
#read montly consumption data of 2010 into a dataframe
df = pd.read_csv('2010cbstestrun.csv', header=0, index_col=0)
df = df.transpose()
#plot the 2010 monthly consumption data
df.plot();
df
# make initialization dictionary
init_buyer = {'elec':{'b':400, 'm' : 673}, 'indu':{'b':400, 'm':1171}, 'home':{'b': 603, 'm': 3615}}
init_seller = {'netherlands' : (2000, 0, 10), 'Russia' : (2000, 0, 10)}
# make a history book to record every timestep
hist_book = []
# set the starting time
period_null= datetime.date(2010,1,1)
Explanation: Example Market
In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
End of explanation
# create observer and run the model
# first data about buyers then sellers and then model ticks
years = 5
timestep = 12
obser1 = Observer(init_buyer, init_seller, years*timestep)
obser1.run_it()
#get the info from the observer
hist_book = obser1.hist_book
# recording the total run
def write_to_csv(hist_book):
f = open('hist_book.csv', 'a')
for item in hist_book:
f.write('%s,%s\n' % (item[0], item[1]))
f.close()
#write_to_csv(hist_book)
# make a dataframe of clearing prices
df_hb = pd.DataFrame(hist_book)
df_hb = df_hb.set_index(0)
df_hb.index.name = 'month'
df_hb.rename(columns={1: 'price', 2: 'quantity'}, inplace=True)
Explanation: run the model
To run the model we create the observer. The observer creates all the other objects and runs the model.
End of explanation
# timeit
stopit = time.time()
dtstopit = datetime.datetime.now()
print('it took us %s seconds to get to this conclusion' % (stopit-startit))
print('in another notation (h:m:s) %s'% (dtstopit - dtstartit))
# print the run results
price = df_hb['price']
fig = price.plot()
plt.ylabel('€ / unit')
plt.show()
quantity = df_hb['quantity']
fig = quantity.plot()
plt.ylabel('quantity')
plt.show()
Explanation: Operations Research Formulation
The market can also be formulated as a very simple linear program or linear complementarity problem. It is clearer and easier to implement this market clearing mechanism with agents. One merit of the agent-based approach is that we don't need linear or linearizable supply and demand function.
The auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand.
An alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero sum game. Furthermore the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks optimal solution.
Next Steps
A possible addition of this model would be to have a weekly varying demand of customers, for instance caused by the use of natural gas as a heating agent. This would require the bids and asks to be time varying, and for the market to be run over successive time periods. A second addition would be to create transport costs, or enable intermediate goods to be produced. This would need a more elaborate market operator. Another possible addition would be to add a profit maximizing broker. This may require adding belief, fictitious play, or message passing.
The object-orientation of the models will probably need to be further rationalized. Right now the market requires very particular ordering of calls to function correctly.
End of explanation
# print the time of last run
print('last run of this notebook:')
time.strftime("%a, %d %b %Y %H:%M:%S", time.localtime())
#df_hb
df_res = pd.DataFrame(obser1.reserve, columns=['time', *[i for i in init_seller]])
df_res = df_res.set_index('time')
df_res.plot();
Explanation: Time of last run
Time and date of the last run of this notebook file
End of explanation |
11,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script implements a Gradient Boosting Machine on the Titanic dataset. This is a boosting algorithm that will be using trees, and will be auto-selecting features to evaluate. Since we're pretty agnostic about everything except prediction, let's just cycle through each of the option values to find some good ones (since we're not doing any backward evaluation after choosing an option, this probably isn't the optimal one, but it should be decent at least).
Step1: Now, what we're going to do is go stepwise through many of the features of the gradient boosting classifier, to figure out which parameters will give us the best fit. We'll go in the following order
Step2: 2.) Learning Rate
Step3: 3.) Count of Estimators
Step4: 4.) Maximum depth of tree
Step5: 5.) The number of samples available in order to make another split
Step6: 6.) The number of samples that must appear in a leaf
Step7: 7.) Min required weighted fraction of samples in a leaf or node
Step8: 8.) Fraction of samples used in each base learner
Step9: 9.) Maximum possible number of nodes | Python Code:
import numpy as np
import pandas as pd
titanic=pd.read_csv('./titanic_clean_data.csv')
cols_to_norm=['Age','Fare']
col_norms=['Age_z','Fare_z']
titanic[col_norms]=titanic[cols_to_norm].apply(lambda x: (x-x.mean())/x.std())
titanic['cabin_clean']=(pd.notnull(titanic.Cabin))
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import KFold
from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
titanic_target=titanic.Survived.values
features=['Sex','SibSp','Parch','Pclass_1','Pclass_2','Pclass_3','Emb_C','Emb_Q','Emb_S',\
'Emb_nan','Age_ct_C','Age_ct_A','Age_ct_S', 'Sp_ct','Age_z','Fare_z',\
'Ti_Dr', 'Ti_Master', 'Ti_Mil', 'Ti_Miss', 'Ti_Mr', 'Ti_Mrs', 'Ti_Other', 'Ti_Rev',\
'Fl_AB', 'Fl_CD', 'Fl_EFG', 'Fl_nan']
titanic_features=titanic[features].values
titanic_features, ensemble_features, titanic_target, ensemble_target= \
train_test_split(titanic_features,
titanic_target,
test_size=.1,
random_state=7132016)
Explanation: This script implements a Gradient Boosting Machine on the Titanic dataset. This is a boosting algorithm that will be using trees, and will be auto-selecting features to evaluate. Since we're pretty agnostic about everything except prediction, let's just cycle through each of the option values to find some good ones (since we're not doing any backward evaluation after choosing an option, this probably isn't the optimal one, but it should be decent at least).
End of explanation
feat_param=['deviance','exponential']
score=0
for feature in feat_param:
clf = GradientBoostingClassifier(loss=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
loss_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print loss_out
Explanation: Now, what we're going to do is go stepwise through many of the features of the gradient boosting classifier, to figure out which parameters will give us the best fit. We'll go in the following order:
loss
learning_rate
n_estimators
max_depth (depth of tree)
min_samples_split (number of samples you need to be able to create a new branch/node)
min_samples_leaf
min_weight_fraction_leaf
subsample
max_features (number of features considered at each split)
max_leaf_nodes (implicitly, if the best is not none, then we'll be ignoring the max depth parameter
warm_start
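(As an aside, a couple of these settings could also be tuned jointly with a grid search; the sketch below is only a hypothetical alternative to the cheaper one-parameter-at-a-time sweep used in this script.)
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions
param_grid = {'learning_rate': [0.05, 0.1, 0.2], 'n_estimators': [100, 300, 500]}
gs = GridSearchCV(GradientBoostingClassifier(random_state=7112016), param_grid, cv=10)
gs.fit(titanic_features, titanic_target)
print(gs.best_params_)
print(gs.best_score_)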
1.) Loss Criteria
End of explanation
score=0
for feature in np.linspace(.05,.45,11):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
rate_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print rate_out
Explanation: 2.) Learning Rate
End of explanation
score=0
for feature in range(100,1001,100):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
feat_n_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print feat_n_out
Explanation: 3.) Count of Estimators
End of explanation
score=0
for feature in range(1,21):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
depth_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print depth_out
Explanation: 4.) Maximum depth of tree
End of explanation
score=0
for feature in range(1,21):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
sample_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print sample_out
Explanation: 5.) The number of samples available in order to make another split
End of explanation
score=0
for feature in range(1,21):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=sample_out,\
min_samples_leaf=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
sample_leaf_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print sample_leaf_out
Explanation: 6.) The number of samples that must appear in a leaf
End of explanation
score=0
for feature in np.linspace(0.0,0.5,10):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=sample_out,\
min_samples_leaf=sample_leaf_out, min_weight_fraction_leaf=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
frac_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print frac_out
Explanation: 7.) Min required weighted fraction of samples in a leaf or node
End of explanation
score=0
for feature in np.linspace(0.1,1,10):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=sample_out,\
min_samples_leaf=sample_leaf_out, min_weight_fraction_leaf=frac_out,\
subsample=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
subsamp_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print subsamp_out
Explanation: 8.) Fraction of samples used in each base learner
End of explanation
node_out=None
for feature in range(2,11):
clf = GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=sample_out,\
min_samples_leaf=sample_leaf_out, min_weight_fraction_leaf=frac_out,\
subsample=subsamp_out, max_leaf_nodes=feature, random_state=7112016)
score_test= cross_val_score(clf,titanic_features,titanic_target,cv=10 )
if score_test.mean()>score:
node_out=feature
score_diff=score_test.mean()-score
score=score_test.mean()
print node_out
model=GradientBoostingClassifier(loss=loss_out, learning_rate=rate_out, n_estimators=feat_n_out,\
max_depth=depth_out, min_samples_split=sample_out,\
min_samples_leaf=sample_leaf_out, min_weight_fraction_leaf=frac_out,\
subsample=subsamp_out, max_leaf_nodes=node_out,\
random_state=7112016).fit(titanic_features, titanic_target)
test_data=pd.read_csv('./test.csv')
test_data.Sex.replace(['male','female'],[True,False], inplace=True)
test_data.Age= test_data.groupby(['Sex','Pclass'])[['Age']].transform(lambda x: x.fillna(x.mean()))
# Fill missing test-set fares with the training-set mean fare for the passenger's class
test_data.Fare = test_data.Fare.fillna(test_data.Pclass.map(titanic.groupby('Pclass')['Fare'].mean()))
titanic_class=pd.get_dummies(test_data.Pclass,prefix='Pclass',dummy_na=False)
test_data=pd.merge(test_data,titanic_class,on=test_data['PassengerId'])
test_data=pd.merge(test_data,pd.get_dummies(test_data.Embarked, prefix='Emb', dummy_na=True), on=test_data['PassengerId'])
titanic['Floor']=titanic['Cabin'].str.extract('^([A-Z])', expand=False)
titanic['Floor'].replace(to_replace='T',value=np.NaN ,inplace=True)
titanic=pd.merge(titanic,pd.get_dummies(titanic.Floor, prefix="Fl", dummy_na=True),on=titanic['PassengerId'])
test_data['Age_cut']=pd.cut(test_data['Age'],[0,17.9,64.9,99], labels=['C','A','S'])
test_data=pd.merge(test_data,pd.get_dummies(test_data.Age_cut, prefix="Age_ct", dummy_na=False),on=test_data['PassengerId'])
test_data['Title']=test_data['Name'].str.extract(', (.*)\.', expand=False)
test_data['Title'].replace(to_replace='Mrs\. .*',value='Mrs', inplace=True, regex=True)
test_data.loc[test_data.Title.isin(['Col','Major','Capt']),['Title']]='Mil'
test_data.loc[test_data.Title=='Mlle',['Title']]='Miss'
test_data.loc[test_data.Title=='Mme',['Title']]='Mrs'
test_data['Title_ct']=test_data.groupby(['Title'])['Title'].transform('count')
test_data.loc[test_data.Title_ct<5,['Title']]='Other'
test_data=pd.merge(test_data,pd.get_dummies(test_data.Title, prefix='Ti',dummy_na=False), on=test_data['PassengerId'])
test_data['NameTest']=test_data.Name
test_data['NameTest'].replace(to_replace=" \(.*\)",value="",inplace=True, regex=True)
test_data['NameTest'].replace(to_replace=", M.*\.",value=", ",inplace=True, regex=True)
cols_to_norm=['Age','Fare']
col_norms=['Age_z','Fare_z']
test_data['Age_z']=(test_data.Age-titanic.Age.mean())/titanic.Age.std()
test_data['Fare_z']=(test_data.Fare-titanic.Fare.mean())/titanic.Fare.std()
test_data['cabin_clean']=(pd.notnull(test_data.Cabin))
name_list=pd.concat([titanic[['PassengerId','NameTest']],test_data[['PassengerId','NameTest']]])
name_list['Sp_ct']=name_list.groupby('NameTest')['NameTest'].transform('count')-1
test_data=pd.merge(test_data,name_list[['PassengerId','Sp_ct']],on='PassengerId',how='left')
def add_cols(var_check,df):
if var_check not in df.columns.values:
df[var_check]=0
for x in features:
add_cols(x, test_data)
features=['Sex','SibSp','Parch','Pclass_1','Pclass_2','Pclass_3','Emb_C','Emb_Q','Emb_S',\
'Emb_nan','Age_ct_C','Age_ct_A','Age_ct_S', 'Sp_ct','Age_z','Fare_z',\
'Ti_Dr', 'Ti_Master', 'Ti_Mil', 'Ti_Miss', 'Ti_Mr', 'Ti_Mrs', 'Ti_Other', 'Ti_Rev',\
'Fl_AB', 'Fl_CD', 'Fl_EFG', 'Fl_nan']
test_features=test_data[features].values
predictions=model.predict(ensemble_features)
ensemble_gboost=pd.DataFrame({'gboost_pred':predictions})
ensemble_gboost.to_csv('./ensemble_gboost.csv', index=False)
predictions=model.predict(test_features)
test_data['Survived']=predictions
kaggle=test_data[['PassengerId','Survived']]
kaggle.to_csv('./kaggle_titanic_submission_gboost.csv', index=False)
Explanation: 9.) Maximum possible number of nodes
End of explanation |
11,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manuscript plots
This notebook creates the figures in Parviainen (2015, submitted to MNRAS). The figures show the calculation of quadratic limb darkening coefficients for three broadband filters (for simplicity defined using boxcar filters) and 19 narrow-band filters.
Step1: Broadband filters
Step2: Narrow band filters | Python Code:
%pylab inline
import seaborn as sb
from matplotlib.patches import Ellipse
from scipy.stats import chi2
from ldtk import LDPSetCreator, BoxcarFilter, TabulatedFilter
AAOCW, AAPGW = 3.465, 7.087
rc(['xtick','ytick','axes'], labelsize=8)
def eigsorted(cov):
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:,order]
def bve(data, volume=0.5):
vals, vecs = eigsorted(cov(data, rowvar=0))
theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))
width, height = 2 * np.sqrt(chi2.ppf(volume,2)) * np.sqrt(vals)
return width, height, theta
Explanation: Manuscript plots
This notebook creates the figures in Parviainen (2015, submitted to MNRAS). The figures show the calculation of quadratic limb darkening coefficients for three broadband filters (for simplicity defined using boxcar filters) and 19 narrow-band filters.
End of explanation
filters = [BoxcarFilter('a',450,550),
BoxcarFilter('b',650,750),
BoxcarFilter('c',850,950)]
sc = LDPSetCreator(teff=(6400,50), logg=(4.5,0.1), z=(0.25,0.05), filters=filters)
ps = sc.create_profiles(nsamples=2000)
ps.resample_linear_z(100)
qm,qe = ps.coeffs_qd(do_mc=True)
ec = 3
cp = cm.Spectral_r(linspace(0.1,1.0,3))
with sb.axes_style('ticks'):
fig,ax = subplots(1,2,figsize=(AAPGW,0.8*AAOCW), sharey=True)
for i in range(ps._nfilters):
c = sb.desaturate(cp[i],0.85)
ax[0].fill_between(ps._mu, ps._mean[i]-ec*ps._std[i], ps._mean[i]+ec*ps._std[i], facecolor=c, alpha=0.2)
ax[0].plot(ps._mu, ps._mean[i], ',', c=c);
#ax[0].plot(ps._mu_orig, ps._mean_orig[i], '.', c=c);
ax[1].fill_between(ps._z, ps._mean[i]-ec*ps._std[i], ps._mean[i]+ec*ps._std[i], facecolor=c, alpha=0.2)
ax[1].plot(ps._z, ps._mean[i], ',', c=c);
#ax[1].plot(ps._z_orig/ps._limb_z, ps._mean_orig[i], '.', c=c);
setp(ax[0], xlabel='$\mu$', ylabel='Normalized flux', xlim=(0,1))
setp(ax[1], xlabel='$z$', xlim=(0,1))
sb.despine(fig, offset=10)
setp(ax[1].get_yticklabels(), visible=False)
fig.tight_layout()
fig.subplots_adjust(left=0.1, right=0.98, bottom=0.25, top=0.97)
fig.savefig('plots/example_profiles.pdf')
fig.savefig('plots/example_profiles.png', dpi=150)
chains = array(ps._samples['qd'])
est_u = percentile(chains[:,:,0], [50,0.5,99.5], 1)
est_v = percentile(chains[:,:,1], [50,0.5,99.5], 1)
with sb.axes_style('ticks'):
fig = figure(figsize=(AAPGW,0.8*AAOCW))
gs = GridSpec(2,3, width_ratios=[2,1,1])
ad = subplot(gs[:,0])
au = subplot(gs[0,1:])
av = subplot(gs[1,1:])
ch = chains[0]
m = mean(ch, 0)
c = sb.desaturate(cp[0],0.85)
ad.plot(ch[::20,0],ch[::20,1], '.', c=c, alpha=0.25)
ad.add_patch(Ellipse(m, *bve(ch[::25,:], 0.50), fill=False, lw=1.50, ec='k', zorder=100, alpha=0.75))
ad.add_patch(Ellipse(m, *bve(ch[::25,:], 0.95), fill=False, lw=1.25, ec='k', zorder=100, alpha=0.50))
ad.add_patch(Ellipse(m, *bve(ch[::25,:], 0.99), fill=False, lw=1.00, ec='k', zorder=100, alpha=0.25))
au.errorbar((1,2,3), est_u[0], abs(est_u[0]-est_u[1:]).mean(0), fmt='.')
av.errorbar((1,2,3), est_v[0], abs(est_v[0]-est_v[1:]).mean(0), fmt='.')
setp(ad, xlabel='Quadratic law u', ylabel='Quadratic law v', xlim=(0.567,0.577))
setp([au,av], xlim=(0.95,3.05), xticks=[1,2,3], xticklabels='a b c'.split())
setp(au, ylabel='Quadratic law u', ylim=(0.30,0.64), yticks=[0.25,0.40,0.55,0.70])
setp(av, ylabel='Quadratic law v', ylim=(0.13, 0.17), yticks=[0.13,0.15,0.17])
sb.despine(fig, offset=10, trim=True)
setp(au.get_xticklabels(), visible=False)
fig.subplots_adjust(left=0.12, right=0.99, bottom=0.25, top=0.97, wspace=0.45)
fig.savefig('plots/example_coefficients.pdf')
fig.savefig('plots/example_coefficients.png', dpi=150)
Explanation: Broadband filters
End of explanation
f_edges = arange(500,800,15)
f_centres = 0.5*(f_edges[1:] + f_edges[:-1])
filters = [BoxcarFilter('t', a, b) for a,b in zip(f_edges[:-1], f_edges[1:])]
sc = LDPSetCreator(teff=(6400,50), logg=(4.5,0.2), z=(0.25,0.05), filters=filters)
ps = sc.create_profiles(nsamples=5000)
qc,qe = ps.coeffs_qd(do_mc=True, n_mc_samples=50000)
sc_wide = LDPSetCreator(teff=(6400,50), logg=(4.5,0.2), z=(0.25,0.05), filters=[BoxcarFilter('wide',500,800)])
ps_wide = sc_wide.create_profiles(nsamples=5000)
qc_wide,qe_wide = ps_wide.coeffs_qd(do_mc=True, n_mc_samples=50000)
with sb.axes_style('ticks'):
fig,axs = subplots(2,1,figsize=(AAOCW,0.75*AAOCW), sharex=False)
for i in range(2):
axs[i].plot(f_centres, qc[:,i], drawstyle='steps-mid', c='k')
axs[i].errorbar(f_centres, qc[:,i], qe[:,i], fmt='k.')
setp(axs, xlim=(500,775), xticks=linspace(500,800,11))
setp(axs[0], ylabel='u', ylim=(0.34,0.67), yticks=[0.35,0.45,0.55,0.65])
setp(axs[1], ylabel='v', xlabel='Wavelength [nm]',
yticks=[0.11,0.13,0.15,0.17], ylim=(0.11,0.20))
sb.despine(fig, offset=10, trim=True)
setp(axs[0].get_xticklabels(), visible=False)
fig.tight_layout()
fig.subplots_adjust(left=0.21, right=0.95, bottom=0.25, top=0.99)
fig.savefig('plots/qd_coeffs_narrow.pdf')
fig.savefig('plots/qd_coeffs_narrow.png', dpi=150)
Explanation: Narrow band filters
End of explanation |
11,097 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
a = tf.constant([1,2,3])
b = tf.constant([4,5,6,7])
def g(a,b):
tile_a = tf.tile(tf.expand_dims(a, 1), [1, tf.shape(b)[0]])
tile_a = tf.expand_dims(tile_a, 2)
tile_b = tf.tile(tf.expand_dims(b, 0), [tf.shape(a)[0], 1])
tile_b = tf.expand_dims(tile_b, 2)
cart = tf.concat([tile_a, tile_b], axis=2)
return cart
result = g(a.__copy__(),b.__copy__()) |
11,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numbers
Python provides the following built-in numeric data types
Step1: NOTE
Step2: Greater than (>)
Step3: Less than or equal to (<=)
Step4: greater_than_or_equal_to
Step5: Equal To
Step6: Not Equal To
Step7: Bitwise Operations
Step8: 1011 | Python Code:
# Converting real to integer
print ('int(3.14) =', int(3.14))
print ('int(3.64) =', int(3.64))
print('int("22") =', int("22"))
print('int("22.0") !=', int("22.0"))
print("int(3+4j) =", int(3+4j))
# Converting integer to real
print ('float(5) =', float(5))
print('int("22.0") ==', float("22.0"))
print('int(float("22.0")) ==', int(float("22.0")))
# Calculation between integer and real results in real
print('5.0 / 2 + 3 =', 5.0 / 2 + 3)
x = 3.5
y = 2.5
z = x + y
print(x, y, z)
print(type(x), type(y), type(z))
z = int(z)
print(x, y, z)
print(type(x), type(y), type(z))
# Integers in other base
print ("int('20', 8) =", int('20', 8)) # base 8
print ("int('20', 16) =", int('20', 16)) # base 16
# Operations with complex numbers
c = 3 + 4j
print ('c =', c)
print ('Real Part:', c.real)
print ('Imaginary Part:', c.imag)
print ('Conjugate:', c.conjugate())
Explanation: Numbers
Python provides the following built-in numeric data types:
Integer (int): i = 26011950
Floating Point real (float): f = 1.2345
Complex (complex): c = 2 + 10j
The builtin function int() can be used to convert other types to integer, including base changes.
Example:
End of explanation
x = 22
y = 4
if(x < y):
print("X wins")
else:
print("Y wins")
x = 2
y = 4
if(x < y):
print("X wins")
else:
print("Y wins")
Explanation: NOTE: The real numbers can also be represented in scientific notation, for example: 1.2e22.
Arithmetic Operations:
Python has a number of defined operators for handling numbers through arithmetic calculations, logic operations (that test whether a condition is true or false) or bitwise processing (where the numbers are processed in binary form).
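A few quick, illustrative examples of each operator family mentioned above:
print(7 + 2, 7 / 2, 7 // 2, 7 % 2, 7 ** 2)   # arithmetic
print(7 > 2, 7 == 2)                         # logical comparisons
print(7 & 2, 7 | 2, 7 ^ 2)                   # bitwise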
Logical Operations:
Less than (<)
Greater than (>)
Less than or equal to (<=)
Greater than or equal to (>=)
Equal to (==)
Not equal to (!=)
Less than (<)
End of explanation
x = 2
y = 4
if(x > y):
print("X wins")
else:
print("Y wins")
x = 14
y = 4
if(x > y):
print("X wins")
else:
print("Y wins")
Explanation: Greater than (>)
End of explanation
x = 2
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 2
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 21
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
Explanation: Less than or equal to (<=)
End of explanation
x = 8
y = 4
if(x >= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 14
if(x <= y):
print("X wins")
else:
print("Y wins")
x = 4
y = 4
if(x <= y):
print("X wins")
else:
print("Y wins")
Explanation: greater_than_or_equal_to
End of explanation
x = 4
y = 4
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 41
y = 4
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 2+1j
y = 3+1j
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
x = 21+1j
y = 21+1j
if(x == y):
print("X & Y are equal")
else:
print("X & Y are different")
Explanation: Equal To
End of explanation
x = 4
y = 4
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 41
y = 4
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 2+1j
y = 3+1j
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
x = 21+1j
y = 21+1j
if(x != y):
print("X & Y are different")
else:
print("X & Y are equal")
Explanation: Not Equal To
End of explanation
x = 10 #-> 1010
y = 11 #-> 1011
Explanation: Bitwise Operations:
Left Shift (<<)
Right Shift (>>)
And (&)
Or (|)
Exclusive Or (^)
Inversion (~)
During these operations, numbers are converted as needed (e.g. (1.5+4j) + 3 gives 4.5+4j).
Besides operators, there are also some built-in functions for handling numeric types: abs(), which returns the absolute value of a number; oct(), which converts to octal; hex(), which converts to hexadecimal; pow(), which raises one number to the power of another; and round(), which rounds a real number to the specified precision.
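A few quick examples of those built-ins:
print(abs(-7.5))          # 7.5
print(oct(64), hex(255))  # 0o100 0xff
print(pow(2, 10))         # 1024
print(round(3.14159, 2))  # 3.14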
End of explanation
print("x<<2 = ", x<<2)
print("x =", x)
print("x>>2 = ", x>>2)
print("x&y = ", x&y)
print("x|y = ", x|y)
print("x^y = ", x^y)
print("x =", x)
print("~x = ", ~x)
print("~y = ", ~y)
Explanation: 1011
OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
AND
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
End of explanation |
11,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear programming
<img style="float
Step1: 3.2
Mónica makes earrings and small chains (jewelry). She is so good that she sells everything she makes.
It takes her 30 minutes to make a pair of earrings and one hour to make a chain, and since Mónica is also a busy student, she only has 10 hours a week for making jewelry. Moreover, the material she buys is only enough to make 15 units of jewelry per week (a pair of earrings counts as one unit).
The profit from selling the jewelry is \$15 for each pair of earrings and \$20 for each chain.
How many pairs of earrings and how many chains should Mónica make to maximize her profit?
Formulate the problem in the form explained above and obtain the graphical solution.
Solution
Let
Step2: 4. Transportation problem
Reference
Step3: The minimum daily transportation cost solution turns out to be
Step4: And finally, the constraints of the problem are given by the supply and demand capacities of each brewery (in cases of beer) and each bar, which are detailed in the graph above.
Solution
Let | Python Code:
import numpy as np
# Example 3.1: maximize x1 + x2, written as minimize -x1 - x2
f = np.array([-1, -1])
# Inequality constraints A x <= b: machine A time, machine B time, and the
# demand requirements x1 >= 45 and x2 >= 5 rewritten as -x1 <= -45, -x2 <= -5
A = np.array([[50, 24], [30, 33], [-1, 0], [0, -1]])
b = np.array([2400, 2100, -45, -5])
import pyomo_utilities
x, obj = pyomo_utilities.linprog(f, A, b)
x
obj
# The quantity actually being maximized (units left in inventory) is x1 + x2 - 50
obj_real = x.sum()-50
obj_real.round(2)
Explanation: Linear programming
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/Linear_Programming_Feasible_Region.svg/2000px-Linear_Programming_Feasible_Region.svg.png" width="400px" height="125px" />
In the last class we saw that linear programming problems are optimization problems with a linear objective function subject to linear constraints. We motivated the study of this discipline with the concept of computational complexity.
We also saw how to bring linear programming problems into a specific form that will be quite useful in this class.
To install pyomo, follow the instructions here (run the Anaconda prompt as administrator):
https://stackoverflow.com/questions/19928878/installing-pyomo-on-windows-with-anaconda-python
1. Linear programming problems
According to what we saw in the last class, a linear programming problem can be written in the following form:
\begin{equation}
\begin{array}{ll}
\min_{x_1,\dots,x_n} & f_1x_1+\dots+f_nx_n \\
\text{s.t. } & a^{eq}_{j,1}x_1+\dots+a^{eq}_{j,n}x_n=b^{eq}_j \text{ for } 1\leq j\leq m_1 \\
& a_{k,1}x_1+\dots+a_{k,n}x_n\leq b_k \text{ for } 1\leq k\leq m_2,
\end{array}
\end{equation}
where:
- $x_i$ for $i=1,\dots,n$ are the unknowns or decision variables,
- $f_i$ for $i=1,\dots,n$ are the coefficients of the function to be optimized,
- $a^{eq}_{j,i}$ for $j=1,\dots,m_1$ and $i=1,\dots,n$ are the coefficients of the equality constraints,
- $a_{k,i}$ for $k=1,\dots,m_2$ and $i=1,\dots,n$ are the coefficients of the inequality constraints,
- $b^{eq}_j$ for $j=1,\dots,m_1$ are known values that must be met exactly, and
- $b_k$ for $k=1,\dots,m_2$ are known values that must not be exceeded.
Equivalentemente, el problema puede escribirse como
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{f}^T\boldsymbol{x} \\
\text{s. a. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
donde:
- $\boldsymbol{x}=\left[x_1\quad\dots\quad x_n\right]^T$,
- $\boldsymbol{f}=\left[f_1\quad\dots\quad f_n\right]^T$,
- $\boldsymbol{A}_{eq}=\left[\begin{array}{ccc}a^{eq}_{1,1} & \dots & a^{eq}_{1,n}\\ \vdots & \ddots & \vdots\\ a^{eq}_{m_1,1} & \dots & a^{eq}_{m_1,n}\end{array}\right]$,
- $\boldsymbol{A}=\left[\begin{array}{ccc}a_{1,1} & \dots & a_{1,n}\\ \vdots & \ddots & \vdots\\ a_{m_2,1} & \dots & a_{m_2,n}\end{array}\right]$,
- $\boldsymbol{b}_{eq}=\left[b^{eq}_1\quad\dots\quad b^{eq}_{m_1}\right]^T$, y
- $\boldsymbol{b}=\left[b_1\quad\dots\quad b_{m_2}\right]^T$.
Nota: el problema $\max_{\boldsymbol{x}}\boldsymbol{g}(\boldsymbol{x})$ es equivalente a $\min_{\boldsymbol{x}}-\boldsymbol{g}(\boldsymbol{x})$.
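Por ejemplo, maximizar $15x_1+20x_2$ (como en el ejemplo 3.2 de más abajo) equivale a minimizar $-15x_1-20x_2$; por eso en el código se usa f = np.array([-15, -20]) y al final se recupera la utilidad con obj_real = -obj.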
2. Explicar la función pyomo_utilities.py
Para trabajar con pyomo:
1. pyomo, aparte de contener funciones para optimización, es en sí un lenguaje de modelado algebraico. No trae solucionadores instalados; utiliza solucionadores externos (glpk, ipopt).
2. Modelos concretos y modelos abstractos.
3. Forma de ingresarle los parámetros para un modelo abstracto.
4. Resultados.
Sin embargo, ya tenemos una función que hace todo el trabajo por nosotros. Únicamente debemos proporcionar los parámetros $\boldsymbol{f}$, $\boldsymbol{A}$ y $\boldsymbol{b}$ ($\boldsymbol{A}_{eq}$ y $\boldsymbol{b}_{eq}$, de ser necesario).
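Como referencia, un bosquejo mínimo de cómo podría implementarse una función linprog de este estilo con pyomo y glpk (es solo una suposición ilustrativa; la implementación real de pyomo_utilities puede diferir):
import numpy as np
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           RangeSet, SolverFactory, minimize, value)

def linprog(f, A, b, Aeq=None, beq=None):
    # Resuelve min f^T x  s. a.  Aeq x = beq,  A x <= b
    modelo = ConcreteModel()
    n = len(f)
    modelo.I = RangeSet(0, n - 1)         # índices de las variables
    modelo.K = RangeSet(0, len(b) - 1)    # índices de las desigualdades
    modelo.x = Var(modelo.I)
    modelo.obj = Objective(expr=sum(f[i]*modelo.x[i] for i in modelo.I), sense=minimize)
    modelo.desig = Constraint(modelo.K, rule=lambda m, k: sum(A[k, i]*m.x[i] for i in m.I) <= b[k])
    if Aeq is not None:
        modelo.J = RangeSet(0, len(beq) - 1)   # índices de las igualdades
        modelo.ig = Constraint(modelo.J, rule=lambda m, j: sum(Aeq[j, i]*m.x[i] for i in m.I) == beq[j])
    SolverFactory('glpk').solve(modelo)
    x_opt = np.array([value(modelo.x[i]) for i in modelo.I])
    return x_opt, value(modelo.obj)
La función se usaría igual que en las celdas de abajo: x, obj = linprog(f, A, b).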
3. Ejemplos de la clase pasada
3.1
Una compañía produce dos productos ($X_1$ y $X_2$) usando dos máquinas ($A$ y $B$). Cada unidad de $X_1$ que se produce requiere 50 minutos en la máquina $A$ y 30 minutos en la máquina $B$. Cada unidad de $X_2$ que se produce requiere 24 minutos en la máquina $A$ y 33 minutos en la máquina $B$.
Al comienzo de la semana hay 30 unidades de $X_1$ y 90 unidades de $X_2$ en inventario. El tiempo de uso disponible de la máquina $A$ es de 40 horas y el de la máquina $B$ es de 35 horas.
La demanda para $X_1$ en la semana actual es de 75 unidades y de $X_2$ es de 95 unidades. La política de la compañía es maximizar la suma combinada de unidades de $X_1$ y $X_2$ en inventario al finalizar la semana.
Formular el problema de decidir cuánto hacer de cada producto en la semana como un problema de programación lineal.
Solución
Sean:
- $x_1$ la cantidad de unidades de $X_1$ a ser producidas en la semana, y
- $x_2$ la cantidad de unidades de $X_2$ a ser producidas en la semana.
Notar que lo que se quiere es maximizar $(x_1-75+30)+(x_2-95+90)=x_1+x_2-50$. Equivalentemente $x_1+x_2$.
Restricciones:
1. El tiempo de uso disponible de la máquina $A$ es de 40 horas: $50x_1+24x_2\leq 40(60)\Rightarrow 50x_1+24x_2\leq 2400$.
2. El tiempo de uso disponible de la máquina $B$ es de 35 horas: $30x_1+33x_2\leq 35(60)\Rightarrow 30x_1+33x_2\leq 2100$.
3. La demanda para $X_1$ en la semana actual es de 75 unidades: $x_1+30\geq 75\Rightarrow x_1\geq 45\Rightarrow -x_1\leq -45$.
4. La demanda para $X_2$ en la semana actual es de 95 unidades: $x_2+90\geq 95\Rightarrow x_2\geq 5\Rightarrow -x_2\leq -5$.
Finalmente, el problema puede ser expresado en la forma explicada como:
\begin{equation}
\begin{array}{ll}
\min_{x_1,x_2} & -x_1-x_2 \\
\text{s. a. } & 0x_1+0x_2=0 \\
& 50x_1+24x_2\leq 2400 \\
& 30x_1+33x_2\leq 2100 \\
& -x_1\leq -45 \\
& -x_2\leq -5,
\end{array}
\end{equation}
o, equivalentemente
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{f}^T\boldsymbol{x} \\
\text{s. a. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
con
- $\boldsymbol{f}=\left[-1 \quad -1\right]^T$,
- $\boldsymbol{A}_{eq}=\left[0\quad 0\right]$,
- $\boldsymbol{A}=\left[\begin{array}{cc}50 & 24 \\ 30 & 33 \\ -1 & 0 \\ 0 & -1\end{array}\right]$,
- $\boldsymbol{b}_{eq}=0$, y
- $\boldsymbol{b}=\left[2400\quad 2100\quad -45\quad -5\right]^T$.
End of explanation
f = np.array([-15, -20])
A = np.array([[0.5, 1], [1, 1], [-1, 0], [0, -1]])
b = np.array([10, 15, 0, 0])
x, obj = pyomo_utilities.linprog(f, A, b)
x
obj
obj_real = -obj
obj_real
Explanation: 3.2
Mónica hace aretes y cadenitas de joyería. Es tan buena, que todo lo que hace lo vende.
Le toma 30 minutos hacer un par de aretes y una hora hacer una cadenita, y como Mónica también es estudihambre, solo dispone de 10 horas a la semana para hacer las joyas. Por otra parte, el material que compra solo le alcanza para hacer 15 unidades (el par de aretes cuenta como unidad) de joyas por semana.
La utilidad que le deja la venta de las joyas es \$15 en cada par de aretes y \$20 en cada cadenita.
¿Cuántos pares de aretes y cuántas cadenitas debería hacer Mónica para maximizar su utilidad?
Formular el problema en la forma explicada y obtener la solución gráfica.
Solución
Sean:
- $x_1$ la cantidad de pares de aretes que hace Mónica.
- $x_2$ la cantidad de cadenitas que hace Mónica.
Notar que lo que se quiere es maximizar $15x_1+20x_2$.
Restricciones:
1. Dispone de 10 horas semanales: $0.5x_1+x_2\leq 10$.
2. Material disponible para 15 unidades: $x_1+x_2\leq 15$.
3. No negatividad: $x_1,x_2\geq 0\Rightarrow -x_1,-x_2\leq 0$.
Finalmente, el problema puede ser expresado en la forma explicada como:
\begin{equation}
\begin{array}{ll}
\min_{x_1,x_2} & -15x_1-20x_2 \\
\text{s. a. } & 0x_1+0x_2=0 \\
& 0.5x_1+x_2\leq 10 \\
& x_1+x_2\leq 15 \\
& -x_1\leq 0 \\
& -x_2\leq 0,
\end{array}
\end{equation}
o, equivalentemente
\begin{equation}
\begin{array}{ll}
\min_{\boldsymbol{x}} & \boldsymbol{f}^T\boldsymbol{x} \\
\text{s. a. } & \boldsymbol{A}_{eq}\boldsymbol{x}=\boldsymbol{b}_{eq} \\
& \boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b},
\end{array}
\end{equation}
con
- $\boldsymbol{f}=\left[-15 \quad -20\right]^T$,
- $\boldsymbol{A}_{eq}=\left[0\quad 0\right]$,
- $\boldsymbol{A}=\left[\begin{array}{cc}0.5 & 1 \\ 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{array}\right]$,
- $\boldsymbol{b}_{eq}=0$, y
- $\boldsymbol{b}=\left[10\quad 15 \quad 0 \quad 0\right]^T$.
End of explanation
f = np.array([2, 11, 12, 24, 13, 18])
A = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1], [-1, 0, -1, 0, -1, 0], [0, -1, 0, -1, 0, -1]])
A = np.concatenate((A, -np.eye(6)), axis = 0)
b = np.array([40, 40, 20, -40, -60])
b = np.concatenate((b, np.zeros((6,))))
x, obj = pyomo_utilities.linprog(f, A, b)
x
obj
Explanation: 4. Problema de transporte
Referencia: https://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/a/a0/Progr_Lineal.PNG" width="400px" height="125px" />
Este es un caso curioso, con solo 6 variables (un caso real de problema de transporte puede tener fácilmente más de 1.000 variables) en el cual se aprecia la utilidad de este procedimiento de cálculo.
Existen tres minas de carbón cuya producción diaria es:
- la mina "a" produce 40 toneladas de carbón por día;
- la mina "b" produce 40 t/día; y,
- la mina "c" produce 20 t/día.
En la zona hay dos centrales termoeléctricas que consumen:
- la central "d" consume 40 t/día de carbón; y,
- la central "e" consume 60 t/día.
Los costos de mercado, de transporte por tonelada son:
- de "a" a "d" = 2 monedas;
- de "a" a "e" = 11 monedas;
- de "b" a "d" = 12 monedas;
- de "b" a "e" = 24 monedas;
- de "c" a "d" = 13 monedas; y,
- de "c" a "e" = 18 monedas.
Si se preguntase a los pobladores de la zona cómo organizar el transporte, tal vez la mayoría opinaría que debe aprovecharse el precio ofrecido por el transportista que va de "a" a "d", porque es más conveniente que los otros, debido a que es el de más bajo precio.
En este caso, el costo total del transporte es:
- transporte de 40 t de "a" a "d" = 80 monedas;
- transporte de 20 t de "c" a "e" = 360 monedas; y,
- transporte de 40 t de "b" a "e" = 960 monedas,
Para un total de 1.400 monedas.
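Una comprobación rápida de ese costo "intuitivo" con los precios listados arriba (solo aritmética, no forma parte del modelo):
costo_intuitivo = 2*40 + 24*40 + 18*20   # a->d, b->e, c->e
costo_intuitivo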
Sin embargo, formulando el problema para ser resuelto por la programación lineal con
- $x_1$ toneladas transportadas de la mina "a" a la central "d"
- $x_2$ toneladas transportadas de la mina "a" a la central "e"
- $x_3$ toneladas transportadas de la mina "b" a la central "d"
- $x_4$ toneladas transportadas de la mina "b" a la central "e"
- $x_5$ toneladas transportadas de la mina "c" a la central "d"
- $x_6$ toneladas transportadas de la mina "c" a la central "e"
se tienen las siguientes ecuaciones:
Restricciones de la producción:
$x_1 + x_2 \leq 40$
$x_3 + x_4 \leq 40$
$x_5 + x_6 \leq 20$
Restricciones del consumo:
$x_1 + x_3 + x_5 \geq 40$
$x_2 + x_4 + x_6 \geq 60$
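Para llevarlas a la forma $\boldsymbol{A}\boldsymbol{x}\leq\boldsymbol{b}$, estas restricciones de consumo (de tipo $\geq$) se multiplican por $-1$; de ahí que en el código de arriba aparezcan los renglones $[-1,0,-1,0,-1,0]$ y $[0,-1,0,-1,0,-1]$ de $\boldsymbol{A}$, con $-40$ y $-60$ en $\boldsymbol{b}$.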
La función objetivo será:
$$\min_{x_1,\dots,x_6}2x_1 + 11x_2 + 12x_3 + 24x_4 + 13x_5 + 18x_6$$
End of explanation
import pandas as pd
info = pd.DataFrame({'Bar1': [2, 3], 'Bar2': [4, 1], 'Bar3': [5, 3], 'Bar4': [2, 2], 'Bar5': [1, 3]}, index = ['CerveceriaA', 'CerveceriaB'])
info
Explanation: La solución de costo mínimo de transporte diario resulta ser:
$x_2 = 40$ resultando un costo de $11(40) = 440$ monedas
$x_3 = 40$ resultando un costo de $12(40) = 480$ monedas
$x_6 = 20$ resultando un costo de $18(20) = 360$ monedas
para un total de $1280$ monedas, $120$ monedas menos que antes.
5. Problema de transporte más interesante (Actividad)
Referencia: https://relopezbriega.github.io/blog/2017/01/18/problemas-de-optimizacion-con-python/
Supongamos que tenemos que enviar cajas de cervezas de 2 cervecerías a 5 bares de acuerdo al siguiente gráfico:
<img style="float: center; margin: 0px 0px 15px 15px;" src="https://relopezbriega.github.io/images/Trans_problem.png" width="500px" height="150px" />
Asimismo, supongamos que nuestro gerente financiero nos informa que el costo de transporte por caja de cada ruta se conforma de acuerdo a la siguiente tabla:
End of explanation
f = np.array([2, 4, 5, 2, 1, 3, 1, 3, 2, 3])
A = np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1], [-1, 0, 0, 0, 0, -1, 0, 0, 0, 0], [0, -1, 0, 0, 0, 0, -1, 0, 0, 0], [0, 0, -1, 0, 0, 0, 0, -1, 0, 0], [0, 0, 0, -1, 0, 0, 0, 0, -1, 0], [0, 0, 0, 0, -1, 0, 0, 0, 0, -1]])
A = np.concatenate((A, -np.eye(10)), axis = 0)
b = np.array([1000, 4000, -500, -900, -1800, -200, -700])
b = np.concatenate((b, np.zeros((10,))))
x, obj = pyomo_utilities.linprog(f, A, b)
x
obj
Explanation: Y por último, las restricciones del problema, van a estar dadas por las capacidades de oferta y demanda de cada cervecería (en cajas de cerveza) y cada bar, las cuales se detallan en el gráfico de más arriba.
Solución
Sean:
- $x_i$ cajas transportadas de la cervecería A al Bar $i$,
- $x_{i+5}$ cajas transportadas de la cervecería B al Bar $i$.
La actividad consiste en plantear el problema y resolverlo con linprog...
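Como comprobación del planteamiento (bosquejo con los mismos nombres usados arriba; se supone que las primeras cinco entradas de x corresponden a la cervecería A y las últimas cinco a la B), la solución puede reacomodarse en una tabla cervecería × bar:
x_tabla = pd.DataFrame(x.reshape(2, 5),
                       index=['CerveceriaA', 'CerveceriaB'],
                       columns=['Bar1', 'Bar2', 'Bar3', 'Bar4', 'Bar5'])
x_tabla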
End of explanation |