Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
10,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color=red> 1 None orders </font>
Step1: prior and train orders
Step2: 2 How many products do users buy each time
Number of products in each order
Step3: 3 Do users purchase different numbers of products each time?
Is the number of products a user buys the same on every order?
Step4: 4 Reorder Rate
Proportion of reordered products in each order
Step5: 5 Products User Bought Previously
Step6: 6 Candidate Products
last purchase
reorder items
all items that have a high reorder rate
items that are added to cart first
Step7: 7 Time of orders
Step8: 8 Topic Distance
user VS product <font color=blue>all (u,p) pairs in prior</font>
<font color=red>latest order VS product</font> <font color=blue>constructed via the LDA transform</font>
Step9: 9 Order Topic Construct
<font color=red>countvector, lda transform</font>
Build an order's topic representation from the topics of its products
What about the order in which products are added to the cart??? Ignore the ordering for now
Learn per user: add-to-cart position VS reorder? VS the topic of the next order??
Step10: 10 XGBoost Feature Preparation
Positive to negative samples 10:1
Step11: 11 LSTM Feature Preparation
(u,p,t)
The purchase interval and the add-to-cart position are encoded as symbols
Add-to-cart position
- 1
- 2
- 3
- 4-6
- 7-11
- 12 and above
Interval
- 1 - 7
- 8 - 16
- 17 - 33
- 34
- 100 NAN
Implementation
Encode the two columns; 30 symbols in total
Cartesian lookup table
Or use the raw numeric values directly
Step12: <font color=red> Predicting the time interval between purchases for (u,p) pairs </font>
A time-series forecasting problem
Option 1: regress the current value on the previous timesteps
Option 2: <font color=red>LSTM</font> using only the purchase-interval information
Sample | Python Code:
order_is_None = order_products_train.groupby(['order_id'])['reordered'].sum().reset_index()
len(order_is_None[order_is_None.reordered == 0]) / len(order_is_None[order_is_None.reordered > 0])
a = pd.merge(order_is_None, orders, how = 'left', on = ['order_id'])
Explanation: <font color=red> 1 None orders </font>
End of explanation
order_products_all = pd.concat([order_products_prior, order_products_train], axis = 0)
Explanation: prior and train orders
End of explanation
grouped = order_products_prior.groupby("order_id")["add_to_cart_order"].aggregate("max").reset_index()
grouped.add_to_cart_order.describe()
Explanation: 2 How many products do users buy each time
Number of products in each order
End of explanation
grouped = pd.merge(grouped,
orders,
on = ['order_id'],
how = 'left')[['user_id', 'add_to_cart_order', 'order_number', 'order_dow', 'order_hour_of_day', 'days_since_prior_order']]
grouped = grouped.sort_values(['user_id', 'order_number'])
grouped.columns = ['user_id',
'num_products',
'order_number',
'order_dow',
'order_hour_of_day',
'days_since_prior_order']
user_num_product = grouped.groupby(['user_id'])['num_products'].agg({'mean':'mean', 'std':'std'})
with open(DATA_DIR + 'user_num_product_stat.pkl', 'wb') as f:
pickle.dump(user_num_product, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'user_num_product_stat.pkl', 'rb') as f:
user_num_product = pickle.load(f)
user_num_product['std'].describe()
Explanation: 3 Do users purchase different numbers of products each time?
Is the number of products a user buys the same on every order?
End of explanation
grouped = order_products_all.groupby("product_id")["reordered"].aggregate({'reorder_sum': sum,'reorder_total': 'count'}).reset_index()
grouped['reorder_probability'] = grouped['reorder_sum'] / grouped['reorder_total']
grouped = pd.merge(grouped, products[['product_id', 'product_name']], how='left', on=['product_id'])
grouped = grouped[grouped.reorder_total > 75].sort_values(['reorder_probability'], ascending=False)[:10]
prior_reorder_rate = order_products_prior.groupby(['order_id'])['reordered'] \
.aggregate({'reorder_pnum':'sum', 'pnum':'count'})
prior_reorder_rate['reorder_rate'] = prior_reorder_rate['reorder_pnum'] / prior_reorder_rate['pnum']
prior_reorder_rate.reset_index(inplace=True)
prior_orders = orders[orders.eval_set == 'prior']
prior_orders = pd.merge(prior_orders, prior_reorder_rate,
on = ['order_id'], how = 'left')
prior_orders.head(5)
user_reorder_est = prior_orders.groupby(['user_id'])['reorder_pnum']\
.aggregate({'reorder_pnum_mean':'mean',
'reorder_pnum_std':'std'}).reset_index()
user_reorder_est = user_reorder_est[['user_id', 'reorder_pnum_mean', 'reorder_pnum_std']]
with open(constants.FEAT_DATA_DIR + 'user_reorder_est.pkl', 'wb') as f:
pickle.dump(user_reorder_est, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'user_reorder_est.pkl', 'rb') as f:
user_reorder_est = pickle.load(f)
user_reorder_est.reorder_pnum_std.describe()
Explanation: 4 Reorder Rate
Proportion of reordered products in each order
End of explanation
users_products = pd.merge(prior_orders, order_products_prior, on = ['order_id'], how = 'left')
users_products = users_products.groupby(['user_id'])['product_id'].apply(list).reset_index()
with open(DATA_DIR + 'user_product.pkl', 'wb') as f:
pickle.dump(users_products, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'user_product.pkl', 'rb') as f:
users_products = pickle.load(f)
l = users_products.product_id.apply(len)
l.describe()
Explanation: 5 Products User Bought Previously
End of explanation
grouped = order_products_all.groupby("product_id")["reordered"].aggregate({'reorder_sum': sum,'reorder_total': 'count'}).reset_index()
grouped['reorder_probability'] = grouped['reorder_sum'] / grouped['reorder_total']
Explanation: 6 Candidate Products
last purchase
reorder items
all items that have a high reorder rate
items that are added to cart first
End of explanation
grouped = orders.order_hour_of_day.value_counts()
sns.set_style('darkgrid')
sns.barplot(grouped.index, grouped.values)
plt.show()
Explanation: 7 Time of orders
End of explanation
# term-frequency matrix construct
orders = pd.read_csv(DATA_DIR + 'orders.csv')
users_orders = pd.merge(order_products_prior, orders[['user_id', 'order_id']],
on = ['order_id'], how = 'left')
users_products_matrix = users_orders.groupby(['user_id'])['product_id'].apply(series_to_str)
tf = CountVectorizer(analyzer = 'word', lowercase = False, max_df=0.95, min_df=2,)
tf_matrix = tf.fit_transform(users_products_matrix.values)
tf_feature_names = tf.get_feature_names()
with open(DATA_DIR + 'tf.model', 'wb') as f:
pickle.dump(tf, f, pickle.HIGHEST_PROTOCOL)
# Order topics: tf is the CountVectorizer that converts each document into a term-frequency vector
op = order_products_prior.groupby(['order_id'])['product_id'].apply(series_to_str)
topic_order = pd.DataFrame(lda.transform(tf.transform(op.values)), columns= ["topic_%d"%x for x in range(10)])
topic_order['order_id'] = op.index.values
with open(DATA_DIR + 'order_topic_norm.pkl', 'wb') as f:
pickle.dump(topic_order_norm, f, pickle.HIGHEST_PROTOCOL)
up_distance = pd.merge(users_orders[['user_id', 'product_id']].drop_duplicates(),
user_topic,
on = ['user_id'],
how = 'left')
up_distance.columns = ['user_id', 'product_id'] + ["u_topic_%d"%x for x in range(10)]
up_distance = pd.merge(up_distance,
topic_product,
on = ['product_id'],
how = 'left')
up_distance.columns = ['user_id', 'product_id'] + ["u_topic_%d"%x for x in range(10)] + ["p_topic_%d"%x for x in range(10)]
def cal_up_distance(subf):
u_topic = subf[["u_topic_%d"%x for x in range(10)]]
p_topic = subf[["p_topic_%d"%x for x in range(10)]]
upd = euclidean(u_topic, p_topic)
return upd
# 3 hours
up_distance['up_dis'] = up_distance.apply(cal_up_distance, axis = 1)
up_distance = up_distance[['user_id', 'product_id', 'up_dis']]
with open(DATA_DIR + 'upd_feat.pkl', 'wb') as f:
pickle.dump(up_distance, f, pickle.HIGHEST_PROTOCOL)
Explanation: 8 Topic Distance
user VS product <font color=blue>all (u,p) pairs in prior</font>
<font color=red>latest order VS product</font> <font color=blue>constructed via the LDA transform</font>
End of explanation
order_topic = pd.merge(order_products_prior[['order_id', 'product_id']],
topic_product,
on = ['product_id'],
how = 'inner')#throw stop words
order_topic = order_topic.groupby(['order_id'])[["topic_%d"%x for x in range(10)]].sum().reset_index()
unorm = order_topic[["topic_%d"%x for x in range(10)]].values
order_topic[["topic_%d"%x for x in range(10)]] = unorm / unorm.sum(axis = 1)[:,np.newaxis]
len(order_products_prior.product_id.unique())
len(topic_product.product_id.unique())
Explanation: 9 Order Topic Construct
<font color=red>countvector, lda transform</font>
Build an order's topic representation from the topics of its products
What about the order in which products are added to the cart??? Ignore the ordering for now
Learn per user: add-to-cart position VS reorder? VS the topic of the next order??
End of explanation
import constants, utils, transactions, feats
from imp import reload
tle = transactions.TransLogExtractor(constants.RAW_DATA_DIR, constants.FEAT_DATA_DIR)
train_none = feats.make_train_or_test_none(tle, 'train')
test_none = feats.make_train_or_test_none(tle, 'test')
train = feats.make_train_or_test(tle, 'train')
utils.check_inf_nan(train[up_cols])
utils.check_inf_nan(train[ua_cols])
utils.check_inf_nan(train[ud_cols])
utils.check_inf_nan(train[p_cols])
utils.check_inf_nan(train[a_cols])
utils.check_inf_nan(train[d_cols])
utils.check_inf_nan(train[ctx_cols])
utils.check_inf_nan(train[topic_cols])
Explanation: 10 XGBoost Feature Preparation
Positive to negative samples 10:1
End of explanation
users_orders = tle.get_users_orders('prior')
product_feat = tle.craft_feat_item('products')
user_feat = tle.craft_feat_user()
users_orders = pd.merge(users_orders, product_feat[['product_id', 'p_reorder_probability']], on=['product_id'], how='left')
users_orders = pd.merge(users_orders, user_feat[['user_id', 'u_total_reorders']], on=['user_id'], how='left')
def encode_numeric(row, bins):
'''
convert numeric variable into binned category
bins = [b1, b2, b3, b4]
'''
index = ~(row < bins)
return [bins[index][-1]]
add2cart_bins = np.array([1, 2, 3, 4, 7, 12], dtype=float) # 6
interval_bins = np.array([-1, 4, 8, 17, 34], dtype=float)# 5
p_reorder_bins = np.array([0.0, 0.20, 0.38, 0.53], dtype=float)# 4
u_reorder_bins = np.array([0, 10, 33, 101], dtype=float)# 4
%%time
users_orders = users_orders.sort_values(['user_id', 'product_id', 'order_number'], ascending = False)
users_orders['up_interval'] = users_orders.groupby(['user_id', 'product_id'])['days_up_to_last'].diff()
users_orders.up_interval.fillna(-1, inplace=True)
users_orders['up_interval_sym'] = users_orders.up_interval.apply(lambda x: encode_numeric(x, interval_bins))
users_orders['up_add2cart_order_sym'] = users_orders.add_to_cart_order.apply(lambda x: encode_numeric(x, add2cart_bins))
users_orders['p_reorder_prob_sym'] = users_orders.p_reorder_probability.apply(lambda x: encode_numeric(x, p_reorder_bins))
users_orders['u_reorder_sym'] = users_orders.u_total_reorders.apply(lambda x:encode_numeric(x, u_reorder_bins))
feat_card = [add2cart_bins, interval_bins, p_reorder_bins, u_reorder_bins]
feat_cartesian = cartesian(feat_card)
users_orders['up_card'] = users_orders.up_add2cart_order_sym + users_orders.up_interval_sym + users_orders.p_reorder_prob_sym + users_orders.u_reorder_sym
def encode_cartesian(row, feat_cartesian):
'''
lookup table
turn a group of categorical variable into a symbol
'''
sym = np.where(np.all(row == feat_cartesian,axis=1))[0][0] + 1
return sym
%%time
users_orders['up_airr_sym'] = users_orders.up_card.apply(lambda x: encode_cartesian(x, feat_cartesian))
up_airr_sym = users_orders[['user_id', 'product_id', 'order_number', 'up_airr_sym']]
up_airr_sym.sort_values(['user_id', 'product_id', 'order_number'], inplace=True)
up_airr_sym_list = up_airr_sym.groupby(['user_id', 'product_id'])['up_airr_sym'].apply(list).reset_index()
with open(constants.FEAT_DATA_DIR + 'up_airr_sym.pkl', 'wb') as f:
pickle.dump(up_airr_sym_list, f, pickle.HIGHEST_PROTOCOL)
Explanation: 11 LSTM Feature Preparation
(u,p,t)
The purchase interval and the add-to-cart position are encoded as symbols
Add-to-cart position
- 1
- 2
- 3
- 4-6
- 7-11
- 12 and above
Interval
- 1 - 7
- 8 - 16
- 17 - 33
- 34
- 100 NAN
Implementation
Encode the two columns; 30 symbols in total
Cartesian lookup table (a toy sketch follows below)
Or use the raw numeric values directly
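A toy illustration of the Cartesian lookup-table encoding described above; the bins here are made up purely for the illustration and are not the real feature bins:
```python
import itertools
import numpy as np

order_bins = [1., 2., 3.]                # toy add-to-cart-position bins
gap_bins = [-1., 8., 17.]                # toy interval bins
table = np.array(list(itertools.product(order_bins, gap_bins)))  # 9 possible combinations
row = np.array([2., 8.])                 # one encoded (position, interval) pair
symbol = np.where(np.all(row == table, axis=1))[0][0] + 1
print(symbol)                            # a single integer symbol in 1..9
```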
End of explanation
users_orders = tle.get_users_orders(prior_or_train='prior')
a = users_orders[['user_id', 'order_number', 'product_id', 'days_up_to_last', 'p_purchase_interval']].sort_values(['user_id', 'order_number', 'p_purchase_interval'])
a.sort_values(['user_id', 'product_id', 'order_number'], ascending=False, inplace=True)
%%time
a['up_interval'] = a.head(1000).groupby(['user_id', 'product_id'])['days_up_to_last'].diff()
a.sort_values(['user_id', 'product_id'])
print("number of (u,p,t) tuples: %d" % len(users_orders))
users_orders_intervals = users_orders.dropna()  # throw away product_id bought only once
del users_orders  # free memory usage
users_orders_intervals = users_orders_intervals[users_orders_intervals.p_purchase_interval > 0] # throw away record buy in the same day
users_orders_intervals = users_orders_intervals.sort_values(['user_id', 'product_id', 'order_number'])
%%time
up_interval_list = users_orders_intervals.groupby(['user_id', 'product_id'])['p_purchase_interval'].apply(list).reset_index()
len(up_interval_list)
del users_orders_intervals # free memory usage
up_interval_list['len'] = up_interval_list.p_purchase_interval.apply(lambda x: len(x))
up_interval_list = up_interval_list[up_interval_list.len >= 2] # for train/test split
with open(constants.FEAT_DATA_DIR + 'up_interval_feat.pkl', 'wb') as f:
pickle.dump(up_interval_list, f, pickle.HIGHEST_PROTOCOL)
len(up_interval_list)
up_interval_list.len.describe()
Explanation: <font color=red> Predicting the time interval between purchases for (u,p) pairs </font>
A time-series forecasting problem
Option 1: regress the current value on the previous timesteps (a small baseline sketch follows below)
Option 2: <font color=red>LSTM</font> using only the purchase-interval information
Sample: (u,p,oid)
Feature: the interval between two consecutive purchases
Preprocessing
(u,p) pairs that appear only once have no interval (NaN) and are dropped
p_purchase_interval: the time until the next purchase
Zero intervals are dropped; two purchases on the same day are treated as one
For training, the interval sequence must have length >= 2, i.e. the (u,p) pair appears at least 3 times in prior
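A small sketch of Option 1 above, a lag-based regression baseline; it assumes scikit-learn is available and is only meant to illustrate the idea, not the notebook's final model:
```python
import numpy as np
from sklearn.linear_model import LinearRegression

# regress the current interval on the two previous intervals of the same (u,p) pair
X, y = [], []
for seq in up_interval_list.p_purchase_interval:
    for i in range(2, len(seq)):
        X.append(seq[i-2:i])
        y.append(seq[i])
X, y = np.array(X), np.array(y)
baseline = LinearRegression().fit(X, y)
print('R^2 of the lagged-interval baseline:', baseline.score(X, y))
```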
End of explanation |
10,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The basics of exception handling
Errors detected during execution are called exceptions and are not necessarily fatal. Most exceptions, however, are not handled by programs; the resulting error messages are illustrated below
Step1: We can control the flow of execution when something unexpected happens in our code.
Step2: To work around this error, we can use a try/except pair.
Step3: This way the error no longer appears; however, if the error is of another type, such as
Step4: Note that the output will be the same one we defined earlier. Therefore we need to specify the exception type in the except clause.
Step5: To handle more than one exception type, just add another except with the other type below | Python Code:
10 *(1/0)
4 + spam*3
'2' + 2
Explanation: The basics of exception handling
Errors detected during execution are called exceptions and are not necessarily fatal. Most exceptions, however, are not handled by programs; the resulting error messages are illustrated below:
End of explanation
produtos = ["ipda", "cel", "note"]
print(produtos[1])
print(produtos[3])
Explanation: We can control the flow of execution when something unexpected happens in our code.
End of explanation
try:
print(produtos[3])
except:
print("O vetor não possui a posição desejada")
Explanation: To work around this error, we can use a try/except pair.
End of explanation
produtos[3+'1']
try:
print(produtos[3+'1'])
except:
print("O vetor não possui a posição desejada")
Explanation: This way the error no longer appears; however, if the error is of another type, such as:
End of explanation
try:
print(produtos[3+'1'])
except IndexError:
print("O vetor não possui a posição desejada")
Explanation: Note that the output will be the same one we defined earlier. Therefore we need to specify the exception type in the except clause.
End of explanation
try:
print(produtos[3+'1'])
except IndexError:
print("O vetor não possui a posição desejada")
except TypeError:
print("Erro de Tipo")
Explanation: To handle more than one exception type, just add another except with the other type below:
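As a small complementary sketch (an addition for illustration, not part of the original notebook), a single except clause can also catch several exception types at once by grouping them in a tuple:
```python
try:
    print(produtos[3 + '1'])
except (IndexError, TypeError) as error:
    print("Caught:", type(error).__name__)
```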
End of explanation |
10,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a simple launch. No reference to geo-position in control loop. Roll and yaw will be fixed at zero. Pitch will start at 90 deg until 60 m/s at which point we will pitch towards 30 deg above the eastern horizon at flame-out.
This is a booster-only single-body rocket with two stages
Step1: Given a moment of inertia vector around (pitch, roll, yaw) in kg m^2 and an available torque vector around (pitch, roll, yaw) in N m or kg m^2 / s^2, I want to create a controller (presumably PID) in the range [-1,1] for each axis.
Step2: This is very fast. I only want a couple degrees per second^2 acceleration and not more than about 5 deg/sec rotation ever, so let's see what control range that represents | Python Code:
import krpc
import time
linkup = krpc.connect('192.168.1.2', name='First Flight')
import numpy as np
from matplotlib import pyplot, cm
inf = np.inf
isclose = np.isclose
π = np.pi
arctan = np.arctan
sign = np.sign
class Controller(object):
'''Single Axis PID Controller'''
def __init__(self,set_point=0,limits=(-inf,inf),kp=1,ki=0,kd=0,t0=0):
# set point and control constants
self.set_point = set_point
self.min,self.max = limits
self.kp = kp
self.ki = ki
self.kd = kd
# time of previous call, running integral and
# proportional term of previous call
self.t0 = t0
self.I = 0
self.P0 = 0
# response value of previous call
self.c = 0
def __call__(self,x,t):
# return previous value if no time has passed
if isclose(t - self.t0, 0):
return self.c
# bring instance variables into local scope
xset = self.set_point
kp = self.kp
ki = self.ki
kd = self.kd
# if parameters are all zero or None, return set point
if not any([kp,ki,kd]):
self.t0 = t
return xset
# bring instance variables into local scope
t0 = self.t0
I = self.I
P0 = self.P0
# calculate PID terms
Δt = t - t0
P = xset - x
ΔP = P - P0
D = ΔP / Δt
# freeze integral for a small time on
# a large disturbance
if self.ki > 0:
if abs(kp*ΔP) > 0.5*(self.max - self.min):
self._t0_freeze_I = t
else:
try:
if (t - self._t0_freeze_I) > self.ti:
del self._t0_freeze_I
I += P * Δt
except AttributeError:
I += P * Δt
# turn off integral term if kp*P is out of the
# control range
if not (self.min < kp*P < self.max):
I = 0
else:
I = min(max(I, self.min/ki), self.max/ki)
# clip proportional gain
if not (self.min < kp*P < self.max):
P = min(max(P, self.min/kp), self.max/kp)
c = kp*P + ki*I + kd*D
# clip output to specified limits
c = min(max(c, self.min), self.max)
# save parameters to class instance
self.t0 = t
self.I = I
self.P0 = P
self.c = c
return c
@property
def ti(self):
'''integral time'''
return self.kp / self.ki
@ti.setter
def ti(self,ti):
self.ki = self.kp / ti
@property
def td(self):
'''derivative time'''
return self.kd / self.kp
@td.setter
def td(self,td):
self.kd = self.kp * td
@property
def ku(self):
'''ultimate gain, assuming classic ziegler-nichols pid scheme'''
return (1/.6)*self.kp
@ku.setter
def ku(self,ku):
self.kp = .6*ku
@property
def tu(self):
'''period of oscillation at ultimate gain'''
return 2*self.kp/self.ki
@tu.setter
def tu(self,tu):
self.ki = 2*self.kp/tu
self.kd = self.kp*tu/8
def ziegler_nichols(self,ku,tu,control_type='pid'):
'''
ku = ultimate gain
tu = period of oscillation at ultimate gain
'''
converter = dict(
p = lambda ku,tu: (.5*ku, 0, 0),
pi = lambda ku,tu: (.45*ku, 1.2*(.45*ku)/tu, 0),
pd = lambda ku,tu: (.8*ku, 0, (.8*ku)*tu/8),
pid = lambda ku,tu: (.6*ku, 2*(.6*ku)/tu, (.6*ku)*tu/8),
pessen = lambda ku,tu: (.7*ku, 2.5*(.7*ku)/tu, 3*(.7*ku)*tu/20),
some_overshoot = lambda ku,tu: (.33*ku, 2*(.33*ku)/tu, (.33*ku)*tu/3),
no_overshoot = lambda ku,tu: (.2*ku, 2*(.2*ku)/tu, (.2*ku)*tu/3)
)
self.kp,self.ki,self.kd = converter[control_type.lower()](ku,tu)
def main(linkup):
ksc = linkup.space_center
vessel = ksc.active_vessel
body = vessel.orbit.body
altitude = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'mean_altitude')
vertical_speed = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'vertical_speed')
speed = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'speed')
pitch = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'pitch')
heading = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'heading')
roll = linkup.add_stream(getattr, vessel.flight(body.reference_frame), 'roll')
con = vessel.control
experiments = {'goo' : vessel.parts.with_name('GooExperiment')}
capsule = vessel.parts.with_name('mk1pod')[0]
def crew_report(capsule):
report_module = capsule.modules[2]
report_action = report_module.actions[0]
report_module.trigger_event(report_action)
def observe_goo(goo_experiment):
goo_module = goo_experiment.modules[1]
observe_action = goo_module.actions[0]
goo_module.trigger_event(observe_action)
t0 = ksc.ut
pitch_con = Controller(set_point=pitch()*π/180,limits=(-1,1),kp=1,ki=0.6,kd=1,t0=t0)
heading_con = Controller(set_point=heading()*π/180,limits=(-1,1),kp=1,ki=0.6,kd=1,t0=t0)
roll_con = Controller(set_point=roll()*π/180,limits=(-1,1),kp=1,ki=0.6,kd=1,t0=t0)
observe_goo(experiments['goo'][0])
time.sleep(3)
con.activate_next_stage() # launch!
while speed() < 60:
t = ksc.ut
con.yaw = pitch_con(pitch()*π/180, t)
con.pitch = heading_con(heading()*π/180, t)
con.roll = roll_con(roll()*π/180, t)
time.sleep(0.001)
ftot = vessel.resources.amount('SolidFuel')
frem = ftot
while frem > 0.1:
frem = vessel.resources.amount('SolidFuel')
pitch_con.set_point = 15 - (15 - 50) * (ftot - frem) / ftot
t = ksc.ut
con.yaw = pitch_con(pitch()*π/180, t)
con.pitch = heading_con(heading()*π/180, t)
con.roll = roll_con(roll()*π/180, t)
time.sleep(0.001)
reported = False
while altitude() > 8000:
if not reported and vertical_speed() < 0:
observe_goo(experiments['goo'][1])
crew_report(capsule)
reported = True
time.sleep(1)
con.activate_next_stage() # parachutes
while vessel.situation != ksc.VesselSituation.splashed:  # wait for splashdown
time.sleep(1)
observe_goo(experiments['goo'][2])
main(linkup)
moi = np.array(vessel.moment_of_inertia)
itensor = np.array(vessel.inertia_tensor).reshape(3,3)
availtorque = np.array(vessel.available_torque)
print('''
moi: {moi}
itensor: {itensor}
availtorque: {availtorque}
'''.format(moi=moi, itensor=itensor, availtorque=availtorque))
Explanation: This is a simple launch. No reference to geo-position in control loop. Roll and yaw will be fixed at zero. Pitch will start at 90 deg until 60 m/s at which point we will pitch towards 30 deg above the eastern horizon at flame-out.
This is a booster-only single-body rocket with two stages: launch and parachute.
End of explanation
moment_of_inertia = 1733
max_torque = 5000
control_range = np.array([-1,1])
max_acceleration = max_torque * control_range[1] / moment_of_inertia
print('max accel:',max_acceleration,'rad / s^2')
print(max_acceleration*180/np.pi, 'deg / s^2')
Explanation: Given a moment of inertia vector around (pitch, roll, yaw) in kg m^2 and an available torque vector around (pitch, roll, yaw) in N m or kg m^2 / s^2, I want to create a controller (presumably PID) in the range [-1,1] for each axis.
End of explanation
print('control range at 2 deg / s^2:',(2*np.pi/180) * moment_of_inertia / max_torque)
Explanation: This is very fast. I only want a couple degrees per second^2 acceleration and not more than about 5 deg/sec rotation ever, so let's see what control range that represents:
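As a hedged illustration (not part of the original flight script), the control range computed above could be fed back into the Controller class as its output limits so that the commanded angular acceleration stays near 2 deg/s^2:
```python
# assumption: reuse the Controller class and the moment_of_inertia / max_torque values above
max_cmd = (2 * np.pi / 180) * moment_of_inertia / max_torque   # ~0.06 control units
gentle_pitch_con = Controller(set_point=0.0, limits=(-max_cmd, max_cmd), kp=1, ki=0.6, kd=1)
print('control output clipped to +/- %.3f' % max_cmd)
```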
End of explanation |
10,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Transformação-geométrica" data-toc-modified-id="Transformação-geométrica-1"><span class="toc-item-num">1 </span>Transformação geométrica</a></div><div class="lev2 toc-item"><a href="#Estudo-preparatório" data-toc-modified-id="Estudo-preparatório-11"><span class="toc-item-num">1.1 </span>Estudo preparatório</a></div><div class="lev2 toc-item"><a href="#Preparação-para-o-problema-a-ser-modificado" data-toc-modified-id="Preparação-para-o-problema-a-ser-modificado-12"><span class="toc-item-num">1.2 </span>Preparação para o problema a ser modificado</a></div><div class="lev2 toc-item"><a href="#Entendimento-da-iaffine-da-toolbox-ia898---execução-passo-a-passo" data-toc-modified-id="Entendimento-da-iaffine-da-toolbox-ia898---execução-passo-a-passo-13"><span class="toc-item-num">1.3 </span>Entendimento da iaffine da toolbox ia898 - execução passo a passo</a></div><div class="lev3 toc-item"><a href="#Parâmetros-de-entrada
Step1: Domain of the output image
The domain of the output image is made equal to the domain of the input image f.
domain refers to the shape of the output image. H and W refer to the
dimensions of the input image; in this case they are the same.
Step2: Computing the indices of the output image
Since we use indirect (inverse) mapping, the scan runs over the pixels of the
output image. Thus we use domain to generate the coordinates of
all the pixels of g. Note that we use r1,c1 here in place of the r',c' of the
mathematical equations.
Step3: Packing the homogeneous coordinates
The coordinates r1 and c1 are vectorized and stacked into 3 rows
Step4: Transforming the coordinates
Here the corresponding new coordinates are computed.
The coordinates of g are multiplied by the inverse of the matrix T (inverse mapping).
Note that the values are floating point
Step5: Nearest-neighbor interpolation
The floating-point coordinate in f is rounded to its nearest neighbor. The
rounding is done with NumPy's rint operation
Step6: Adjusting coordinates that fall outside the domain of f
This is where the values whose (rr,cc) fell outside the domain of f are filled in.
In this case, a "clipping" is applied, forcing these values into [0,H-1] and
[0,W-1], which form the domain of f
Step7: Copying the pixel values
Once all the coordinates have been computed, the pixels of image f at the
computed coordinates are copied into the pixels of image g. Note that
f is being indexed by two arrays r and c that have the same
dimensions as the input image f. This is NumPy's array-indexing
technique, which is very powerful and efficient. | Python Code:
import numpy as np
t = np.array([2.1, 0.8])
T = np.array([[1,0,t[1]],
[0,1,t[0]],
[0,0,1]])
f = np.array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9,10],
[11,12,13,14,15]])
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Transformação-geométrica" data-toc-modified-id="Transformação-geométrica-1"><span class="toc-item-num">1 </span>Transformação geométrica</a></div><div class="lev2 toc-item"><a href="#Estudo-preparatório" data-toc-modified-id="Estudo-preparatório-11"><span class="toc-item-num">1.1 </span>Estudo preparatório</a></div><div class="lev2 toc-item"><a href="#Preparação-para-o-problema-a-ser-modificado" data-toc-modified-id="Preparação-para-o-problema-a-ser-modificado-12"><span class="toc-item-num">1.2 </span>Preparação para o problema a ser modificado</a></div><div class="lev2 toc-item"><a href="#Entendimento-da-iaffine-da-toolbox-ia898---execução-passo-a-passo" data-toc-modified-id="Entendimento-da-iaffine-da-toolbox-ia898---execução-passo-a-passo-13"><span class="toc-item-num">1.3 </span>Entendimento da iaffine da toolbox ia898 - execução passo a passo</a></div><div class="lev3 toc-item"><a href="#Parâmetros-de-entrada:" data-toc-modified-id="Parâmetros-de-entrada:-131"><span class="toc-item-num">1.3.1 </span>Parâmetros de entrada:</a></div><div class="lev3 toc-item"><a href="#Domínio-da-imagem-de-saída" data-toc-modified-id="Domínio-da-imagem-de-saída-132"><span class="toc-item-num">1.3.2 </span>Domínio da imagem de saída</a></div><div class="lev3 toc-item"><a href="#Cálculo-dos-índices-da-imagem-de-saída" data-toc-modified-id="Cálculo-dos-índices-da-imagem-de-saída-133"><span class="toc-item-num">1.3.3 </span>Cálculo dos índices da imagem de saída</a></div><div class="lev3 toc-item"><a href="#Empacotamento-das-coordenadas-homogêneas" data-toc-modified-id="Empacotamento-das-coordenadas-homogêneas-134"><span class="toc-item-num">1.3.4 </span>Empacotamento das coordenadas homogêneas</a></div><div class="lev3 toc-item"><a href="#Transformação-da-coordenadas" data-toc-modified-id="Transformação-da-coordenadas-135"><span class="toc-item-num">1.3.5 </span>Transformação da coordenadas</a></div><div class="lev3 toc-item"><a href="#Interpolação-do-vizinho-mais-próximo" data-toc-modified-id="Interpolação-do-vizinho-mais-próximo-136"><span class="toc-item-num">1.3.6 </span>Interpolação do vizinho mais próximo</a></div><div class="lev3 toc-item"><a href="#Ajuste-das-coordenadas-que-caíram-fora-do-domínio-de-f" data-toc-modified-id="Ajuste-das-coordenadas-que-caíram-fora-do-domínio-de-f-137"><span class="toc-item-num">1.3.7 </span>Ajuste das coordenadas que caíram fora do domínio de <strong>f</strong></a></div><div class="lev3 toc-item"><a href="#Cópia-dos-valores-dos-pixels" data-toc-modified-id="Cópia-dos-valores-dos-pixels-138"><span class="toc-item-num">1.3.8 </span>Cópia dos valores dos pixels</a></div><div class="lev2 toc-item"><a href="#Teste-de-autoavaliação" data-toc-modified-id="Teste-de-autoavaliação-14"><span class="toc-item-num">1.4 </span>Teste de autoavaliação</a></div>
# Geometric transformation
While intensity transformations change only the value of each pixel, regardless of
its coordinates, geometric transformations remap coordinates without
modifying the value of each pixel.
The main exercise in this activity is to develop the program that will modify the
implementation of the affine transformation in the `ia898:affine` toolbox. To do that, we first need
to review the theory of 2D geometric transformations using homogeneous coordinates and
a transformation matrix.
## Preparatory study
Study carefully the text prepared for this experiment:
* [Introduction to geometric transformation](../master/tutorial_trans_geom_intro_2.ipynb)
Also see this other text, which exercises the `ia636:iaffine` transformation:
* `master:tutorial_trans_geom_2 Transformação geométrica com iaffine`
## Preparation for the problem to be modified
Now read the problem that must be handed in: `activity_mariecp_3_gg Modificar iaffine`.
The requested modification concerns the pixels of the transformed image that have no
correspondence in the input image. In the current implementation of iaffine, those values
are fetched from the original image by using NumPy's "clip" function. Change this behavior
so that those values become zero.
Although it looks easy, the solution requires a complete understanding of the
`ia636:iaffine` implementation. Using NumPy, look for a solution that is simple and efficient.
Next, the operation of `ia636:iaffine` is demonstrated step by step on a small
numeric image. The idea is to use this page as a draft of your solution. After following the
step-by-step walkthrough, edit it to obtain the desired result. Once you succeed, place your
modified function in the appropriate location for submission in Adessowiki's automatic
program-submission system.
## Understanding iaffine from the ia898 toolbox, step by step
### Input parameters:
We create the input image and the geometric transformation matrix T:
End of explanation
domain = f.shape
n = f.size
H,W = f.shape
print('domain:', domain)
Explanation: Domain of the output image
The domain of the output image is made equal to the domain of the input image f.
domain refers to the shape of the output image. H and W refer to the
dimensions of the input image; in this case they are the same.
End of explanation
r1,c1 = np.indices(domain)
print('r1=\n', r1)
print('c1=\n', c1)
Explanation: Computing the indices of the output image
Since we use indirect (inverse) mapping, the scan runs over the pixels of the
output image. Thus we use domain to generate the coordinates of
all the pixels of g. Note that we use r1,c1 here in place of the r',c' of the
mathematical equations.
End of explanation
rc1 = np.array([ r1.ravel(),
c1.ravel(),
np.ones(n)])
print('rc1=\n', rc1)
Explanation: Packing the homogeneous coordinates
The coordinates r1 and c1 are vectorized and stacked into 3 rows: r1, c1 and a row of 1's, so that they can be multiplied by T.
The vectorization of the arrays is done with NumPy's ravel():
End of explanation
rc_float = np.linalg.inv(T).dot(rc1)
print('rc_float=\n',rc_float)
Explanation: Transforming the coordinates
Here the corresponding new coordinates are computed.
The coordinates of g are multiplied by the inverse of the matrix T (inverse mapping).
Note that the values are floating point:
End of explanation
rr = np.rint(rc_float[0]).astype(int)
cc = np.rint(rc_float[1]).astype(int)
print('rr=\n', rr)
print('cc=\n', cc)
Explanation: Nearest-neighbor interpolation
The floating-point coordinate in f is rounded to its nearest neighbor. The
rounding is done with NumPy's rint operation:
End of explanation
r = np.clip(rr,0,H-1).reshape(domain)
c = np.clip(cc,0,W-1).reshape(domain)
print('r=\n', r)
print('c=\n', c)
Explanation: Adjusting coordinates that fall outside the domain of f
This is where the values whose (rr,cc) fell outside the domain of f are filled in.
In this case, a "clipping" is applied, forcing these values into [0,H-1] and
[0,W-1], which form the domain of f:
End of explanation
g = f[r,c]
print('g=\n', g)
Explanation: Copying the pixel values
Once all the coordinates have been computed, the pixels of image f at the
computed coordinates are copied into the pixels of image g. Note that
f is being indexed by two arrays r and c that have the same
dimensions as the input image f. This is NumPy's array-indexing
technique, which is very powerful and efficient.
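A minimal sketch of the modification requested earlier (zeroing pixels that have no correspondence in f instead of clipping them); it reuses the arrays computed above and is only an illustration, not the official ia898 solution:
```python
# mark coordinates that fall outside the domain of f, then zero those output pixels
valid = (rr >= 0) & (rr < H) & (cc >= 0) & (cc < W)
r = np.clip(rr, 0, H-1).reshape(domain)
c = np.clip(cc, 0, W-1).reshape(domain)
g_zero = f[r, c] * valid.reshape(domain)
print('g_zero=\n', g_zero)
```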
End of explanation |
10,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning, 2012.
[2] Diederik Kingma and Jimmy Ba. "Adam: A Method for Stochastic Optimization." ICLR 2015.
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
Receive inputs x and weights w
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
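A minimal sketch of what the forward pass needs to do (an illustration only, not the graded cs231n/layers.py code):
```python
def affine_forward_sketch(x, w, b):
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b   # flatten each example, then apply the affine map
    cache = (x, w, b)
    return out, cache
```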
End of explanation
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
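A sketch of the corresponding backward pass (illustration only; the graded version belongs in cs231n/layers.py):
```python
def affine_backward_sketch(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # route the gradient back to the input shape
    dw = x.reshape(N, -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db
```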
End of explanation
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
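A one-line sketch of the idea (illustration only): the ReLU forward pass is an elementwise maximum with zero.
```python
def relu_forward_sketch(x):
    out = np.maximum(0, x)
    cache = x
    return out, cache
```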
End of explanation
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
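Sketch of the idea (illustration only): the upstream gradient passes through only where the input was positive.
```python
def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)
    return dx
```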
End of explanation
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
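The composition pattern these convenience layers follow, sketched here for reference (the real implementations live in cs231n/layer_utils.py):
```python
def affine_relu_forward_sketch(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)

def affine_relu_backward_sketch(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return affine_backward(da, fc_cache)
```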
End of explanation
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
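For reference, a vectorized sketch of a softmax loss with gradient (an illustration consistent with the test below, not necessarily the exact code in cs231n/layers.py):
```python
def softmax_loss_sketch(x, y):
    N = x.shape[0]
    shifted = x - x.max(axis=1, keepdims=True)     # subtract the row max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    probs = np.exp(log_probs)
    loss = -log_probs[np.arange(N), y].mean()
    dx = probs.copy()
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx
```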
End of explanation
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
#pass
data_train_val = {
'X_train': data['X_train'],
'y_train': data['y_train'],
'X_val': data['X_val'],
'y_val': data['y_val']
}
model = TwoLayerNet(hidden_dim=256, reg=0.0)
solver = Solver(model=model, data=data, update_rule='sgd', optim_config={'learning_rate': 6e-4}, lr_decay=0.95, num_epochs=10, batch_size=200,print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
# weight_scale = 2e-2
# learning_rate = 9.5e-3
weight_scale = 5e-2
learning_rate = 5e-3
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 5e-2
learning_rate = 5e-3
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
[FILL THIS IN]
For the five-layer network, the final training loss is much more sensitive to weight_scale, because the deeper network is more easily trapped in poor local optima.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
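A sketch of the momentum update the test below checks (illustration only; the default hyperparameters shown are assumptions):
```python
def sgd_momentum_sketch(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw   # accumulate a velocity
    next_w = w + v
    config['velocity'] = v
    return next_w, config
```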
End of explanation
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
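As a rough guide to what the two rules compute, here is a condensed sketch of the core updates (the real functions keep cache, m, v and t in the config dict, and the default hyperparameters shown are assumptions):
```
import numpy as np

def rmsprop_sketch(w, dw, cache, learning_rate=1e-2, decay_rate=0.99, eps=1e-8):
    # Scale each parameter's step by a running average of its squared gradient
    cache = decay_rate * cache + (1 - decay_rate) * dw**2
    next_w = w - learning_rate * dw / (np.sqrt(cache) + eps)
    return next_w, cache

def adam_sketch(w, dw, m, v, t, learning_rate=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Momentum on the gradient plus RMSProp-style scaling, with bias correction
    t += 1
    m = beta1 * m + (1 - beta1) * dw
    v = beta2 * v + (1 - beta2) * dw**2
    m_hat = m / (1 - beta1**t)        # bias-corrected first moment
    v_hat = v / (1 - beta2**t)        # bias-corrected second moment
    next_w = w - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
    return next_w, m, v, t
```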
End of explanation
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the   #
# best_model variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
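Since the TODO above is left as pass, here is one possible starting point rather than a tuned solution; the architecture and hyperparameters below are assumptions to search over, and adding batch normalization / dropout (once those notebooks are done) is the natural next step.
```
# Sketch: a deeper fully-connected net trained with Adam (hyperparameters are guesses)
model = FullyConnectedNet([100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, data,
                num_epochs=10, batch_size=100,
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                print_every=100, verbose=True)
solver.train()
best_model = model
```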
End of explanation
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
10,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
text_set = set(text)
vocab_to_int = dict((word, index) for index, word in enumerate(text_set))
int_to_vocab = dict((index, word) for index, word in enumerate(text_set))
return vocab_to_int, int_to_vocab
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punct_tokens = {"." : "||period||",
"," : "||comma||",
"\"" : "||quotation_mark||",
";" : "||semicolon||",
"!" : "||exclamation_mark||",
"?" : "||question_mark||",
"(" : "||left_parentheses||",
")" : "||right_parentheses||",
"--" : "||dash||",
"\n" : "||return||"}
return punct_tokens
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return input, targets, learning_rate
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
n_layers = 2
keep_prob = 0.6
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)  # use the keep_prob defined above
cell = tf.contrib.rnn.MultiRNNCell([drop] * n_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embedded_input = tf.nn.embedding_lookup(embedding, input_data)
return embedded_input
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed_dim = 200
embed_input = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed_input)
logits = tf.contrib.layers.fully_connected(outputs,
vocab_size,
activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_elements = len(int_text)
n_batches = n_elements // (batch_size * seq_length)
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
batches = np.array(list(zip(x_batches, y_batches)))
return batches
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 1024
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 34
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
# DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
predicted_word = np.random.choice(list(int_to_vocab.values()),p=probabilities)
return predicted_word
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
10,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists, numpy arrays and all those confusing essential things that we do all the time
Package importing
Step1: Or you can also import a particular function or a subpackage from a package like this
Step2: Lists in python
Lists are some of the most useful data structures in basic python. You can put anything you want in a list, you can change individual cells, elements can be lists themselves too.
Step3: An extremely useful pre-defined function is "range"
Step4: One of the most useful things about lists in python, is that they can be defined recursively, in a very "math" way
Step5: Numpy array and indexing
You can create a numpy array a couple of different ways. One of them is to use np.arange, the equivalent of range() we saw for lists, but also creates floats
The BEST thing about numpy arrays, is that you can do math on them treating them as single numbers, and it will do the operations to EACH element by default!
Step6: OR we can produce lists and then convert them to numpy arrays in a straightforward way
Step7: Indexing is pretty flexible | Python Code:
import numpy as np
np.random.random() #Random real number uniformly distributed between 0 and 1
np.random.normal() #Random real number following Gaussian distribution with mean=0 and standard deviation=1
Explanation: Lists, numpy arrays and all those confusing essential things that we do all the time
Package importing
End of explanation
from numpy import random
random.normal()
#OR:
from numpy.random import normal
normal()
Explanation: Or you can also import a particular function or a subpackage from a package like this
End of explanation
mylist=[1,2,3,'a',True,[5,6]]
print mylist
print mylist[5]
Explanation: Lists in python
Lists are some of the most useful data structures in basic Python. You can put anything you want in a list, you can change individual elements in place, and elements can themselves be lists.
End of explanation
print range(5)
print range(4,10)
Explanation: An extremely useful pre-defined function is "range":
End of explanation
listA=range(10)
listB=[2*elem for elem in listA]
print listA
print listB
listC=[(idx,2**elem) for idx,elem in enumerate(listA)]
listD=[2**elem for idx,elem in enumerate(listA) if elem>4]
print listC
print listD
listE=[('id:'+str(idx),'power:'+str(2**elem)) for idx,elem in enumerate(listA)]
print listE
Explanation: One of the most useful things about lists in Python is that they can be built from other lists with comprehensions, in a very math-like, set-builder way
End of explanation
print np.arange(10)
print np.arange(5,10)
print np.arange(1.5,4,.5) #start, end (excluded) and step
print np.arange(3,4,.1)
print 1/np.arange(0,10.5,.5)
Explanation: Numpy array and indexing
You can create a numpy array in a couple of different ways. One of them is np.arange, the equivalent of the range() we saw for lists, except that it also supports floats (non-integer start, stop and step).
The BEST thing about numpy arrays is that you can do math on them as if they were single numbers, and the operation is applied to EACH element by default!
End of explanation
print np.array([1,2,3])
print np.array([3**x for x in np.arange(0,4,.5)])
#THIS CAN BE DONE BETTER WITH THIS:
myarray=3**np.arange(0,4,.5)
print myarray
print myarray.shape
print myarray.shape[0]
Explanation: OR we can produce lists and then convert them to numpy arrays in a straightforward way:
End of explanation
print myarray[3:5] #3 to 5, 5 excluded
print myarray[3:] #3 onwards
print myarray[:4] #Until 4, i.e. 0,1,2,3
print myarray[:-2] #Until two from the end, i.e. exclude the last two
indexing_list=[3,4,5]
print myarray[indexing_list]
print np.ones(4)
boolean_indexing=np.ones_like(myarray)
print boolean_indexing
boolean_indexing[[4,5,6]]=False
print boolean_indexing.astype('bool')
print np.arange(myarray.shape[0])[boolean_indexing.astype('bool')]
print np.ones((3,4))
y,x=np.mgrid[:4,:5]
print x
print y
print x**2+y**2
Explanation: Indexing is pretty flexible
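As a small aside, the float mask built above can also be written directly as a boolean array, which is the more common idiom (a sketch reusing the myarray defined earlier):
```
# Boolean-mask indexing: build a True/False array and index with it directly
mask = np.ones(myarray.shape[0], dtype=bool)
mask[[4, 5, 6]] = False
print myarray[mask] # every element except indices 4, 5 and 6
print np.arange(myarray.shape[0])[mask]
```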
End of explanation |
10,107 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have encountered a problem that, I want to get the intermediate result of a Pipeline instance in sklearn. | Problem:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
import pandas as pd
data = load_data()
pipe = Pipeline([
("tf_idf", TfidfVectorizer()),
("nmf", NMF())
])
pipe.fit_transform(data.test)
tf_idf_out = pipe.named_steps['tf_idf'].transform(data.test) |
10,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-lr', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-LR
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
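For ENUM properties with cardinality 0.N or 1.N like this one, more than one choice may be recorded. A sketch of one way to do that is below, assuming DOC.set_value appends on repeated calls (an assumption, not verified against pyesdoc):
```
# Illustration only - each value must come from the valid choices listed in the cell above
DOC.set_value("water")
DOC.set_value("energy")
```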
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
10,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNNs tutorial
Step1: An LSTM/RNN overview
Step2: Note that when we create the builder, it adds the internal RNN parameters to the model.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
Step3: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the .h() method, which returns a list of expressions, one for each layer
Step4: The same interface that we saw until now for the LSTM holds also for the Simple RNN
Step5: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it
Step6: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNNs are shaped as a stack
Step7: Aside
Step8: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
Step9: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
Step10: Notice that
Step11: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences. | Python Code:
# we assume that we have the dynet module in your path.
# OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
from dynet import *
Explanation: RNNs tutorial
End of explanation
model = Model()
NUM_LAYERS=2
INPUT_DIM=50
HIDDEN_DIM=10
builder = LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
# or:
# builder = SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
Explanation: An LSTM/RNN overview:
A (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where $h_i$ indicates the time dimension.
Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ receives as input also $r_{i-1}$.
In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is, we have several layers of sequences:
$h_1^3,...,h_k^3$
$h_1^2,...,h_k^2$
$h_1^1,...,h_k^1$,
Let $r_i^j$ be the output of cell $h_i^j$. Then:
The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$.
The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$,
and so on.
The LSTM (RNN) Interface
RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of defining the parameters for the sequence.
End of explanation
s0 = builder.initial_state()
x1 = vecInput(INPUT_DIM)
s1=s0.add_input(x1)
y1 = s1.output()
# here, we add x1 to the RNN, and the output we get from the top is y (a HIDDEN_DIM-dim vector)
y1.npvalue().shape
s2=s1.add_input(x1) # we can add another input
y2=s2.output()
Explanation: Note that when we create the builder, it adds the internal RNN parameters to the model.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
End of explanation
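# Hedged aside (not part of the original tutorial): because the builder registered its
# internal parameters in `model`, a trainer over `model` updates them together with any
# other parameters. A toy illustration, reusing only calls that appear later in this notebook:
trainer = SimpleSGDTrainer(model)
toy_loss = -log(pick(softmax(s2.output()), 0))  # arbitrary scalar loss, purely illustrative
toy_loss_value = toy_loss.value()  # forward pass
toy_loss.backward()
trainer.update()  # also updates the LSTM builder's internal parameters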
print s2.h()
Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the .h() method, which returns a list of expressions, one for each layer:
End of explanation
# create a simple rnn builder
rnnbuilder=SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
# initialize a new graph, and a new sequence
rs0 = rnnbuilder.initial_state()
# add inputs
rs1 = rs0.add_input(x1)
ry1 = rs1.output()
print "all layers:", s1.h()
print s1.s()
Explanation: The same interface that we saw until now for the LSTM holds also for the Simple RNN:
End of explanation
rnn_h = rs1.h()
rnn_s = rs1.s()
print "RNN h:", rnn_h
print "RNN s:", rnn_s
lstm_h = s1.h()
lstm_s = s1.s()
print "LSTM h:", lstm_h
print "LSTM s:", lstm_s
Explanation: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it:
1. the state of the current RNN column
2. the input x
The state is then returned, and we can call its output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state.
.s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, which keeps an extra "memory" cell that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the .s() method.
The output of .s() differs by the type of RNN being used. For the simple-RNN, it is the same as .h(). For the LSTM, it is more involved.
End of explanation
s2=s1.add_input(x1)
s3=s2.add_input(x1)
s4=s3.add_input(x1)
# let's continue s3 with a new input.
s5=s3.add_input(x1)
# we now have two different sequences:
# s0,s1,s2,s3,s4
# s0,s1,s2,s3,s5
# the two sequences share parameters.
assert(s5.prev() == s3)
assert(s4.prev() == s3)
s6=s3.prev().add_input(x1)
# we now have an additional sequence:
# s0,s1,s2,s6
s6.h()
s6.s()
Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNNs are shaped as a stack: we can remove the top and continue from the previous state.
This is done either by remembering the previous state and continuing it with a new .add_input(),
or by accessing the previous state of a given state using the .prev() method of the state.
Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has random /0 initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the results of a call to .final_s(). TODO: this is not supported yet.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
states = state.add_inputs(xs)
outputs = [s.output() for s in states]
hs = [s.h() for s in states]
print outputs, hs
Explanation: Aside: memory efficient transduction
The RNNState interface is convenient, and allows for incremental input construction.
However, sometimes we know the sequence of inputs in advance, and care only about the sequence of
output expressions. In this case, we can use the add_inputs(xs) method, where xs is a list of Expression.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
outputs = state.transduce(xs)
print outputs
Explanation: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
End of explanation
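# Hedged extra example (assumption: transduce is available on any builder's state in this
# dynet version): the same one-shot transduction also works with the LSTM builder from above.
lstm_outputs = builder.initial_state().transduce([x1, x1, x1])
print lstm_outputs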
import random
from collections import defaultdict
from itertools import count
import sys
LAYERS = 2
INPUT_DIM = 50
HIDDEN_DIM = 50
characters = list("abcdefghijklmnopqrstuvwxyz ")
characters.append("<EOS>")
int2char = list(characters)
char2int = {c:i for i,c in enumerate(characters)}
VOCAB_SIZE = len(characters)
model = Model()
srnn = SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model)
lstm = LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model)
params = {}
params["lookup"] = model.add_lookup_parameters((VOCAB_SIZE, INPUT_DIM))
params["R"] = model.add_parameters((VOCAB_SIZE, HIDDEN_DIM))
params["bias"] = model.add_parameters((VOCAB_SIZE))
# return compute loss of RNN for one sentence
def do_one_sentence(rnn, sentence):
# setup the sentence
renew_cg()
s0 = rnn.initial_state()
R = parameter(params["R"])
bias = parameter(params["bias"])
lookup = params["lookup"]
sentence = ["<EOS>"] + list(sentence) + ["<EOS>"]
sentence = [char2int[c] for c in sentence]
s = s0
loss = []
for char,next_char in zip(sentence,sentence[1:]):
s = s.add_input(lookup[char])
probs = softmax(R*s.output() + bias)
loss.append( -log(pick(probs,next_char)) )
loss = esum(loss)
return loss
# generate from model:
def generate(rnn):
def sample(probs):
rnd = random.random()
for i,p in enumerate(probs):
rnd -= p
if rnd <= 0: break
return i
# setup the sentence
renew_cg()
s0 = rnn.initial_state()
R = parameter(params["R"])
bias = parameter(params["bias"])
lookup = params["lookup"]
s = s0.add_input(lookup[char2int["<EOS>"]])
out=[]
while True:
probs = softmax(R*s.output() + bias)
probs = probs.vec_value()
next_char = sample(probs)
out.append(int2char[next_char])
if out[-1] == "<EOS>": break
s = s.add_input(lookup[next_char])
return "".join(out[:-1]) # strip the <EOS>
# train, and generate every 5 samples
def train(rnn, sentence):
trainer = SimpleSGDTrainer(model)
for i in xrange(200):
loss = do_one_sentence(rnn, sentence)
loss_value = loss.value()
loss.backward()
trainer.update()
if i % 5 == 0:
print loss_value,
print generate(rnn)
Explanation: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
End of explanation
sentence = "a quick brown fox jumped over the lazy dog"
train(srnn, sentence)
sentence = "a quick brown fox jumped over the lazy dog"
train(lstm, sentence)
Explanation: Notice that:
1. We pass the same rnn-builder to do_one_sentence over and over again.
We must re-use the same rnn-builder, as this is where the shared parameters are kept.
2. We renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence.
The parameters will be shared through the model and the shared rnn-builder.
End of explanation
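# Hedged extra check (not in the original text): once trained, we can also sample from a
# model directly, without computing a loss.
print generate(lstm)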
train(srnn, "these pretzels are making me thirsty")
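# Hedged follow-up (an assumption, mirroring the srnn cell above): train the LSTM on the
# same harder sentence for comparison; exact losses and samples vary between runs.
train(lstm, "these pretzels are making me thirsty")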
Explanation: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences.
End of explanation |
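As a quick follow-up sketch (the sentence is arbitrary), the same unseen sentence can be fed to the LSTM for comparison:
train(lstm, "these pretzels are making me thirsty")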
10,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parse angles
Demonstrate how to convert direction strings to angles.
The code below shows how to parse directional text into angles.
It also demonstrates the function's flexibility
in handling various string formatting.
Step1: Create a test value of a directional text
Step2: Now throw that string into the function to calculate
the corresponding angle
Step3: The function can also handle arrays of string
in many different abbrieviations and capitalizations | Python Code:
import metpy.calc as mpcalc
Explanation: Parse angles
Demonstrate how to convert direction strings to angles.
The code below shows how to parse directional text into angles.
It also demonstrates the function's flexibility
in handling various string formatting.
End of explanation
dir_str = 'SOUTH SOUTH EAST'
print(dir_str)
Explanation: Create a test value of a directional text
End of explanation
angle_deg = mpcalc.parse_angle(dir_str)
print(angle_deg)
Explanation: Now throw that string into the function to calculate
the corresponding angle
End of explanation
dir_str_list = ['ne', 'NE', 'NORTHEAST', 'NORTH_EAST', 'NORTH east']
angle_deg_list = mpcalc.parse_angle(dir_str_list)
print(angle_deg_list)
Explanation: The function can also handle arrays of strings
in many different abbreviations and capitalizations
End of explanation |
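A further usage sketch, assuming (as in recent MetPy versions) that parse_angle returns unit-tagged Quantities, so the results can be converted to other angular units:
print(angle_deg_list.to('radian'))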
10,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pyomo - Getting started
Pyomo installation
Step1: Ice cream example
This example is taken from the following book
Step3: Abstract Model
AbstHLinScript.py in https | Python Code:
from pyomo.environ import *
Explanation: Pyomo - Getting started
Pyomo installation: see http://www.pyomo.org/installation
pip install pyomo
End of explanation
instance = ConcreteModel(name="Linear (H)")
A = ['I_C_Scoops', 'Peanuts']
h = {'I_C_Scoops': 1, 'Peanuts': 0.1}
d = {'I_C_Scoops': 5, 'Peanuts': 27}
c = {'I_C_Scoops': 3.14, 'Peanuts': 0.2718}
b = 12
u = {'I_C_Scoops': 100, 'Peanuts': 40.6}
def x_bounds(m, i):
return (0,u[i])
instance.x = Var(A, bounds=x_bounds)
def obj_rule(instance):
return sum(h[i]*(1 - u[i]/d[i]**2) * instance.x[i] for i in A)
instance.z = Objective(rule=obj_rule, sense=maximize)
instance.budgetconstr = Constraint(expr = sum(c[i] * instance.x[i] for i in A) <= b)
# @tail:
opt = SolverFactory('glpk')
results = opt.solve(instance) # solves and updates instance
instance.display()
# @:tail
Explanation: Ice cream example
This example is taken from the following book: Pyomo - Optimization Modeling in Python by W. E. Hart & al. , Second Edition, Springer (p.19)
$$
\begin{align}
\max_{x} & \quad \sum_{i \in \mathcal{A}} h_i (1 - u_i/d_i^2) x_i \\
\text{s.t.} & \quad \sum_{i \in \mathcal{A}} c_i x_i \leq b \\
& \quad 0 \leq x_i \leq u_i, \quad i \in \mathcal{A}
\end{align}
$$
Concrete Model
ConcHLinScript.py in https://github.com/Pyomo/pyomo/tree/master/examples/doc/pyomobook/optimization-ch
End of explanation
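If you prefer to pull the optimal values of the concrete model out programmatically rather than via display(), a small sketch using the same instance as above:
print("z* =", value(instance.z))
print("x* =", [value(instance.x[i]) for i in A])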
DATA_STR = """# Pyomo data file for AbstractH.py
set A := I_C_Scoops Peanuts ;
param h := I_C_Scoops 1 Peanuts 0.1 ;
param d :=
I_C_Scoops 5
Peanuts 27 ;
param c := I_C_Scoops 3.14 Peanuts 0.2718 ;
param b := 12 ;
param u := I_C_Scoops 100 Peanuts 40.6 ;
"""
with open("AbstractH.dat", "w") as fd:
print(DATA_STR, file=fd)
!cat AbstractH.dat
model = AbstractModel(name="Simple Linear (H)")
model.A = Set()
model.h = Param(model.A)
model.d = Param(model.A)
model.c = Param(model.A)
model.b = Param()
model.u = Param(model.A)
def xbounds_rule(model, i):
return (0, model.u[i])
model.x = Var(model.A, bounds=xbounds_rule)
def obj_rule(model):
return sum(model.h[i] * (1 - model.u[i]/model.d[i]**2) * model.x[i] for i in model.A)
model.z = Objective(rule=obj_rule, sense=maximize)
def budget_rule(model):
return summation(model.c, model.x) <= model.b
model.budgetconstr = Constraint(rule=budget_rule)
# @tail:
opt = SolverFactory('glpk')
instance = model.create_instance("AbstractH.dat")
results = opt.solve(instance) # solves and updates instance
instance.display()
# @:tail
Explanation: Abstract Model
AbstHLinScript.py in https://github.com/Pyomo/pyomo/tree/master/examples/doc/pyomobook/optimization-ch
End of explanation |
10,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression Model
Dataset Information
No. of Features
Step1: Data Ingestion<a name='data ingestion'></a>
Step2: Features & Target Arrays<a name='features and target arrays'></a>
Step3: LogisticRegression Model<a name='logreg'></a>
Step4: Hyperparameter Tuning<a name='hyperparameter tuning'></a>
Step5: Classification Report<a name='classification report'></a>
Step6: Confusion Matrix <a name='confusion matrix'></a>
Step7: Class Balance Plot<a name='class balance'></a>
Step8: Save Model<a name='pickle'></a>
Step9: Return to Table of Contents | Python Code:
%matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import yellowbrick as yb
sns.set_palette('RdBu', 10)
Explanation: Logistic Regression Model
Dataset Information
No. of Features: 12
No. of Instances: 4492
Table of Contents<a name='table of contents'></a>
Data Ingestion
Features & Target Arrays
Logistic Regression Model
Hyperparameter Tuning
a. Classification Report
b. Confusion Matrix
c. Class Balance Plot
Save Model
End of explanation
URL = 'https://raw.githubusercontent.com/georgetown-analytics/classroom-occupancy/master/models/sensor_data_ml.csv'
def fetch_data(fname='sensor_data_ml.csv'):
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:
f.write(response.content)
return outpath
# Defining fetching data from the URL
DATA = fetch_data()
# Import as pandas dataframe with DateTimeIndex: df
df = pd.read_csv('sensor_data_ml.csv', index_col='datetime', parse_dates=True)
# Rename columns
df.columns = ['temp', 'humidity', 'co2', 'light', 'light_st', 'noise',
'bluetooth', 'images', 'door', 'occupancy_count', 'occupancy_level']
df.info()
df.head()
Explanation: Data Ingestion<a name='data ingestion'></a>
End of explanation
# Breakdown of classroom occupancy levels
df.occupancy_level.value_counts()
# Encode multiclass target variable
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit_transform(df['occupancy_level'])
X = df.drop('occupancy_level', axis=1).values
y = df['occupancy_level']
# Use TimeSeriesSplit to create training and test set split indices
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=12)
for train_index, test_index in tscv.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
Explanation: Features & Target Arrays<a name='features and target arrays'></a>
End of explanation
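Note that the loop above simply ends up keeping the last (largest) chronological fold as the train/test split; an equivalent explicit sketch:
splits = list(tscv.split(X))
train_index, test_index = splits[-1]
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]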
# Initial cross-validation scores
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# Fit logistic regression classifier onto the training data: logreg
logreg = LogisticRegression().fit(X_train, y_train)
# Print the 12-fold cross-validation scores
cv_scores = cross_val_score(logreg, X_train, y_train, cv=tscv)
print('Logistic Regression Cross-Validation Scores')
print(cv_scores)
print('Average 12-Fold CV Score: {:.4f}'.format(np.mean(cv_scores)))
# Initial classification report
from sklearn.metrics import classification_report
# Predict the labels of the test set: y_pred
y_pred = logreg.predict(X_test)
# Compute and print the classification report and training and test scores
print('Logistic Regression Model')
print(classification_report(y_test, y_pred))
print('Training set score: {:.4f}'.format(logreg.score(X_train, y_train)))
print('Test set score: {:.4f}'.format(logreg.score(X_test, y_test)))
Explanation: LogisticRegression Model<a name='logreg'></a>
End of explanation
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.01, 0.1, 1, 10, 100, 110, 120], 'class_weight':[None, 'balanced']}
grid = GridSearchCV(LogisticRegression(), param_grid, cv=tscv)
logreg_clf = grid.fit(X_train, y_train)
print('Best estimator:\n{}'.format(logreg_clf.best_estimator_))
print('Logistic Regression Model')
print('Best Score: {:.4f}'.format(logreg_clf.best_score_))
print('Best parameters: {}'.format(logreg_clf.best_params_))
# Accuracy scores after tuning C parameter
# Predict the labels of the test set: y_pred
y_pred = logreg_clf.predict(X_test)
print('Training set score: {:.4f}'.format(logreg_clf.score(X_train, y_train)))
print('Test set score: {:.4f}'.format(logreg_clf.score(X_test, y_test)))
Explanation: Hyperparameter Tuning<a name='hyperparameter tuning'></a>
End of explanation
# Compute and print the classification report and training and test scores
print('Logistic Regression Model')
print(classification_report(y_test, y_pred))
from sklearn.metrics import f1_score, precision_score, recall_score
print('Logistic Regression F1 Scores')
print('F1 Score - micro: {:.4f}'.format(f1_score(y_test, y_pred, average='micro')))
print('F1 Score - weighted: {:.4f}'.format(f1_score(y_test, y_pred, average='weighted')))
print('F1 Score - macro: {:.4f}'.format(f1_score(y_test, y_pred, average='macro')))
from yellowbrick.classifier import ClassificationReport
classes = ['Empty', 'High', 'Low', 'Mid-Level']
fig = plt.figure()
visualizer = ClassificationReport(logreg_clf, classes=classes)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
fig.savefig('ml_graphs/logreg_classification_report.png')
Explanation: Classification Report<a name='classification report'></a>
End of explanation
from sklearn.metrics import confusion_matrix
print('Logistic Regression Confusion Matrix')
print(confusion_matrix(y_test, y_pred))
Explanation: Confusion Matrix <a name='confusion matrix'></a>
End of explanation
from yellowbrick.classifier import ClassBalance
classes = ['Empty', 'High', 'Low', 'Mid-Level']
visualizer = ClassBalance(logreg_clf, classes=classes)
fig = plt.figure()
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
fig.savefig('ml_graphs/logreg_class_balance.png')
Explanation: Class Balance Plot<a name='class balance'></a>
End of explanation
import pickle
logreg_model = 'logreg_model.sav'
# Save fitted model to disk
pickle.dump(logreg_clf, open(logreg_model, 'wb'))
Explanation: Save Model<a name='pickle'></a>
End of explanation
# Test model
loaded_model = pickle.load(open(logreg_model, 'rb'))
result = loaded_model.score(X_test, y_test)
print(result)
Explanation: Return to Table of Contents
End of explanation |
10,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Submission 4 from <a href="http
Step1: Load dataset
Step2: Utilities function
Step3: Extract data
Step4: Modified imputation method using MLPRegressor
Step5: Feature Augmentation method from Bestagini
Step6: Neural Network
Step7: Validation with Leave One Well Out on Training Dataset
Step8: Applying to Test Dataset | Python Code:
import numpy as np
np.random.seed(1337)
import warnings
warnings.filterwarnings("ignore")
import time as tm
import pandas as pd
from keras.models import Sequential, Model
from keras.constraints import maxnorm
from keras.layers import Dense, Dropout, Activation
from keras.utils import np_utils
from sklearn.metrics import f1_score, recall_score, accuracy_score, confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut
from sklearn import preprocessing
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
Explanation: Submission 4 from <a href="http://petroanalytix.com/">PetroAnalytix Team</a>
In this notebook, we try NN with several ideas/code from other contestants:
* Alan Richardson (Ausar Geophysical) - PE imputation, method changed using MLPRegressor
* <a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini</a> - Feature augmentation
* Model separation between Marine and Non-Marine
End of explanation
training_data = pd.read_csv('../data/facies_vectors.csv')
Explanation: Load dataset
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '.g')
ax[1].plot(logs.ILD_log10, logs.Depth, '.')
ax[2].plot(logs.DeltaPHI, logs.Depth, '.', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '.', color='r')
ax[4].plot(logs.PE, logs.Depth, '.', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Utilities function
End of explanation
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
Explanation: Extract data
End of explanation
from sklearn.neural_network import MLPRegressor
reg = MLPRegressor()
DataImpAll = training_data.drop(['Formation', 'Well Name', 'Depth', 'FaciesLabels'], axis=1).copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
training_data.ix[:,"PE"] = X[:,4]
Explanation: Modified imputation method using MLPRegressor
End of explanation
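A quick sanity-check sketch that the imputed PE column no longer contains missing values:
print(training_data["PE"].isnull().sum())  # should be 0 after imputation
print(training_data["PE"].describe())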
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Marine Models
Org_data = training_data
training_data = training_data[training_data["NM_M"]==1]
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
well = training_data['Well Name'].values
depth = training_data['Depth'].values
X, padded_rows = augment_features(X, well, depth, N_neig=1)
X1org = X
y1org = y
Explanation: Feature Augmentation method from Bestagini
End of explanation
def fDNN(in_dim, out_dim):
# Model
model = Sequential()
model.add(Dense(152, input_dim=in_dim, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(out_dim, activation='softmax'))
# Compilation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Explanation: Neural Network
End of explanation
logo = LeaveOneGroupOut()
nb_classes = 9
epoch = 10
bats = 20
t0 = tm.time()
f1s_ls = []
acc_ls = []
adj_ls = []
from scipy.signal import medfilt
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
# Scaling
scaler = preprocessing.MinMaxScaler().fit(X)
X_tr = scaler.transform(X[train])
X_te = scaler.transform(X[test])
Y_tr = np_utils.to_categorical(y[train], nb_classes)
in_dim = len(X_tr[0])
# Method initialization
mlp = fDNN(in_dim, nb_classes)
# Training
mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_hat = mlp.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=5)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = f1_score(y[test], y_hat, average="micro", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
print("{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1w", np.average(f1s_ls)*100, "Avg F1m", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print((t1-t0), "seconds")
# Non - Marine
training_data = Org_data
training_data = training_data[training_data["NM_M"]==2]
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
well = training_data['Well Name'].values
depth = training_data['Depth'].values
X, padded_rows = augment_features(X, well, depth, N_neig=1)
X2org =X
y2org = y
f1s_ls = []
acc_ls = []
adj_ls = []
from scipy.signal import medfilt
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
# Scaling
scaler = preprocessing.MinMaxScaler().fit(X)
X_tr = scaler.transform(X[train])
X_te = scaler.transform(X[test])
Y_tr = np_utils.to_categorical(y[train], nb_classes)
in_dim = len(X_tr[0])
# Method initialization
mlp = fDNN(in_dim, nb_classes)
# Training
mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_hat = mlp.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=5)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = f1_score(y[test], y_hat, average="micro", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
print("{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1w", np.average(f1s_ls)*100, "Avg F1m", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print((t1-t0), "seconds")
Explanation: Validation with Leave One Well Out on Training Dataset
End of explanation
Org_blind_data = pd.read_csv('../data/nofacies_data.csv')
blind_data = Org_blind_data[Org_blind_data["NM_M"]==1]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X1org)
X_train = scl.transform(X1org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y1org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==1,"Facies"] = y_blind + 1 # return the original value (1-9)
blind_data = Org_blind_data[Org_blind_data["NM_M"]==2]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X2org)
X_train = scl.transform(X2org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y2org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==2,"Facies"] = y_blind + 1 # return the original value (1-9)
Org_blind_data.to_csv("PA_Team_Submission_4-revised.csv")
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'STUART'],
facies_colors)
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'CRAWFORD'],
facies_colors)
Explanation: Applying to Test Dataset
End of explanation |
10,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some notes about recent dolo development
Toolchain
build tool
Step1: Stateless / Statefree
Step2: two tools to improve the situation
Step3: Stateless approach in dolo
Step4: Towards more separatation between pure objects and discretized ones
Step5: Processes and Distributions
Three separate concepts | Python Code:
# numba-ification
from dolo.numeric.grids import UniformCartesianGrid, NonUniformCartesianGrid
grid = UniformCartesianGrid(min=[0.0, 0.0], max=[1.0, 1.0], n=[10, 10])
display(grid)
display(grid.__numba_repr__()) # fully type inferrable by numba / interpolation.py
import numba
numba.typeof(grid.__numba_repr__())
import numpy as np
grid = NonUniformCartesianGrid([np.linspace(0,1,10), np.linspace(0,1,10)])
numba.typeof( grid.__numba_repr__() )
Explanation: Some notes about recent dolo development
Toolchain
build tool: poetry
simplifies dependency management
setup.py disappeared
pyproject.toml
install poetry then pip install .
tag-based releases:
automatic release on pypi when tag is pushed to master
semi-automatic release from pypi to conda
documentation switch to github/actions + mkdocs:
old: www.econforge.org/dolo
new: www.econforge.org/dolo.py
built and published on tag-release
advertisement: try mamba instead of conda !
Development choices
semi-object oriented programming:
type hierarchy
mostly stateless design
most objects are simple datastructures with simple internal state
multiple dispatch
use optional typing:
mostly for documentation
almost completely useless at runtime
performance
numba-ification out of reach (no @jitclass) but...
some objects have a typed numba equivalent
Numbaification
End of explanation
# problem 1
class Model:
def __init__(self, a=1, b=2):
self.a = a
self.b = b
def solve(self):
return self.a + self.b
d = dict(a=1, b=2.5)
model = Model(**d)
model.solve() # -> 3.4
d['a'] = 1.5 # bad student
model.solve() # -> I paid for this school, why don't I get 4.0 ?
# solutions:
# - terminate student
# - promote no reference style of coding:
m = Model()
m.a = 1
m.b = 2
# - educate by providing feedback when parameter is set (see below)
class Model:
    def __init__(self, a=1):
        self.a = a
    def update(self):
        self.b = self.a+1
    def solve(self):
        return self.a+self.b
model = Model()
model.update() # bad design
model.solve()
# stateless / stateful
class Model:
a = 1
def update():
self.b = self.a+1
def solve():
returb self.a+self.b
model = Model()
model.update()
model.solve()
model.a = 2
# model.update() # bad design
model.solve()
Explanation: Stateless / Statefree
End of explanation
class Model:
@property
    def a(self):
return self._a_
@a.setter
def a(self, value):
print(f"Parameter 'a' set to {value}") # for the bad student
self._a_ = value
self.update()
    def update(self):
        self.b = self.a+1
    def solve(self):
        return self.a + self.b
model = Model()
model.a = 2 # triggers update()
model.solve() # -> 5
## ok but requires lots of recalculations with many parameters
# better solution with global flag
class Model:
__parameters_reset__ = True
@property
    def a(self):
return self._a_
@a.setter
def a(self, value):
print(f"Parameter 'a' set to {value}") # for the bad student
self._a_ = value
self.update()
@property
    def b(self):
        return self._b_
    @b.setter
def b(self, value):
print(f"Parameter 'b' set to {value}") # for the bad student
self._b_ = value
self.update()
    def update(self):
        if self.__parameters_reset__:
            self.c = self.a+1
            self.__parameters_reset__ = False
    def solve(self):
        return self.a + self.b
model = Model()
model.a = 2 # triggers update()
model.solve() # -> 3
# same code with traits: (many possible syntaxes)
# better solution with global flag
@magiclass
class Model:
a: Parameter[Float]
b: Parameter[Float]
@depends(['a', 'b'])
def update():
if __parameters_reset__:
self.c = self.a+1
__parameters_reset__ = False
@depends(['update'])
def solve():
return self.a + self.b
model = Model()
model.a = 2.0 # resets `self.__reset__['update']` and `self.__reset__['solve']` to True
model.b = 2.0 # same
model.solve() # will need to recompute update(), which in turn, will take new values of a and b
# other approach
class MagicalDict:
dependencies: Dict[]
def __init__(self, model):
self.model = model
def __setitem__(self, k, v):
pass #
class Model:
calibration: MagicalDict
model = Model()
model.calibration['a'] = 1 # same behaviour
Explanation: two tools to improve the situation:
- user-experience:
- properties
- developer experience:
- decorators / annotations
- traits
End of explanation
# mostly stateless
from dolo import *
model = yaml_import("examples/models/rbc_iid.yaml")
dr = time_iteration(model, details=False, verbose=False)
simulate(model, dr, N=10)
# with some exceptions
model.set_calibration(beta=0.96) # (sets global flag)
sol = time_iteration(model, details=True, verbose=False)
sol
model.discretize()[1]
sol.dprocess
Explanation: Stateless approach in dolo
End of explanation
model.domain
grid, dprocess = model.discretize()
display( grid )
display( dprocess )
# model.grid will disappear but there might be a DiscretizedModel object with a pointer to its parent.
Explanation: Towards more separatation between pure objects and discretized ones
End of explanation
from dolo.numeric.distribution import *
dist = UNormal(μ=0, σ=0.1)
from matplotlib import pyplot as plt
f = lambda x: x**2
scip = dist.integrate(f)
discr = [dist.discretize(N=n).integrate(f) for n in range(1,10)]
discr_gh = [dist.discretize(N=n, method="gauss-hermite").integrate(f) for n in range(1,10)]
plt.plot(discr)
plt.plot(discr_gh)
plt.hlines(scip,*plt.xlim())
import numpy as np
trunc = Truncation[UNormal](dist=dist, lb=0, ub=np.Inf)
dtrunc = trunc.discretize()
plt.plot(dtrunc.points.ravel(), dtrunc.weights, 'o')
p = ProductDistribution([dist, trunc])
pp = p.discretize()
plt.plot(pp.points[:,0], pp.points[:,1],'o')
index = Bernouilli(π=0.2)
mix = Mixture(index=index, distributions={0: dist, 1: trunc})
mix.discretize().integrate(f)
UNormal.simulate?
from dolo import *
model = yaml_import("examples/models/rbc.yaml")
model.exogenous
Explanation: Processes and Distributions
Three separate concepts:
distribution ($\epsilon$):
does dist.draw(N: int), dist.integrate(f)
processes $(\epsilon_t)_{t}$:
does dist.simulate()
discretized process:
does dist.node(i), dist.inode(i,j), dist.iweight(i,j)
used to solve models
With multiple inheritance, it is possible to belong to several classes:
a distribution is a process
a markov chain is both a discretized process and a process
What matters most to me:
establish stable user-facing conventions in yaml file (***)
processes names must match distributions
documented here: http://www.econforge.org/dolo.py/processes/
solution: align with Distributions.jl/R/rvlib for conventions
a wrapper is almost certainly needed
define clean language API allowing for more flexibility (**) (truncations, conditional, etc.)
simplify development process (*)
transaprent API
synchronize Python API, with Dolang API
less code duplication within projects (language_element decoretor @language)
Distributions
all distributions are multi-dimensional
1d distribution is a special class
with pdf, cdf function
automatic equiprobable discretization
they carry variable names
End of explanation |
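A small sketch exercising the distribution-level API listed above, using the UNormal object dist defined earlier (the method names follow the list above and may differ slightly between dolo versions):
x = dist.draw(1000)                      # distribution: draw(N)
m2 = dist.integrate(lambda e: e**2)      # distribution: integrate(f)
dp = dist.discretize(N=5)                # discretized process
print(dp.inode(0, 0), dp.iweight(0, 0))  # node/weight access used by solvers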
10,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Examples
Batfish questions can return a huge amount of data, which you may want to filter in various ways based on your task. While most Batfish questions support basic filtering, they may not support your desired filtering criteria. Further, for performance, you may want to fetch the answer once and filter it using multiple different criteria. These scenarios are where Pandas-based filtering can help.
Batfish answers can be easily turned into a Pandas DataFrame (using .frame()), after which you can use the full power of Pandas to filter and manipulate data. This notebook provides a few examples of common manipulations for Batfish. It is not intended as a complete guide of Pandas data manipulation.
Let's first initialize a snapshot that we will use in our examples.
Step2: Filtering initIssues
After initializing the snapshot, you often want to look at the <code>initIssues</code> answer. If there are too many issues, you may want to ignore a particular class of issues. We show below how to do that.
Step3: In the code above, we are using the Pandas method <code>apply</code> to map issues to a binary array based on whether the issue has one of the substrings in line_texts_to_ignore. Passing axis=1 makes apply iterate over rows instead of columns. The helper method has_substring makes this determination. It returns True if text is not None and has any of the substrings. The Python method <code>any</code> returns True if any element of the input iterable is True. Using the binary array as a filter for issues produces rows that match our criterion.
Instead of ignoring some issues, you may want to focus on issues that match a certain criteria. That too can be easily accomplished, as follows.
Step4: The code above is similar to the one we used earlier, with the only differences being that we use the focus_details list as the argument to the has_substrings helper and we do not invert its result.
Filtering objects
Step5: To filter based on a column, we need to know its data type. We can learn that in the Batfish documentation or by inspecting the answer we got from Batfish (e.g., using Python's type() method).
We show three examples of filtering based on the Interface and Active columns, which are of type <code>pybatfish.datamodel.primitives.Interface</code> and bool, respectively. The former has hostname and interface properties (which are strings).
Step6: Filtering columns
When viewing Batfish answers, you may want to view only some of the columns. Pandas makes that easy for both original answers and answers where some rows have been filtered, as both of them are just DataFrames.
Step7: Counting rows
Often, you would be interested in counting the number of rows in the filtered answer. This is super easy because Python's len() method, which we use for iterables, can be used on DataFrames as well.
Step8: Grouping rows
For more advanced operations than filtering rows and columns, chances are that you will find Pandas <code>groupyby</code> pretty handy. This method enables you to group rows using a custom criteria and analyze those groups. For instance, if you wanted to group interfaces by nodes, you may do the following
Step9: We obtained a Pandas DataFrameGroupBy object above. The groupby method iterates over row indexes (apply iterated over rows), calls the lambda over each, and groups rows whose indices yield the same value. In our example, the lambda first gets the row using interfaces.loc[index], then gets the interface (which is of type pybatfish.datamodel.primitives.Interface), and finally the hostname.
DataFrameGroupBy objects offer many functions that are useful for analysis. We demonstrate two of them below.
Step10: Here, we used the <code>get_group</code> method to get all information for 'exitgw', thus viewing all interfaces for that node. This is possible using row filtering as well, but we can do other things that are not, such as | Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize a network and a snapshot
bf.set_network("pandas-example")
SNAPSHOT_NAME = "snapshot"
SNAPSHOT_PATH = "networks/hybrid-cloud/"
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
Explanation: Pandas Examples
Batfish questions can return a huge amount of data, which you may want to filter in various ways based on your task. While most Batfish questions support basic filtering, they may not support your desired filtering criteria. Further, for performance, you may want to fetch the answer once and filter it using multiple different criteria. These scenarios are where Pandas-based filtering can help.
Batfish answers can be easily turned into a Pandas DataFrame (using .frame()), after which you can use the full power of Pandas to filter and manipulate data. This notebook provides a few examples of common manipulations for Batfish. It is not intended as a complete guide of Pandas data manipulation.
Let's first initialize a snapshot that we will use in our examples.
End of explanation
# Lets get the initIssues for our snapshot
issues = bf.q.initIssues().answer().frame()
issues
# Ignore all issues whose Line_Text contain one of these as a substring
line_texts_to_ignore = ["transceiver"]
def has_substring(text: Optional[str], substrings: List[str]) -> bool:
    """Returns True if 'text' is not None and contains one of the 'substrings'"""
return text is not None and any(substr in text for substr in substrings)
issues[
issues.apply(
lambda issue: not has_substring(issue["Line_Text"], line_texts_to_ignore),
axis=1,
)
]
Explanation: Filtering initIssues
After initializing the snapshot, you often want to look at the <code>initIssues</code> answer. If there are too many issues, you may want to ignore a particular class of issues. We show below how to do that.
End of explanation
# Only show issues whose details match these substrings
focus_details = ["Unrecognized element 'ServiceDetails' in AWS"]
issues[
issues.apply(lambda issue: has_substring(issue["Details"], focus_details), axis=1)
]
Explanation: In the code above, we are using the Pandas method <code>apply</code> to map issues to a boolean array based on whether the issue has one of the substrings in line_texts_to_ignore. Passing axis=1 makes apply iterate over rows instead of columns. The helper method has_substring makes this determination. It returns True if text is not None and has any of the substrings. The Python built-in <code>any</code> returns True if any element of the input iterable is True. Using the boolean array as a filter for issues produces rows that match our criterion.
Instead of ignoring some issues, you may want to focus on issues that match certain criteria. That too can be easily accomplished, as follows.
End of explanation
# Fetch interface properties and display its first five rows
interfaces = bf.q.interfaceProperties().answer().frame()
interfaces.head(5)
Explanation: The code above is similar to the one we used earlier, with the only differences being that we use the focus_details list as the argument to the has_substrings helper and we do not invert its result.
Filtering objects
End of explanation
# Display all interfaces on node 'exitgw'
interfaces[interfaces.apply(lambda row: row["Interface"].hostname == "exitgw", axis=1)]
# Display all GigabitEthernet interfaces on node 'exitgw'
interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet"),
axis=1,
)
]
# Display all active GigabitEthernet interfaces on node 'exitgw'
interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet")
and row["Active"],
axis=1,
)
]
Explanation: To filter based on a column, we need to know its data type. We can learn that in the Batfish documentation or by inspecting the answer we got from Batfish (e.g., using Python's type() method).
We show three examples of filtering based on the Interface and Active columns, which are of type <code>pybatfish.datamodel.primitives.Interface</code> and bool, respectively. The former has hostname and interface properties (which are strings).
End of explanation
# Filter interfaces to all active GigabitEthernet interfaces on node exitgw
exitgw_gige_active_interfaces = interfaces[
interfaces.apply(
lambda row: row["Interface"].hostname == "exitgw"
and row["Interface"].interface.startswith("GigabitEthernet")
and row["Active"],
axis=1,
)
]
# Display only the Interface and All_Prefixes columns of the filtered DataFrame
exitgw_gige_active_interfaces[["Interface", "All_Prefixes"]]
Explanation: Filtering columns
When viewing Batfish answers, you may want to view only some of the columns. Pandas makes that easy for both original answers and answers where some rows have been filtered, as both of them are just DataFrames.
End of explanation
# Show the number of rows in the filtered DataFrame that we obtained above
len(exitgw_gige_active_interfaces)
Explanation: Counting rows
Often, you would be interested in counting the number of rows in the filtered answer. This is super easy because Python's len() method, which we use for iterables, can be used on DataFrames as well.
End of explanation
# Get interfaces grouped by node name
intefaces_by_hostname = interfaces.groupby(
lambda index: interfaces.loc[index]["Interface"].hostname
)
Explanation: Grouping rows
For more advanced operations than filtering rows and columns, chances are that you will find Pandas <code>groupby</code> pretty handy. This method enables you to group rows using a custom criterion and analyze those groups. For instance, if you wanted to group interfaces by nodes, you may do the following:
End of explanation
# Display the rows corresponding to node 'exitgw' group
intefaces_by_hostname.get_group("exitgw")
Explanation: We obtained a Pandas DataFrameGroupBy object above. The groupby method iterates over row indexes (apply iterated over rows), calls the lambda over each, and groups rows whose indices yield the same value. In our example, the lambda first gets the row using interfaces.loc[index], then gets the interface (which is of type pybatfish.datamodel.primitives.Interface), and finally the hostname.
DataFrameGroupBy objects offer many functions that are useful for analysis. We demonstrate two of them below.
End of explanation
# Display the number of interfaces per node
intefaces_by_hostname.count()[["Interface"]]
Explanation: Here, we used the <code>get_group</code> method to get all information for 'exitgw', thus viewing all interfaces for that node. This is possible using row filtering as well, but we can do other things that are not, such as:
End of explanation |
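As another sketch along the same lines, a custom aggregation can be applied to each group, e.g. counting only the active interfaces per node (reusing the boolean Active column):
intefaces_by_hostname.apply(lambda df: df["Active"].sum())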
10,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Breaking a variable to levels
The scenario for this tutorial is that, you have a series of a variable, such as the population density of different cities. And, you need to classify them into different groups according to this variable, e.g. the very high, medium high, medium, medium low, very low population density, etc.
In some cases, you already have a GeoDataFrame/DataFrame, in other cases, you just have a list that contain the numbers. So, the following cover two major functions
Step1: read a demo file, and take a look
Step2: take a look at the data distribution. using seaborn distplot.
Step3: the above plot showed that the data is potentially an exponential distribution.
so lets try to make the yscale log.
Step4: using different break method
Step5: Normally, the level_list is used to be assign to the gdf. This is what I did in other functions of mapping.
Step6: cuts contain the breaking values, and the min/max at the both end of the list.
Step7: quantile has a similar count for each level.
Lets try some other break method.
Step8: specifying the number of level
The number of level is set to the parameter break_N, which is default to 5.
After setting the break_N to N, the number of cuts become N+1, because it contain both the largest and the smallest values.
Step9: note that what head_tail_break do for increased number of levels.
specifying cuts manually
There are two ways of using the cuts. This will return a cut list, and a level_list that is in the same length and same sequence with the input vector.
using quantile as method, and the cuts are some float numbers betweent 0-1.
using manual as method, and the cuts are some user defined cuts.
NOTE that the cut list has to include the minimum and maximum values.
Step10: breaking a list instead of a column of a dataframe
Let say you have a list, instead of a dataframe/geodataframe.
Step11: And you want to get the break levels, another function is also provided (the function that is called by tm.leveling_vector). | Python Code:
import geopandas as gpd # for reading and manipulating shapefiles
import matplotlib.pyplot as plt # for making figure
import seaborn as sns # for making distplot
from colouringmap import theme_mapping as tm # a function named leveling_vector in tm will be used
from colouringmap import breaking_levels as bk # a function named get_levels in bk will be used
# magic line for matplotlib figures to be shown inline in the jupyter cell
%matplotlib inline
Explanation: Breaking a variable to levels
The scenario for this tutorial is that you have a series of a variable, such as the population density of different cities, and you need to classify the observations into different groups according to this variable, e.g. very high, medium high, medium, medium low, and very low population density.
In some cases, you already have a GeoDataFrame/DataFrame; in other cases, you just have a list that contains the numbers. So, the following covers two major functions:
tm.leveling_vector, which takes a dataframe and a column name for the classifying; and
bk.get_levels, which takes a list.
Both functions take a break_method for the breaking method, such as quantile (default), head_tail_break, natural_break, equal_interval (and manual).
They take a break_N parameter for specifying the number of groups.
And they also take a break_cuts.
First, import the things that are needed.
End of explanation
grid_res = gpd.read_file('data/community_results.shp')
grid_res.head()
Explanation: read a demo file, and take a look
End of explanation
sns.distplot(grid_res['usercount'], kde=False)
Explanation: Take a look at the data distribution using seaborn's distplot.
End of explanation
ax = sns.distplot(grid_res['usercount'], kde=False)
#ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
Explanation: The above plot shows that the data potentially follows an exponential distribution,
so let's make the y-scale logarithmic.
End of explanation
level_list, cuts = tm.leveling_vector(grid_res, 'usercount') #, break_method='quantile') #default method is quantile
Explanation: Using different break methods:
quantile
head_tail_break
natural_break
equal_interval
The following is the simplest way of converting a column of a gdf to levels
End of explanation
grid_res['user_level'] = level_list
grid_res.head()
Explanation: Normally, the level_list is assigned back to the gdf. This is what I do in the other mapping functions.
End of explanation
cuts
ax = sns.distplot(grid_res['usercount'], kde=False)
#ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
for c in cuts:
ax.axvline(x=c)
lev = list(set(level_list))
count = [ level_list.count(l) for l in lev ]
print lev
print count
Explanation: cuts contains the breaking values, plus the min/max at both ends of the list.
End of explanation
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break')
print cuts
ax = sns.distplot(grid_res['usercount'], kde=False)
#ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
for c in cuts:
ax.axvline(x=c)
lev = list(set(level_list))
count = [ level_list.count(l) for l in lev ]
print lev
print count
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='natural_break')
print cuts
ax = sns.distplot(grid_res['usercount'], kde=False)
#ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
for c in cuts:
ax.axvline(x=c)
lev = list(set(level_list))
count = [ level_list.count(l) for l in lev ]
print lev
print count
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='equal_interval')
print cuts
ax = sns.distplot(grid_res['usercount'], kde=False)
#ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
for c in cuts:
ax.axvline(x=c)
lev = list(set(level_list))
count = [ level_list.count(l) for l in lev ]
print lev
print count
Explanation: quantile gives a similar count for each level.
Let's try some other break methods.
End of explanation
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=3)
print cuts
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=5)
print cuts
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=7)
print cuts
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='head_tail_break', break_N=9)
print cuts
Explanation: Specifying the number of levels
The number of levels is set with the parameter break_N, which defaults to 5.
After setting break_N to N, the number of cuts becomes N+1, because it contains both the largest and the smallest values.
End of explanation
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='quantile', break_cuts=[0.,.25,.5,.75,1.])
print cuts
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='quantile', break_cuts=[0.,0.1,.5,.99,1.])
print cuts
level_list, cuts = tm.leveling_vector(grid_res, 'usercount', break_method='manual', break_cuts=[0.0, 120, 490, 1200, 2200, 4506.0])
print cuts
Explanation: Note what head_tail_break does as the number of levels increases.
Specifying cuts manually
There are two ways of using the cuts. Both return a cut list and a level_list that has the same length and order as the input vector.
Using quantile as the method, the cuts are float numbers between 0 and 1.
Using manual as the method, the cuts are user-defined values.
NOTE that the cut list has to include the minimum and maximum values.
End of explanation
a_list = grid_res['usercount'].tolist()
Explanation: breaking a list instead of a column of a dataframe
Let's say you have a list instead of a dataframe/geodataframe.
End of explanation
level_list, cuts = bk.get_levels(a_list, method='head_tail_break', N=5)
print cuts
len(level_list)==len(a_list)
Explanation: If you want to get the break levels from a plain list, another function is also provided (the function that is called internally by tm.leveling_vector).
End of explanation |
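As a quick follow-up sketch, mirroring the earlier per-level count check, the list-based result can be tallied the same way:
lev = list(set(level_list))
count = [ level_list.count(l) for l in lev ]
print lev
print count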
10,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Horizon Graph
Originally called "two-tone pseudo-coloring", a horizon graph increases the density of time series graphs by dividing and layering filled line charts.
Intro
Step1: Visual attributes
<img src='./img/visual_attributes.png' width=600>
Line Chart
Cleveland's Banking to the 45 - aspect ratio & perception.
Related paper on Banking to the 45
Stacked Graph
good | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from bokeh.charts import Horizon, output_file, show
from bokeh.io import output_notebook
%matplotlib inline
output_notebook()
Explanation: Horizon Graph
Originally called "two-tone pseudo-coloring", a horizon graph increases the density of time series graphs by dividing and layering filled line charts.
Intro:
* We review Sizing the Horizon: The Effects of Chart Size and Layering on the Graphical Perception of Time Series Visualizations.
We will use this example to illustrate these definitions:
Mirror
Offset
Band
Outline:
* Problem: creating an effective presentation of multiple time series.
* Goal: display more charts in a fixed area.
* Reason: viewers need to quickly and reliably spot trends.
Initial thoughts?
* Stock example.
End of explanation
# read in some stock data from the Yahoo Finance API
start_date = '2014'
end_date = '2017'
AAPL = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=AAPL&a=0&b=1&c={sd}&d=0&e=1&f={ed}".format(sd=start_date,ed=end_date),
parse_dates=['Date'])
MSFT = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=MSFT&a=0&b=1&c={sd}&d=0&e=1&f={ed}".format(sd=start_date,ed=end_date),
parse_dates=['Date'])
IBM = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=IBM&a=0&b=1&c={sd}&d=0&e=1&f={ed}".format(sd=start_date,ed=end_date),
parse_dates=['Date'])
TWTR = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=TWTR&a=0&b=1&c={sd}&d=0&e=1&f={ed}".format(sd=start_date,ed=end_date),
parse_dates=['Date'])
FB = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=FB&a=0&b=1&c={sd}&d=0&e=1&f={ed}".format(sd=start_date,ed=end_date),
parse_dates=['Date'])
data = dict([
('AAPL', AAPL['Adj Close']),
('Date', AAPL['Date']),
('FB', FB['Adj Close']),
('MSFT', MSFT['Adj Close']),
#('IBM', IBM['Adj Close']),
('TWTR', TWTR['Adj Close'])]
)
hp = Horizon(data, x='Date'
, plot_width=800
, plot_height=300,
title="horizon plot using stock inputs")
#output_file("horizon.html")
show(hp)
Explanation: Visual attributes
<img src='./img/visual_attributes.png' width=600>
Line Chart
Cleveland's Banking to the 45 - aspect ratio & perception.
Related paper on Banking to the 45
Stacked Graph
good: aggregate patterns
Cleveland: perception of position vs length
Horizon Graph Code
Using Bokeh, we can quickly build a horizon graph example.
End of explanation |
10,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 2
Step1: If we target a liquid system, we should not set up the particles in a lattice,
as this introduces unwanted structure in the starting configuration.
We define our system size by the number of particles and the density.
The system parameters lead to the following values
Step2: We save the force field parameters in python dictionaries, now with parameters for the walls
Step3: To finally calculate the box size, we take into account the diameter of the electrode interaction.
Additionally, ELC needs a particle-free gap in the $z$-direction behind the wall.
Step4: In the next snippet, we add the walls to our system. Our constraint takes two arguments
Step5: Now we place the particles at random position without overlap with the walls
Step6: The scheme to set up the Lennard-Jones interaction is the same as before,
extended by the Electrode-Ion interactions
Step7: Next is the Lennard-Jones equilibration, followed by the thermostat
Step8: As described, we use P$^3$M in combination with ELC to account for the 2D-periodicity.
ELC is also added to the <tt>actors</tt> of the system and takes gap size and maximum
pairwise errors as arguments.
Step9: For now, our electric field is zero, but we want to switch it on later.
Here we run over all particles and set an external force on the charges caused
by the field
Step10: This is followed by our standard temperature equilibration
Step11: In the integration loop, we like to measure the density profile for both ion species along the $z$-direction.
We use a simple histogram analysis to accumulate the density data. Integration takes a while.
Step12: Finally, we calculate the average, normalize the data with the bin volume and save it to
a file using NumPy's <tt>savetxt</tt> command.
Step13: Finally we can plot the density of the ions. | Python Code:
from espressomd import System, electrostatics, electrostatic_extensions
from espressomd.shapes import Wall
import espressomd
import numpy
Explanation: Tutorial 2: A Simple Charged System, Part 2
7 2D Electrostatics and Constraints
In this section, we use the parametrized NaCl system from the last task to simulate a molten salt in a
parallel plate capacitor with and without applied electric field. We have to extend our simulation by several aspects:
Confinement
ESPResSo features a number of basic shapes like cylinders, walls or spheres to simulate confined systems.
Here, we use two walls at $z = 0$ and $z = L_z$ for the parallel plate setup ($L_z$: box length in $z$-direction)
2D-Electrostatics
ESPResSo also has a number of ways to account for the unwanted electrostatic interaction in the now non-periodic $z$-dimension.
We use the 3D-periodic P$^3$M algorithm in combination with the Electrostatic Layer Correction (ELC).
ELC subtracts the forces caused by the periodic images in the $z$-dimension. Another way would be to use the explicit 2D-electrostatics algorithm
MMM2D, also available in ESPResSo.
Electric Field
The simple geometry of the system allows us to treat an electric field in $z$-direction as a homogeneous force.
Note that we use inert walls here and don't take into account the dielectric contrast caused by metal electrodes.
Parameters
For our molten NaCl, we use a temperature $100 \ \mathrm{K}$ above the melting point ($1198.3 \ \mathrm{K}$)
and an approximated density of $\rho = 1.1138 \ \mathrm{u\,\mathring{A}^{-3}}$ found in [1].
Let's walk through the python script. We need additional imports for the wall shapes and the ELC algorithm:
End of explanation
required_features = ["EXTERNAL_FORCES", "MASS", "ELECTROSTATICS", "LENNARD_JONES"]
espressomd.assert_features(required_features)
print(espressomd.features())
# System parameters
n_part = 1000
n_ionpairs = n_part / 2
density = 1.1138
time_step = 0.001823
temp = 1198.3
gamma = 50
#l_bjerrum = 0.885^2 * e^2/(4*pi*epsilon_0*k_B*T)
l_bjerrum = 130878.0 / temp
wall_margin = 0.5
Ez = 0
num_steps_equilibration = 3000
num_configs = 200
integ_steps_per_config = 100
Explanation: If we target a liquid system, we should not set up the particles in a lattice,
as this introduces unwanted structure in the starting configuration.
We define our system size by the number of particles and the density.
The system parameters lead to the following values:
End of explanation
# Particle parameters
types = {"Cl": 0, "Na": 1, "Electrode": 2}
numbers = {"Cl": n_ionpairs, "Na": n_ionpairs}
charges = {"Cl": -1.0, "Na": 1.0}
lj_sigmas = {"Cl": 3.85, "Na": 2.52, "Electrode": 3.37}
lj_epsilons = {"Cl": 192.45, "Na": 17.44, "Electrode": 24.72}
lj_cuts = {"Cl": 3.0 * lj_sigmas["Cl"],
"Na": 3.0 * lj_sigmas["Na"],
"Electrode": 3.0 * lj_sigmas["Electrode"]}
masses = {"Cl": 35.453, "Na": 22.99, "Electrode": 12.01}
Explanation: We save the force field parameters in python dictionaries, now with parameters for the walls:
End of explanation
# Setup System
box_l = (n_ionpairs * sum(masses.values()) / density)**(1. / 3.)
box_z = box_l + 2.0 * (lj_sigmas["Electrode"] + wall_margin)
elc_gap = box_z * 0.15
system = System(box_l=[box_l, box_l, box_z + elc_gap])
system.seed = 42
box_volume = numpy.prod([box_l, box_l, box_z])
system.periodicity = [True, True, True]
system.time_step = time_step
system.cell_system.skin = 0.3
Explanation: To finally calculate the box size, we take into account the diameter of the electrode interaction.
Additionally, ELC needs a particle-free gap in the $z$-direction behind the wall.
End of explanation
# Walls
system.constraints.add(shape=Wall(dist=wall_margin, normal=[0, 0, 1]),
particle_type=types["Electrode"])
system.constraints.add(shape=Wall(dist=-(box_z - wall_margin), normal=[0, 0, -1]),
particle_type=types["Electrode"])
Explanation: In the next snippet, we add the walls to our system. Our constraint takes two arguments:
First the <tt>shape</tt>, in our case a simple plane defined by its normal vector and the distance from the origin,
second the <tt>particle_type</tt>, which is used to set up the interaction between particles and constraints.
End of explanation
# Place particles
for i in range(int(n_ionpairs)):
p = numpy.random.random(3) * box_l
p[2] += lj_sigmas["Electrode"]
system.part.add(id=len(system.part), type=types["Cl"],
pos=p, q=charges["Cl"], mass=masses["Cl"])
for i in range(int(n_ionpairs)):
p = numpy.random.random(3) * box_l
p[2] += lj_sigmas["Electrode"]
system.part.add(id=len(system.part), type=types["Na"],
pos=p, q=charges["Na"], mass=masses["Na"])
Explanation: Now we place the particles at random positions without overlap with the walls:
End of explanation
# Lennard-Jones interactions parameters
def combination_rule_epsilon(rule, eps1, eps2):
if rule == "Lorentz":
return (eps1 * eps2)**0.5
else:
raise ValueError("No combination rule defined")
def combination_rule_sigma(rule, sig1, sig2):
if rule == "Berthelot":
return (sig1 + sig2) * 0.5
else:
return ValueError("No combination rule defined")
for s in [["Cl", "Na"], ["Cl", "Cl"], ["Na", "Na"],
["Na", "Electrode"], ["Cl", "Electrode"]]:
lj_sig = combination_rule_sigma("Berthelot",
lj_sigmas[s[0]], lj_sigmas[s[1]])
lj_cut = combination_rule_sigma("Berthelot",
lj_cuts[s[0]], lj_cuts[s[1]])
lj_eps = combination_rule_epsilon("Lorentz",
lj_epsilons[s[0]], lj_epsilons[s[1]])
system.non_bonded_inter[types[s[0]], types[s[1]]].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig, cutoff=lj_cut, shift="auto")
Explanation: The scheme to set up the Lennard-Jones interaction is the same as before,
extended by the Electrode-Ion interactions:
End of explanation
energy = system.analysis.energy()
print("Before Minimization: E_total = {:.3e}".format(energy['total']))
system.minimize_energy.init(f_max=10, gamma=10, max_steps=2000,
max_displacement=0.01)
system.minimize_energy.minimize()
energy = system.analysis.energy()
print("After Minimization: E_total = {:.3e}".format(energy['total']))
# Set thermostat
system.thermostat.set_langevin(kT=temp, gamma=gamma, seed=42)
Explanation: Next is the Lennard-Jones equilibration, followed by the thermostat:
End of explanation
# Tuning Electrostatics
p3m = electrostatics.P3M(prefactor=l_bjerrum * temp,
accuracy=1e-2)
system.actors.add(p3m)
elc = electrostatic_extensions.ELC(gap_size=elc_gap,
maxPWerror=1e-3)
system.actors.add(elc)
Explanation: As described, we use P$^3$M in combination with ELC to account for the 2D-periodicity.
ELC is also added to the <tt>actors</tt> of the system and takes gap size and maximum
pairwise errors as arguments.
End of explanation
for p in system.part:
p.ext_force = [0, 0, Ez * p.q]
Explanation: For now, our electric field is zero, but we want to switch it on later.
Here we run over all particles and set an external force on the charges caused
by the field:
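A minimal sketch (not part of the original script) of how the field could be switched on later: pick a nonzero value and reapply the same loop. The value below is purely illustrative.
Ez = 0.1  # hypothetical field strength in simulation units
for p in system.part:
    p.ext_force = [0, 0, Ez * p.q]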
End of explanation
# Temperature Equilibration
system.time = 0.0
for i in range(int(num_steps_equilibration / 100)):
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
print("progress={:.0f}%, t={:.1f}, E_total={:.2f}, E_coulomb={:.2f}, T={:.4f}"
.format(i * 100. / int(num_steps_equilibration / 100 - 1), system.time,
energy['total'], energy['coulomb'], temp_measured), end='\r')
system.integrator.run(100)
print()
Explanation: This is followed by our standard temperature equilibration:
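The instantaneous temperature printed in the cell above follows from equipartition: assuming the reduced units of the tutorial (where $k_B = 1$), $T_{measured} = \frac{2}{3} \frac{E_{kin}}{N}$, which is exactly the expression used for <tt>temp_measured</tt>.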
End of explanation
# Integration
bins = 100
z_dens_na = numpy.zeros(bins)
z_dens_cl = numpy.zeros(bins)
system.time = 0.0
cnt = 0
for i in range(num_configs):
print('progress: {:>3.0f}%'.format(i * 100. / num_configs), end='\r')
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
system.integrator.run(integ_steps_per_config)
for p in system.part:
bz = int(p.pos[2] / box_z * bins)
if p.type == types["Na"]:
z_dens_na[bz] += 1.0
elif p.type == types["Cl"]:
z_dens_cl[bz] += 1.0
cnt += 1
print('progress: 100%')
Explanation: In the integration loop, we would like to measure the density profile for both ion species along the $z$-direction.
We use a simple histogram analysis to accumulate the density data. Integration takes a while.
End of explanation
# Analysis
# Average / Normalize with Volume
z_dens_na /= (cnt * box_volume / bins)
z_dens_cl /= (cnt * box_volume / bins)
z_values = numpy.linspace(0, box_l, num=bins)
res = numpy.column_stack((z_values, z_dens_na, z_dens_cl))
numpy.savetxt("z_density.data", res,
header="#z rho_na(z) rho_cl(z)")
Explanation: Finally, we calculate the average, normalize the data with the bin volume and save it to
a file using NumPy's <tt>savetxt</tt> command.
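For reference, the normalization factor used above corresponds to the volume of one histogram slab times the number of sampled configurations:
# per-bin slab volume (box_l x box_l x box_z/bins); dividing by cnt averages over configurations
bin_volume = box_volume / bins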
End of explanation
import matplotlib.pyplot as plt
plt.ion()
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(z_values, z_dens_na, label='Na')
plt.plot(z_values, z_dens_cl, label='Cl')
plt.xlabel('$z$-axis $(\\mathrm{\\AA})$', fontsize=20)
plt.ylabel('Density $(\\mathrm{u\\AA}^{-3})$', fontsize=20)
plt.legend(fontsize=16)
plt.show()
Explanation: Finally we can plot the density of the ions.
End of explanation |
10,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DM_Halos and DM_IGM
Splitting $\langle DM_{cosmic}\rangle$ into its constituents.
Step1: $\langle \rho_{diffuse, cosmic}\rangle$
Use f_diffuse to calculate the average mass fraction of diffuse gas and diffuse gas density (physical). Math described in DM_cosmic.ipynb.
Step2: $\langle n_{e,cosmic}\rangle$
Step3: $\langle DM_{cosmic}\rangle$
See DM_cosmic.ipynb for details regarding its computation.
Step4: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
The fraction of free electrons present in halos should be equal to the fraction of diffuse gas in halos assuming the ionization state of the individual species is only dependent on redshift (and not gas density as well).
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{diffuse,halos}}{\rho_{diffuse,cosmic}} \\
& = \frac{\rho_{b, halos}f_{hot}}{\rho_{b, cosmic}f_{diffuse, cosmic}}
\end{aligned}
$$
Here $\rho_b$ refers to baryon density. $f_{hot}$ refers to the fraction of baryons in halos that is in the hot phase ($\sim10^7$ K). The remaining baryons are either in the neutral phase or in dense objects like stars. Assuming halos have the same baryon mass fraction as the universal average ($\Omega_b/\Omega_M$)
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{m, halos}f_{hot}}{\rho_{m, cosmic}f_{diffuse, cosmic}} \\
& = \frac{f_{halos} f_{hot}}{f_{diffuse, cosmic}}
\end{aligned}
$$
$f_{halos}$ can be computed as a function of redshift by integrating the halo mass function (HMF) times mass over some mass range and dividing it by the density of matter in the universe. This allows us to compute a line of sight integral of $\langle n_{e, halos} \rangle$ to get $\langle DM_{halos}\rangle$. $\langle DM_{IGM}\rangle$ is just obtained by subtracting this from $\langle DM_{cosmic}\rangle$.
Apart from $f_{hot}$ being an obvious free parameter, we also allow variation in the radial extent of halos. This is encoded in the parameter $r_{max}$ which is the radial extent of halos in units of $r_{200}$. Setting $r_{max}>1$ (for all halos; currently it is mass independent) smoothly extends the NFW profile and the modifid profile of the encased diffuse baryons. | Python Code:
# imports
from importlib import reload
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline as IUS
from astropy import units as u
from frb.halos.models import ModifiedNFW
from frb.halos import models as frb_halos
from frb.halos import hmf as frb_hmf
from frb.dm import igm as frb_igm
from frb.figures import utils as ff_utils
from matplotlib import pyplot as plt
plt.rcParams['font.size'] = 17
Explanation: DM_Halos and DM_IGM
Splitting $\langle DM_{cosmic}\rangle$ into its constituents.
End of explanation
help(frb_igm.f_diffuse)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
f_diffuse, rho_diffuse = frb_igm.f_diffuse(zvals, return_rho = True)
# Plot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
ax1.plot(zvals, f_diffuse, lw=2)
ax1.set_ylabel(r'$\langle f_{diffuse, cosmic}\rangle$')
ax2 = axs[1]
ax2.plot(zvals, rho_diffuse.to('Msun*Mpc**-3'), lw=2)
ax2.set_yscale("log")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle \rho_{diffuse, cosmic}\rangle$ $M_\odot~Mpc^{-3}$')
plt.show()
Explanation: $\langle \rho_{diffuse, cosmic}\rangle$
Use f_diffuse to calculate the average mass fraction of diffuse gas and diffuse gas density (physical). Math described in DM_cosmic.ipynb.
End of explanation
help(frb_igm.ne_cosmic)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
avg_ne = frb_igm.ne_cosmic(zvals)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, avg_ne, label=r'$\langle n_{e, cosmic}\rangle$', lw=2)
plt.yscale("log")
plt.legend(loc = "upper left")
plt.xlabel('z')
plt.ylabel(r'$\langle n_{e, cosmic}\rangle$ [$cm^{-3}$]')
plt.show()
Explanation: $\langle n_{e,cosmic}\rangle$
End of explanation
help(frb_igm.average_DM)
DM_cosmic, zvals = frb_igm.average_DM(8, cumul=True)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, DM_cosmic, lw=2)
plt.xlabel('z')
plt.ylabel(r'$\langle DM_{cosmic}\rangle$ $pc~cm^{-3}$')
plt.show()
Explanation: $\langle DM_{cosmic}\rangle$
See DM_cosmic.ipynb for details regarding its computation.
End of explanation
help(frb_igm.average_DMhalos)
# evaluation
frb_igm.average_DMhalos(0.1)
# get cumulative DM_halos
dm, zvals = frb_igm.average_DMhalos(0.1, cumul = True)
dm
zvals
fhot_array = [0.2, 0.5, 0.75]
rmax_array = [0.5, 1.0 , 2.0]
# <DM_halos> for different f_hot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
for f_hot in fhot_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, f_hot = f_hot, cumul=True)
ax1.plot(zeval, DM_halos, label="{:0.1f}".format(f_hot))
ax1.legend(title="f_hot")
ax1.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
# <DM_halos> for different rmax
ax2 = axs[1]
for rmax in rmax_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, rmax = rmax, cumul = True)
ax2.plot(zeval, DM_halos, label="{:0.1f}".format(rmax))
ax2.legend(title="rmax")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
plt.show()
# Limits of calculation
frb_igm.average_DMhalos(3.1)
# Failure above redshift 5
frb_igm.average_DMhalos(5.1)
help(frb_igm.average_DMIGM)
# Sanity check. <DM_cosmic> - (<DM_halos> + <DM_IGM) = 0
dm, zvals = frb_igm.average_DM(0.1, cumul= True)
dm_halos, _ = frb_igm.average_DMhalos(0.1, cumul = True)
dm_igm, _ = frb_igm.average_DMIGM(0.1, cumul = True)
plt.plot(zvals, dm - dm_halos - dm_igm)
plt.ylabel(r"DM $pc~cm^{-3}$")
plt.xlabel("z")
plt.show()
Explanation: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
The fraction of free electrons present in halos should be equal to the fraction of diffuse gas in halos assuming the ionization state of the individual species is only dependent on redshift (and not gas density as well).
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{diffuse,halos}}{\rho_{diffuse,cosmic}} \\
& = \frac{\rho_{b, halos}f_{hot}}{\rho_{b, cosmic}f_{diffuse, cosmic}}
\end{aligned}
$$
Here $\rho_b$ refers to baryon density. $f_{hot}$ refers to the fraction of baryons in halos that is in the hot phase ($\sim10^7$ K). The remaining baryons are either in the neutral phase or in dense objects like stars. Assuming halos have the same baryon mass fraction as the universal average ($\Omega_b/\Omega_M$)
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{m, halos}f_{hot}}{\rho_{m, cosmic}f_{diffuse, cosmic}} \\
& = \frac{f_{halos} f_{hot}}{f_{diffuse, cosmic}}
\end{aligned}
$$
$f_{halos}$ can be computed as a function of redshift by integrating the halo mass function (HMF) times mass over some mass range and dividing it by the density of matter in the universe. This allows us to compute a line of sight integral of $\langle n_{e, halos} \rangle$ to get $\langle DM_{halos}\rangle$. $\langle DM_{IGM}\rangle$ is just obtained by subtracting this from $\langle DM_{cosmic}\rangle$.
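Schematically, the line-of-sight integral has the usual dispersion-measure form (with $dl$ the proper path length element along the sightline):
$$\langle DM_{halos}\rangle(z) = \int_0^{z} \frac{\langle n_{e, halos}\rangle}{1+z'}\, dl, \qquad \langle n_{e, halos}\rangle = \frac{f_{halos}\, f_{hot}}{f_{diffuse, cosmic}}\, \langle n_{e, cosmic}\rangle$$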
Apart from $f_{hot}$ being an obvious free parameter, we also allow variation in the radial extent of halos. This is encoded in the parameter $r_{max}$ which is the radial extent of halos in units of $r_{200}$. Setting $r_{max}>1$ (for all halos; currently it is mass independent) smoothly extends the NFW profile and the modified profile of the encased diffuse baryons.
End of explanation |
10,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Impact on Initial Mass Function
When determining stellar masses in young stellar associations, non-magnetic models are often adopted. However, if magnetic inhibition of convection is an important process in governing the structure of young pre-main-sequence stars, then the mass-Teff relationship will be shifted.
Step2: Define the IMF using Chabrier (2003) log-normal distribution
Step3: Create a synthetic population of stars using a Chabrier IMF and magnetic models.
Step4: Now, what would this population look like if we accidentally used the wrong stellar model isochrone to derive the IMF? | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Impact on Initial Mass Function
When determining stellar masses in young stellar associations, non-magnetic models are often adopted. However, if magnetic inhibition of convection is an important process in governing the structure of young pre-main-sequence stars, then the mass-Teff relationship will be shifted.
End of explanation
def chabrierIMF(N, mass_low, mass_high):
"""Define a Chabrier IMF."""
alpha = 2.3
dm = 0.002
masses = np.arange(mass_low, mass_high, dm)
masses_lo = np.arange(mass_low, 1.0, dm)
masses_hi = np.arange(1.0, mass_high, dm)
# below 1.0 Msun
lo_m_prob = 0.086*np.exp(-(np.log10(masses_lo) - np.log10(0.22))**2/(2.0*0.57**2))/masses_lo
hi_m_prob = masses_hi**-alpha*dm
hi_m_prob = (lo_m_prob[-1]/hi_m_prob[0])*hi_m_prob
norm_factor = 1.0/(np.sum(lo_m_prob) + np.sum(hi_m_prob))
return np.column_stack((masses, float(N)*norm_factor*np.append(lo_m_prob, hi_m_prob)))
Explanation: Define the IMF using Chabrier (2003) log-normal distribution
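A quick sanity check on the normalization (using the same arguments as in the next cell): the second column should sum back to the requested number of stars.
imf = chabrierIMF(10000, 0.1, 1.7)
print(imf[:, 1].sum())  # expected to be ~10000 by construction of norm_factor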
End of explanation
cluster = chabrierIMF(10000, 0.1, 1.7)
plt.loglog(cluster[:,0], cluster[:,1])
mag_iso = np.genfromtxt('../models/iso/mag/dmestar_00010.0myr_z+0.00_a+0.00_phx_magBeq.iso')
from scipy.interpolate import interp1d
icurve = interp1d(mag_iso[:,0], mag_iso[:,1], kind='linear')
Teffs = icurve(cluster[:, 0])
Explanation: Create a synthetic population of stars using a Chabrier IMF and magnetic models.
End of explanation
std_iso = np.genfromtxt('../models/iso/std/dmestar_00005.0myr_z+0.00_a+0.00_phx.iso')
icurve = interp1d(std_iso[:,1], std_iso[:,0], kind='linear')
Masses = icurve(Teffs[24:])
plt.loglog(cluster[:, 0], cluster[:, 1], '-', lw=4, c='#333333')
plt.loglog(Masses, cluster[24:, 1], '-', lw=3, c='#777777')
Explanation: Now, what would this population look like if we accidentally used the wrong stellar model isochrone to derive the IMF?
End of explanation |
10,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Circles Metacog Analysis
Imports
Step1: Load Data
The metacog_dfs function creates 4 dataframes with metacognitive information
Step2: Sanity check
Let's see if the scale converges in order to keep the performance level constant
Step3: It is also important to see that no subject has a Wager value of 1 (or 0) | Python Code:
%matplotlib inline
from __future__ import unicode_literals
import pandas as pd
import numpy as np
from glob import glob
from matplotlib import pyplot as plt
import seaborn as sns
from metacog_utils import add_sdt_utils, metacog_dfs, jointplot_group
from IPython.display import display
Explanation: Circles Metacog Analysis
Imports
End of explanation
dfs = []
for f in glob('data_anto/*.csv'):
dfs.append(pd.read_csv(f, encoding='utf-8'))
df = pd.concat(dfs)
df = add_sdt_utils(df)
means, counts, proba, mecog = metacog_dfs(df)
Explanation: Load Data
The metacog_dfs function creates 4 dataframes with metacognitive information
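A quick way to see what came back (names exactly as unpacked above) is to show the head of each frame:
for name, frame in zip(['means', 'counts', 'proba', 'mecog'], (means, counts, proba, mecog)):
    display(name, frame.head())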
End of explanation
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'Trial', 'Scale')
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'trials.thisTrialN', 'Signal')
g = sns.FacetGrid(df[~df['TrialType'].str.contains('easy|hard')], col='Name', col_wrap=5)
g.map(plt.plot, 'trials.thisTrialN', 'cmax2')
Explanation: Sanity check
Let's see if the scale converges in order to keep the performance level constant
End of explanation
df.groupby('Name')[['Response', 'Signal', 'Confidence', 'Wager']].mean()
sns.jointplot('Response', 'Wager', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
sns.jointplot('Response', 'Confidence', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
sns.jointplot('Wager', 'Confidence', means, marginal_kws={'hist': False, 'kde': True}, stat_func=None)
df.groupby('Name')[['Response RT', 'Wager RT', 'Confidence RT']].mean()
df.groupby('Name')[['Response', 'Wager', 'Confidence']].count()
df[df['TrialType'].str.contains('easy|hard')].pivot_table(index='Name', columns='TrialType', values='Response')
Explanation: It is also important to see that no subject has a Wager value of 1 (or 0)
End of explanation |
10,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating Geopackage Layers
Landung Setiawan 5/27/2016
Updated 6/29/2016
Note
Step1: Reading csv and printing as dictionary
Step2: Use shapely to make points
Since csv module doesn't distinguish between types, shapely is used to make points
Step3: Geopandas reading a geopackage
Step4: Write geopandas dataframe to geopackage
Step5: Uploading Geopackage to GeoServer | Python Code:
%matplotlib inline
# Import the necessary libraries
import csv, os
from shapely.geometry import Point, mapping
import fiona, shapely
from fiona import Collection
import numpy as np
print "fiona version: {}".format(fiona.__version__)
print "shapely version: {}".format(shapely.__version__)
print "gdal version: {}".format(fiona.__gdal_version__)
print "numpy version: {}".format(np.__version__)
# Assign file_path
pth = "/mnt/hgfs/shared_ubuntu/APL/OOI/OOI_ipynb/"
fname = 'Nanoos.gpkg'
fcsv = "OOI_Assets.csv"
Explanation: Creating Geopackage Layers
Landung Setiawan 5/27/2016
Updated 6/29/2016
Note: In order for fiona to be able to read and write geopackage, numpy 1.10.0 and gdal 1.11.0 or greater is required, however, gdal cannot be 2.0.0 or greater!
Creating the environment
bash
conda create -n gpkg -c conda-forge numpy=1.10.0 fiona=1.6.4 gdal=1.11.4 geopandas matplotlib
source activate gpkg
conda install ipython notebook anaconda-client
conda install -c auto gsconfig=0.6.7
End of explanation
with open(os.path.join(pth,fcsv),'rb') as f:
reader = csv.DictReader(f)
for row in reader:
print row # Notice that numbers are strings in this case
Explanation: Reading csv and printing as dictionary
End of explanation
with open(os.path.join(pth,fcsv), 'rb') as f:
reader = csv.DictReader(f)
for row in reader:
point = Point(float(row['Lon']),float(row['Lat']))
print point
Explanation: Use shapely to make points
Since csv module doesn't distinguish between types, shapely is used to make points
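If the points are needed later (rather than just printed), a small variation of the same loop collects them into a list, assuming the same Lon/Lat columns:
points = []
with open(os.path.join(pth, fcsv), 'rb') as f:
    for row in csv.DictReader(f):
        points.append(Point(float(row['Lon']), float(row['Lat'])))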
End of explanation
import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point
import matplotlib.pyplot as plt
import geopandas as gpd
import pyproj
print "geopandas version: {}".format(gpd.__version__)
# Test reading geopackage
geopackage = gpd.read_file(os.path.join(pth,fname))
geopackage.head(2)
Explanation: Geopandas reading a geopackage
End of explanation
df = pd.read_csv(os.path.join(pth,fcsv))
# Assign CRS, retrieved from epsg.io, the example below is EPSG:4326
crs = 'GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]'
geometry = [Point(xy) for xy in zip(df.Lon, df.Lat)]
geo_df = GeoDataFrame(df, crs=crs, geometry=geometry)
print "Original Column Header: {}\n".format(geo_df.columns.values)
# Renamed the problematic keys
renamed = geo_df.rename(columns={'Provider URL':'Provider_URL',
'Provider':'Provider',
'Provider Type':'Provider_Type',
'State / Province':'State_or_Province'})
print "Renamed Column Header: {}".format(renamed.columns.values)
# Removing the problematic keys
# Problematic keys can either be renamed or removed.
# package = geo_df.drop(geo_df.columns[[8,9,10,11]],axis=1)
# print package.columns.values
# Write the renamed geodataframe to a geopackage
renamed.to_file('OOI_Assets.gpkg',driver='GPKG')
# Check if the geopackage was written correctly
test = gpd.read_file('OOI_Assets.gpkg')
test
Explanation: Write geopandas dataframe to geopackage
End of explanation
# Import the Catalog module
from geoserver.catalog import Catalog
# Import subprocess to use cURL REST API since gsconfig, doesn't seem to have this capability anymore
import subprocess
# Retrieve catalog from Geoserver Instance via REST (REpresentational State Transfer)
cat = Catalog("http://data.nanoos.org/geoserver2_8/rest", username='####', password='####')
# Get list of workspaces
print cat.get_workspaces()
# Get workspace
nvs = cat.get_workspace('nvs_assets')
print nvs.name
# Create the geopackage datastore
gpkg_ds = cat.create_datastore('OOI_Assets', workspace=nvs)
# Edit the connection parameters
gpkg_ds.connection_parameters = {'Connection timeout': '20',
'Evictor run periodicity': '300',
'Evictor tests per run': '3',
'Expose primary keys': 'false',
'Max connection idle time': '300',
'Test while idle': 'true',
'database': 'file:data/geopackages/OOI_Assets.gpkg', # Point to location of geopackage relative to the geoserver data directory
'dbtype': 'geopkg',
'fetch size': '1000',
'max connections': '10',
'min connections': '1',
'namespace': 'http://data.nanoos.org/geoserver2_8/nvs_assets', # Workspace URL
'validate connections': 'true'}
# Save datastore
cat.save(gpkg_ds)
# Set necessary variables for cURL
data_name = 'OOI_Assets'
wksp_name = nvs.name
ds_name = gpkg_ds.name
print ds_name
# Create layer from geopackage table
subprocess.call('curl -v -u ####:#### -XPOST -H "Content-type: text/xml" -d "<featureType><name>{0}</name></featureType>" http://data.nanoos.org/geoserver2_8/rest/workspaces/{1}/datastores/{2}/featuretypes'.format(data_name,wksp_name,ds_name), shell=True)
# get the newly published layer w/o any projection
layer = cat.get_layer(data_name)
# retrieve resource to assign projection
rsrc = layer.resource
# assign Layer projection
rsrc.projection = 'EPSG:4326'
# save layer
cat.save(rsrc)
Explanation: Uploading Geopackage to GeoServer
End of explanation |
10,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Exploration
Research provides utility functions to query pricing, volume, and returns data for 8000+ US equities, from 2002 up to the most recently completed trading day. These functions take an asset (or list of assets) along with a start and end date, and return a pandas Series (or DataFrame) indexed by date.
Let's define the period of time we want to explore and use the returns function to query data for AAPL
Step1: Alternative Data
In addition to pricing and volume data, Quantopian integrates a number of alternative datasets that include corporate fundamentals, stock sentiment analysis, and macroeconomic indicators, to name a few. You can find the complete list of 50+ datasets on Quantopian's data page.
Our goal in this tutorial will be to build an algorithm that selects and trades assets based on sentiment data, so let's take a look at Sentdex News Sentiment sentiment dataset. The sentiment scores are generated from four simple moving average (SMA) factors over the last 100, 250, 500, and 5000 news events for each stock. News events are pulled from over 20 sources including The Wall Street Journal, CNBC, Forbes, Business Insider, and Yahoo Finance.
We can start by inspecting the sentiment_signal from the sentiment dataset. We will query the data using Quantopian's Pipeline API, which is a powerful tool you will use over and over again to access and analyze data in Research. You will learn a lot more about the Pipeline API in the next lesson and a later tutorial. For now all you need to know is that the following code uses a data pipeline to query sentiment and returns data, and plots the results for AAPL | Python Code:
# Research environment functions
from quantopian.research import returns, symbols
# Select a time range to inspect
period_start = '2014-01-01'
period_end = '2017-1-1'
# Query returns data for AAPL
# over the selected time range
aapl_returns = returns(
assets=symbols('AAPL'),
start=period_start,
end=period_end,
)
# Display first 10 rows
aapl_returns.head(10)
Explanation: Data Exploration
Research provides utility functions to query pricing, volume, and returns data for 8000+ US equities, from 2002 up to the most recently completed trading day. These functions take an asset (or list of assets) along with a start and end date, and return a pandas Series (or DataFrame) indexed by date.
Let's define the period of time we want to explore and use the returns function to query data for AAPL:
End of explanation
# Pipeline imports
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.domain import US_EQUITIES
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.sentdex import sentiment
# Pipeline definition
def make_pipeline():
returns = Returns(window_length=2)
sentiment_score = sentiment.sentiment_signal.latest
return Pipeline(
columns={
'daily_returns': returns,
'sentiment': sentiment_score,
},
domain=US_EQUITIES,
)
# Pipeline execution
data_output = run_pipeline(
make_pipeline(),
start_date=period_start,
end_date=period_end
)
# Filter results for AAPL
aapl_output = data_output.xs(
symbols('AAPL'),
level=1
)
# Plot results for AAPL
aapl_output.plot(subplots=True);
Explanation: Alternative Data
In addition to pricing and volume data, Quantopian integrates a number of alternative datasets that include corporate fundamentals, stock sentiment analysis, and macroeconomic indicators, to name a few. You can find the complete list of 50+ datasets on Quantopian's data page.
Our goal in this tutorial will be to build an algorithm that selects and trades assets based on sentiment data, so let's take a look at Sentdex News Sentiment sentiment dataset. The sentiment scores are generated from four simple moving average (SMA) factors over the last 100, 250, 500, and 5000 news events for each stock. News events are pulled from over 20 sources including The Wall Street Journal, CNBC, Forbes, Business Insider, and Yahoo Finance.
We can start by inspecting the sentiment_signal from the sentiment dataset. We will query the data using Quantopian's Pipeline API, which is a powerful tool you will use over and over again to access and analyze data in Research. You will learn a lot more about the Pipeline API in the next lesson and a later tutorial. For now all you need to know is that the following code uses a data pipeline to query sentiment and returns data, and plots the results for AAPL:
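As an optional follow-up (not part of the lesson), the same frame can be used to quantify the relationship between the two columns defined in the pipeline:
# correlation between the latest sentiment signal and daily returns for AAPL
aapl_output['sentiment'].corr(aapl_output['daily_returns'])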
End of explanation |
10,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gippsland Basin Uncertainty Study
Step1: The Gippsland Basin Model
In this example we will apply the UncertaintyAnalysis class we have been playing with in the previous example to a 'realistic' (though highly simplified) geological model of the Gippsland Basin, a petroleum field south of Victoria, Australia. The model has been included as part of the PyNoddy directory, and can be found at pynoddy/examples/GBasin_Ve1_V4.his
Step2: While we could hard-code parameter variations here, it is much easier to store our statistical information in a csv file, so we load that instead. This file accompanies the GBasin_Ve1_V4 model in the pynoddy directory.
Step3: Generate randomised model realisations
Now we have all the information required to perform a Monte-Carlo based uncertainty analysis. In this example we will generate 100 model realisations and use them to estimate the information entropy of each voxel in the model, and hence visualise uncertainty. It is worth noting that in reality we would need to produce several thousand model realisations in order to adequately sample the model space, however for convenience we only generate a small number of models here.
Step4: A few utility functions for visualising uncertainty have been included in the UncertaintyAnalysis class, and can be used to gain an understanding of the most uncertain parts of the Gippsland Basin. The probability voxets for each lithology can also be accessed using ua.p_block[lithology_id], and the information entropy voxset accessed using ua.e_block.
Note that the Gippsland Basin model has been computed with a vertical exaggeration of 3, in order to highlight vertical structure.
Step5: It is immediately apparent (and not particularly surprising) that uncertainty in the Gippsland Basin model is concentrated around the thin (but economically interesting) formations comprising the La Trobe and Strzelecki Groups. The faults in the model also contribute to this uncertainty, though not by a huge amount.
Exporting results to VTK for visualisation
It is also possible (and useful!) to export the uncertainty information to .vtk format for 3D analysis in software such as ParaView. This can be done as follows | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# import the usual libraries + the pynoddy UncertaintyAnalysis class
import sys, os
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('/Users/flow/git/pynoddy/')
sys.path.append(repo_path)
sys.path.append(os.path.join(repo_path, "pynoddy/experiment"))
import pynoddy
# from pynoddy.experiment.UncertaintyAnalysis import UncertaintyAnalysis
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
import pynoddy.history
import pynoddy.experiment # .uncertainty_analysis
rcParams.update({'font.size': 20})
Explanation: Gippsland Basin Uncertainty Study
End of explanation
import importlib
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.output)
import pynoddy.experiment.uncertainty_analysis
importlib.reload(pynoddy.experiment.uncertainty_analysis)
importlib.reload(pynoddy)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4.his")
Explanation: The Gippsland Basin Model
In this example we will apply the UncertaintyAnalysis class we have been playing with in the previous example to a 'realistic' (though highly simplified) geological model of the Gippsland Basin, a petroleum field south of Victoria, Australia. The model has been included as part of the PyNoddy directory, and can be found at pynoddy/examples/GBasin_Ve1_V4.his
End of explanation
params = os.path.join(repo_path,"examples/gipps_params.csv")
Explanation: While we could hard-code parameter variations here, it is much easier to store our statistical information in a csv file, so we load that instead. This file accompanies the GBasin_Ve1_V4 model in the pynoddy directory.
End of explanation
# %%timeit # Uncomment to test execution time
ua = pynoddy.experiment.uncertainty_analysis.UncertaintyAnalysis(history_file, params)
ua.estimate_uncertainty(100,verbose=False)
Explanation: Generate randomised model realisations
Now we have all the information required to perform a Monte-Carlo based uncertainty analysis. In this example we will generate 100 model realisations and use them to estimate the information entropy of each voxel in the model, and hence visualise uncertainty. It is worth noting that in reality we would need to produce several thousand model realisations in order to adequately sample the model space, however for convenience we only generate a small number of models here.
End of explanation
ua.plot_section(direction='x',data=ua.block)
ua.plot_entropy(direction='x')
Explanation: A few utility functions for visualising uncertainty have been included in the UncertaintyAnalysis class, and can be used to gain an understanding of the most uncertain parts of the Gippsland Basin. The probability voxets for each lithology can also be accessed using ua.p_block[lithology_id], and the information entropy voxset accessed using ua.e_block.
Note that the Gippsland Basin model has been computed with a vertical exaggeration of 3, in order to highlight vertical structure.
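The probability blocks mentioned above can be sliced in the same way as the entropy block; for example (with a purely hypothetical lithology id):
ua.plot_section(direction='x', data=ua.p_block[1])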
End of explanation
ua.extent_x = 29000
ua.extent_y = 21600
ua.extent_z = 4500
output_path = os.path.join(repo_path,"sandbox/GBasin_Uncertainty")
ua.export_to_vtk(vtk_filename=output_path,data=ua.e_block)
Explanation: It is immediately apparent (and not particularly surprising) that uncertainty in the Gippsland Basin model is concentrated around the thin (but economically interesting) formations comprising the La Trobe and Strzelecki Groups. The faults in the model also contribute to this uncertainty, though not by a huge amount.
Exporting results to VTK for visualisation
It is also possible (and useful!) to export the uncertainty information to .vtk format for 3D analysis in software such as ParaView. This can be done as follows:
End of explanation |
10,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some text
Step2: Apply regex | Python Code:
# Load regex package
import re
Explanation: Title: Match URLs
Slug: match_urls
Summary: Match URLs
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Source: StackOverflow
Preliminaries
End of explanation
# Create a variable containing a text string
text = 'My blog is http://www.chrisalbon.com and not http://chrisalbon.com'
Explanation: Create some text
End of explanation
# Find any URLs in the text
re.findall(r'(http|ftp|https):\/\/([\w\-_]+(?:(?:\.[\w\-_]+)+))([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?', text)
Explanation: Apply regex
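Note that re.findall with this pattern returns one (scheme, domain, path) tuple per match; if whole URL strings are preferred, a simpler (hedged) variant is:
[m.group(0) for m in re.finditer(r'https?://\S+', text)]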
End of explanation |
10,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
x = np.random.randn(100)
y = np.random.randn(100)
plt.scatter(x,y, s = 20, c = 'b')
plt.xlabel('Random Number 2')
plt.ylabel('Random Number')
plt.title('Random 2d Scatter Plot')
axis = plt.gca()
axis.spines['top'].set_visible(False)
axis.spines['right'].set_visible(False)
axis.get_xaxis().tick_bottom()
axis.get_yaxis().tick_left()
plt.tight_layout()
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
x = np.random.randn(10)
plt.hist(x,4)
plt.xlabel('X value')
plt.ylabel('Y value')
plt.title('Random Histogram Bins')
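A possible extension of the cell above, exercising the bin-count and style options the exercise asks for (all keyword arguments are standard plt.hist parameters):
plt.hist(x, bins=5, color='steelblue', alpha=0.7, edgecolor='black')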
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
10,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Data preparation using Hadoop MapReduce on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, Hadoop, YARN, Apache, MapReduce
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Hadoop MapReduce job on Apache Hadoop YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Hadoop MapReduce job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| main_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file containing the main class to execute. | No | List | | |
| main_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in hadoop_job.jarFileUris. | No | String | | |
| args | The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None |
| hadoop_job | The payload of a HadoopJob. | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Note
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Inspect Input Data
The input file is a simple text file
Step4: Clean up the existing output files (optional)
This is needed because the sample code requires the output folder to be a clean folder. To continue to run the sample, make sure that the service account of the notebook server has access to the OUTPUT_GCS_PATH.
CAUTION
Step5: Example pipeline that uses the component
Step6: Compile the pipeline
Step7: Submit the pipeline for execution
Step8: Inspect the output
The sample in the notebook will count the words in the input text and save them in sharded files. The command to inspect the output is | Python Code:
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
Explanation: Name
Data preparation using Hadoop MapReduce on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, Hadoop, YARN, Apache, MapReduce
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Hadoop MapReduce job on Apache Hadoop YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Hadoop MapReduce job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| main_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file containing the main class to execute. | No | List | | |
| main_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in hadoop_job.jarFileUris. | No | String | | |
| args | The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None |
| hadoop_job | The payload of a HadoopJob. | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Note:
main_jar_file_uri: The examples for the files are :
- gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar
- hdfs:/tmp/test-samples/custom-wordcount.jarfile:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed description
This component creates a Hadoop job from Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataproc_submit_hadoop_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_hadoop_job/component.yaml')
help(dataproc_submit_hadoop_job_op)
Explanation: Load the component using KFP SDK
End of explanation
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
OUTPUT_GCS_PATH = '<Please put your output GCS path here>'
REGION = 'us-central1'
MAIN_CLASS = 'org.apache.hadoop.examples.WordCount'
INTPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'
EXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Setup a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a Hadoop job
Upload your Hadoop JAR file to a Cloud Storage bucket. In the sample, we will use a JAR file that is preinstalled in the main cluster, so there is no need to provide main_jar_file_uri.
Here is the WordCount example source code.
To package a self-contained Hadoop MapReduce application from the source code, follow the MapReduce Tutorial.
Set sample parameters
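If a custom JAR were used instead of the preinstalled examples JAR, the two driver settings would change roughly as follows (both values below are hypothetical placeholders):
MAIN_JAR_FILE_URI = 'gs://your-bucket/your-mapreduce-job.jar'  # hypothetical
MAIN_CLASS = 'your.package.YourWordCountDriver'  # hypothetical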
End of explanation
!gsutil cat $INTPUT_GCS_PATH
Explanation: Inspect Input Data
The input file is a simple text file:
End of explanation
!gsutil rm $OUTPUT_GCS_PATH/**
Explanation: Clean up the existing output files (optional)
This is needed because the sample code requires the output folder to be a clean folder. To continue to run the sample, make sure that the service account of the notebook server has access to the OUTPUT_GCS_PATH.
CAUTION: This will remove all blob files under OUTPUT_GCS_PATH.
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Hadoop job pipeline',
description='Dataproc submit Hadoop job pipeline'
)
def dataproc_submit_hadoop_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
main_jar_file_uri = '',
main_class = MAIN_CLASS,
args = json.dumps([
INTPUT_GCS_PATH,
OUTPUT_GCS_PATH
]),
hadoop_job='',
job='{}',
wait_interval='30'
):
dataproc_submit_hadoop_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
main_jar_file_uri=main_jar_file_uri,
main_class=main_class,
args=args,
hadoop_job=hadoop_job,
job=job,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = dataproc_submit_hadoop_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation
!gsutil cat $OUTPUT_GCS_PATH/*
Explanation: Inspect the output
The sample in the notebook will count the words in the input text and save them in sharded files. The command to inspect the output is:
End of explanation |
10,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project 2
Step1: Now, can you find out the following facts about the dataset?
- Total number of students
- Number of students who passed
- Number of students who failed
- Graduation rate of the class (%)
- Number of features
Use the code block below to compute these values. Instructions/steps are marked using TODOs.
Step2: 3. Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Let's first separate our data into feature and target columns, and see if any features are non-numeric.<br/>
Note
Step3: Preprocess feature columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation.
Step4: Split data into training and test sets
So far, we have converted all categorical features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets.
Step5: 4. Training and Evaluating Models
Choose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model
Step6: 5. Choosing the Best Model
Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction).
Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this.
What is the model's final F<sub>1</sub> score? | Python Code:
# Import libraries
import numpy as np
import pandas as pd
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# Note: The last column 'passed' is the target/label, all other are feature columns
student_data.head()
#student_data.describe()
student_data.passed.dtype
Explanation: Project 2: Supervised Learning
Building a Student Intervention System
1. Classification vs Regression
Your goal is to identify students who might need early intervention - which type of supervised machine learning problem is this, classification or regression? Why?
2. Exploring the Data
Let's go ahead and read in the student dataset first.
To execute a code cell, click inside it and press Shift+Enter.
End of explanation
shape = student_data.shape
n_students = shape[0]
n_features = shape[1]-1 # the last column is the target
n_passed = len(student_data[student_data.passed == 'yes'])
n_failed = len(student_data[student_data.passed == 'no'])
grad_rate = 100*float(n_passed)/n_students
print "Total number of students: {}".format(n_students)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Number of features: {}".format(n_features)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
Explanation: Now, can you find out the following facts about the dataset?
- Total number of students
- Number of students who passed
- Number of students who failed
- Graduation rate of the class (%)
- Number of features
Use the code block below to compute these values. Instructions/steps are marked using TODOs.
End of explanation
# Extract feature (X) and target (y) columns
feature_cols = list(student_data.columns[:-1]) # all columns but last are features
target_col = student_data.columns[-1] # last column is the target/label
print "Feature column(s):-\n{}".format(feature_cols)
print "Target column: {}".format(target_col)
X_all = student_data[feature_cols] # feature values for all students
y_all = student_data[target_col] # corresponding targets/labels
print "\nFeature values:-"
print X_all.head() # print the first 5 rows
Explanation: 3. Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Let's first separate our data into feature and target columns, and see if any features are non-numeric.<br/>
Note: For this dataset, the last column ('passed') is the target or label we are trying to predict.
End of explanation
# Preprocess feature columns
def preprocess_features(X):
outX = pd.DataFrame(index=X.index) # output dataframe, initially empty
# Check each column
for col, col_data in X.iteritems():
# If data type is non-numeric, try to replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# Note: This should change the data type for yes/no columns to int
# If still non-numeric, convert to one or more dummy variables
if col_data.dtype == object:
col_data = pd.get_dummies(col_data, prefix=col) # e.g. 'school' => 'school_GP', 'school_MS'
outX = outX.join(col_data) # collect column(s) in output dataframe
return outX
X_all = preprocess_features(X_all)
print "Processed feature columns ({}):-\n{}".format(len(X_all.columns), list(X_all.columns))
Explanation: Preprocess feature columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation.
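For instance, a single categorical column can be expanded on its own to see the effect (Mjob is one of the columns named above):
print pd.get_dummies(student_data['Mjob'], prefix='Mjob').head()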
End of explanation
# First, decide how many training vs test samples you want
num_all = student_data.shape[0] # same as len(student_data)
num_train = 300 # about 75% of the data
num_test = num_all - num_train
# TODO: Then, select features (X) and corresponding labels (y) for the training and test sets
# Note: Shuffle the data or randomly select samples to avoid any bias due to ordering in the dataset
indices = range(num_all)
import random
random.shuffle(indices)
train_indices = indices[:num_train]
test_indices = indices[-num_test:]
X_train = X_all.iloc[train_indices]
y_train = y_all[train_indices]
X_test = X_all.iloc[test_indices]
y_test = y_all[test_indices]
print "Training set: {} samples".format(X_train.shape[0])
print "Test set: {} samples".format(X_test.shape[0])
# Note: If you need a validation set, extract it from within training data
Explanation: Split data into training and test sets
So far, we have converted all categorical features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets.
End of explanation
# Train a model
import time
def train_classifier(clf, X_train, y_train):
print "Training {}...".format(clf.__class__.__name__)
start = time.time()
clf.fit(X_train, y_train)
end = time.time()
print "Done!\nTraining time (secs): {:.3f}".format(end - start)
# TODO: Choose a model, import it and instantiate an object
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
# Fit model to training data
train_classifier(clf, X_train, y_train) # note: using entire training set here
print clf # you can inspect the learned model by printing it
# Predict on training set and compute F1 score
from sklearn.metrics import f1_score
def predict_labels(clf, features, target):
print "Predicting labels using {}...".format(clf.__class__.__name__)
start = time.time()
y_pred = clf.predict(features)
end = time.time()
print "Done!\nPrediction time (secs): {:.3f}".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
train_f1_score = predict_labels(clf, X_train, y_train)
print "F1 score for training set: {}".format(train_f1_score)
# Predict on test data
print "F1 score for test set: {}".format(predict_labels(clf, X_test, y_test))
# Train and predict using different training set sizes
def train_predict(clf, X_train, y_train, X_test, y_test):
print "------------------------------------------"
print "Training set size: {}".format(len(X_train))
train_classifier(clf, X_train, y_train)
print "F1 score for training set: {}".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {}".format(predict_labels(clf, X_test, y_test))
num_all = student_data.shape[0] # same as len(student_data)
num_test = 95
# Shuffle first, then carve off the test set, so the training slices below cannot overlap it
import random
indices = range(num_all)
random.shuffle(indices)
test_indices = indices[-num_test:]
X_test = X_all.iloc[test_indices]
y_test = y_all[test_indices]
def try_different_training_sizes(clf):
# TODO: Run the helper function above for desired subsets of training data
# Note: Keep the test set constant
for size in (100, 200, 300):
train_indices = indices[:size]
X_train = X_all.iloc[train_indices]
y_train = y_all[train_indices]
print(train_predict(clf, X_train, y_train, X_test, y_test))
# using DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
try_different_training_sizes(clf)
# TODO: Train and predict using two other models
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
nb = GaussianNB()
bc = BaggingClassifier()
knc = KNeighborsClassifier()
svc = SVC()
try_different_training_sizes(bc)
try_different_training_sizes(nb)
try_different_training_sizes(knc)
try_different_training_sizes(svc)
Explanation: 4. Training and Evaluating Models
Choose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model:
What is the theoretical O(n) time & space complexity in terms of input size?
What are the general applications of this model? What are its strengths and weaknesses?
Given what you know about the data so far, why did you choose this model to apply?
Fit this model to the training data, try to predict labels (for both training and test sets), and measure the F<sub>1</sub> score. Repeat this process with different training set sizes (100, 200, 300), keeping test set constant.
Produce a table showing training time, prediction time, F<sub>1</sub> score on training set and F<sub>1</sub> score on test set, for each training set size.
Note: You need to produce 3 such tables - one for each model.
End of explanation
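To produce the requested tables, one option is to collect the timings and F1 scores into a pandas DataFrame. This is only a sketch — the helper name summarize_model is mine, not part of the assignment template — and it reuses predict_labels and the indices defined above:
import pandas as pd

def summarize_model(clf, sizes=(100, 200, 300)):
    rows = []
    for size in sizes:
        X_tr, y_tr = X_all.iloc[indices[:size]], y_all[indices[:size]]
        t0 = time.time()
        clf.fit(X_tr, y_tr)                       # time the fit on this training size
        train_time = time.time() - t0
        t0 = time.time()
        f1_test = predict_labels(clf, X_test, y_test)  # fixed test set
        pred_time = time.time() - t0
        rows.append({'training size': size,
                     'training time (s)': train_time,
                     'prediction time (s)': pred_time,
                     'F1 (train)': predict_labels(clf, X_tr, y_tr),
                     'F1 (test)': f1_test})
    return pd.DataFrame(rows)

summarize_model(DecisionTreeClassifier())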
# TODO: Fine-tune your model and report the best F1 score
Explanation: 5. Choosing the Best Model
Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction).
Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this.
What is the model's final F<sub>1</sub> score?
End of explanation |
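One possible way to tackle the fine-tuning step is a grid search over the decision tree's depth and split size. This is a sketch only — the parameter grid and the choice of DecisionTreeClassifier are illustrative, and the import lives in sklearn.model_selection on newer scikit-learn versions:
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions
from sklearn.metrics import make_scorer, f1_score
from sklearn.tree import DecisionTreeClassifier

f1_scorer = make_scorer(f1_score, pos_label='yes')
param_grid = {'max_depth': [2, 4, 6, 8, 10], 'min_samples_split': [2, 10, 20]}
grid = GridSearchCV(DecisionTreeClassifier(), param_grid, scoring=f1_scorer)
grid.fit(X_train, y_train)
print "Best parameters: {}".format(grid.best_params_)
print "F1 score on test set: {}".format(predict_labels(grid.best_estimator_, X_test, y_test))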
10,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programmatic Access to Genome Nexus
This notebook gives some examples in Python for programmatic access to http
Step1: Connect with cBioPortal API
cBioPortal also uses Swagger for their API.
Step2: Annotate cBioPortal mutations with Genome Nexus
For convenience sake we're using only SNVs here. Eventually there will be an endpoint to help convert pos, ref, alt to the hgvs notation.
Step3: Check overlap SIFT/PolyPhen-2 | Python Code:
from bravado.client import SwaggerClient
client = SwaggerClient.from_url('https://www.genomenexus.org/v2/api-docs',
config={"validate_requests":False,"validate_responses":False,"validate_swagger_spec":False})
print(client)
dir(client)
for a in dir(client):
client.__setattr__(a[:-len('-controller')], client.__getattr__(a))
variant = client.annotation.fetchVariantAnnotationGET(variant='17:g.41242962_41242963insGA').result()
dir(variant)
tc1 = variant.transcript_consequences[0]
dir(tc1)
print(tc1)
Explanation: Programmatic Access to Genome Nexus
This notebook gives some examples in Python for programmatic access to http://genomenexus.org. You can run these examples after installing Jupyter. The easiest way to use Jupyter is to install the Python 3 version of anaconda: https://www.anaconda.com/download/. Once that is installed you can install Jupyter with:
conda install jupyter
For these examples we also require the Swagger API client Bravado. It is unfortunately not yet available in anaconda, but you can get it through pip:
conda install pip
pip install bravado
Let's try connecting to the Genome Nexus API now:
End of explanation
import seaborn as sns
%matplotlib inline
sns.set_style("white")
sns.set_context('talk')
import matplotlib.pyplot as plt
cbioportal = SwaggerClient.from_url('https://www.cbioportal.org/api/api-docs',
config={"validate_requests":False,"validate_responses":False})
print(cbioportal)
for a in dir(cbioportal):
cbioportal.__setattr__(a.replace(' ', '_').lower(), cbioportal.__getattr__(a))
dir(cbioportal)
muts = cbioportal.mutations.getMutationsInMolecularProfileBySampleListIdUsingGET(
molecularProfileId="msk_impact_2017_mutations", # {study_id}_mutations gives default mutations profile for study
sampleListId="msk_impact_2017_all", # {study_id}_all includes all samples
projection="DETAILED" # include gene info
).result()
import pandas as pd
mdf = pd.DataFrame([dict(m.__dict__['_Model__dict'],
**m.__dict__['_Model__dict']['gene'].__dict__['_Model__dict']) for m in muts])
mdf.groupby('uniqueSampleKey').studyId.count().plot(kind='hist', bins=400, xlim=(0,30))
plt.xlabel('Number of mutations in sample')
plt.ylabel('Number of samples')
plt.title('Number of mutations across samples in MSK-IMPACT (2017)')
sns.despine(trim=True)
mdf.variantType.astype(str).value_counts().plot(kind='bar')
plt.title('Types of mutations in MSK-IMPACT (2017)')
sns.despine(trim=False)
Explanation: Connect with cBioPortal API
cBioPortal also uses Swagger for their API.
End of explanation
snvs = mdf[(mdf.variantType == 'SNP') & (mdf.variantAllele != '-') & (mdf.referenceAllele != '-')].copy()
# need query string like 9:g.22125503G>C
snvs['hgvs_for_gn'] = snvs.chromosome.astype(str) + ":g." + snvs.startPosition.astype(str) + snvs.referenceAllele + '>' + snvs.variantAllele
assert(snvs['hgvs_for_gn'].isnull().sum() == 0)
import time
qvariants = list(set(snvs.hgvs_for_gn))
gn_results = []
chunk_size = 500
print("Querying {} variants".format(len(qvariants)))
for n, qvar in enumerate([qvariants[i:i + chunk_size] for i in range(0, len(qvariants), chunk_size)]):
try:
gn_results += client.annotation.fetchVariantAnnotationPOST(variants=qvar,fields=['hotspots']).result()
print("Querying [{}, {}]: Success".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size)))
except Exception as e:
print("Querying [{}, {}]: Failed".format(n*chunk_size, min(len(qvariants), n*chunk_size+chunk_size)))
pass
time.sleep(1) # add a delay, to not overload server
gn_dict = {v.id:v for v in gn_results}
def is_sift_high(variant):
return variant in gn_dict and \
len(list(filter(lambda x: x.sift_prediction == 'deleterious', gn_dict[variant].transcript_consequences))) > 0
def is_polyphen_high(variant):
return variant in gn_dict and \
len(list(filter(lambda x: x.polyphen_prediction == 'probably_damaging', gn_dict[variant].transcript_consequences))) > 0
Explanation: Annotate cBioPortal mutations with Genome Nexus
For convenience sake we're using only SNVs here. Eventually there will be an endpoint to help convert pos, ref, alt to the hgvs notation.
End of explanation
snvs['is_sift_high'] = snvs.hgvs_for_gn.apply(is_sift_high)
snvs['is_polyphen_high'] = snvs.hgvs_for_gn.apply(is_polyphen_high)
from matplotlib_venn import venn2
venn2(subsets=((snvs.is_sift_high & (~snvs.is_polyphen_high)).sum(),
(snvs.is_polyphen_high & (~snvs.is_sift_high)).sum(),
(snvs.is_polyphen_high & snvs.is_sift_high).sum()), set_labels=["SIFT","PolyPhen-2"])
plt.title("Variants as predicted to have a high impact in MSK-IMPACT (2017)")
Explanation: Check overlap SIFT/PolyPhen-2
End of explanation |
10,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In a previous post, I introduced xarray with some simple manipulation and data plotting. In this super-short post, I'm going to do some more manipulation, using multiple input files to create a new dimension, reorganize the data and store them in multiple output files. All with but a few lines of code.
<!-- TEASER_END -->
GOAL
Step1: Now I set up some file path logic to avoid rewriting full file paths. I then accrue file paths into a list. I, fpaths. The new files I will next create will be stored in the 'Synthesis' directory for later retrieval.
Step2: I'm only interested in the visible bands because of the black pixel assumption used in the atmospheric correction applied during the processing phase, which renders Rrs in the near-infrared bands useless.
Step3: xarray has a nifty feature that allows opening multiple datasets, and automatically concatenating matching (by name and dimension) arrays, with the option of naming the thus newly created dimension. In our case, this is 'experiment'. The next line of code, below, opens what will end up being a temporary xarray Dataset - note that you will need dask installed for this. I'll then label the experiment dimension with the appropriate experiment names. Importantly, the concatenation direction reflects the order in which the file paths are specified, and it's also the order the experiment names are in in the 'expDirs' list defined above. I also make sure that the Rrs uncertainty data is labeled the same, 'rrs_unc'.
Step4: Verify that all the files are where they should be - in the Synthesis directory | Python Code:
import xarray as xr
import os
import glob
Explanation: In a previous post, I introduced xarray with some simple manipulation and data plotting. In this super-short post, I'm going to do some more manipulation, using multiple input files to create a new dimension, reorganize the data and store them in multiple output files. All with but a few lines of code.
<!-- TEASER_END -->
GOAL:
The ultimate goal here is to create new datasets, one for band, that aggregate results across experiments so as to facilitate inter-experiment comparisons.
HOW:
I will load netCDF files from a number of Monte-Carlo uncertainty experiments, among which the source of the uncertainty differs; Lt (sensor noise), wind, pressure, relative humidity, all the above.
At the end of this post, I will have 6 files, one per visible SeaWiFS visible band
containing one 3D array where dimensions are latitude, longitude, experiment.
WHY:
I'm doing this to create an interactive visualization (cf. next post) using GeoViews, where the goal is to compare, band-wise, cross-experiment results.
As usual, start with some imports...
End of explanation
dataDir = '/accounts/ekarakoy/disk02/UNCERTAINTIES/Monte-Carlo/DATA/AncillaryMC/'
expDirs = ['Lt', 'AllAnc_Lt', 'Pressure', 'RH', 'WindSpeed', 'O3']
outDir = 'Synthesis'
fpattern = 'S20031932003196.L3m_4D_SU*.nc'
fpaths = [glob.glob(os.path.join(dataDir, expDir, fpattern))[0] for expDir in expDirs]
Explanation: Now I set up some file path logic to avoid rewriting full file paths. I then accrue file paths into a list, fpaths. The new files I will next create will be stored in the 'Synthesis' directory for later retrieval.
End of explanation
bands = [412, 443, 490, 510, 555, 670]
Explanation: I'm only interested in the visible bands because of the black pixel assumption used in the atmospheric correction applied during the processing phase, which renders Rrs in the near-infrared bands useless.
End of explanation
with xr.open_mfdataset(fpaths, concat_dim='experiment') as allData:
allData.coords['experiment'] = expDirs
for band in bands:
foutpath = os.path.join(dataDir, outDir, '%s%d%s' %(fpattern.split('SU')[0],
band, '.nc'))
if not os.path.exists(os.path.dirname(foutpath)):
os.makedirs(os.path.dirname(foutpath))
data = allData.data_vars['Rrs_unc_%d' % band]
data.name='rrs_unc'
dsData = data.to_dataset()
dsData.to_netcdf(path=foutpath, engine='netcdf4')
Explanation: xarray has a nifty feature that allows opening multiple datasets, and automatically concatenating matching (by name and dimension) arrays, with the option of naming the thus newly created dimension. In our case, this is 'experiment'. The next line of code, below, opens what will end up being a temporary xarray Dataset - note that you will need dask installed for this. I'll then label the experiment dimension with the appropriate experiment names. Importantly, the concatenation direction reflects the order in which the file paths are specified, and it's also the order the experiment names are in in the 'expDirs' list defined above. I also make sure that the Rrs uncertainty data is labeled the same, 'rrs_unc'.
End of explanation
os.listdir(os.path.dirname(foutpath))
Explanation: Verify that all the files are where they should be - in the Synthesis directory
End of explanation |
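As an optional sanity check (a sketch — the spatial dimension names depend on the input files), one of the new per-band files can be reopened to confirm that 'experiment' is now a dimension of rrs_unc:
check_path = os.path.join(dataDir, outDir, '%s%d%s' % (fpattern.split('SU')[0], 412, '.nc'))
with xr.open_dataset(check_path) as dsCheck:
    print(dsCheck['rrs_unc'].dims)                          # 'experiment' plus the spatial dims
    print(dsCheck['rrs_unc'].coords['experiment'].values)   # the six experiment labels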
10,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
see
Step1: Some calendar information so we can support any netCDF calendar.
Step4: A few calendar functions to determine the number of days in each month
If you were just using the standard calendar, it would be easy to use the calendar.month_range function. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray
from netCDF4 import num2date
from netCDF4 import Dataset
# !conda list
print("numpy version :", np.__version__)
print("pandas version :", pd.__version__)
print("xray version :", xarray.__version__)
Explanation: see : https://github.com/pydata/xarray/blob/master/examples/xray_seasonal_means.ipynb
Calculating Seasonal Averages from Timeseries of Monthly Means
Author: Joe Hamman
The data used for this example can be found in the xray-data repository. You may need to change the path to RASM_example_data.nc below.
Suppose we have a netCDF or xray Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days.
End of explanation
dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'365_day': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'standard': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'all_leap': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'366_day': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]}
dpm
Explanation: Some calendar information so we can support any netCDF calendar.
End of explanation
def leap_year(year, calendar='standard'):
Determine if year is a leap year
leap = False
if ((calendar in ['standard', 'gregorian',
'proleptic_gregorian', 'julian']) and
(year % 4 == 0)):
leap = True
if ((calendar == 'proleptic_gregorian') and
(year % 100 == 0) and
(year % 400 != 0)):
leap = False
elif ((calendar in ['standard', 'gregorian']) and
(year % 100 == 0) and (year % 400 != 0) and
(year < 1583)):
leap = False
return leap
leap_year(2016), leap_year(2004), leap_year(2001), leap_year(2000)
leap_year(2100), leap_year(2200), leap_year(2300),leap_year(2400),
leap_year(2100, "proleptic_gregorian"), \
leap_year(2200, "proleptic_gregorian"), \
leap_year(2300, "proleptic_gregorian"), \
leap_year(2400, "proleptic_gregorian")
def get_dpm(time, calendar='standard'):
return a array of days per month corresponding to the months provided in `months`
month_length = np.zeros(len(time), dtype=np.int)
cal_days = dpm[calendar]
for i, (month, year) in enumerate(zip(time.month, time.year)):
month_length[i] = cal_days[month]
if leap_year(year, calendar=calendar):
month_length[i] += 1
return month_length
# Quick check of get_dpm on a monthly DatetimeIndex (it expects something with .month/.year arrays, not a single datetime)
monthly = pd.date_range('2002-01-01', periods=12, freq='M')
get_dpm(monthly, calendar='standard')
Explanation: A few calendar functions to determine the number of days in each month
If you were just using the standard calendar, it would be easy to use the calendar.month_range function.
End of explanation |
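With those helpers in place, the weighted seasonal mean itself can be computed along these lines. This is a sketch that assumes a Dataset ds with a 'time' coordinate, such as the RASM example file mentioned above:
def season_mean(ds, calendar='standard'):
    # Make a DataArray of month lengths aligned with the dataset's time axis
    month_length = xarray.DataArray(get_dpm(ds.time.to_index(), calendar=calendar),
                                    coords=[ds.time], name='month_length')
    # Normalize the month lengths within each season so the weights sum to 1
    weights = month_length.groupby('time.season') / month_length.groupby('time.season').sum()
    # Weighted seasonal average
    return (ds * weights).groupby('time.season').sum(dim='time')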
10,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees and Random Forests
Suppose you're classifying dogs, and you know (as an expert in the field) that all poodles have tails shorter than 15mm, while all dachshunds have tails longer than 25mm. However, the difference between golden retrievers and poodles is that golden retrievers are taller than 40mm in height, while dachshunds are less than 35mm tall.
I give you a dataset of dogs, where we've measured the heights and tail lengths. Dutifully, you build a linear model taking in the features $f_h$ and $f_t$, and find a threshold/bias for $\alpha f_h + \beta f_t$.
But, you think, why bother? I already know how to decide the breed of a dog. Why spend all this energy?
Step1: As you can probably tell, the problem with this strategy is that while it may work in cases where you are an expert, it doesn't help you figure out what to do when you aren't an expert. And there will be data science problems that you will need to solve where you no know nothing about the problem, and there aren't any experts.
Still, the above strategy has some positive factors
Step2: Let's take a look at an example
Step3: So we're predicting class labels given 4 features. Let's do this
Step4: Wait, we're done? That's it? | Python Code:
def breed(tail, height):
if tail > 25:
if height < 35:
return 'dachshund'
else:
return 'golden retriever'
elif tail < 15:
return 'poodle'
else:
return 'dunno'
Explanation: Decision Trees and Random Forests
Suppose you're classifying dogs, and you know (as an expert in the field) that all poodles have tails shorter than 15mm, while all dachshunds have tails longer than 25mm. However, the difference between golden retrievers and poodles is that golden retrievers are taller than 40mm in height, while dachshunds are less than 35mm tall.
I give you a dataset of dogs, where we've measured the heights and tail lengths. Dutifully, you build a linear model taking in the features $f_h$ and $f_t$, and find a threshold/bias for $\alpha f_h + \beta f_t$.
But, you think, why bother? I already know how to decide the breed of a dog. Why spend all this energy?
End of explanation
from sklearn import tree
Explanation: As you can probably tell, the problem with this strategy is that while it may work in cases where you are an expert, it doesn't help you figure out what to do when you aren't an expert. And there will be data science problems that you will need to solve where you know nothing about the problem, and there aren't any experts.
Still, the above strategy has some positive factors:
+ It's fast to evaluate (just like linear models, unlike SVMs)
+ It's nonlinear (which means it's sometimes more flexible than linear models)
+ It's very easy to read
Don't discount the latter criteria - when your model is getting 98% accuracy and you want to get to 99% accuracy (to pass your class, to do your job, or to win a million dollars), you're going to want to improve results. What better way to improve your accuracy than to look at the errors that you made and figure out how to fix them? It's very hard to understand why you are making errors in many models, especially SVMs and neural nets, and this model is much easier.
How do we construct decision trees?
Ok, how can we construct these decision trees in cases when we aren't experts and don't know the correct splits beforehand? It turns out that there are a lot of different ways, most notably (today) CART (Classification and Regression Trees) and C5.0 (some people refer to this as ID3's successor). Let's look at a simplified version of the CART algorithm:
Suppose we want to solve the classification problem of deciding whether animals are good or bad (regression is similar) and we have a labeled data set of the form
$$S = \{(\vec{x}, y)_i\}_{i = 1}^n$$
Where $\vec{x}$ is written with an arrow to emphasize that it is a vector of features, $x_i[0]$, $x_i[1]$, ...etc.
We're going to construct the tree in a top-down approach - that is, we're going to start at the top, create a node, and then walk downwards, creating more nodes as we go. Suppose that we want to branch on feature $k$, and there are two possible values for feature $k$ (which is a label, rather than a continuous variable), cat and dog. If we split based on feature $k$, we'll have two clumps of data, $S_{cat}$, and $S_{dog}$. How can we tell whether this is a good split? We'll look at two metrics to evaluate how to choose $k$ at every step.
Gini Impurity
Gini impurity is a measure of the mixiness of a set. If a set of points is really pure (i.e. 100% one label or 100% another label) the Gini impurity will be low. For a given set $S$ (say, $S' = S_{cat}$) we first get the fraction $f_{good}$ of $S'$ that consists of good and the fraction $f_{bad}$ that is bad. More generally, let the labels be $i = [0, 1, 2, ... k]$, and we calculate $f_i$ for all $i$. Then,
$$I_G(S') = \sum_{i = 0}^k f_i(1 - f_i)$$
Note: as a function of each $f_i$, $I_G$ is minimized whenever $f_i = 0$ or $f_i = 1$ (i.e. when the set is the most pure), and maximized when the classes are evenly mixed.
So, to choose the best feature to branch on, we just choose whichever split yields the lowest total Gini impurity, weighting each subset's impurity by its share of the data: $\frac{|S_{cat}|}{|S|} I_G(S_{cat}) + \frac{|S_{dog}|}{|S|} I_G(S_{dog})$.
Example: if our branch factor is breed, and the set of cats contains 45 good cats and 15 bad cats, and the set of dogs contains 60 good dogs and only 5 bad dogs, the weighted Gini impurity of the split is:
$$I_G = \frac{60}{125}\left(\frac{45}{60}\cdot\frac{15}{60} + \frac{15}{60}\cdot\frac{45}{60}\right) + \frac{65}{125}\left(\frac{60}{65}\cdot\frac{5}{65} + \frac{5}{65}\cdot\frac{60}{65}\right) \approx 0.254$$
Question: if you look closely above, the two terms inside each set's parentheses are the same. Is this always true?
However, if we split based on color, and the brown animals were 100 good and 10 bad, and the black animals were 5 good and 10 bad, then the Gini impurity is:
$$I_G = \frac{110}{125}\left(\frac{100}{110}\cdot\frac{10}{110} + \frac{10}{110}\cdot\frac{100}{110}\right) + \frac{15}{125}\left(\frac{5}{15}\cdot\frac{10}{15} + \frac{10}{15}\cdot\frac{5}{15}\right) \approx 0.199$$
So, we'd choose to branch based on color, which makes some intuitive sense.
Information Gain
Similar to Gini impurity, information gain is another metric. Basically, for any mixed-up set we can measure its [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_%28information_theory%29), which is a measure of randomness. So, at each stage we try to reduce the entropy by as much as possible. Specifically, given a piece of information $I$ (e.g. the value of a feature), the difference between the entropy before knowing $I$ and the entropy afterwards is called the information gain. The entropy itself is
$$I_E(S') = -\sum_{i = 0}^k f_i \cdot \log f_i$$
What about regression?
If our output isn't class labels (i.e., good or bad), then what are we supposed to do? Fall back on our old friend, mean squared error! We find a mean for each of the target nodes, and then calculate the overall reduction in mean squared error.
Wow, this seems kind of complicated
It is, a little bit. Luckily, we're not cavepeople and we can use the fruits of other people's labor, instead of writing our own code. Everyone's favorite Python library, scikit-learn comes pretty handy here:
End of explanation
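To make the arithmetic above concrete, here is a small plain-Python sketch (not part of scikit-learn) that computes the size-weighted Gini impurity of a candidate split:
def gini(counts):
    # counts: class counts within one node, e.g. [45, 15]
    total = float(sum(counts))
    return sum((c / total) * (1 - c / total) for c in counts)

def weighted_split_impurity(*nodes):
    # nodes: one list of class counts per child node
    n = float(sum(sum(node) for node in nodes))
    return sum(sum(node) / n * gini(node) for node in nodes)

# Splitting on breed: cats = 45 good / 15 bad, dogs = 60 good / 5 bad
print(weighted_split_impurity([45, 15], [60, 5]))   # ~0.254
# Splitting on color: brown = 100 good / 10 bad, black = 5 good / 10 bad
print(weighted_split_impurity([100, 10], [5, 10]))  # ~0.199, so color is the better split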
from sklearn.datasets import load_iris
from sklearn import tree
import numpy as np
# load data
iris = load_iris()
# just to see what's going on
mask = np.random.randint(len(iris['data']), size=10)
print("Input:")
print(iris['data'][mask, :])
print("Output:")
print(iris['target'][mask])
Explanation: Let's take a look at an example:
Let's load up a sample dataset and walk through using a decision tree on it:
End of explanation
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
Explanation: So we're predicting class labels given 4 features. Let's do this
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import pydotplus
from IPython.display import Image, display
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
display(Image(graph.create_png()))
Explanation: Wait, we're done? That's it?
End of explanation |
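The title also promises random forests. As a rough sketch (the split proportion and n_estimators are arbitrary choices here, and older scikit-learn versions keep train_test_split in sklearn.cross_validation), a forest can be compared against the single tree on held-out iris data:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X_tr, X_te, y_tr, y_te = train_test_split(iris.data, iris.target, test_size=0.3, random_state=0)
tree_clf = tree.DecisionTreeClassifier().fit(X_tr, y_tr)
forest_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Decision tree accuracy:", accuracy_score(y_te, tree_clf.predict(X_te)))
print("Random forest accuracy:", accuracy_score(y_te, forest_clf.predict(X_te)))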
10,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Evoked data structure
Step1: Creating Evoked objects from Epochs
Step2: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the
Step3: Like the plot() methods for
Step4: To select based on time in seconds, the
Step5: Similarities among the core data structures
Step6: Notice that
Step7: If you want to load only some of the conditions present in a .fif file,
Step8: Above, when we created an
Step9: This can be remedied by either passing a baseline parameter to
Step10: Notice that
Step11: This approach will weight each epoch equally and create a single
Step12: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use
Step13: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting | Python Code:
import os
import mne
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut_creating_data_structures.
:depth: 2
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
evoked.plot()
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power <GFP> alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
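As a quick illustration of the renaming methods mentioned above (a sketch only; the new channel name is arbitrary and 'EEG 002' is simply one of the channels in this dataset):
evoked_renamed = evoked.copy()
evoked_renamed.rename_channels({'EEG 002': 'example_electrode'})
print('example_electrode' in evoked_renamed.ch_names)
# set_channel_types works the same way, e.g. evoked_renamed.set_channel_types({'example_electrode': 'eog'})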
for evok in evokeds_list:
print(evok.comment)
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
evokeds_list[0].plot(picks='eeg')
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
~mne.io.Raw object (the default baseline period is (None, 0),
which assured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
evokeds_list[0].apply_baseline((None, 0))
evokeds_list[0].plot(picks='eeg')
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation |
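As a brief illustration of the standard-error variant mentioned above (a sketch using the epochs object created earlier):
aud_std_err = epochs['auditory/left'].standard_error()
print(aud_std_err.kind)   # reports that this Evoked holds standard errors rather than averages
aud_std_err.plot(picks='eeg')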
10,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the dataset
Fetch the Portuguese/English translation dataset from tfds
Step3: This dataset produces Portuguese/English sentence pairs
Step4: Note a few things about the example sentences above
Step5: Generate the vocabulary
This section generates a wordpiece vocabulary from a dataset. If you already have a vocabulary file and just want to see how to build a text.BertTokenizer or text.Wordpiece tokenizer with it then you can skip ahead to the Build the tokenizer section.
Note
Step6: The bert_vocab.bert_vocab_from_dataset function will generate the vocabulary.
There are many arguments you can set to adjust its behavior. For this tutorial, you'll mostly use the defaults. If you want to learn more about the options, first read about the algorithm, and then have a look at the code.
This takes about 2 minutes.
Step7: Here are some slices of the resulting vocabulary.
Step8: Write a vocabulary file
Step9: Use that function to generate a vocabulary from the english data
Step10: Here are the two vocabulary files
Step11: Build the tokenizer
<a id="build_the_tokenizer"></a>
The text.BertTokenizer can be initialized by passing the vocabulary file's path as the first argument (see the section on tf.lookup for other options)
Step12: Now you can use it to encode some text. Take a batch of 3 examples from the english data
Step13: Run it through the BertTokenizer.tokenize method. Initially, this returns a tf.RaggedTensor with axes (batch, word, word-piece)
Step14: If you replace the token IDs with their text representations (using tf.gather) you can see that in the first example the words "searchability" and "serendipity" have been decomposed into "search ##ability" and "s ##ere ##nd ##ip ##ity"
Step15: To re-assemble words from the extracted tokens, use the BertTokenizer.detokenize method
Step16: Note
Step17: Custom detokenization
Before exporting the tokenizers there are a couple of things you can cleanup for the downstream tutorials
Step18: Export
The following code block builds a CustomTokenizer class to contain the text.BertTokenizer instances, the custom logic, and the @tf.function wrappers required for export.
Step19: Build a CustomTokenizer for each language
Step20: Export the tokenizers as a saved_model
Step21: Reload the saved_model and test the methods
Step22: Archive it for the translation tutorials
Step23: <a id="algorithm"></a>
Optional
Step24: Now you have direct access to the lookup table used in the tokenizer.
Step25: You don't need to use a vocabulary file, tf.lookup has other initializer options. If you have the vocabulary in memory you can use lookup.KeyValueTensorInitializer | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install -q -U tensorflow-text
!pip install -q tensorflow_datasets
import collections
import os
import pathlib
import re
import string
import sys
import tempfile
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import tensorflow_text as text
import tensorflow as tf
tf.get_logger().setLevel('ERROR')
pwd = pathlib.Path.cwd()
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/subwords_tokenizer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/subwords_tokenizer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Subword tokenizers
This tutorial demonstrates how to generate a subword vocabulary from a dataset, and use it to build a text.BertTokenizer from the vocabulary.
The main advantage of a subword tokenizer is that it interpolates between word-based and character-based tokenization. Common words get a slot in the vocabulary, but the tokenizer can fall back to word pieces and individual characters for unknown words.
Objective: At the end of this tutorial you'll have built a complete end-to-end wordpiece tokenizer and detokenizer from scratch, and saved it as a saved_model that you can load and use in this translation tutorial.
Overview
The tensorflow_text package includes TensorFlow implementations of many common tokenizers. This includes three subword-style tokenizers:
text.BertTokenizer - The BertTokenizer class is a higher level interface. It includes BERT's token splitting algorithm and a WordPieceTokenizer. It takes sentences as input and returns token-IDs.
text.WordpieceTokenizer - The WordPieceTokenizer class is a lower level interface. It only implements the WordPiece algorithm. You must standardize and split the text into words before calling it. It takes words as input and returns token-IDs.
text.SentencepieceTokenizer - The SentencepieceTokenizer requires a more complex setup. Its initializer requires a pre-trained sentencepiece model. See the google/sentencepiece repository for instructions on how to build one of these models. It can accept sentences as input when tokenizing.
This tutorial builds a Wordpiece vocabulary in a top down manner, starting from existing words. This process doesn't work for Japanese, Chinese, or Korean since these languages don't have clear multi-character units. To tokenize these languages conside using text.SentencepieceTokenizer, text.UnicodeCharTokenizer or this approach.
Setup
End of explanation
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Explanation: Download the dataset
Fetch the Portuguese/English translation dataset from tfds:
End of explanation
for pt, en in train_examples.take(1):
print("Portuguese: ", pt.numpy().decode('utf-8'))
print("English: ", en.numpy().decode('utf-8'))
Explanation: This dataset produces Portuguese/English sentence pairs:
End of explanation
train_en = train_examples.map(lambda pt, en: en)
train_pt = train_examples.map(lambda pt, en: pt)
Explanation: Note a few things about the example sentences above:
* They're lower case.
* There are spaces around the punctuation.
* It's not clear if or what unicode normalization is being used.
End of explanation
from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab
Explanation: Generate the vocabulary
This section generates a wordpiece vocabulary from a dataset. If you already have a vocabulary file and just want to see how to build a text.BertTokenizer or text.Wordpiece tokenizer with it then you can skip ahead to the Build the tokenizer section.
Note: The vocabulary generation code used in this tutorial is optimized for simplicity. If you need a more scalable solution consider using the Apache Beam implementation available in tools/wordpiece_vocab/generate_vocab.py
The vocabulary generation code is included in the tensorflow_text pip package. It is not imported by default , you need to manually import it:
End of explanation
bert_tokenizer_params=dict(lower_case=True)
reserved_tokens=["[PAD]", "[UNK]", "[START]", "[END]"]
bert_vocab_args = dict(
# The target vocabulary size
vocab_size = 8000,
# Reserved tokens that must be included in the vocabulary
reserved_tokens=reserved_tokens,
# Arguments for `text.BertTokenizer`
bert_tokenizer_params=bert_tokenizer_params,
# Arguments for `wordpiece_vocab.wordpiece_tokenizer_learner_lib.learn`
learn_params={},
)
%%time
pt_vocab = bert_vocab.bert_vocab_from_dataset(
train_pt.batch(1000).prefetch(2),
**bert_vocab_args
)
Explanation: The bert_vocab.bert_vocab_from_dataset function will generate the vocabulary.
There are many arguments you can set to adjust its behavior. For this tutorial, you'll mostly use the defaults. If you want to learn more about the options, first read about the algorithm, and then have a look at the code.
This takes about 2 minutes.
End of explanation
print(pt_vocab[:10])
print(pt_vocab[100:110])
print(pt_vocab[1000:1010])
print(pt_vocab[-10:])
Explanation: Here are some slices of the resulting vocabulary.
End of explanation
def write_vocab_file(filepath, vocab):
with open(filepath, 'w') as f:
for token in vocab:
print(token, file=f)
write_vocab_file('pt_vocab.txt', pt_vocab)
Explanation: Write a vocabulary file:
End of explanation
%%time
en_vocab = bert_vocab.bert_vocab_from_dataset(
train_en.batch(1000).prefetch(2),
**bert_vocab_args
)
print(en_vocab[:10])
print(en_vocab[100:110])
print(en_vocab[1000:1010])
print(en_vocab[-10:])
Explanation: Use that function to generate a vocabulary from the english data:
End of explanation
write_vocab_file('en_vocab.txt', en_vocab)
!ls *.txt
Explanation: Here are the two vocabulary files:
End of explanation
pt_tokenizer = text.BertTokenizer('pt_vocab.txt', **bert_tokenizer_params)
en_tokenizer = text.BertTokenizer('en_vocab.txt', **bert_tokenizer_params)
Explanation: Build the tokenizer
<a id="build_the_tokenizer"></a>
The text.BertTokenizer can be initialized by passing the vocabulary file's path as the first argument (see the section on tf.lookup for other options):
End of explanation
for pt_examples, en_examples in train_examples.batch(3).take(1):
for ex in en_examples:
print(ex.numpy())
Explanation: Now you can use it to encode some text. Take a batch of 3 examples from the english data:
End of explanation
# Tokenize the examples -> (batch, word, word-piece)
token_batch = en_tokenizer.tokenize(en_examples)
# Merge the word and word-piece axes -> (batch, tokens)
token_batch = token_batch.merge_dims(-2,-1)
for ex in token_batch.to_list():
print(ex)
Explanation: Run it through the BertTokenizer.tokenize method. Initially, this returns a tf.RaggedTensor with axes (batch, word, word-piece):
End of explanation
# Lookup each token id in the vocabulary.
txt_tokens = tf.gather(en_vocab, token_batch)
# Join with spaces.
tf.strings.reduce_join(txt_tokens, separator=' ', axis=-1)
Explanation: If you replace the token IDs with their text representations (using tf.gather) you can see that in the first example the words "searchability" and "serendipity" have been decomposed into "search ##ability" and "s ##ere ##nd ##ip ##ity":
End of explanation
words = en_tokenizer.detokenize(token_batch)
tf.strings.reduce_join(words, separator=' ', axis=-1)
Explanation: To re-assemble words from the extracted tokens, use the BertTokenizer.detokenize method:
End of explanation
START = tf.argmax(tf.constant(reserved_tokens) == "[START]")
END = tf.argmax(tf.constant(reserved_tokens) == "[END]")
def add_start_end(ragged):
count = ragged.bounding_shape()[0]
starts = tf.fill([count,1], START)
ends = tf.fill([count,1], END)
return tf.concat([starts, ragged, ends], axis=1)
words = en_tokenizer.detokenize(add_start_end(token_batch))
tf.strings.reduce_join(words, separator=' ', axis=-1)
Explanation: Note: BertTokenizer.tokenize/BertTokenizer.detokenize does not round
trip losslessly. The result of detokenize will not, in general, have the
same content or offsets as the input to tokenize. This is because of the
"basic tokenization" step, that splits the strings into words before
applying the WordpieceTokenizer, includes irreversible
steps like lower-casing and splitting on punctuation. WordpieceTokenizer
on the other hand is reversible.
Customization and export
This tutorial builds the text tokenizer and detokenizer used by the Transformer tutorial. This section adds methods and processing steps to simplify that tutorial, and exports the tokenizers using tf.saved_model so they can be imported by the other tutorials.
Custom tokenization
The downstream tutorials both expect the tokenized text to include [START] and [END] tokens.
The reserved_tokens reserve space at the beginning of the vocabulary, so [START] and [END] have the same indexes for both languages:
End of explanation
def cleanup_text(reserved_tokens, token_txt):
# Drop the reserved tokens, except for "[UNK]".
bad_tokens = [re.escape(tok) for tok in reserved_tokens if tok != "[UNK]"]
bad_token_re = "|".join(bad_tokens)
bad_cells = tf.strings.regex_full_match(token_txt, bad_token_re)
result = tf.ragged.boolean_mask(token_txt, ~bad_cells)
# Join them into strings.
result = tf.strings.reduce_join(result, separator=' ', axis=-1)
return result
en_examples.numpy()
token_batch = en_tokenizer.tokenize(en_examples).merge_dims(-2,-1)
words = en_tokenizer.detokenize(token_batch)
words
cleanup_text(reserved_tokens, words).numpy()
Explanation: Custom detokenization
Before exporting the tokenizers there are a couple of things you can cleanup for the downstream tutorials:
They want to generate clean text output, so drop reserved tokens like [START], [END] and [PAD].
They're interested in complete strings, so apply a string join along the words axis of the result.
End of explanation
class CustomTokenizer(tf.Module):
def __init__(self, reserved_tokens, vocab_path):
self.tokenizer = text.BertTokenizer(vocab_path, lower_case=True)
self._reserved_tokens = reserved_tokens
self._vocab_path = tf.saved_model.Asset(vocab_path)
vocab = pathlib.Path(vocab_path).read_text().splitlines()
self.vocab = tf.Variable(vocab)
## Create the signatures for export:
# Include a tokenize signature for a batch of strings.
self.tokenize.get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string))
# Include `detokenize` and `lookup` signatures for:
# * `Tensors` with shapes [tokens] and [batch, tokens]
# * `RaggedTensors` with shape [batch, tokens]
self.detokenize.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.detokenize.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.TensorSpec(shape=[None, None], dtype=tf.int64))
self.lookup.get_concrete_function(
tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int64))
# These `get_*` methods take no arguments
self.get_vocab_size.get_concrete_function()
self.get_vocab_path.get_concrete_function()
self.get_reserved_tokens.get_concrete_function()
@tf.function
def tokenize(self, strings):
enc = self.tokenizer.tokenize(strings)
# Merge the `word` and `word-piece` axes.
enc = enc.merge_dims(-2,-1)
enc = add_start_end(enc)
return enc
@tf.function
def detokenize(self, tokenized):
words = self.tokenizer.detokenize(tokenized)
return cleanup_text(self._reserved_tokens, words)
@tf.function
def lookup(self, token_ids):
return tf.gather(self.vocab, token_ids)
@tf.function
def get_vocab_size(self):
return tf.shape(self.vocab)[0]
@tf.function
def get_vocab_path(self):
return self._vocab_path
@tf.function
def get_reserved_tokens(self):
return tf.constant(self._reserved_tokens)
Explanation: Export
The following code block builds a CustomTokenizer class to contain the text.BertTokenizer instances, the custom logic, and the @tf.function wrappers required for export.
End of explanation
tokenizers = tf.Module()
tokenizers.pt = CustomTokenizer(reserved_tokens, 'pt_vocab.txt')
tokenizers.en = CustomTokenizer(reserved_tokens, 'en_vocab.txt')
Explanation: Build a CustomTokenizer for each language:
End of explanation
model_name = 'ted_hrlr_translate_pt_en_converter'
tf.saved_model.save(tokenizers, model_name)
Explanation: Export the tokenizers as a saved_model:
End of explanation
reloaded_tokenizers = tf.saved_model.load(model_name)
reloaded_tokenizers.en.get_vocab_size().numpy()
tokens = reloaded_tokenizers.en.tokenize(['Hello TensorFlow!'])
tokens.numpy()
text_tokens = reloaded_tokenizers.en.lookup(tokens)
text_tokens
round_trip = reloaded_tokenizers.en.detokenize(tokens)
print(round_trip.numpy()[0].decode('utf-8'))
Explanation: Reload the saved_model and test the methods:
End of explanation
!zip -r {model_name}.zip {model_name}
!du -h *.zip
Explanation: Archive it for the translation tutorials:
End of explanation
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.TextFileInitializer(
filename='pt_vocab.txt',
key_dtype=tf.string,
key_index = tf.lookup.TextFileIndex.WHOLE_LINE,
value_dtype = tf.int64,
value_index=tf.lookup.TextFileIndex.LINE_NUMBER))
pt_tokenizer = text.BertTokenizer(pt_lookup)
Explanation: Optional: The algorithm
It's worth noting here that there are two versions of the WordPiece algorithm: bottom-up and top-down. In both cases the goal is the same: "Given a training corpus and a number of desired
tokens D, the optimization problem is to select D wordpieces such that the resulting corpus is minimal in the
number of wordpieces when segmented according to the chosen wordpiece model."
The original bottom-up WordPiece algorithm is based on byte-pair encoding. Like BPE, it starts with the alphabet and iteratively combines common bigrams to form word-pieces and words.
TensorFlow Text's vocabulary generator follows the top-down implementation from BERT: it starts with words and breaks them down into smaller components until they hit the frequency threshold or can't be broken down further. The next section describes this in detail. For Japanese, Chinese and Korean this top-down approach doesn't work, since there are no explicit word units to start with. For those you need a different approach.
Choosing the vocabulary
The top-down WordPiece generation algorithm takes in a set of (word, count) pairs and a threshold T, and returns a vocabulary V.
The algorithm is iterative. It is run for k iterations, where typically k = 4, but only the first two are really important. The third and fourth (and beyond) are just identical to the second. Note that each step of the binary search runs the algorithm from scratch for k iterations.
The iterations described below:
First iteration
Iterate over every word and count pair in the input, denoted as (w, c).
For each word w, generate every substring, denoted as s. E.g., for the
word human, we generate {h, hu, hum, huma,
human, ##u, ##um, ##uma, ##uman, ##m, ##ma, ##man, ##a, ##an, ##n}.
Maintain a substring-to-count hash map, and increment the count of each s
by c. E.g., if we have (human, 113) and (humas, 3) in our input, the
count of s = huma will be 113+3=116.
Once we've collected the counts of every substring, iterate over the (s,
c) pairs starting with the longest s first.
Keep any s that has a c > T. E.g., if T = 100 and we have (pers,
231); (dogs, 259); (##rint, 76), then we would keep pers and dogs.
When an s is kept, subtract off its count from all of its prefixes. This
is the reason for sorting all of the s by length in step 4. This is a
critical part of the algorithm, because otherwise words would be double
counted. For example, let's say that we've kept human and we get to
(huma, 116). We know that 113 of those 116 came from human, and 3
came from humas. However, now that human is in our vocabulary, we know
we will never segment human into huma ##n. So once human has been
kept, then huma only has an effective count of 3.
This algorithm will generate a set of word pieces s (many of which will be
whole words w), which we could use as our WordPiece vocabulary.
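To make the counting-and-subtracting steps concrete, here is a rough Python sketch of this first iteration. It is only an illustration of the description above, not TensorFlow Text's actual implementation, and the helper name first_iteration is invented for this example.
from collections import defaultdict

def first_iteration(word_counts, threshold):
    # Sketch only. Steps 1-3: count every substring of every word,
    # weighted by the word's count.
    counts = defaultdict(int)
    for word, count in word_counts.items():
        for start in range(len(word)):
            for end in range(start + 1, len(word) + 1):
                # (start > 0) marks a "##"-style continuation piece.
                counts[(start > 0, word[start:end])] += count
    # Steps 4-6: walk the longest substrings first; keep any above the
    # threshold and subtract its count from all of its proper prefixes.
    vocab = []
    for key in sorted(counts, key=lambda k: len(k[1]), reverse=True):
        kept_count = counts[key]
        if kept_count > threshold:
            is_continuation, s = key
            vocab.append(("##" if is_continuation else "") + s)
            for k in range(1, len(s)):
                counts[(is_continuation, s[:k])] -= kept_count
    return vocab

print(first_iteration({"human": 113, "humas": 3}, threshold=100))
Running the example above with the (human, 113) and (humas, 3) counts shows both human and several ## pieces surviving, which is exactly the overgeneration issue discussed next.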
However, there is a problem: This algorithm will severely overgenerate word
pieces. The reason is that we only subtract off counts of prefix tokens.
Therefore, if we keep the word human, we will subtract off the count for h,
hu, hum, huma, but not for ##u, ##um, ##uma, ##uman and so on. So we might
generate both human and ##uman as word pieces, even though ##uman will
never be applied.
So why not subtract off the counts for every substring, not just every
prefix? Because then we could end up subtracting off the counts multiple
times. Let's say that we're processing s of length 5 and we keep both
(##denia, 129) and (##eniab, 137), where 65 of those counts came from the
word undeniable. If we subtract off from every substring, we would subtract
65 from the substring ##enia twice, even though we should only subtract
once. However, if we only subtract off from prefixes, it will correctly only be
subtracted once.
Second (and third ...) iteration
To solve the overgeneration issue mentioned above, we perform multiple
iterations of the algorithm.
Subsequent iterations are identical to the first, with one important
distinction: In step 2, instead of considering every substring, we apply the
WordPiece tokenization algorithm using the vocabulary from the previous
iteration, and only consider substrings which start on a split point.
For example, let's say that we're performing step 2 of the algorithm and
encounter the word undeniable. In the first iteration, we would consider every
substring, e.g., {u, un, und, ..., undeniable, ##n, ##nd, ..., ##ndeniable,
...}.
Now, for the second iteration, we will only consider a subset of these. Let's
say that after the first iteration, the relevant word pieces are:
un, ##deni, ##able, ##ndeni, ##iable
The WordPiece algorithm will segment this into un ##deni ##able (see the
section Applying WordPiece for more information). In this
case, we will only consider substrings that start at a segmentation point. We
will still consider every possible end position. So during the second
iteration, the set of s for undeniable is:
{u, un, und, unden, undeni, undenia, undeniab, undeniabl,
undeniable, ##d, ##de, ##den, ##deni, ##denia, ##deniab, ##deniabl
, ##deniable, ##a, ##ab, ##abl, ##able}
The algorithm is otherwise identical. In this example, in the first iteration,
the algorithm produces the spurious tokens ##ndeni and ##iable. Now, these
tokens are never considered, so they will not be generated by the second
iteration. We perform several iterations just to make sure the results converge
(although there is no literal convergence guarantee).
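As a small illustration of the split-point restriction, here is a sketch with an invented helper name; the previous iteration's segmentation ["un", "##deni", "##able"] is simply hard-coded in the example call.
def split_point_substrings(word, pieces):
    # Recover the character offset where each piece of last iteration's
    # segmentation starts, then emit substrings only from those offsets.
    starts, offset = [], 0
    for piece in pieces:
        starts.append(offset)
        offset += len(piece) - 2 if piece.startswith("##") else len(piece)
    substrings = []
    for start in starts:
        for end in range(start + 1, len(word) + 1):
            substrings.append(word[start:end] if start == 0 else "##" + word[start:end])
    return substrings

print(split_point_substrings("undeniable", ["un", "##deni", "##able"]))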
Applying WordPiece
Once a WordPiece vocabulary has been generated, we need to be able to apply it
to new data. The algorithm is a simple greedy longest-match-first application.
For example, consider segmenting the word undeniable.
We first lookup undeniable in our WordPiece dictionary, and if it's present,
we're done. If not, we decrement the end point by one character, and repeat,
e.g., undeniabl.
Eventually, we will either find a subtoken in our vocabulary, or get down to a
single character subtoken. (In general, we assume that every character is in our
vocabulary, although this might not be the case for rare Unicode characters. If
we encounter a rare Unicode character that's not in the vocabulary we simply map
the entire word to <unk>).
In this case, we find un in our vocabulary. So that's our first word piece.
Then we jump to the end of un and repeat the processing, e.g., try to find
##deniable, then ##deniabl, etc. This is repeated until we've segmented the
entire word.
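A minimal sketch of this greedy longest-match-first segmentation follows. It is an illustration only (text.BertTokenizer implements this internally), and the function name is made up for this example.
def wordpiece_segment(word, vocab, unk_token="[UNK]"):
    # Greedy longest-match-first: shrink the end point until a vocabulary
    # entry is found, emit it, then continue from where it ended.
    pieces, start = [], 0
    while start < len(word):
        end, current = len(word), None
        while end > start:
            candidate = word[start:end] if start == 0 else "##" + word[start:end]
            if candidate in vocab:
                current = candidate
                break
            end -= 1
        if current is None:
            return [unk_token]  # e.g. a rare character missing from the vocabulary
        pieces.append(current)
        start = end
    return pieces

print(wordpiece_segment("undeniable", {"un", "##deni", "##able"}))  # ['un', '##deni', '##able']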
Intuition
Intuitively, WordPiece tokenization is trying to satisfy two different
objectives:
Tokenize the data into the least number of pieces as possible. It is
important to keep in mind that the WordPiece algorithm does not "want" to
split words. Otherwise, it would just split every word into its characters,
e.g., human -> {h, ##u, ##m, ##a, ##n}. This is one critical thing that
makes WordPiece different from morphological splitters, which will split
linguistic morphemes even for common words (e.g., unwanted -> {un, want,
ed}).
When a word does have to be split into pieces, split it into pieces that
have maximal counts in the training data. For example, the reason why the
word undeniable would be split into {un, ##deni, ##able} rather than
alternatives like {unde, ##niab, ##le} is that the counts for un and
##able in particular will be very high, since these are common prefixes
and suffixes. Even though the count for ##le must be higher than ##able,
the low counts of unde and ##niab will make this a less "desirable"
tokenization to the algorithm.
Optional: tf.lookup
If you need access to, or more control over, the vocabulary, it's worth noting that you can build the lookup table yourself and pass that to BertTokenizer.
When you pass a string, BertTokenizer does the following:
End of explanation
pt_lookup.lookup(tf.constant(['é', 'um', 'uma', 'para', 'não']))
Explanation: Now you have direct access to the lookup table used in the tokenizer.
End of explanation
pt_lookup = tf.lookup.StaticVocabularyTable(
num_oov_buckets=1,
initializer=tf.lookup.KeyValueTensorInitializer(
keys=pt_vocab,
values=tf.range(len(pt_vocab), dtype=tf.int64)))
pt_tokenizer = text.BertTokenizer(pt_lookup)
Explanation: You don't need to use a vocabulary file, tf.lookup has other initializer options. If you have the vocabulary in memory you can use lookup.KeyValueTensorInitializer:
End of explanation |
10,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solutions to
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 1)
Welcome to the hands-on session of our tutorial! This tutorial is based on the user guide of NetworKit, our network analysis software. You will learn in this tutorial how to use NetworKit for fundamental tasks in network analysis.
NetworKit can run in your browser (thanks to IPython notebooks) and is still very fast (thanks to C++ code in the background). It is easy to mix text with code and solutions in this environment. Thus, you should be able to obtain your results in a convenient and quick manner. This is not only true for the rather small graphs we use for this tutorial, but for larger instances in production runs as well. In particular you can mix text, code, plots and other rich media in this environment. Since this allows a simplified execution and interpretation of experiments, the interactive approach followed by NetworKit can simplify the cyclic algorithm engineering process significantly (without compromising algorithm performance).
Preparation
Let's start by making NetworKit available in your session. Click into the cell below and hit space-return or click the "Play" button or select "Cell -> Run" in the menu.
Step1: In case a Python warning appears that recommends an update to Python 3.4, simply ignore it for this tutorial. Python 3.3 works just as fine for our purposes.
IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the main directory of your NetworKit installation
Step2: Reading Graphs
Let us start by reading a network from a file on disk
Step3: In the course of this tutorial, we are going to work (among others) on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another (web of trust). It is distributed with NetworKit as a good starting point.
The Graph Object
Graph is the central class of NetworKit. An object of this type represents an optionally weighted network. In this tutorial we work with undirected graphs, but NetworKit supports directed graphs as well.
Let us inspect several of the methods which the class provides. Maybe the most basic information is the number of nodes and edges in as well as the name of the network.
Step4: NetworKit stores nodes simply as integer indices. Edges are pairs of such indices. The following prints the indices of the first ten nodes and edges, respectively.
Step5: Another very useful feature is to determine if an edge is present and what its weight is. In case of unweighted graphs, edges have the default weight 1.
Step6: Many graph algorithms can be expressed with iterators over nodes or edges. As an example, let us iterate over the nodes to determine how many of them have more than 100 neighbors.
Step7: Interesting Features of a Network
Let us become more concrete | Python Code:
from networkit import *
Explanation: Solutions to
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 1)
Welcome to the hands-on session of our tutorial! This tutorial is based on the user guide of NetworKit, our network analysis software. You will learn in this tutorial how to use NetworKit for fundamental tasks in network analysis.
NetworKit can run in your browser (thanks to IPython notebooks) and is still very fast (thanks to C++ code in the background). It is easy to mix text with code and solutions in this environment. Thus, you should be able to obtain your results in a convenient and quick manner. This is not only true for the rather small graphs we use for this tutorial, but for larger instances in production runs as well. In particular you can mix text, code, plots and other rich media in this environment. Since this allows a simplified execution and interpretation of experiments, the interactive approach followed by NetworKit can simplify the cyclic algorithm engineering process significantly (without compromising algorithm performance).
Preparation
Let's start by making NetworKit available in your session. Click into the cell below and hit space-return or click the "Play" button or select "Cell -> Run" in the menu.
End of explanation
cd ~/Documents/workspace/NetworKit
Explanation: In case a Python warning appears that recommends an update to Python 3.4, simply ignore it for this tutorial. Python 3.3 works just as fine for our purposes.
IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the main directory of your NetworKit installation:
End of explanation
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
Explanation: Reading Graphs
Let us start by reading a network from a file on disk: PGPgiantcompo.graph. NetworKit supports a number of popular file formats. There is a convenient function in the top namespace which reads a graph from a file:
End of explanation
n = G.numberOfNodes()
m = G.numberOfEdges()
print(n, m)
G.toString()
Explanation: In the course of this tutorial, we are going to work (among others) on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another (web of trust). It is distributed with NetworKit as a good starting point.
The Graph Object
Graph is the central class of NetworKit. An object of this type represents an optionally weighted network. In this tutorial we work with undirected graphs, but NetworKit supports directed graphs as well.
Let us inspect several of the methods which the class provides. Maybe the most basic information is the number of nodes and edges in as well as the name of the network.
End of explanation
V = G.nodes()
print(V[:10])
E = G.edges()
print(E[:10])
Explanation: NetworKit stores nodes simply as integer indices. Edges are pairs of such indices. The following prints the indices of the first ten nodes and edges, respectively.
End of explanation
edgeExists = G.hasEdge(42,11)
if edgeExists:
print("Weight of existing edge:", G.weight(42,11))
print("Weight of nonexisting edge:", G.weight(42,12))
Explanation: Another very useful feature is to determine if an edge is present and what its weight is. In case of unweighted graphs, edges have the default weight 1.
End of explanation
count = 0 # counts number of nodes with more than 100 neighbors
for v in G.nodes():
if G.degree(v) > 100:
count = count + 1
print("Number of nodes with more than 100 neighbors: ", count)
Explanation: Many graph algorithms can be expressed with iterators over nodes or edges. As an example, let us iterate over the nodes to determine how many of them have more than 100 neighbors.
End of explanation
# Enter code for Q&A Session #1 here
argmin = 0
argmax = 0
mini = G.numberOfNodes()
maxi = 0
degsum = 0
for v in G.nodes():
deg = G.degree(v)
degsum = degsum + deg
if deg < mini:
mini = deg
argmin = v
if deg > maxi:
maxi = deg
argmax = v
avgdeg = degsum / G.numberOfNodes()
# Answers to 1-1) and 1-2)
print("1-1a) min: ", mini, " / argmin: ", argmin)
print("1-1b) max: ", maxi, " / argmax: ", argmax)
print("1-2) avg: ", avgdeg)
# Answer to 1-3)
dd = sorted(centrality.DegreeCentrality(G).run().scores(), reverse=True)
import powerlaw
fit = powerlaw.Fit(dd)
print("1-3) Power law fit: ", fit.alpha)
# Answer to 1-4)
alcc_scores = centrality.LocalClusteringCoefficient(G).run().scores()
alcc = sum(alcc_scores) / len(alcc_scores)
print("1-4) Exact average local clustering coefficient: ", alcc)
# Answer to 1-5)
conncomp = components.ConnectedComponents(G)
conncomp.run()
print("1-5) Number of components: ", conncomp.numberOfComponents())
Explanation: Interesting Features of a Network
Let us become more concrete: In the talk that accompanies this tutorial you learned about basic network features. Go back to the 'Analytics' section of the slides and answer the following questions within the box below, including the code which found your answer (click on the box to enter text). If you need information on method prototypes, you have at least two options: Use the built-in code completion (tab) or the project website, which offers documentation in the form of an automatically generated reference: https://networkit.iti.kit.edu/documentation/ (Python/C++ Documentation in the left navigation bar).
After you answered the questions, go on with Tutorial #2.
Q&A Session #1
Who (which vertex) has the least/most 'friends' and how many?
Answer:
How many neighbors does a vertex have on average?
Answer:
Does the degree distribution follow a power law?
Answer:
How often is the friend of a friend also a friend? Let's go for the average fraction here, other definitions are possible...
Answer:
How many connected components does the graph have?
Answer:
End of explanation |
10,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Working with xml
Step1: 1.3 From file to XML object
Opening an xml file is actually quite simple
Step2: As you can see, we obtained an instance of type lxml.etree._ElementTree. It means the xml markup has been transformed into something Python understands.
The parse function of etree does not take many arguments. One way to customize its behaviour is to give it a home configured or homemade xml parser
Step3: From the documentation of the XMLParser function, here are some arguments that might be useful for you
Step4: DIY
Can you parse a xml document made of one tag "humanities" with two children "field" named "classics" and "history"?
Step6: 1.3.2 Errors and understanding them
Previouly, we have said that lxml was quite strict about xml validity. Let's see an example
Step10: What error did we raise trying to parse this XML ? We got an XMLSyntaxError. It can happen for various reasons, including when entities cannot be parsed. Can you try to find another way to raise an XMLSyntaxError ?
Step11: As you can see, errors are detailed enough so you can correct your own XML, at least manually.
1.3.3 Node properties and methods
Quick explanation
Step13: You can do plenty of things using lxml and access properties or methods of nodes, here is an overview of reading functionalities offered by lxml
Step14: If we want to retrieve the attributes of our div, we can do as follow
Step15: Great ! We accessed our first information using lxml ! Now, how about getting somewhere other than the root tag ? To do so, there are two ways
Step16: Now that we have access to our children, we can have access to their text
Step17: Ok, we are now able to get some stuff done. Remember the namespace naming ? Sometimes it's useful to retrieve namespaces and their prefix
Step19: What you've learned
Step20: As you can see, the xpath returns a list. This behaviour is intended, since an xpath can retrieve more than one item
Step21: You see ? The xpath //l returns two elements, just like python does in a list. Now, let's apply some xpath to the children and see what happens
Step22: As you can see, you can do xpath from any node in lxml. One important thing though
Step24: Xpath with namespaces and prefix
As you've seen, lxml use Clark's naming convention for expressing namespaces. This is extremely important regarding xpath, because you will be able to retrieve a node using it under certain conditions
Step27: What you have learned
Step28: Did you see what happened ? We used xslt(xml). etree.XSLT() transforms a xsl document into a function, which then takes one parameter (in this case an xml document). But can you figure out what this returns ? Let's ask Python
Step29: The result is not of the same type of element we usually have, even though it does share most of its methods and attributes
Step30: And has something more
Step33: XSLT is more complex than just inputing xml. You can do XSLT using parameters as well. In this case, your parameters will be accessibles as a named argument to the generated function. If your XSL has a name xsl-param, the function given back by etree.XSLT will have a name argument
Step34: 2. Using ElementTree
Step35: 2.1 Traversing the Parsed Tree
To visit all of the children in order, use iter() to create a generator that iterates over the ElementTree instance.
Step36: 2.1.1 Finding Nodes in a Document¶
Walking the entire tree like this searching for relevant nodes can be error prone. The example above had to look at each outline node to determine if it was a group (nodes with only a text attribute) or podcast (with both text and xmlUrl). To produce a simple list of the podcast feed URLs, without names or groups, for a podcast downloader application, the logic could be simplified using findall() to look for nodes with more descriptive search characteristics.
As a first pass at converting the above example, we can construct an XPath argument to look for all outline nodes.
Step37: Another version can take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline mean the loop will process only the second level of outline nodes.
Step38: 2.1.2 Parsed Node Attributes
The items returned by findall() and iter() are Element objects, each representing a node in the XML parse tree. Each Element has attributes for accessing data pulled out of the XML. This can be illustrated with a somewhat more contrived example input file, data.xml
Step39: 2.1.3 Parsing Strings
To work with smaller bits of XML text, especially string literals as might be embedded in the source of a program, use XML() and the string containing the XML to be parsed as the only argument. | Python Code:
from lxml import etree
Explanation: 1. Working with xml : reading
1.1 Introduction
Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere.
It has been defined at https://www.w3.org/XML/.
Several schema systems exist to aid in the definition of XML-based languages, while programmers have developed many application programming interfaces (APIs) to aid the processing of XML data.
1.2 Parsing XML with Python
As for querying the web, Python has many libraries for playing with xml. You will most likely encounter the following during your Pythonic journey :
lxml, which we will use for this course. A clean, quite fast, strict library for dealing with xml resources. It's the most accepted library for this kind of request. If IBM writes tutorials for it, it should be good. It supports xpath and xslt.
BeautifulSoup. Flexible, average speed. The good thing is if your xml markup is messed up, it will try to correct it. It's perfect for dealing with web scrapped data in HTML formats. For clean xml, it might be too slow.
xml : the native integration in Python. Fast, clean but no good sides such as xpath and xslt.
Read about others on the Python official wiki
Based on my experience, lxml will meet most of your needs when dealing with clean data. Clean is the key word here : do not expect lxml to play well with bad html or bad xml. It will just throw errors at you until you give up or fix it by hand.
We can import lxml.etree the same way we imported requests earlier.
End of explanation
# We open our file
with open("data/books.xml") as file:
# We use the etree.parse property
parsed = etree.parse(file)
# We print the object
print(parsed)
Explanation: 1.3 From file to XML object
Opening an xml file is actually quite simple : you open it and you parse it. Who would have guessed ?
End of explanation
# We initiate a new parser from etree, asking it to remove nodes of text which are empty
parser = etree.XMLParser(remove_blank_text=True)
# We open the file
with open("data/books.xml") as file:
# And we parse using the new parser
parsed = etree.parse(file, parser)
# We print the object
print(parsed)
Explanation: As you can see, we obtained an instance of type lxml.etree._ElementTree. It means the xml markup has been transformed into something Python understands.
The parse function of etree does not take many arguments. One way to customize its behaviour is to give it a home configured or homemade xml parser :
End of explanation
xml = '<root xmlns:a="xmlns1" xmlns:b="xmlns2"><tag xmlns:c="xmlns3" /><tag xmlns:a="xmlns1" /><tag /></root>'
parsed = etree.fromstring(xml)
print(parsed)
Explanation: From the documentation of the XMLParser function, here are some arguments that might be useful for you :
attribute_defaults : Use DTD (if available) to add the default attributes
dtd_validation : Validate against DTD while parsing
load_dtd : Load and parse the DTD while parsing
ns_clean : Clean up redundant namespace declarations
recover : Try to fix ill-formed xml
remove_blank_text : Removes blank text nodes
resolve_entities : Replace entities by their value (Default : on)
You can then create a new parser with your own settings, for example one that cleans up redundant namespace declarations. In this context, ns_clean would transform
<root xmlns:a="xmlns1" xmlns:b="xmlns2"><tag xmlns:c="xmlns3" /><tag xmlns:a="xmlns1" /><tag /></root>
into
<root xmlns:a="xmlns1" xmlns:b="xmlns2"><tag xmlns:c="xmlns3" /><tag/><tag /></root>
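A quick way to check this yourself is sketched below; it reuses the xml string above and assumes only that lxml is imported as etree.
ns_parser = etree.XMLParser(ns_clean=True)
redundant = '<root xmlns:a="xmlns1" xmlns:b="xmlns2"><tag xmlns:c="xmlns3" /><tag xmlns:a="xmlns1" /><tag /></root>'
cleaned = etree.fromstring(redundant, ns_parser)
print(etree.tostring(cleaned))  # the redundant xmlns:a declaration on the second tag is gone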
1.3.1 From string to XML object
lxml parses strings in the same way that it parses files. The syntax differs, but is quite simple :
End of explanation
# Put your code here
Explanation: DIY
Can you parse an xml document made of one tag "humanities" with two children "field" named "classics" and "history"?
End of explanation
xml = """
<fileDesc>
<titleStmt>
<title>Aeneid</title>
<title type="sub">Machine readable text</title>
<author n="Verg.">P. Vergilius Maro</author>
<editor role="editor" n="Greenough">J. B. Greenough</editor>
</titleStmt>
<extent>about 505Kb</extent>
<!-- &Perseus.publish;-->
<sourceDesc>
<biblStruct>
<monogr>
<author>Vergil</author>
<title>Bucolics, Aeneid, and Georgics Of Vergil</title>
<editor role="editor">J. B. Greenough</editor>
<imprint>
<pubPlace>Boston</pubPlace>
<publisher>Ginn & Co.</publisher>
<date>1900</date>
</imprint>
</monogr>
</biblStruct>
</sourceDesc>
</fileDesc>"""
etree.fromstring(xml)
Explanation: 1.3.2 Errors and understanding them
Previously, we have said that lxml was quite strict about xml validity. Let's see an example :
End of explanation
# Write your xml in the xml variable
xml = """"""  # invalid: an empty document raises an XMLSyntaxError
xml2 = """<start>this is a text</start>"""
xml3 = """<start attr="test"/>"""
etree.fromstring(xml3)
Explanation: What error did we raise trying to parse this XML ? We got an XMLSyntaxError. It can happen for various reasons, including when entities cannot be parsed. Can you try to find another way to raise an XMLSyntaxError ?
End of explanation
# With no namespace
print(etree.fromstring("<root />"))
# With namespace
print(etree.fromstring("<root xmlns='http://localhost' />"))
Explanation: As you can see, errors are detailed enough so you can correct your own XML, at least manually.
1.3.3 Node properties and methods
Quick explanation : Methods and properties are something special in Python and other programming languages. Unlike traditional functions (len()) and keys of dictionaries (a["b"]), they are part of something bigger.
Methods : Ever seen something such as a.method() ? Yes, you did with .split(), .join(), etc. Functions following a variable with a dot are called methods because they are an extension of the variable type. E.g., split() and join() are extensions of string objects, and they use the variable's value as their argument.
Properties or Attributes : Such as dictionary keys, properties are indexed values of an object, but instead of using the syntax made of square brackets, you just put the name of the key after a dot : a.property
Warning : namespaces : In lxml, namespaces are expressed using the Clark notation. This means that, if a namespace defines a node, this node will be named using the following syntax "{namespace}tagname". Here is an example :
End of explanation
# First, we will need some xml
xml = """
<div type="Book" n="1">
<l n="1">Arma virumque cano, Troiae qui primus ab oris</l>
<tei:l n="2" xmlns:tei="http://www.tei-c.org/ns/1.0">Italiam, fato profugus, Laviniaque venit</tei:l>
<l n="3">litora, multum ille et terris iactatus et alto</l>
<l n="4">vi superum saevae memorem Iunonis ob iram;</l>
<l n="5">multa quoque et bello passus, dum conderet urbem,</l>
<l n="6">inferretque deos Latio, genus unde Latinum,</l>
<l n="7">Albanique patres, atque altae moenia Romae.</l>
</div>"""
div = etree.fromstring(xml)
print(div)
Explanation: You can do plenty of things using lxml and access properties or methods of nodes, here is an overview of reading functionalities offered by lxml :
Let's see what that means in real life :
End of explanation
type_div = div.get("type")
print(type_div)
print(div.get("n"))
# If we want a dictionary of attributes
print(div.attrib)
attributes_div = dict(div.attrib)
print(attributes_div)
# Of if we want a list
list_attributes_div = div.items()
print(list_attributes_div)
Explanation: If we want to retrieve the attributes of our div, we can do as follow :
End of explanation
children = div.getchildren()
print(children)
line_1 = children[0] # Because it's a list we can access children through index
print(line_1)
Explanation: Great ! We accessed our first information using lxml ! Now, how about getting somewhere other than the root tag ? To do so, there are two ways :
getchildren() will returns a list of children tags, such as div.
list(div) will transform div in a list of children.
Both syntaxes return the same results, so it's up to you to decide which one you prefer.
End of explanation
print(line_1.text)
Explanation: Now that we have access to our children, we can have access to their text :
End of explanation
# <tei:l n="2" xmlns:tei="http://www.tei-c.org/ns/1.0">Italiam, fato profugus, Laviniaque venit</tei:l>
line_2 = children[1]
print(line_2.nsmap)
print(line_2.prefix)
print(line_2.tag)
Explanation: Ok, we are now able to get some stuff done. Remember the namespace naming ? Sometimes it's useful to retrieve namespaces and their prefix :
End of explanation
# We generate some xml and parse it
## TODO
xml = """<div>
<l n="1">
<p>Text</p>
<p>new p</p>
followed
<test>
<p>p3</p>
</test>
</l>
<l n="2">
by line two
</l>
<p>test</p>
<p><l n="3"> line 3</l></p>
</div>"""
div = etree.fromstring(xml)
print(div)
# When doing an xpath, the results will be a list
print("-"*20)
ps = div.xpath("/div/l")
for p in ps:
print(p)
print("-"*20)
# print(ps)
print([value.values()[0] for value in ps])
print(ps[0].text == "Text")
Explanation: What you've learned :
How to parse a xml file or a string representing xml through etree.parse() and etree.fromstring()
How to configure the way xml is parsed with etree.XMLParser()
What is an attribute and a method
Properties and methods of a node
XMLParseError handling
Clark's notation for namespaces and tags.
1.4 . XPath and XSLT with lxml
1.4.1 XPath
XPath is a powerful tool for traversing an xml tree. XML is made of nodes such as tags, comments, texts. These nodes have attributes that can be used to identify them. For example, with the following xml :
<div><l n="1"><p>Text</p> followed</l><l n="2">by line two</div>
the node p will be accessible by /div/l[@n="1"]/p. LXML has great support for complex XPath, which makes it the best friend of Humanists dealing with xml :
End of explanation
print(div.xpath("//l"))
Explanation: As you can see, the xpath returns a list. This behaviour is intended, since an xpath can retrieve more than one item :
End of explanation
# We assign our first line to a variable
line_1 = div.xpath("//l")[0]
#print(dir(line_1))
print(line_1.attrib['n'])
# We look for p
print(line_1.xpath("p")) # This works
print(line_1.xpath("./p")) # This works too
print(line_1.xpath(".//p")) # This still works
print(line_1.xpath("//p")) # entire doc
Explanation: You see ? The xpath //l returns two elements, just like python does in a list. Now, let's apply some xpath to the children and see what happens :
End of explanation
div.xpath("wrong:xpath:never:works")
Explanation: As you can see, you can do xpath from any node in lxml. One important thing though : xpath //tagname will return to the root if you do not add a dot in front of it such as .//tagname. This is really important to remember, because most xpath resolvers do not behave this way.
Another point to keep in mind : if you write your xpath incorrectly, Python will raise an XPathEvalError.
End of explanation
# We create a valid xml object
xml = """<root>
<tag xmlns="http://localhost">Text</tag>
<tei:tag xmlns:tei="http://www.tei-c.org/ns/1.0">Other text</tei:tag>
<teiTwo:tag xmlns:teiTwo="http://www.tei-c.org/ns/2.0">Other text</teiTwo:tag>
</root>"""
root = etree.fromstring(xml)
# We register every namespaces in a dictionary using prefix as keys :
ns = {
"local" : "http://localhost", # Even if this namespace had no prefix, we can register one for it
"tei" : "http://www.tei-c.org/ns/1.0",
"two": "http://www.tei-c.org/ns/2.0"
}
print([d.text for namespace in ns
for d in root.xpath("//{namespace}:tag".format(namespace=namespace),
namespaces=ns) ])
Explanation: Xpath with namespaces and prefix
As you've seen, lxml uses Clark's naming convention for expressing namespaces. This is extremely important regarding xpath, because you will be able to retrieve a node using it under certain conditions :
End of explanation
# Here is an xml containing an xsl: for each text node of an xml file in the xpath /humanities/field,
# this will return a node <name> with the text inside
xslt_root = etree.fromstring(
"""<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<fields><xsl:apply-templates /></fields>
</xsl:template>
<xsl:template match="/humanities/field">
<name><xsl:value-of select="./text()" /></name>
</xsl:template>
</xsl:stylesheet>""")
# We transform our document to an xsl
xslt = etree.XSLT(xslt_root)
# We create some xml we need to change
xml = """<humanities>
<field>History</field>
<field>Classics</field>
<field>French</field>
<field>German</field>
</humanities>"""
parsed_xml = etree.fromstring(xml)
# And now we process our xml :
transformed = xslt(parsed_xml)
print(transformed)
Explanation: What you have learned :
Each node and xml document has an .xpath() method which takes as its first parameter xpath
Method xpath() always returns a list, even for a single result
Method xpath() will return to the root when you don't prefix your // with a dot.
An incorrect XPath will issue a XPathEvalError
Method xpath() accepts a namespaces argument : you should enter a dictionary where keys are prefixes and values namespaces
Unlike findall(), xpath() does not accept Clark's notation
1.4.2 XSLT
XSLT stands for Extensible Stylesheet Language Transformations. It's an xml-based language made for transforming xml documents to xml or other formats such as LaTeX and HTML. XSLT is really powerful when dealing with similarly formated data. It's far easier to transform 100 documents with the exact same structure via XSLT than in Python or any other language.
While Python is great at dealing with weird transformations of xml, the presence of XSLT in Python allows you to create production chains without leaving your favorite IDE.
To do some XSL, lxml needs two things : first, an xml document representing the xsl that will be parsed and entered into the function etree.XSLT(), and second, a document to transform.
End of explanation
print(type(transformed))
print(type(parsed_xml))
Explanation: Did you see what happened ? We used xslt(xml). etree.XSLT() transforms a xsl document into a function, which then takes one parameter (in this case an xml document). But can you figure out what this returns ? Let's ask Python :
End of explanation
print(transformed.xpath("//name"))
Explanation: The result is not of the same type of element we usually have, even though it does share most of its methods and attributes :
End of explanation
string_result = str(transformed)
print(string_result)
Explanation: And has something more : you can change its type to string !
End of explanation
# Here is an xml containing an xsl: for each text node of an xml file in the xpath /humanities/field,
# this will return a node <name> with the text inside
xslt_root = etree.fromstring(
"""<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:param name="n" />
<xsl:template match="/humanities">
<fields>
<xsl:attribute name="n">
<xsl:value-of select="$n"/>
</xsl:attribute>
<xsl:apply-templates select="field"/>
</fields>
</xsl:template>
<xsl:template match="/humanities/field">
<name><xsl:value-of select="./text()" /></name>
</xsl:template>
</xsl:stylesheet>""")
# We transform our document to an xsl
xslt = etree.XSLT(xslt_root)
# We create some xml we need to change
xml = """<humanities>
<category>Humanities</category>
<field>History</field>
<field>Classics</field>
<field>French</field>
<field>German</field>
</humanities>"""
parsed_xml = etree.fromstring(xml)
# And now we process our xml :
transformed = xslt(parsed_xml, n="'Humanities'") # Note that for a string, we encapsulate it within single quotes
print(transformed)
# Be aware that you can use xpath as a value for the argument, though it can be rather complex sometimes
transformed = xslt(parsed_xml, n=etree.XPath("//category/text()"))
print(transformed)
Explanation: XSLT is more complex than just inputting xml. You can do XSLT using parameters as well. In this case, your parameters will be accessible as named arguments to the generated function. If your XSL has a named xsl:param, the function given back by etree.XSLT will accept a corresponding named argument :
End of explanation
from xml.etree import ElementTree
with open('data/books.xml', 'rt') as f:
tree = ElementTree.parse(f)
print(tree)
Explanation: 2. Using ElementTree
End of explanation
from xml.etree import ElementTree
with open('data/books.xml', 'r') as f:
tree = ElementTree.parse(f)
# print(dir(tree))
for node in tree.iter():
print (node.tag, node.attrib)
print("-----")
### To print only the names and feed URLs for the podcasts, leaving out all of
# the data in the header section, iterate over only the outline nodes and
# print the text and xmlUrl attributes.
from xml.etree import ElementTree
with open('data/podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
print(len( list(tree.iter('outline'))))
for node in tree.iter('outline'):
name = node.attrib.get('text')
url = node.attrib.get('xmlUrl')
if name and url:
print ('\t%s :: %s' % (name, url))
else:
print (name)
Explanation: 2.1 Traversing the Parsed Tree
To visit all of the children in order, use iter() to create a generator that iterates over the ElementTree instance.
End of explanation
for node in tree.findall('.//outline'):
url = node.attrib.get('xmlUrl')
if url:
print( url)
else:
print(node.attrib.get("text"))
Explanation: 2.1.1 Finding Nodes in a Document
Walking the entire tree like this searching for relevant nodes can be error prone. The example above had to look at each outline node to determine if it was a group (nodes with only a text attribute) or podcast (with both text and xmlUrl). To produce a simple list of the podcast feed URLs, without names or groups, for a podcast downloader application, the logic could be simplified using findall() to look for nodes with more descriptive search characteristics.
As a first pass at converting the above example, we can construct an XPath argument to look for all outline nodes.
End of explanation
for node in tree.findall('.//outline/outline'):
url = node.attrib.get('xmlUrl')
print (url)
Explanation: Another version can take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline mean the loop will process only the second level of outline nodes.
End of explanation
from xml.etree import ElementTree
with open('data/data.xml', 'rt') as f:
tree = ElementTree.parse(f)
node = tree.find('./with_attributes')
print (node.tag)
for name, value in sorted(node.attrib.items()):
print (' %-4s = "%s"' % (name, value))
for path in [ './child', './child_with_tail' ]:
node = tree.find(path)
print(node.tag)
print (' child node text:', node.text)
print (' and tail text :', node.tail)
Explanation: 2.1.2 Parsed Node Attributes
The items returned by findall() and iter() are Element objects, each representing a node in the XML parse tree. Each Element has attributes for accessing data pulled out of the XML. This can be illustrated with a somewhat more contrived example input file, data.xml:
End of explanation
from xml.etree.ElementTree import XML
parsed = XML('''
<root>
<group>
<child id="a">This is child "a".</child>
<child id="b">This is child "b".</child>
</group>
<group>
<child id="c">This is child "c".</child>
</group>
</root>
''')
print ('parsed =', parsed)
for elem in parsed:
print (elem.tag)
if elem.text is not None and elem.text.strip():
print (' text: "%s"' % elem.text)
if elem.tail is not None and elem.tail.strip():
print (' tail: "%s"' % elem.tail)
for name, value in sorted(elem.attrib.items()):
print(' %-4s = "%s"' % (name, value))
print
from xml.etree.ElementTree import Element, tostring
top = Element('top')
children = [
Element('child', num=str(i))
for i in range(3)
]
top.extend(children)
print(top)
Explanation: 2.1.3 Parsing Strings
To work with smaller bits of XML text, especially string literals as might be embedded in the source of a program, use XML() and the string containing the XML to be parsed as the only argument.
End of explanation |
10,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rand 2011 Cooperation Study
This notebook outlines how to recreate the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This outlines the steps to re-create the analysis using the publicly available data published in the paper. This requires either a local or remote copy of Bedrock with the following Opals installed
Step1: Test Connection to Bedrock Server
This code assumes a local bedrock is hosted at localhost on port 81. Change the SERVER variable to match your server's URL and port.
Step2: Check for Spreadsheet Opal
The following code block checks the Bedrock server for the Spreadsheet Opal. This Opal is used to load .csv, .xls, and other such files into a Bedrock matrix format. The code below calls the Bedrock /dataloaders/ingest endpoint to check if the opals.spreadsheet.Spreadsheet.Spreadsheet opal is installed.
If the code below shows the Opal is not installed, there are two options
Step3: Check for logit2 Opal
The following code block checks the Bedrock server for the logit2 Opal.
If the code below shows the Opal is not installed, there are two options
Step4: Check for select-from-dataframe Opal
The following code block checks the Bedrock server for the select-from-dataframe Opal. This allows you to filter by row and reduce the columns in a dataframe loaded by the server.
If the code below shows the Opal is not installed, there are two options
Step5: Check for summarize Opal
The following code block checks the Bedrock server for the summarize Opal. This allows you to summarize a matrix with an optional groupby clause.
If the code below shows the Opal is not installed, there are two options
Step6: Step 2
Step7: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note
Step8: Check available data sources for the CSV file
Call the Bedrock sources list to see available data sources. Note, that the Rand2011 data source should now be available
Step9: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming columns, excluding colum
Step10: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
Step11: Step 3
Step12: Check that Matrix is filtered
Step13: Step 4
Step14: Visualize the output of the analysis
Here the output of the analysis is downloaded and from here can be visualized and exported
Step15: Analysis
The output of this analysis shows how the game condition interacts with the decision to either defect or cooperate. The coefficients provide the log-odds along with the Pr(z) scores to show the statistical significance. This is filtered only on round_num==1.
The referenced paper used several other comparisons to evaluate different interactions. The following code repeats the procedure above for the remaining analysis
Apply method to complete Rand2011 Analysis
The following cells replicate the other analysis pieces from the Rand2011 study
Summarize decision grouped on condition and round_num
Step16: Compare round_num effect on decision
Step17: Consider only num_neighbors > 0
Step18: Summarize on filtered matrix
Step19: Compare round_num effect on decision only when there are neighbors
Step20: Compare effect of round_num and Fluid
Look at the effect the round number an if the game is Fluid.
Step21: Condition effect on decision for Round >= 7
Step22: Fluid Effect on decision for Round >= 7
Step23: Relevel on Random and Compare condition effect on decision
Step24: Relevel on Static and Compare condition effect on decision
Step25: Relevel on Random and round_num >= 7
Step26: Relevel on Static and round_num >= 7
Step27: Subset on Fluid Condition and look at effect of num_neighbors on decision | Python Code:
from bedrock.client.client import BedrockAPI
Explanation: Rand 2011 Cooperation Study
This notebook outlines how to recreate the analysis of the Rand et al. 2011 study "Dynamic social networks promote cooperation in experiments with humans" Link to Paper
This outlines the steps to re-create the analysis using the publicly available data published in the paper. This requires either a local or remote copy of Bedrock with the following Opals installed:
Spreadsheet
logit2
select-from-dataframe
summarize
This notebook also requires that bedrock-core be installed locally into the python kernel running this notebook. This can be installed via command line using:
pip install git+https://github.com/Bedrock-py/bedrock-core.git
The other requirements to run this notebook are:
pandas
Step 1: Check Environment
First check that Bedrock is installed locally. If the following cell does not run without error, check the install procedure above and try again. Also, ensure that the kernel selected is the same as the kernel where bedrock-core is installed
End of explanation
import requests
import pandas
import pprint
SERVER = "http://localhost:81/"
api = BedrockAPI(SERVER)
Explanation: Test Connection to Bedrock Server
This code assumes a local bedrock is hosted at localhost on port 81. Change the SERVER variable to match your server's URL and port.
End of explanation
resp = api.ingest("opals.spreadsheet.Spreadsheet.Spreadsheet")
if resp.json():
print("Spreadsheet Opal Installed!")
else:
print("Spreadsheet Opal Not Installed!")
Explanation: Check for Spreadsheet Opal
The following code block checks the Bedrock server for the Spreadsheet Opal. This Opal is used to load .csv, .xls, and other such files into a Bedrock matrix format. The code below calls the Bedrock /dataloaders/ingest endpoint to check if the opals.spreadsheet.Spreadsheet.Spreadsheet opal is installed.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the Spreadsheet Opal with pip on the server Spreadsheet
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.logit2.Logit2.Logit2')
if resp.json():
print("Logit2 Opal Installed!")
else:
print("Logit2 Opal Not Installed!")
Explanation: Check for logit2 Opal
The following code block checks the Bedrock server for the logit2 Opal.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the logit2 Opal with pip on the server logit2
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.select-from-dataframe.SelectByCondition.SelectByCondition')
if resp.json():
print("Select-from-dataframe Opal Installed!")
else:
print("Select-from-dataframe Opal Not Installed!")
Explanation: Check for select-from-dataframe Opal
The following code block checks the Bedrock server for the select-from-dataframe Opal. This allows you to filter by row and reduce the columns in a dataframe loaded by the server.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the select-from-dataframe Opal with pip on the server select-from-dataframe
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
resp = api.analytic('opals.summarize.Summarize.Summarize')
if resp.json():
print("Summarize Opal Installed!")
else:
print("Summarize Opal Not Installed!")
Explanation: Check for summarize Opal
The following code block checks the Bedrock server for the summarize Opal. This allows you to summarize a matrix with an optional groupby clause.
If the code below shows the Opal is not installed, there are two options:
1. If you are running a local Bedrock or are the administrator of the Bedrock server, install the summarize Opal with pip on the server summarize
2. If you are not administrator of the Bedrock server, e-mail the Bedrock administrator requesting the Opal be installed
End of explanation
filepath = 'Rand2011PNAS_cooperation_data.csv'
datafile = pandas.read_csv('Rand2011PNAS_cooperation_data.csv')
datafile.head(10)
Explanation: Step 2: Upload Data to Bedrock and Create Matrix
Now that everything is installed, begin the workflow by uploading the csv data and creating a matrix. To understand this fully, it is useful to understand how a data loading workflow occurs in Bedrock.
Create a datasource that points to the original source file
Generate a matrix from the data source (filters can be applied during this step to pre-filter the data source on load)
Analytics work on the generated matrix
Note: Each time a matrix is generated from a data source it will create a new copy with a new UUID to represent that matrix
Check for csv file locally
The following code opens the file and prints out the first part. The file must be a csv file with a header that has labels for each column. The file is comma delimited csv.
End of explanation
ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'
resp = api.put_source('Rand2011', ingest_id, 'default', {'file': open(filepath, "rb")})
if resp.status_code == 201:
source_id = resp.json()['src_id']
print('Source {0} successfully uploaded'.format(filepath))
else:
try:
print("Error in Upload: {}".format(resp.json()['msg']))
except Exception:
pass
try:
source_id = resp.json()['src_id']
print("Using existing source. If this is not the desired behavior, upload with a different name.")
except Exception:
print("No existing source id provided")
Explanation: Now Upload the source file to the Bedrock Server
This code block uses the Spreadsheet ingest module to upload the source file to Bedrock. Note: This simply copies the file to the server, but does not create a Bedrock Matrix format
If the following fails to upload. Check that the csv file is in the correct comma delimited format with headers.
End of explanation
available_sources = api.list("dataloader", "sources").json()
s = next(filter(lambda source: source['src_id'] == source_id, available_sources),'None')
if s != 'None':
pp = pprint.PrettyPrinter()
pp.pprint(s)
else:
print("Could not find source")
Explanation: Check available data sources for the CSV file
Call the Bedrock sources list to see available data sources. Note, that the Rand2011 data source should now be available
End of explanation
resp = api.create_matrix(source_id, 'rand_mtx')
mtx = resp[0]
matrix_id = mtx['id']
print(mtx)
resp
Explanation: Create a Bedrock Matrix from the CSV Source
In order to use the data, the data source must be converted to a Bedrock matrix. The following code steps through that process. Here we are doing a simple transform of csv to matrix. There are options to apply filters (like renaming or excluding columns), but none are applied here.
End of explanation
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = []
summary_mtx = api.run_analytic(analytic_id, mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(matrix_id, summary_mtx['id'], 'matrix.csv')
output
Explanation: Look at basic statistics on the source data
Here we can see that Bedrock has computed some basic statistics on the source data.
For numeric data
The quartiles, max, mean, min, and standard deviation are provided
For non-numeric data
The label values and counts for each label are provided.
For both types
The proposed tags and data type that Bedrock is suggesting are provided
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"colname","value":"round_num"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"1"}
]
filtered_mtx = api.run_analytic(analytic_id, mtx, 'rand_round1_only', input_data=inputData, parameter_data=paramsData)
filtered_mtx
Explanation: Step 3: Filter the data based on a condition
This is step 3 of the original analysis: we compare the effect of the game condition (Fluid, Viscous, Static, Random) on the decision to defect or cooperate.
End of explanation
output = api.download_results_matrix('rand_mtx', 'rand_round1_only', 'matrix.csv', remote_header_file='features.txt')
output
Explanation: Check that Matrix is filtered
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ condition"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, mtx, 'rand_logit2_step3', input_data=inputData, parameter_data=paramsData)
result_mtx
Explanation: Step 4: Run Logit2 Analysis
Now we will call the Logit2 Analysis on the matrix. This will run a logit analysis on the features in the matrix
End of explanation
coef_table = api.download_results_matrix('rand_mtx', 'rand_logit2_step3', 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Visualize the output of the analysis
Here the output of the analysis is downloaded and from here can be visualized and exported
End of explanation
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"groupby","value":"condition,round_num"},
{"attrname":"columns","value":"decision0d1c"}
]
base_mtx = api.get_matrix_metadata('Rand2011','rand_mtx')
summary_mtx = api.run_analytic(analytic_id, base_mtx,'summarize_grouped', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(base_mtx['id'], summary_mtx['id'], 'matrix.csv')
output
Explanation: Analysis
The output of this analysis shows how the game condition interacts with the decision to either defect or cooperate. The coefficients provide the log-odds along with the Pr(z) scores to show the statistical significance. This is filtered only on round_num==1.
The referenced paper used several other comparisons to evaluate different interactions. The following code repeats the procedure above for the remaining analysis
Apply method to complete Rand2011 Analysis
The following cells replicate the other analysis pieces from the Rand2011 study
Summarize decision grouped on condition and round_num
End of explanation
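The same grouped summary can be reproduced locally with a plain pandas groupby; again only a sketch, with local_df standing in for the downloaded rows.
# Hypothetical pandas equivalent of the grouped Summarize step (local_df is a placeholder)
# grouped = local_df.groupby(['condition', 'round_num'])['decision0d1c'].agg(['mean', 'count'])
# print(grouped.head(10))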
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step1', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Compare round_num effect on decision
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"num_neighbors"},
{"attrname":"comparator","value":">"},
{"attrname":"value","value":"0"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_has_neighbors', input_data=inputData, parameter_data=paramsData)
Explanation: Consider only num_neighbors > 0
End of explanation
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"groupby","value":"condition,round_num"},
{"attrname":"columns","value":"decision0d1c"}
]
summary_mtx = api.run_analytic(analytic_id, filtered_mtx,'summarize_grouped', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(base_mtx['id'], summary_mtx['id'], 'matrix.csv')
output
Explanation: Summarize on filtered matrix
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step2', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Compare round_num effect on decision only when there are neighbors
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ fluid_dummy*round_num"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step4', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Compare effect of round_num and Fluid
Look at the effect of the round number and whether the game is Fluid.
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"round_num"},
{"attrname":"comparator","value":">="},
{"attrname":"value","value":"7"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_round7', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ condition"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step5', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Condition effect on decision for Round >= 7
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(fluid_dummy)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step6', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Fluid Effect on decision for Round >= 7
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Random'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step7', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
pandas.set_option('display.max_colwidth', -1)
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Relevel on Random and Compare condition effect on decision
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Static'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_logit2_step8', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
pandas.set_option('display.max_colwidth', -1)
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Relevel on Static and Compare condition effect on decision
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Random'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step9', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Relevel on Random and round_num >= 7
End of explanation
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(condition, Treatment(reference='Static'))"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step10', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Relevel on Static and round_num >= 7
End of explanation
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': base_mtx,
'features.txt': base_mtx
}
paramsData = [
{"attrname":"colname","value":"condition"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"Fluid"}
]
filtered_mtx = api.run_analytic(analytic_id, base_mtx, 'rand_fluid_only', input_data=inputData, parameter_data=paramsData)
analytic_id = "opals.logit2.Logit2.Logit2"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ C(num_neighbors)"},
{"attrname":"family","value":"binomial"},
{"attrname":"clustered_rse","value":"sessionnum,playerid"}
]
result_mtx = api.run_analytic(analytic_id, filtered_mtx, 'rand_logit2_step11', input_data=inputData, parameter_data=paramsData)
coef_table = api.download_results_matrix(base_mtx['id'], result_mtx['id'], 'matrix.csv')
coef_table
summary_table = api.download_results_matrix(result_mtx['src_id'], result_mtx['id'], 'summary.csv')
summary_table
Explanation: Subset on Fluid Condition and look at effect of num_neighbors on decision
End of explanation |
10,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Support Vector Classifier
Support vector machines are a set of supervised learning algorithms that you can use for classification, regression and outlier detection purposes. SciKit-Learn has many classes for SVM usage, depending on your purpose. The one we'll be focusing on is Support Vector Classifier, SVC.
An OCR example
In 1982, the first computer-driven OCR machine was installed by the United States Postal Service (USPS) in Los Angeles, and by the end of 1984 over 250 OCR machines were installed in 118 major mail processing centers across the country.
Let's see if it's possible to train a support vector classifier in a few seconds using machine learning, and if the classification accuracy is similar or better than the advertised USPS stats.
We start by reading the dataset, which comes from the UCI Machine Learning Repository and is composed of bitmaps of handwritten digits from a preprinted form.
Step1: Let's have a look at these bitmaps of handwritten digits
Step2: Train the SVM Classifier
Now we are ready to train the Support Vector Classifier, using the SciKitLearn library.
We leave all parameters at their defaults, setting only the kernel to be Linear.
More on the kernels later in this notebook.
Step3: Checkpoint
Print the predicted digit and the actual label for a random example.
We take the thousandth digit.
Step4: The model's prediction was correct.
We can display that image, so we can visually check if it was a hard image or an easy image
Step5: visual confirmation of accuracy
Here we can print more digits with indication of what was the predicted label (in red if it was wrong)
Step6: Score
We can see that - on this sample of 50 handwritten digits - 4 of them are wrong, that's 8%.
And here we calculate the score on the entire testing dataset
Step7: Not bad, the model was correct more than 96% of the time!
Non-linear Kernels for the SVC
We experiment now with different kernels, starting with the polynomial kernel.
When training an SVM with a kernel, two additional parameters must be considered
Step8: Which is slightly better, but we can try a different, more performant kernel
Step9: Now it's better than the USPS's score!
Hyper-parameters tuning for SVC Kernels
Proper choice of C and gamma is critical to the SVM’s performance.
One is advised to use sklearn.model_selection.GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
We will tune them - and the pre-processor - using a different example.
A Parkinson's classifier
Apply SVC to the Parkinson's Data Set, provided courtesy of UCI's Machine Learning Repository. The dataset was created at the University of Oxford, in collaboration with 10 medical centers around the US, along with Intel who developed the device used to record the primary features of the dataset
Step10: We can apply different scalers for the pre-processing.
The standard scaler seems the best; feel free to experiment with others by uncommenting one below
Step11: Same for the dimensionality reduction
Step12: Train the SVM classifier.
Create and fit an SVC based on the RBF kernel against the training data and then finally score the testing data.
To search for the best hyper-parameters we just use simple nested loops,
looping C from 0.05 to 2 and gamma from 0.001 to 0.1
Step13: Best score was with C=0.85 and gamma=0.088
Comparing KNN vs. SVC
How does SVC compare with other classifiers, such as the KNN?
We classify the UCI's wheat-seeds dataset - that we used previously with the KNN algorithm - by using the SVC and compare the results.
First, benchmark how long it takes to train and predict with SVC relative to how long K-Neighbors took to train and test and then compare the decision boundary plot produced by the two.
Defining some parameters
Step14: Read the data
Step15: Data pre-processing
This is all standard. You can refer to the previous examples for more details, especially the KNN example.
Step16: Split into training and testing data sets
Step17: Utility function
Step18: Utility function
Step19: Train the Knn classifier
Step20: And get its benchmark
Step21: Train the SVM Classifier | Python Code:
import pandas as pd
# The Dataset comes from:
# https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
# Load up the data.
with open('../Datasets/optdigits.tes', 'r') as f: testing = pd.read_csv(f)
with open('../Datasets/optdigits.tra', 'r') as f: training = pd.read_csv(f)
# The number of samples between training and testing can vary
# But the number of features better remain the same!
n_features = testing.shape[1]
X_test = testing.iloc[:,:n_features-1]
X_train = training.iloc[:,:n_features-1]
y_test = testing.iloc[:,n_features-1:].values.ravel()
y_train = training.iloc[:,n_features-1:].values.ravel()
print (n_features)
Explanation: Linear Support Vector Classifier
Support vector machines are a set of supervised learning algorithms that you can use for classification, regression and outlier detection purposes. SciKit-Learn has many classes for SVM usage, depending on your purpose. The one we'll be focusing on is Support Vector Classifier, SVC.
An OCR example
In 1982, the first computer-driven OCR machine was installed by the United States Postal Service (USPS) in Los Angeles, and by the end of 1984 over 250 OCR machines were installed in 118 major mail processing centers across the country.
Let's see if it's possible to train a support vector classifier in a few seconds using machine learning, and if the classification accuracy is similar or better than the advertised USPS stats.
We start by reading the dataset, which comes from the UCI Machine Learning Repository and is composed of bitmaps of handwritten digits from a preprinted form.
End of explanation
import matplotlib.pyplot as plt
# The 'targets' or labels are stored in y. The 'samples' or data is stored in X
print ("Peeking the data...")
fig = plt.figure()
cnt = 0
for col in range(5):
for row in range(10):
plt.subplot(5, 10, cnt + 1)
plt.imshow(X_train.iloc[cnt,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
plt.axis('off')
cnt += 1
fig.set_tight_layout(True)
plt.show()
Explanation: Let's have a look at these bitmaps of handwritten digits
End of explanation
from sklearn import svm # Library for Support Vector Machines
#
# Create and train an SVM classifier.
print ("Training SV Classifier...")
svc = svm.SVC(kernel='linear')
svc.fit(X_train, y_train)
Explanation: Train the SVM Classifier
Now we are ready to train the Support Vector Classifier, using the SciKitLearn library.
We leave all parameters at their defaults, setting only the kernel to be Linear.
More on the kernels later in this notebook.
End of explanation
#
# Print out the TRUE value of the 1000th digit in the test set
# By TRUE value, we mean, the actual provided label for that sample
#
true_1000th_test_value = y_test[999]
print ("1000th test label is: ", true_1000th_test_value)
#
# Predict the value of the 1000th digit in the test set.
# Was the model's prediction correct?
#
guess_1000th_test_value = svc.predict(X_test[999:1000])
print ("1000th test prediction is: ", guess_1000th_test_value)
Explanation: Checkpoint
Print the predicted digit and the actual label for a random example.
We take the thousandth digit.
End of explanation
#
# Use IMSHOW to display the 1000th test image
#
#
plt.imshow(X_test.iloc[999,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest');
Explanation: The model's prediction was correct.
We can display that image, so we can visually check if it was a hard image or an easy image:
End of explanation
# Visual Confirmation of accuracy
fig = plt.figure()
# Make some guesses
y_guess = svc.predict(X_test)
num_rows = 10
num_cols = 5
index = 0
for col in range(num_cols):
for row in range(num_rows):
plt.subplot(num_cols, num_rows, index + 1)
# 8x8 is the size of the image, 64 pixels
plt.imshow(X_test.iloc[index,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
# Green = Guessed right
# Red = Fail!
fontcolor = 'g' if y_test[index] == y_guess[index] else 'r'
plt.title('Label: %i' % y_guess[index], fontsize=7, color=fontcolor)
plt.axis('off')
index += 1
fig.set_tight_layout(True)
plt.show()
Explanation: visual confirmation of accuracy
Here we can print more digits with indication of what was the predicted label (in red if it was wrong):
End of explanation
# Calculate the score of the SVC against the testing data
print ("Scoring SVM Classifier...")
#
score = svc.score(X_test, y_test)
print ("Score: ", score)
Explanation: Score
We can see that - on this sample of 50 handwritten digits - 4 of them are wrong, that's 8%.
And here we calculate the score on the entire testing dataset:
End of explanation
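Beyond the single accuracy number, a confusion matrix shows which digits get mixed up with which; a small addition that reuses the svc, X_test and y_test defined above.
# Per-digit breakdown of the errors (reuses svc, X_test and y_test from above)
from sklearn.metrics import confusion_matrix
y_pred = svc.predict(X_test)
print(confusion_matrix(y_test, y_pred))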
#
# We start with the POLY kernel
svc = svm.SVC(kernel='poly', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of the SVC against the testing data
print ("Scoring SV poly Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
Explanation: Not bad, the model was correct more than 96% of the time!
Non-linear Kernels for the SVC
We experiment now with different kernels, starting with the polynomial kernel.
When training an SVM with a kernel, two additional parameters must be considered: C and gamma.
The parameter C, common to all SVM kernels, trades off misclassification of training examples against simplicity of the decision surface.
A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly.
We keep C at the default value = 1.
Gamma defines how much influence a single training example has.
The larger gamma is, the closer other examples must be to be affected.
USPS has an advertised accuracy score of 98% which is higher than our SVC with a linear Kernel.
We can beat it with a non-linear kernel!
End of explanation
#
# change SVC's kernel to 'rbf'
svc = svm.SVC(kernel='rbf', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of SVC against the testing data
print ("Scoring SVM rbf Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
Explanation: Which is slightly better, but we can try a different, more performant kernel: the Radial Basis Function (RBF) kernel.
End of explanation
X = pd.read_csv("../Datasets/parkinsons.data")
X.drop(['name'], axis=1, inplace=True) # drop name column
y = X.status.copy() # copy “y” values out from status
X.drop(['status'], axis=1, inplace=True) # drop status column
# Perform a train/test split. 30% test group size, with a random_state equal to 7.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=7)
Explanation: Now it's better than the USPS's score!
Hyper-parameters tuning for SVC Kernels
Proper choice of C and gamma is critical to the SVM’s performance.
One is advised to use sklearn.model_selection.GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
We will tune them - and the pre-processor - using a different example.
A Parkinson's classifier
Apply SVC to the Parkinson's Data Set, provided courtesy of UCI's Machine Learning Repository. The dataset was created at the University of Oxford, in collaboration with 10 medical centers around the US, along with Intel who developed the device used to record the primary features of the dataset: speech signals.
Goals: first, to see if it's possible to differentiate between people who have Parkinson's and who don't using SciKit-Learn's support vector classifier, and then to take a first stab at a naive way of fine-tuning our parameters in an attempt to maximize the accuracy of the testing set.
Read and pre-process the data
End of explanation
from sklearn import preprocessing
# tried with different scaler, standard is the best
scaler = preprocessing.StandardScaler() # best score was 0.932203389831
#scaler = preprocessing.MinMaxScaler() # best score was 0.881355932203
#scaler = preprocessing.MaxAbsScaler() # best score was 0.881355932203
#scaler = preprocessing.Normalizer() # best score was 0.796610169492
#scaler = preprocessing.KernelCenterer() # best score was 0.915254237288
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
Explanation: We can apply different scalers for the pre-processing.
The standard scaler seems the best; feel free to experiment with others by uncommenting one below:
End of explanation
from sklearn.decomposition import PCA
from sklearn import manifold
usePCA = False # change this to use PCA as dimensionality reducer
if usePCA:
reducer = PCA(n_components=7).fit(X_train)
else:
reducer = manifold.Isomap(n_neighbors=3, n_components=6).fit(X_train)
X_train = reducer.transform(X_train)
X_test = reducer.transform(X_test)
Explanation: Same for the dimensionality reduction: feel free to experiment with PCA or Isomap
End of explanation
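Scaler, reducer and classifier can also be chained into a single sklearn Pipeline so the pre-processing choices live in one place; a minimal sketch only — X_train_raw / X_test_raw are placeholders for the un-transformed split, since the arrays above have already been scaled and reduced in place.
# Hypothetical Pipeline sketch (X_train_raw / X_test_raw are placeholders for the
# un-transformed split; the arrays above were already transformed in place)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
pipe = Pipeline([('scale', StandardScaler()),
                 ('reduce', PCA(n_components=7)),
                 ('clf', SVC(kernel='rbf', C=1.0, gamma=0.01))])
# pipe.fit(X_train_raw, y_train)
# print(pipe.score(X_test_raw, y_test))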
import numpy as np
# a naive, best-parameter search using nested for-loops.
best_score = 0
for c in np.arange(0.05,2,0.05):
for gamma in np.arange(0.001, 0.1, 0.001):
svc = svm.SVC(kernel='rbf', C=c, gamma=gamma)
svc.fit(X_train, y_train)
score = svc.score(X_test, y_test)
if score > best_score:
best_score = score
#print ("New best score:", score, "using C= ", c, "and gamma = ", gamma)
print(f"New best score: {score:.3f} using C= {c:.2f} and gamma = {gamma:.3f}")
Explanation: Train the SVM classifier.
Create and fit an SVC based on the RBF kernel against the training data and then finally score the testing data.
To search for the best hyper-parameters we just use simple nested loops,
looping C from 0.05 to 2 and gamma from 0.001 to 0.1
End of explanation
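The nested loops above tune directly against the test set; the GridSearchCV approach mentioned earlier does the same sweep with cross-validation on the training data instead. A sketch reusing the X_train / y_train arrays prepared above, with C and gamma spaced exponentially as the scikit-learn docs suggest.
# GridSearchCV alternative to the nested loops (reuses X_train / y_train from above)
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV
param_grid = {'C': np.logspace(-2, 1, 8), 'gamma': np.logspace(-4, 0, 8)}
search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)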
#
# INFO: Parameters can be adjusted here
C = 1
kernel = 'linear'
iterations = 100
#
# INFO: You can set this to false if you want to
# draw the full square matrix
FAST_DRAW = True
Explanation: Best score was with C=0.85 and gamma=0.088
Comparing KNN vs. SVC
How does SVC compare with other classifiers, such as the KNN?
We classify the UCI's wheat-seeds dataset - that we used previously with the KNN algorithm - by using the SVC and compare the results.
First, benchmark how long it takes to train and predict with SVC relative to how long K-Neighbors took to train and test and then compare the decision boundary plot produced by the two.
Defining some parameters
End of explanation
#
# Load up the wheat dataset into dataframe 'X'
#
df = pd.read_csv("../Datasets/wheat.data", index_col='id')
Explanation: Read the data
End of explanation
# INFO: An easy way to show which rows have nans in them
print (df[pd.isnull(df).any(axis=1)])
#
# Go ahead and drop any row with a nan
#
df.dropna(axis=0, inplace=True)
#
# INFO: you might try setting the nan values to the
# mean value of that column, the mean should only be calculated for
# the specific class rather than across all classes, now that you
# have the labels
#
# Copy the labels out of the dset into variable 'y' then Remove
# them from X. Encode the labels -- canadian:0, kama:1, and rosa:2
#
labels = df.wheat_type.copy() # copy “y” values out
df.drop(['wheat_type'], axis=1, inplace=True) # drop output column
labels = labels.map({'canadian':0, 'kama':1, 'rosa':2})
Explanation: Data pre-processing
This is all standard. You can refer to the previous examples for more details, especially the KNN example.
End of explanation
#
# Split data into test / train sets
#
X_train, X_test, y_train, y_test = train_test_split(df, labels, test_size=0.3,
random_state=7)
Explanation: Split into training and testing data sets
End of explanation
import matplotlib as mpl
import matplotlib.pyplot as plt
def drawPlots(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
# If this line throws an error, use plt.style.use('ggplot') instead
mpl.style.use('ggplot') # Look Pretty
padding = 3
resolution = 0.5
max_2d_score = 0
score = 0
y_colors = ['#ff0000', '#00ff00', '#0000ff']
my_cmap = mpl.colors.ListedColormap(['#ffaaaa', '#aaffaa', '#aaaaff'])
colors = [y_colors[i] for i in y_train]
num_columns = len(X_train.columns)
fig = plt.figure()
fig.canvas.set_window_title(wintitle)
cnt = 0
for col in range(num_columns):
for row in range(num_columns):
# Easy out
if FAST_DRAW and col > row:
cnt += 1
continue
ax = plt.subplot(num_columns, num_columns, cnt + 1)
plt.xticks(())
plt.yticks(())
# Intersection:
if col == row:
plt.text(0.5, 0.5, X_train.columns[row], verticalalignment='center', horizontalalignment='center', fontsize=12)
cnt += 1
continue
# Only select two features to display, then train the model
X_train_bag = X_train.iloc[:, [row,col]]
X_test_bag = X_test.iloc[:, [row,col]]
model.fit(X_train_bag, y_train)
# Create a mesh to plot in
x_min, x_max = X_train_bag.iloc[:, 0].min() - padding, X_train_bag.iloc[:, 0].max() + padding
y_min, y_max = X_train_bag.iloc[:, 1].min() - padding, X_train_bag.iloc[:, 1].max() + padding
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# Plot Boundaries
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Prepare the contour
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=my_cmap, alpha=0.8)
plt.scatter(X_train_bag.iloc[:, 0], X_train_bag.iloc[:, 1], c=colors, alpha=0.5)
score = round(model.score(X_test_bag, y_test) * 100, 3)
#plt.text(0.5, 0, "Score: {0}".format(score), transform = ax.transAxes, horizontalalignment='center', fontsize=8)
plt.text(0.5, 0, f"Score: {score}", transform = ax.transAxes, horizontalalignment='center', fontsize=8)
max_2d_score = score if score > max_2d_score else max_2d_score
cnt += 1
print ("Max 2D Score: ", max_2d_score)
fig.set_tight_layout(True)
Explanation: Utility function: draw the plots
This is a convenience function to break any higher-dimensional space down and view cross sections of it.
End of explanation
import time
def benchmark(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
print ('\n\n' + wintitle + ' Results')
# the only purpose of doing many iterations was to get a more accurate
# count of the time it took for each classifier
s = time.time()
for i in range(iterations):
#
# train the classifier on the training data / labels:
#
model.fit(X_train, y_train)
#print ("{0} Iterations Training Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Training Time: {time.time() - s:.3f}")
scoreBch = 0
s = time.time()
for i in range(iterations):
#
# score the classifier on the testing data / labels:
#
scoreBch = model.score(X_test, y_test)
#print ("{0} Iterations Scoring Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Scoring Time: {time.time() - s:.3f}")
print ("High-Dimensionality Score: ", round((scoreBch*100), 3))
Explanation: Utility function: benchmark times
End of explanation
#
# Create an KNeighbors classifier
#
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
Explanation: Train the Knn classifier
End of explanation
benchmark(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
drawPlots(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
Explanation: And get its benchmark:
End of explanation
#
# Create an SVM classifier
# Use a linear kernel, and set the C value to C (see initial parameters)
#
from sklearn.svm import SVC
svc = SVC(kernel='linear', C=C)
benchmark(svc, X_train, X_test, y_train, y_test, 'SVC')
drawPlots(svc, X_train, X_test, y_train, y_test, 'SVC')
Explanation: Train the SVM Classifier
End of explanation |
10,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2017 NCTU Data Mining HW0
0416037 李家安
Info
Group 3
Dataset
Step1: Connect sql
Step2: Remove NAN
Looking at the data, only the birth year column has NaN, so the missing values are replaced with the mean
Step3: Avoid rows where start time >= end time
Step4: Remove rows with unreasonably high speeds
A quick search suggests a bicycle can reach roughly 40 km/h, so rows above that are dropped
Step5: Save a copy of the processed original table
Casting the column types along the way
Step6: Extract a station-only table from the original data
Keep only id, name, lat, lng, and drop duplicates
Step7: Check whether any station has an invalid location
It looks like there is none
Step8: Extract the path table
Keep only tripduration, starttime, stoptime, start station id, end station id, bikeid, usertype, birth year, gender
Step9: Build the in / out flow table
To make querying easier, keep id, time, in-flow, out-flow
Step10: Task
Transactions 1
The transactions suggested by the TA
in / out flow when station_id=519
Following the TA's approach, I bin the in / out counts into groups of 10
Step11: My transaction uses
Step12: 我的 transaction 是用 | Python Code:
import pandas as pd
import datetime
df = pd.read_csv('201707-citibike-tripdata.csv')
df.columns = ['tripduration','starttime','stoptime',\
'start_station_id','start_station_name','start_station_latitude','start_station_longitude',\
'end_station_id','end_station_name','end_station_latitude','end_station_longitude',\
'bikeid','usertype','birth_year','gender']
Explanation: 2017 NCTU Data Mining HW0
0416037 李家安
Info
Group 3
Dataset: New York Citi Bike Trip Histories, first data
Task
What rules should be discovered?
What rules should be discovered?
Need
what is a transaction
what rules should be discovered (and discretization method)
what algorithm you use (Apriori or FP-growth or something else)
a. algorithm code from github is allowed (cite the repository)
top 3 rules
what did you learn, or a comparison between the different methods you use
Data Preprocessing
Since the data preprocessing part done in hw0 was a bit lacking, I changed it a little here.
I also want to load the data into MySQL to make some native SQL queries easier.
So this part only does the processing and does not print much of it.
Load Data
End of explanation
from sqlalchemy import create_engine
engine = create_engine('mysql://calee0219:110010@localhost/citybike')
Explanation: Connect sql
End of explanation
print(df.isnull().sum().sum())
print(pd.isnull(df).sum() > 0)
birth_mean = df['birth_year'].mean()
df = df.fillna(birth_mean)
Explanation: Remove NAN
Looking at the data, only the birth year column has NaN, so the missing values are replaced with the mean
End of explanation
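A slightly more defensive variant would fill only the birth_year column, so the birth-year mean cannot leak into any other column that might contain NaN later; a small alternative sketch.
# Alternative: fill only the birth_year column instead of the whole frame
df['birth_year'] = df['birth_year'].fillna(df['birth_year'].mean())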
df = df.drop(df.index[df['starttime'] >= df['stoptime']])
df = df.reset_index(drop=True)
Explanation: Avoid rows where start time >= end time
End of explanation
import datetime
import operator
from pyproj import Geod
wgs84_geod = Geod(ellps='WGS84')
start = [datetime.datetime.strptime(dt, '%Y-%m-%d %H:%M:%S') for dt in df['starttime'].tolist()]
end = [datetime.datetime.strptime(dt, '%Y-%m-%d %H:%M:%S') for dt in df['stoptime'].tolist()]
def Distance(lat1,lon1,lat2,lon2):
az12,az21,dist = wgs84_geod.inv(lon1,lat1,lon2,lat2)
return dist
dist = Distance(df['start_station_latitude'].tolist(), df['start_station_longitude'].tolist(), \
df['end_station_latitude'].tolist(), df['end_station_longitude'].tolist())
speed = list(map(operator.truediv, [x/1000 for x in dist], [ time.seconds/3600 for time in list(map(operator.sub, end, start))]))
zp = list(zip(speed,list(range(df.shape[0]))))
zp.sort()
zp.reverse()
for i in zp[:6]:
print(i)
df = df.drop(df.index[[716622,320615,1393557,1260345]])
df.reset_index(drop=True, inplace=True)
Explanation: Remove rows with unreasonably high speeds
A quick search suggests a bicycle can reach roughly 40 km/h, so rows above that are dropped
End of explanation
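As an alternative to dropping the hard-coded row indices found above, the same cut can be expressed directly on the computed speed, which keeps working if the input file changes; a sketch that assumes the speed list from the cell above, aligned with df before any rows were dropped.
# Hypothetical vectorized alternative to dropping hard-coded indices
# (assumes `speed` is aligned with df's rows, i.e. run instead of the two drop lines above)
# speed_series = pd.Series(speed, index=df.index)
# df = df[speed_series <= 40].reset_index(drop=True)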
from sqlalchemy import types
try:
df = pd.read_sql_table(table_name='origin', con=engine)
except:
df['tripduration'].astype(int)
df['starttime'] = pd.to_datetime(df['starttime'])
df['stoptime'] = pd.to_datetime(df['stoptime'])
df['start_station_id'].astype(int)
df['start_station_name'].astype(str)
df['start_station_latitude'].astype(float)
df['start_station_longitude'].astype(float)
df['end_station_id'].astype(int)
df['end_station_name'].astype(str)
df['end_station_latitude'].astype(float)
df['end_station_longitude'].astype(float)
df['bikeid'].astype(int)
df['usertype'].astype(str)
df['birth_year'].astype(int)
df['gender'].astype(int)
df.to_sql(name='origin', con=engine, if_exists='replace',index=False,\
dtype={'starttime': types.DATETIME, 'stoptime': types.DATETIME, 'birth_year': types.BIGINT})
Explanation: Save a copy of the processed original table
Casting the column types along the way
End of explanation
try:
station = pd.read_sql_table(table_name='station', con=engine)
except:
station = pd.DataFrame(df[['start_station_id', 'start_station_name', 'start_station_latitude', 'start_station_longitude']])
station.columns = ['id', 'name', 'latitude', 'longitude']
tmp = pd.DataFrame(df[['end_station_id', 'end_station_name', 'end_station_latitude', 'end_station_longitude']])
tmp.columns = ['id', 'name', 'latitude', 'longitude']
station = pd.concat([station, tmp])
station = station.sort_values('id').drop_duplicates().reset_index(drop=True)
station.to_sql(name='station', con=engine, if_exists='fail',index=False)
Explanation: Extract a station-only table from the original data
Keep only id, name, lat, lng, and drop duplicates
End of explanation
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection='merc', lat_0=40.7, lon_0=-73.98,
resolution = 'h', area_thresh = 0.01,
llcrnrlon=-74.1, llcrnrlat=40.64,
urcrnrlon=-73.9, urcrnrlat=40.85)
lon = station['longitude'].tolist()
lat = station['latitude'].tolist()
labels = station['id'].tolist()
fig = plt.figure(frameon=False)
fig.set_size_inches(18,12)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='coral')
my_map.drawmapboundary()
x,y = my_map(lon, lat)
my_map.plot(x, y, 'bo', markersize=2)
plt.show()
Explanation: Check whether any station has an invalid location
It looks like there is none
End of explanation
from sqlalchemy import types
try:
path = pd.read_sql_table(table_name='path', con=engine)
except:
path = df.drop(['start_station_name', 'start_station_latitude', 'start_station_longitude', 'end_station_name', 'end_station_latitude', 'end_station_longitude'], axis=1)
path.to_csv('path.csv', index=False)
path.to_sql(name='path', con=engine, if_exists='fail',index=False,\
dtype={'starttime': types.DATETIME, 'stoptime': types.DATETIME, 'birth_year': types.BIGINT})
Explanation: Extract the path table
只留下 tripduration, starttime, stoptime, start station id, end station id, bikeid, usertype, birth year, gender
End of explanation
import bisect
import datetime
try:
in_out = pd.read_sql_table(table_name='in_out', con=engine)
except:
begin = datetime.datetime(2017, 7, 1, 0, 0, 0)
end = datetime.datetime(2017, 8, 1, 23, 30, 0)
date_list = [ end - datetime.timedelta(seconds=x*60*30) for x in range(0, 1536)][::-1]
table = {}
for idx, row in path.iterrows():
start_date = row['starttime']
start = date_list[bisect.bisect_right(date_list, start_date)]
end_date = row['stoptime']
end = date_list[bisect.bisect_right(date_list, end_date)]
start_tmp = (row['start_station_id'], start)
if table.get(start_tmp) == None:
table[start_tmp] = (1,0)
else:
tmp = list(table[start_tmp])
tmp[0] += 1
table[start_tmp] = tuple(tmp)
stop_tmp = (row['end_station_id'], start)
if table.get(stop_tmp) == None:
table[stop_tmp] = (0,1)
else:
tmp = list(table[stop_tmp])
tmp[1] += 1
table[stop_tmp] = tuple(tmp)
tmp_in_out = []
for key in table.keys():
tmp_in_out.append([key[0], key[1], table[key][0], table[key][1]])
in_out = pd.DataFrame(tmp_in_out, columns=['id', 'time', 'in', 'out'])
in_out.to_sql(name='in_out', con=engine, if_exists='replace',index=False,\
dtype={'time': types.DATETIME})
Explanation: Build the in / out flow table
To make querying easier, keep id, time, in-flow, out-flow
End of explanation
import pandas as pd
from mlxtend.preprocessing import OnehotTransactions
from mlxtend.frequent_patterns import apriori
transactions = []
for idx, row in in_out.iterrows():
if row['id'] == 519:
transactions.append([('in',row['in']//10), ('out',row['out']//10)])
min_sup = 0.01
oht = OnehotTransactions()
oht_ary = oht.fit(transactions).transform(transactions)
df = pd.DataFrame(oht_ary, columns=oht.columns_)
frequent_itemsets = apriori(df, min_support=min_sup, use_colnames=True)
frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))
fqs = frequent_itemsets[ (frequent_itemsets['length'] >= 2) &
(frequent_itemsets['support'] >= min_sup) ].sort_values(['support'], ascending=False)
print(fqs)
for idx, row in fqs.iterrows():
cof = row['itemsets'][0]
import Orange
from orangecontrib.associate.fpgrowth import *
transactions = []
for idx, row in in_out.iterrows():
if row['id'] == 519:
transactions.append([('in',row['in']//10), ('out',row['out']//10)])
#transactions = np.array(transactions)
import pyfpgrowth
patterns = pyfpgrowth.find_frequent_patterns(transactions, 10)
print(patterns)
rules = pyfpgrowth.generate_association_rules(patterns, 0.3)
for key in rules.keys():
print(key, rules[key])
Explanation: Task
Transactions 1
The transactions suggested by the TA
in / out flow when station_id=519
Following the TA's approach, I bin the in / out counts into groups of 10
End of explanation
query = "SELECT in_out.id, in_out.time, in_out.in, in_out.out, T.latitude, T.longitude FROM in_out left join ( SELECT id, latitude, longitude from station )T ON T.id = in_out.id ORDER BY id"
table = pd.read_sql_query(query, engine)
lat_mean = station['latitude'].mean()
lon_mean = station['longitude'].mean()
#print(lat_mean, lon_mean)
def Distance(lat1,lon1,lat2,lon2):
az12,az21,dist = wgs84_geod.inv(lon1,lat1,lon2,lat2)
return dist
from orangecontrib.associate.fpgrowth import *
rem = {}
for idx, row in station.iterrows():
rem[row['id']] = Distance(lat_mean, lon_mean, row['latitude'], row['longitude'])//1000 # 以公里為單位
from fp_growth import *
transactions = []
for idx, row in table.iterrows():
rin = row['in'] // 10
rout = row['out'] // 10
if rin == 0 or rout == 0: continue
transactions.append([(rem[row['id']], row['time'].time().isoformat()), ('in',rin), ('out',rout)])
result = {}
for itemset, support in find_frequent_itemsets(transactions, .02*len(transactions), True):
result[tuple(itemset)] = support/len(transactions)
def subs(l):
assert type(l) is list
if len(l) == 1:
return [l]
x = subs(l[1:])
return x + [[l[0]] + y for y in x]
def assRule(freq, min_conf = 0.6):
assert type(freq) is dict
result = []
for item, sup in freq.items():
for subitem in subs(list(item)):
sb = [x for x in item if x not in subitem]
if sb == [] or subitem == []: continue
if len(subitem) == 1 and (subitem[0][0] == 'in' or subitem[0][0] == 'out'):
continue
conf = sup/freq[tuple(subitem)]
if conf >= min_conf:
result.append({'from':subitem, 'to':sb, 'sup':sup, 'conf':conf})
return result
rules = assRule(result, 0.8)
#print(rules)
for ru in rules:
print(ru)
Explanation: My transaction uses:
As in the data given by the TA, the in-flow and out-flow of station_id=519
The rules to look for / discretization method
As the TA suggested, in / out flow is binned in steps of 10 counts
Algorithm
As in the two cells above, one uses Apriori and the other uses FP-growth to find the frequent itemsets
top 3 rules
('out', 0) -> ('in', 0)
('in', 0) -> ('out', 0)
('out', 3) -> ('in', 3)
What did I learn
The FP-growth run does list association rules, but they are clearly dominated by in-flow = 0, out-flow = 0
So I think the itemsets for id = 519 give essentially no useful association rules
in / out flow seem to be of the same order of magnitude within the same time interval
Transaction 2
I want to find the relationship between distance from the center point, time, and in / out flow
To avoid slow frequent itemset mining, FP-growth is used from here on
End of explanation
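For reference, the numbers behind these rules are simple: support(A -> B) is the fraction of transactions containing both A and B, and confidence(A -> B) is that support divided by the support of A alone. A tiny toy illustration, independent of the Citi Bike data.
# Toy illustration of support and confidence for a rule A -> B
toy = [{'a', 'b'}, {'a', 'b'}, {'a'}, {'b'}]
sup_a = sum('a' in t for t in toy) / len(toy)          # 0.75
sup_ab = sum({'a', 'b'} <= t for t in toy) / len(toy)  # 0.5
print(sup_ab, sup_ab / sup_a)                          # support 0.5, confidence ~0.67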
query = '''
SELECT in_out.id, in_out.time, in_out.in, in_out.out, T1.st_time, T2.en_time
FROM in_out
LEFT JOIN (
SELECT start_station_id AS st_id, SEC_TO_TIME(AVG(TIME_TO_SEC(DATE_FORMAT(starttime, "%%H:%%i:%%s")))) AS st_time
FROM path
GROUP BY start_station_id
)T1 ON in_out.id = T1.st_id
LEFT JOIN (
SELECT end_station_id AS en_id, SEC_TO_TIME(AVG(TIME_TO_SEC(DATE_FORMAT(stoptime, "%%H:%%i:%%s")))) AS en_time
FROM path
GROUP BY end_station_id
)T2 ON in_out.id = T2.en_id
ORDER BY in_out.id;
'''
table = pd.read_sql_query(query, engine)
transactions = []
for idx, row in table.iterrows():
rin = row['in'] // 10
rout = row['out'] // 10
if rin == 0 or rout == 0: continue
st = (datetime.datetime.min+row['st_time']).time().replace(second=0, microsecond=0)
st = st.replace(minute=st.minute//10 * 10).isoformat()
en = (datetime.datetime.min+row['en_time']).time().replace(second=0, microsecond=0)
en = en.replace(minute=en.minute//10 * 10).isoformat()
transactions.append([('stime', st), ('etime', en), ('in',rin), ('out',rout)])
result = {}
for itemset, support in find_frequent_itemsets(transactions, .04*len(transactions), True):
result[tuple(itemset)] = support/len(transactions)
def subs(l):
assert type(l) is list
if len(l) == 1:
return [l]
x = subs(l[1:])
return x + [[l[0]] + y for y in x]
def assRule(freq, min_conf = 0.6):
assert type(freq) is dict
result = []
for item, sup in freq.items():
for subitem in subs(list(item)):
sb = [x for x in item if x not in subitem]
if sb == [] or subitem == []: continue
if len(subitem) == 1 and (subitem[0][0] == 'in' or subitem[0][0] == 'out'):
continue
conf = sup/freq[tuple(subitem)]
if conf >= min_conf:
result.append({'from':subitem, 'to':sb, 'sup':sup, 'conf':conf})
return result
rules = assRule(result, 0.9)
#print(rules)
for ru in rules:
print(ru)
Explanation: My transaction uses:
Distance from the center point (km)
Time in 30 (min) bins
in-flow
out-flow
The rules to look for / discretization method
I want to find the relationship between distance from the center, time, and in / out flow, for example:
I look at the distance-from-center and the time together, hoping to get rules where both appear together
Perhaps within a certain band of distance from the center, a suitable in / out flow pattern can be found
discretization method
Distance truncated to whole kilometers
Time cut every 30 minutes, so a 24 hr day is split into 48 slots
in / out flow: one bin per 10 counts
Algorithm
The algorithm I use is FP-growth
top 3 rules
Ranked by confidence, they should be:
('in', 1), (1, 18:30) -> ('out', 1)
(1, 19:00) -> ('out', 1)
(1, 18:00) -> ('out', 1)
What did I learn
Basically, stations 1 km - 2 km from the center have in / out flow roughly in the 10 - 20 range between 18:00 and 19:00 in the evening
Transaction 3
I want to see whether there is any hidden relationship among start time / end time / in-flow / out-flow / speed / distance
End of explanation |
10,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ranking of Computer Science researchers at the UGR
Step1: The data have been downloaded from the UGR website
using urllib2
The downloaded table is shown below
Step2: El siguiente gráfico muestra el número H de los investigadores por orden decreciente | Python Code:
%matplotlib inline
from BeautifulSoup import BeautifulSoup
import urllib2
from IPython.display import (
display, HTML
)
import matplotlib.pyplot as plt
url = 'http://investigacion.ugr.es/ugrinvestiga/static/BuscadorRanking/*/buscar?tipo=&rama_c=&disciplina_c=TELE_D&especialidad_c=&indicador=&periodo='
response = urllib2.urlopen( url )
html= response.read()
all_data = BeautifulSoup( html )
investigadores = all_data.find( "table" ).findAll( "tr" )
Explanation: Ranking of Computer Science researchers at the UGR
End of explanation
output = "<table>"
h_data = []
for row in investigadores[1:]:
columnas = row.findAll('td')
rank = int(columnas[0].string.strip())
nombre = columnas[1].find('a').string.strip()
citas = int(columnas[2].find('strong').string.strip())
h = int(columnas[3].string.strip())
output += "<tr><td>"+nombre+"</td><td>"+str(citas)+"</td><td>"+str(h)+"</td></tr>"
h_data.append( h )
output += "</table>"
display(HTML(output))
Explanation: The data have been downloaded from the UGR website
using urllib2
The downloaded table is shown below
End of explanation
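With pandas available, the scraped rows could also be collected into a DataFrame instead of hand-built HTML; a sketch that reuses the parsed investigadores rows above (assuming pandas can be imported in this Python 2 environment).
# Hypothetical pandas variant of the table above (reuses the parsed `investigadores` rows)
import pandas as pd
rows = []
for row in investigadores[1:]:
    cols = row.findAll('td')
    rows.append({'nombre': cols[1].find('a').string.strip(),
                 'citas': int(cols[2].find('strong').string.strip()),
                 'h': int(cols[3].string.strip())})
df_ranking = pd.DataFrame(rows)
print(df_ranking.sort_values('h', ascending=False).head())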
h_data.sort()
h_data.reverse()
plt.plot(h_data)
plt.show()
Explanation: The following plot shows the researchers' H-index in decreasing order
End of explanation |
10,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
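A minimal sketch of how the author cell above might be filled in, assuming the Document Setup cell has already been executed so that DOC exists; the name and email below are hypothetical placeholders, not values taken from this document.
# --- Illustrative sketch (hypothetical placeholder values) ---
DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical author; replace with real details
# --- End of illustrative sketch ---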
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
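A small workflow note (a sketch, not part of the official template): once every required property below has been entered, the same call can be re-run with status 1 to mark the document for publication, following the 0/1 convention stated above.
# --- Illustrative sketch: flip to publish once the document is complete ---
DOC.set_publication_status(1)  # 1 = publish, per the convention in the cell above
# --- End of illustrative sketch ---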
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
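Free-text (STRING) properties such as 1.1 are filled with a single DOC.set_value call, following the "Set as follows" comment in the cell above. The text below is a hypothetical placeholder, not a description of the actual CAMS configuration.
# --- Illustrative sketch (hypothetical placeholder text) ---
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
DOC.set_value("Hypothetical one-paragraph overview of the atmosphere component.")  # placeholder, not the real overview
# --- End of illustrative sketch ---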
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
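For a single-valued ENUM (Cardinality 1.1) the value must be one of the Valid Choices listed in the cell; the pick below is arbitrary and for illustration only.
# --- Illustrative sketch (arbitrary pick from the Valid Choices list) ---
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
DOC.set_value("AGCM")  # hypothetical choice; use the family that actually applies
# --- End of illustrative sketch ---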
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
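For multi-valued ENUMs (Cardinality 1.N) the template comment still shows a single DOC.set_value("value"); a plausible reading (an assumption here, not confirmed by this document) is that the call is repeated once per selected item, for example:
# --- Illustrative sketch (assumes repeated set_value calls accumulate for 1.N properties) ---
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
DOC.set_value("primitive equations")  # hypothetical selection
DOC.set_value("hydrostatic")          # hypothetical selection
# --- End of illustrative sketch ---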
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
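INTEGER properties take an unquoted Python number, as hinted by DOC.set_value(value) without quotes in the cell above; 60 is a made-up figure, not the CAMS level count.
# --- Illustrative sketch (hypothetical integer, not the real model value) ---
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
DOC.set_value(60)  # hypothetical number of levels
# --- End of illustrative sketch ---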
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
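BOOLEAN properties take Python True or False, matching the Valid Choices in the cell; the value below is arbitrary and should reflect the actual model top.
# --- Illustrative sketch (arbitrary boolean for illustration) ---
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
DOC.set_value(False)  # hypothetical; True only if the model top is above the stratopause
# --- End of illustrative sketch ---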
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
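Properties marked Is Required: FALSE with Cardinality 0.1, such as 3.2 and 3.3, may simply be left with the TODO comment if the information is not applicable; if supplied, they follow the usual single set_value pattern. The value below is a hypothetical placeholder.
# --- Illustrative sketch (optional property; hypothetical value) ---
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
DOC.set_value("1 hour")  # hypothetical timestep string; omit the call entirely if unknown
# --- End of illustrative sketch ---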
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
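Several ENUMs in this notebook end with an "Other: [Please specify]" entry. A plausible convention (an assumption, not stated in this document) is to replace the bracketed text with the specific item, for example:
# --- Illustrative sketch (assumed convention for the "Other" entry) ---
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
DOC.set_value("surface pressure")                        # hypothetical selection from the list
DOC.set_value("Other: hypothetical extra tracer field")  # assumed format for an unlisted variable
# --- End of illustrative sketch ---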
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
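Sections 17 through 28 repeat the same scattering / emission-absorption ENUM for different radiative constituents. A hypothetical convenience loop (a sketch only, reusing just the DOC.set_id and DOC.set_value calls shown throughout this notebook; the selections are placeholders, and it assumes repeated set_value calls accumulate for 1.N properties) could fill several of them at once:
# --- Illustrative sketch: batch-fill of the repeated general_interactions ENUMs ---
# Property ids are copied from the cells above; the chosen values are hypothetical.
general_interaction_choices = {
    'cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions': ["scattering", "emission/absorption"],
    'cmip6.atmos.radiation.longwave_cloud_ice.general_interactions': ["emission/absorption"],
    'cmip6.atmos.radiation.longwave_gases.general_interactions': ["emission/absorption"],
}
for prop_id, choices in general_interaction_choices.items():
    DOC.set_id(prop_id)          # select the property, as done in each cell above
    for choice in choices:       # one call per selected item (assumed behaviour for 1.N)
        DOC.set_value(choice)
# --- End of illustrative sketch ---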
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
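As a hedged illustration only, a model documenting a fixed solar constant might complete the cell above with a typical present-day total solar irradiance; the number below is an example value, not a statement about any particular model.
DOC.set_value(1361.0)  # example only: approximate present-day solar constant in W m-2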
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
10,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 33
Step1: import os and use unlink() to delete a file
Step2: os.rmdir() can delete folders, but only empty folders.
Step3: shutil has a rmtree() function, which is the inverse of the copytree() function.
Step4: Since these deletions are permanent, it is useful to first run these programs in 'dry-run' mode, where the destructive calls are commented out and print() is used instead
Step5: All these deletions are permanent, but the send2trash module can be used to send deletions to the trash. | Python Code:
import os
# Define base directory
defaultpath = os.path.expanduser('~/Dropbox/learn/books/Python/AutomateTheBoringStuffWithPython')
#Change directory to files directory if set in default
if (os.getcwd() == defaultpath):
os.chdir('files')  # relative path: the files subdirectory of the default path
else:
os.chdir(defaultpath + '/files')
Explanation: Lesson 33:
Deleting Files
There are three typical functions used to delete files.
After moving into /files:
End of explanation
import shutil # import shutil for testing
shutil.copy('bacon.txt', 'bacon2.txt') # Copy the file under a new name
os.unlink('bacon2.txt') # Deletes a file
Explanation: import os and use unlink() to delete a file:
End of explanation
os.rmdir('newerfiles') # Attempt to delete a directory (if empty)
Explanation: os.rmdir() can delete folders, but only empty folders.
End of explanation
if (os.path.exists(os.path.abspath('newerfiles')) != True):
shutil.copytree('newfiles', 'newerfiles')
shutil.rmtree('newerfiles') # Deletes entire folder tree
Explanation: shutil has a rmtree() function, which is the inverse of the copytree() function.
End of explanation
import os
#Change directory to files directory if set in default
if (os.getcwd() == defaultpath):
os.chdir('files')  # relative path: the files subdirectory of the default path
else:
os.chdir(defaultpath + '/files')
# Move into the newfiles directory
os.chdir('newfiles')
for filename in os.listdir():
if filename.endswith('.txt'):
#os.unlink(filename)
print(filename)
Explanation: Since these deletions are permanent, it is useful to first run these programs in 'dry-run' mode, where the destructive calls are commented out and print() is used instead:
End of explanation
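Once the dry-run output looks correct, the same loop can be switched over to do the real deletion; a minimal sketch (assuming we are still inside the newfiles directory) is shown below. This is an illustration, not part of the original lesson code.
# Destructive version of the dry-run loop: swap print() for os.unlink()
import os
for filename in os.listdir():
    if filename.endswith('.txt'):
        os.unlink(filename)  # permanent deletion; send2trash (next cell) is the recoverable option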
import send2trash # install via pip3
send2trash.send2trash(os.path.abspath('newfiles/bacon3.txt')) # Delete by sending to trash
Explanation: All these deletions are permanent, but the send2trash module can be used to send deletions to the trash.
End of explanation |
10,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The logging module
data, object type
print to terminal
write to log.txt
different level?
Step1: Quick Start
After importing the module you can simply call logging.warning() and logging.error() directly. The default level is WARNING, so the warning call prints a message while info, being a lower level, produces no output. If you do not yet know what the level parameter means, keep reading; it is explained below.
Without any special configuration, the logging module prints log records to the console (stderr).
Step2: Writing logs to a file
A more common scenario is recording messages in a log file. Use logging.basicConfig() to set the filename, the level, and other parameters; the common levels are listed in the table below.
|Level|Value|Usage|
|--|--|--|
|CRITICAL|50|A serious error indicating the program can no longer continue running.|
|ERROR|40|A serious problem; the program has been unable to perform some function.|
|WARNING|30|Something unexpected happened, or a problem may occur soon (e.g. 'disk full'), but the software still works.|
|INFO|20|Confirmation that things are working as expected.|
|DEBUG|10|Detailed information, typically of interest only when diagnosing problems.|
If the level is set to INFO, DEBUG-level messages are not output. The common functions are debug(), info(), warning(), error() and critical(), each logging a message at the corresponding severity.
Note: put the code below into a script such as test_log.py (running it directly in an interactive terminal will not create the file).
Step3: Changing the log output format
The format parameter lets you customize the layout of the records written to the log.
Step4: DEBUG
Step5: 07/16/2016 12 | Python Code:
import logging
Explanation: The logging module
data, object type
print to terminal
write to log.txt
different level?
End of explanation
#!/usr/local/bin/python
# -*- coding:utf-8 -*-
import logging
logging.warning('Watch out!') # print message to console
logging.info('I told you so') # will not print anything
Explanation: Quick Start
After importing the module you can simply call logging.warning() and logging.error() directly. The default level is WARNING, so the warning call prints a message while info, being a lower level, produces no output. If you do not yet know what the level parameter means, keep reading; it is explained below.
Without any special configuration, the logging module prints log records to the console (stderr).
End of explanation
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG,filemode='w')
# filemode='w' rewrites the log file on every run
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
cat example.log
Explanation: Writing logs to a file
A more common scenario is recording messages in a log file. Use logging.basicConfig() to set the filename, the level, and other parameters; the common levels are listed in the table below.
|Level|Value|Usage|
|--|--|--|
|CRITICAL|50|A serious error indicating the program can no longer continue running.|
|ERROR|40|A serious problem; the program has been unable to perform some function.|
|WARNING|30|Something unexpected happened, or a problem may occur soon (e.g. 'disk full'), but the software still works.|
|INFO|20|Confirmation that things are working as expected.|
|DEBUG|10|Detailed information, typically of interest only when diagnosing problems.|
If the level is set to INFO, DEBUG-level messages are not output. The common functions are debug(), info(), warning(), error() and critical(), each logging a message at the corresponding severity.
Note: put the code below into a script such as test_log.py (running it directly in an interactive terminal will not create the file).
End of explanation
import logging
logging.basicConfig(format='%(levelname)s:%(message)s',level=logging.DEBUG)
logging.debug('This message should appear on the console')
logging.info('So should this')
logging.warning('And this, too')
Explanation: Changing the log output format
The format parameter lets you customize the layout of the records written to the log.
End of explanation
import logging
logging.basicConfig(format='%(asctime)s %(message)s',datefmt='%m/%d/%Y %I:%M:%S %p')
logging.warning('is when this event was logged.')
Explanation: DEBUG:This message should appear on the console
INFO:So should this
WARNING:And this, too
Recording timestamps
The datefmt parameter controls how the timestamp in the log output is formatted.
End of explanation
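A minimal sketch combining the format and datefmt ideas from the two cells above into a single basicConfig call (note that basicConfig only takes effect the first time it is called in a process, so run this in a fresh interpreter):
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p',
                    level=logging.DEBUG)
logging.info('timestamped and level-tagged message')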
import logging
# create logger with name
# if not specified, it will be root
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
# create a handler, write to log.txt
# logging.FileHandler(self, filename, mode='a', encoding=None, delay=0)
# A handler class which writes formatted logging records to disk files.
fh = logging.FileHandler('log.txt')
fh.setLevel(logging.DEBUG)
# create another handler, for stdout in terminal
# A handler class which writes logging records to a stream
sh = logging.StreamHandler()
sh.setLevel(logging.DEBUG)
# set formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
sh.setFormatter(formatter)
# add handler to logger
logger.addHandler(fh)
logger.addHandler(sh)
# log it
logger.debug('Debug')
logger.info('Info')
Explanation: 07/16/2016 12:10:35 AM is when this event was logged.
Richer logging control
The code above mostly relies on the default configuration; in fact we can customize much more, for example sending output to both the terminal and a log.txt file.
It helps to understand a few concepts first.
- Logger: exposes the interface that application code uses directly.
- Handler: sends the log records (created by loggers) to the appropriate destination.
- Filter: provides finer-grained control over which log records get output.
- Formatter: specifies the layout of log records in the final output.
First create a logger, then attach different handlers to it so that output goes to different destinations.
End of explanation |
10,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Find Me
Michael duPont - CodeCamp 2017
Find Faces
The first thing we need to do is pick out faces from a larger image. Because the model for this is not user or case specific, we can use an existing model, load it with OpenCV, and tune the hyperparameters instead of building one from scratch, which we will have to do later.
Step3: That's really all we need. Now let's test it by drawing rectangles around a few images of groups. Here's one example
Step6: After tuning the hyperparameters, we're getting good face identification over our test images.
Build Dataset
Base Corpus
Now let's use this to build a base corpus of "these faces are not mine" so we can augment it later with the face we want to target.
Step7: Now that we have some faces to work with, let's save them to a pickle file for use later on.
Step8: Target Corpus
Now we need to add our target data. Since this is going to power a personal project, I'm going to train it to recognize my face. Other than adding some new images, we can reuse the code from before but just supplying a different glob string.
Step10: That was easy enough. In order to have a large enough corpus of target faces, I included pictures of myself with other people and deleted their faces after the code block ran. It ended up having eleven target faces.
Model Training Data
Now that we have our faces, we need to create the features and labels that will be used to train our facial recognition model. We've already classified our data based on the face's filename; all we need to do is assign a 1 or 0 to each group for our labels. We'll also need to scale each image to a standard size. Thankfully the output for each bounding box is a square, so we don't have to worry about introducing distortions.
Step11: Simple enough. Let's do a quick check before shuffling. The first image should be part of the base corpus
Step12: And the last image should be of the target
Step13: Looks good. Let's create a quick data and file checkpoint. This means we'll be able to load the file in from this point on without having to run most of the above code.
Step14: DATA/FILE CHECKPOINT
The notebook can be run from scratch from this point onward.
Step15: That's it for our data. You'll notice that we only loaded a subset of our dataset. This ensures that the number of target and non-target images matches, which leads to a better model even though it has less data overall. We'll split our data in the next section.
Am I in This?
We've already created all of our data. Now for the model we're going to train. First, we need to convert our labels to one-hot encoding for use in the model. This means our output layer will have two nodes
Step17: Now we need to define our model architecture one layer at a time. We'll create three convolutional layers, two fully-connected layers, and the output layer.
Step18: Now we need to train the model. Even though we have a large model in terms of its parameters, we can still let the model train for many epochs because our feature set is so small. On a MacBook Air, it takes around 30 seconds to train the model with 500 epochs. To save space, I've disabled the full training printout that Keras provides, but you can watch the accuracy progress yourself by changing verbose from 0 to 1.
We also need to shuffle our data because feeding all of the non-target and target faces into the model in order will lead to a biased model. Scikit-Learn has a convenient function to do this for us. Rather than just calling random, this function preserves the relationship between the feature and label indexes.
Step19: Let's quickly see how well it trained to the given data. Because the dataset is so small, we didn't want to keep any for a test or validation set. We'll test it on a new image later.
Step20: That's it. While Keras has its own mechanisms for training and validating models, we're using a wrapper around our Keras model so it conforms to the Scikit-Learn model API. We can use fit and predict when working with the model in our code, and it let's us train and use our model with the other helper modules sk-learn provides. For example, we could have evaluated the model using StratifiedKFold and cross_val_score which would look like this
Step22: Now for the function itself. Because we've already made function around the core parts of our data pipeline, this function is going to be incredibly short yet powerful.
Step23: Yeah. That's it. Let's break down the steps | Python Code:
import cv2
import numpy as np
CASCADE = cv2.CascadeClassifier('findme/haar_cc_front_face.xml')
def find_faces(img: np.ndarray, sf=1.16, mn=5) -> np.array([[int]]):
Returns a list of bounding boxes for every face found in an image
return CASCADE.detectMultiScale(
cv2.cvtColor(img, cv2.COLOR_RGB2GRAY),
scaleFactor=sf,
minNeighbors=mn,
minSize=(45, 45),
flags=cv2.CASCADE_SCALE_IMAGE
)
Explanation: Find Me
Michael duPont - CodeCamp 2017
Find Faces
The first thing we need to do is pick out faces from a larger image. Because the model for this is not user or case specific, we can use an existing model, load it with OpenCV, and tune the hyperparameters instead of building one from scratch, which we will have to do later.
End of explanation
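As a quick, hedged usage sketch (the image path is one of the test images used below), the two hyperparameters can be compared on a single photo before settling on values: scaleFactor controls how much the image is shrunk between detection scales, and minNeighbors is how many overlapping detections are required to keep a box. The parameter pairs below are arbitrary examples.
# Rough hyperparameter sweep on one group photo
from matplotlib.image import imread
img = imread('test_imgs/initial/group0.jpg')
for sf, mn in [(1.05, 3), (1.16, 5), (1.3, 7)]:
    print(sf, mn, len(find_faces(img, sf=sf, mn=mn)))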
import matplotlib.pyplot as plt
from matplotlib.image import imread, imsave
%matplotlib inline
plt.imshow(imread('test_imgs/initial/group0.jpg'))
from glob import glob
def draw_boxes(bboxes: [[int]], img: 'np.array', line_width: int=2) -> 'np.array':
Returns an image array with the bounding boxes drawn around potential faces
for x, y, w, h in bboxes:
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), line_width)
return img
#Find faces for each test image
for fname in glob('test_imgs/initial/group*.jpg'):
img = imread(fname)
bboxes = find_faces(img)
print(bboxes)
imsave(fname.replace('initial', 'find_faces'), draw_boxes(bboxes, img))
plt.imshow(imread('test_imgs/find_faces/group0.jpg'))
Explanation: That's really all we need. Now let's test it by drawing rectangles around a few images of groups. Here's one example:
End of explanation
#Creates cropped faces for imgs matching 'test_imgs/initial/group*.jpg'
def crop(img: np.ndarray, x: int, y: int, width: int, height: int) -> np.ndarray:
Returns an image cropped to a given bounding box of top-left coords, width, and height
return img[y:y+height, x:x+width]
def pull_faces(glob_in: str, path_out: str) -> int:
Pulls faces out of images found in glob_in and saves them as path_out
Returns the total number of faces found
i = 0
for fname in glob(glob_in):
print(fname)
img = imread(fname)
bboxes = find_faces(img)
for bbox in bboxes:
cropped = crop(img, *bbox)
imsave(path_out.format(i), cropped)
i += 1
return i
found = pull_faces('test_imgs/initial/group*.jpg', 'test_imgs/corpus/face{}.jpg')
print('Total number of base corpus faces found:', found)
plt.imshow(imread('test_imgs/corpus/face0.jpg'))
Explanation: After tuning the hyperparameters, we're getting good face identification over our test images.
Build Dataset
Base Corpus
Now let's use this to build a base corpus of "these faces are not mine" so we can augment it later with the face we want to target.
End of explanation
from pickle import dump
#Creates base_corpus.pkl from face imgs in test_imgs/corpus
imgs = [imread(fname) for fname in glob('test_imgs/corpus/face*.jpg')]
dump(imgs, open('findme/base_corpus.pkl', 'wb'))
Explanation: Now that we have some faces to work with, let's save them to a pickle file for use later on.
End of explanation
found = pull_faces('test_imgs/initial/me*.jpg', 'test_imgs/corpus/me{}.jpg')
print('Total number of target faces found:', found)
plt.imshow(imread('test_imgs/corpus/me0.jpg'))
Explanation: Target Corpus
Now we need to add our target data. Since this is going to power a personal project, I'm going to train it to recognize my face. Other than adding some new images, we can reuse the code from before but just supplying a different glob string.
End of explanation
#Load the two sets of images
from pickle import load
notme = load(open('findme/base_corpus.pkl', 'rb'))
me = [imread(fname) for fname in glob('test_imgs/corpus/me*.jpg')]
#Create features and labels
features = notme + me
labels = [0] * len(notme) + [1] * len(me)
#Preprocess images for the model
def preprocess(img: np.ndarray) -> np.ndarray:
Resizes a given image to 45x45 and removes the alpha channel
img = cv2.resize(img, (45, 45), interpolation=cv2.INTER_AREA)[:,:,:3]
return img
features = [preprocess(face) for face in features]
Explanation: That was easy enough. In order to have a large enough corpus of target faces, I included pictures of myself with other people and deleted their faces after the code block ran. It ended up having eleven target faces.
Model Training Data
Now that we have our faces, we need to create the features and labels that will be used to train our facial recognition model. We've already classified our data based on the face's filename; all we need to do is assign a 1 or 0 to each group for our labels. We'll also need to scale each image to a standard size. Thankfully the output for each bounding box is a square, so we don't have to worry about introducing distortions.
End of explanation
print('Is the target:', labels[0] == 1)
plt.imshow(features[0], cmap='gray')
Explanation: Simple enough. Let's do a quick check before shuffling. The first image should be part of the base corpus:
End of explanation
print('Is the target:', labels[-1] == 1)
plt.imshow(features[-1], cmap='gray')
Explanation: And the last image should be of the target:
End of explanation
#Convert into numpy arrays
features = np.array(features)
labels = np.array(labels)
dump(features, open('test_imgs/features.pkl', 'wb'))
dump(labels, open('test_imgs/labels.pkl', 'wb'))
Explanation: Looks good. Let's create a quick data and file checkpoint. This means we'll be able to load the file in from this point on without having to run most of the above code.
End of explanation
# DATA/FILE CHECKPOINT
from pickle import load
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread, imsave
%matplotlib inline
from findme.imageutil import crop, draw_boxes, preprocess
from findme.models import find_faces
features = load(open('findme/features.pkl', 'rb'))
labels = load(open('findme/labels.pkl', 'rb'))
features = features[-24:]
labels = labels[-24:]
Explanation: DATA/FILE CHECKPOINT
The notebook can be run from scratch from this point onward.
End of explanation
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
labels = enc.fit_transform(labels.reshape(-1, 1)).toarray()
print('Not target label:', labels[0])
print('Is target label:', labels[-1])
Explanation: That's it for our data. You'll notice that we only loaded a subset of our dataset. This ensures that the number of target and non-target images matches, which leads to a better model even though it has less data overall. We'll split our data in the next section.
Am I in This?
We've already created all of our data. Now for the model we're going to train. First, we need to convert our labels to one-hot encoding for use in the model. This means our output layer will have two nodes: True and False.
End of explanation
from keras.layers import Activation, Convolution2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.metrics import binary_accuracy
from keras.models import Sequential
SHAPE = features[0].shape
NB_FILTER = 16
def make_model() -> Sequential:
Create a Sequential Keras model to boolean classify faces
model = Sequential()
#First Convolution
model.add(Convolution2D(NB_FILTER, (3, 3), input_shape=SHAPE))
model.add(Activation('relu'))
model.add(MaxPooling2D())
model.add(Dropout(0.1))
# Second Convolution
model.add(Convolution2D(NB_FILTER*2, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D())
model.add(Dropout(0.2))
# Third Convolution
model.add(Convolution2D(NB_FILTER*4, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D())
model.add(Dropout(0.3))
# Flatten for Fully Connected
model.add(Flatten())
# First Fully Connected
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.4))
# Second Fully Connected
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
# Output
model.add(Dense(2))
model.compile(loss = 'mean_squared_error', optimizer = 'rmsprop', metrics=[binary_accuracy])
return model
print(make_model().summary())
Explanation: Now we need to define our model architecture one layer at a time. We'll create three convolutional layers, two fully-connected layers, and the output layer.
End of explanation
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.utils import shuffle
model = KerasClassifier(build_fn=make_model, epochs=500, batch_size=len(labels), verbose=0)
X, Y = shuffle(features, labels, random_state=42)
model.fit(X, Y)
Explanation: Now we need to train the model. Even though we have a large model in terms of its parameters, we can still let the model train for many epochs because our feature set is so small. On a MacBook Air, it takes around 30 seconds to train the model with 500 epochs. To save space, I've disabled the full training printout that Keras provides, but you can watch the accuracy progress yourself by changing verbose from 0 to 1.
We also need to shuffle our data because feeding all of the non-target and target faces into the model in order will lead to a biased model. Scikit-Learn has a convenient function to do this for us. Rather than just calling random, this function preserves the relationship between the feature and label indexes.
End of explanation
preds = model.predict(features)
print('Non-target faces predicted correctly:', np.all(preds[:12] == 0))
print('Target faces predicted correctly:', np.all(preds[-12:] == 1))
Explanation: Let's quickly see how well it trained to the given data. Because the dataset is so small, we didn't want to keep any for a test or validation set. We'll test it on a new image later.
End of explanation
test_img = imread('test_imgs/evaluate/me1.jpg')
plt.imshow(test_img)
Explanation: That's it. While Keras has its own mechanisms for training and validating models, we're using a wrapper around our Keras model so it conforms to the Scikit-Learn model API. We can use fit and predict when working with the model in our code, and it lets us train and use our model with the other helper modules sk-learn provides. For example, we could have evaluated the model using StratifiedKFold and cross_val_score which would look like this:
```python
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
model = KerasClassifier(build_fn=make_model, epochs=5, batch_size=len(labels), verbose=0)
# evaluate using stratified k-fold cross validation (3 splits here)
kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
result = cross_val_score(model, features, labels, cv=kfold)
print(result.mean())
```
This method allows us to determine how effective our model is but does not return a trained model for us to use.
Putting It Together
Lastly, let's create a single function that takes in an image and returns if the target was found and where.
First we'll load in our test image. Keep in mind that the model we just trained has never seen this image before and it contains multiple people (and a manatee statue).
End of explanation
def target_in_img(img: np.ndarray) -> (bool, np.array([int])):
Returns whether the target is in a given image and where
for bbox in find_faces(img):
face = preprocess(crop(img, *bbox))
if model.predict(np.array([face])) == 1:
return True, bbox
return False, None
Explanation: Now for the function itself. Because we've already made functions around the core parts of our data pipeline, this function is going to be incredibly short yet powerful.
End of explanation
found, bbox = target_in_img(test_img)
print('Target face found in test image:', found)
if found:
plt.imshow(draw_boxes([bbox], test_img, line_width=20))
Explanation: Yeah. That's it. Let's break down the steps:
find_faces returns a list of bounding boxes containing faces
We prepare each face by cropping the image to the bounding box, scaling to 45x45, and removing the alpha channel
The model predicts whether the face is or is not the target
If the target is found (pred == 1), return True and the current bounding box
If there aren't any faces or none of the faces belongs to the target, return False and None
Now let's test it. If it works properly, we should see a bounding box appear around the target's face.
End of explanation |
10,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
    probs = {}  # character -> count
    for ch in s:
        probs[ch] = probs.get(ch, 0) + 1  # count occurrences of each character
    total = float(len(s))  # total number of characters
    for ch in probs:
        probs[ch] = probs[ch] / total  # normalize counts into probabilities
    return probs
#a='aaaadadad'
#count=0
#for i in range(len(a)):
# count+=1
#a={' ': 1, 'a': 5, 'b': 3, 'h': 1, 't': 1, 'w': 1}
#char_probs(a)
#X=np.array(a)
#X
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
def entropy(d):
    Compute the entropy of a dict d whose values are probabilities.
    probs = np.array(list(d.values()), dtype=float)
    return -np.sum(probs * np.log2(probs))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
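As a quick numerical sanity check of the formula above (using the standard fact that a uniform distribution over n symbols has entropy log2(n)):
# Entropy of a uniform distribution over n symbols should equal log2(n)
for n in [2, 4, 8]:
    print(n, entropy({c: 1.0 / n for c in 'abcdefgh'[:n]}), np.log2(n))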
interact(lambda s: entropy(char_probs(s)), s='Hi ')  # entropy of the character probabilities of the typed string
assert True # use this for grading the pi digits histogram
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
10,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing and Breaking ConvNets
In this exercise we will visualize saliency maps for individual images and we will construct images to fool a trained ConvNet.
Step1: Load the data and pretrained model
You should have already downloaded the TinyImageNet-100-A dataset and the pretrained models.
Step2: Compute predictions on validation set
For the experiments in this exercise it will be useful to have access to the predictions of the trained ConvNet on the TinyImageNet-100-A validation set.
Step4: Visualize Saliency Maps
In a recent paper [1], it was suggested that you can understand which part of an image is important for classification by visualizing the gradient of the correct class score with respect to the input image. This was covered in lecture on 2/2/2015 under the section "Visualize the data gradient". Recall that if a region of the image has a high data gradient, then this indicates that the output of the ConvNet is sensitive to perturbations in that region of the input image.
We will do something similar, instead visualizing the gradient of the data loss with respect to the input image; this gives similar results and is cleaner to implement using our codebase.
First, open the file cs231n/classifiers/convnet.py and modify the five_layer_net function to return the gradient of the loss with respect to the input when the compute_dX flag is true.
Once you have done so, complete the implementation in the following cell to allow you to visualize image-specific class saliency maps for images in the TinyImageNet-100-A validation set.
[1] K. Simonyan, A. Vedaldi, A. Zisserman , "Deep Inside Convolutional Networks
Step6: Fooling images for ConvNets
Two other papers [1, 2] discussed in lecture on 2/2 presented the idea of performing optimization over the input images to construct images that "fool" a trained ConvNet. This paper showed that given a trained ConvNet, an input image, and a desired label, that we can add a small amount of noise to the input image to force the ConvNet to classify it as having the desired label.
In this section we will reproduce some of these results.
Suppose that $L(x, y, m)$ is the data loss under model $m$, where we tell the network that the input $x$ should be classified as having label $y$. Given a starting image $x_0$, a desired label $y$, and a pretrained model $m$, we will create a fooling image $x_f$ by solving the following optimization problem
Step7: Fooling images from correctly classified images
We will choose an image that is correctly classified by the pretrained network and create a fooling image that the network classifies as a goldfish.
You should experiment with different step sizes, regularizations, confidence thresholds, and target classes
Step8: Fooling image from random noise
Instead of starting from a correctly classified image, we can instead start our optimization from random noise. This will allow us to produce fooling images that do not look like anything to humans.
You should experiment with the scale of the initial random noise, the step size, the regularization, the confidence threshold, and the target class. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Visualizing and Breaking ConvNets
In this exercise we will visualize saliency maps for individual images and we will construct images to fool a trained ConvNet.
End of explanation
# Load the TinyImageNet-100-A dataset and a pretrained model
from cs231n.data_utils import load_tiny_imagenet, load_models
tiny_imagenet_a = 'cs231n/datasets/tiny-imagenet-100-A'
class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_a)
# Zero-mean the data
mean_img = np.mean(X_train, axis=0)
X_train -= mean_img
X_val -= mean_img
X_test -= mean_img
# Load a pretrained model; it is a five layer convnet.
models_dir = 'cs231n/datasets/tiny-100-A-pretrained'
model = load_models(models_dir)['model1']
Explanation: Load the data and pretrained model
You should have already downloaded the TinyImageNet-100-A dataset and the pretrained models.
End of explanation
from cs231n.classifiers.convnet import five_layer_convnet
# Array of shape (X_val.shape[0],) storing predictions on the validation set.
# y_val_pred[i] = c indicates that the model predicts that X_val[i] has label c.
y_val_pred = None
################################################################################
# TODO: Use the pretrained model stored in model to compute predictions on the #
# validation set. Store the results in y_val_pred. #
# #
# HINT: As in the previous exercises, you will want to break the validation #
# set into batches. #
################################################################################
import math
batch_size = 1000
num_batches = int(math.ceil((X_val.shape[0] / batch_size)))
for i in range(num_batches):
y_pred = five_layer_convnet(X_val[i * batch_size:(i+1) * batch_size],
model,
None,return_probs=True)
if y_val_pred is None:
y_val_pred = y_pred
else:
y_val_pred = np.concatenate((y_val_pred, y_pred), axis=0)
y_val_pred = np.argmax(y_val_pred,axis=-1)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
correct_indices, = np.nonzero(y_val_pred == y_val)
print correct_indices
Explanation: Compute predictions on validation set
For the experiments in this exercise it will be useful to have access to the predictions of the trained ConvNet on the TinyImageNet-100-A validation set.
End of explanation
def show_image(img, rescale=False, add_mean=True):
Utility to show an image. In our ConvNets, images are 3D slices of 4D
volumes; to visualize them we need to squeeze out the extra dimension,
flip the axes so that channels are last, add the mean image, convert to
uint8, and possibly rescale to be between 0 and 255. To make figures
prettier we also need to suppress the axis labels after imshow.
Input:
- img: (1, C, H, W) or (C, H, W) or (1, H, W) or (H, W) giving
pixel data for an image.
- rescale: If true rescale the data to fit between 0 and 255
- add_mean: If true add the training data mean image
img = img.copy()
if add_mean:
img += mean_img
img = img.squeeze()
if img.ndim == 3:
img = img.transpose(1, 2, 0)
if rescale:
low, high = np.min(img), np.max(img)
img = 255.0 * (img - low) / (high - low)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# The number of example images to show. You can change this.
num_examples = 6
# The label of the class to visualize. You can change this.
class_idx = 22 # goldfish
# An array of shape (num_examples,) containing the indices of validation set
# images for which saliency maps will be visualized. We wil visualize several
# examples of images from the validation set whose label is class_idx and which
# are correctly classified using the pretrained ConvNet. In other words, if
# example_idxs[i] = j then we should have y_val[j] = class_idx and the pretrained
# ConvNet should correctly classify X_val[j].
example_idxs = None
################################################################################
# TODO: Choose several examples from the validation set whose correct label is #
# class_idx and which are correctly classified by the pretrained ConvNet. #
# Store the results in the example_idxs variable. #
################################################################################
# np.nonzero() returns an index tuple
y_val_example_idxs = np.nonzero((y_val_pred == class_idx) & (y_val == class_idx))[0]  # correctly classified examples of class_idx
example_idxs = y_val_example_idxs[np.random.choice(
y_val_example_idxs.shape[0], num_examples , replace=False)]
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Array to store gradients of the loss with respect to your chosen example images.
dX = np.zeros((num_examples, 3, 64, 64))
################################################################################
# TODO: Compute image gradients for your chosen examples. Store the result in #
# the dX variable. #
################################################################################
dX = five_layer_convnet(X_val[example_idxs], model, y_val[example_idxs],
compute_dX=True)
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Plot the images and their saliency maps.
for i in xrange(num_examples):
# Visualize the image
plt.subplot(2, num_examples, i + 1)
show_image(X_val[example_idxs[i]])
plt.title(class_names[y_val[example_idxs[i]]][0])
# Saliency map for the ith example image.
sal = np.zeros((64, 64))
############################################################################
# TODO: Compute the saliency map for the ith example image. Use image #
# derivatives from dX[i] to compute the saliency map for #
# X_val[example_idxs[i]]. Store the result in the sal variable. #
############################################################################
# To derive a single class saliency value for each pixel (i, j), we took the
# maximum magnitude of image gradients across all colour channels
sal = np.max(np.absolute(dX[i]), axis=0)
pass
############################################################################
# END OF YOUR CODE #
############################################################################
# Visualize its saliency map.
plt.subplot(2, num_examples, num_examples + i + 1)
show_image(sal, rescale=True, add_mean=False)
Explanation: Visualize Saliency Maps
In a recent paper [1], it was suggested that you can understand which part of an image is important for classification by visualizing the gradient of the correct class score with respect to the input image. This was covered in lecture on 2/2/2015 under the section "Visualize the data gradient". Recall that if a region of the image has a high data gradient, then this indicates that the output of the ConvNet is sensitive to perturbations in that region of the input image.
We will do something similar, instead visualizing the gradient of the data loss with respect to the input image; this gives similar results and is cleaner to implement using our codebase.
First, open the file cs231n/classifiers/convnet.py and modify the five_layer_net function to return the gradient of the loss with respect to the input when the compute_dX flag is true.
Once you have done so, complete the implementation in the following cell to allow you to visualize image-specific class saliency maps for images in the TinyImageNet-100-A validation set.
[1] K. Simonyan, A. Vedaldi, A. Zisserman , "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014
End of explanation
def make_fooling_image(img, y, model, reg=0.0, step_size=500, confidence=0.5):
Perform optimization in image space to create an image that is similar to img
but is classified as y by model.
Inputs:
- img: Array of shape (1, C, H, W) containing (mean-subtracted) pixel data for
the starting point for the fooling image.
- y: The desired label; should be a single integer.
- model: Dictionary mapping parameter names to weights; this is a pretrained
five_layer_net model.
- reg: Regularization strength (in image space) for the fooling image. This
is the parameter lambda in the equation above.
- step_size: The step size to use for gradient descent.
- confidence: The desired confidence threshold for the fooling image.
fooling_img = img.copy()
################################################################################
# TODO: Use gradient descent in image space to create a fooling image, #
# stopping when the predicted probability for the fooling image is greater #
# than the specified confidence threshold. #
################################################################################
i = 0
while True:
confidence_i = five_layer_convnet(fooling_img, model, return_probs=True)[0, y]
if not i % 100:
print('iteration %d iteration confidence: %f' % (i, confidence_i))
if confidence_i > confidence:
print('iteration %d iteration confidence: %f' % (i, confidence_i))
break
dX = five_layer_convnet(fooling_img, model, np.array([y]), compute_dX=True)
fooling_img -= step_size * (dX + reg * (fooling_img - img))
i += 1
pass
pass
############################################################################
# END OF YOUR CODE #
############################################################################
return fooling_img
Explanation: Fooling images for ConvNets
Two other papers [1, 2] discussed in lecture on 2/2 presented the idea of performing optimization over the input images to construct images that "fool" a trained ConvNet. The first of these papers showed that, given a trained ConvNet, an input image, and a desired label, we can add a small amount of noise to the input image to force the ConvNet to classify it as having the desired label.
In this section we will reproduce some of these results.
Suppose that $L(x, y, m)$ is the data loss under model $m$, where we tell the network that the input $x$ should be classified as having label $y$. Given a starting image $x_0$, a desired label $y$, and a pretrained model $m$, we will create a fooling image $x_f$ by solving the following optimization problem:
$$x_f = \arg\min_x \left(L(x, y, m) + \frac\lambda2 \|x - x_0\|^2_2\right)$$
The term $\|x - x_0\|^2$ is $L_2$ regularization in image space which encourages the fooling image to look similar to the starting image, and the constant $\lambda$ is the strength of this regularization. We will use gradient descent to perform optimization under this model.
In the past, when using gradient descent we have stopped after a fixed number of iterations. Here we will use a different stopping criterion. Suppose that $p(x=y \mid m)$ is the probability that the input $x$ is assigned the label $y$ under the model $m$. We will specify a desired confidence threshold $t$ for the fooling image, and we will stop our optimization when we have $p(x_f=y\mid m) >= t$.
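Concretely, each gradient descent step moves the image along the negative gradient of this objective, which is just the image gradient of the data loss plus the regularization term:
$$x \leftarrow x - \eta\left(\frac{\partial L}{\partial x} + \lambda (x - x_0)\right)$$
where $\eta$ is the step size; this is the update implemented in make_fooling_image above.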
[1] Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint, 2013.
<br>
[2] Nguyen, Anh, Jason Yosinski, and Jeff Clune. "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images." arXiv preprint, 2014.
End of explanation
# Choose a random image that is correctly classified
idx = np.random.choice(np.nonzero(y_val_pred == y_val)[0])
img = X_val[idx:idx+1]
class_idx = 22 # Goldfish
confidence = 0.5
fooling_img = make_fooling_image(img, class_idx, model, step_size=1000, reg=0.00002, confidence=confidence)
# Check that the fooling image has probability above the threshold.
assert five_layer_convnet(fooling_img, model, return_probs=True)[0, class_idx] >= confidence, \
'The ConvNet is not fooled.'
# Show the original image
plt.subplot(1, 3, 1)
plt.title('Original image (%s)' % class_names[y_val[idx]][0])
show_image(img)
# Show the difference between the original and fooling image
plt.subplot(1, 3, 2)
plt.title('+distort')
show_image(fooling_img - img, add_mean=False, rescale=True)
# Show the fooling image
plt.subplot(1, 3, 3)
plt.title('Fooling image (%s)' % class_names[class_idx][0])
show_image(fooling_img, rescale=True)
Explanation: Fooling images from correctly classified images
We will choose an image that is correctly classified by the pretrained network and create a fooling image that the network classifies as a goldfish.
You should experiment with different step sizes, regularizations, confidence thresholds, and target classes
End of explanation
# Generate random noise to start
img = 20 * np.random.randn(1, 3, 64, 64)
class_idx = 22 # Goldfish
fooling_img = make_fooling_image(img, class_idx, model, step_size=500, reg=0.00005, confidence=0.5)
# Check that the fooling image has probability above the threshold.
assert five_layer_convnet(fooling_img, model, return_probs=True)[0, class_idx] >= confidence, \
'The ConvNet is not fooled.'
# Show the original image
plt.subplot(1, 3, 1)
plt.title('Random original image')
show_image(img)
# Show the difference between the original and fooling image
plt.subplot(1, 3, 2)
plt.title('+distort')
show_image(fooling_img - img, add_mean=False, rescale=True)
# Show the fooling image
plt.subplot(1, 3, 3)
plt.title('Fooling image (%s)' % class_names[class_idx][0])
show_image(fooling_img, rescale=True)
Explanation: Fooling image from random noise
Instead of starting from a correctly classified image, we can instead start our optimization from random noise. This will allow us to produce fooling images that do not look like anything to humans.
You should experiment with the scale of the initial random noise, the step size, the regularization, the confidence threshold, and the target class.
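If you want to explore those knobs systematically, a small sweep works; this is only a sketch, and the particular values below are arbitrary starting points rather than recommendations.
```python
# Sketch of a small parameter sweep (arbitrary illustrative values).
for step_size in [200, 500, 1000]:
    for reg in [1e-5, 5e-5, 1e-4]:
        noise = 20 * np.random.randn(1, 3, 64, 64)
        fooled = make_fooling_image(noise, class_idx, model,
                                    step_size=step_size, reg=reg, confidence=0.5)
        prob = five_layer_convnet(fooled, model, return_probs=True)[0, class_idx]
        print('step_size=%d, reg=%g -> final confidence %.3f' % (step_size, reg, prob))
```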
End of explanation |
10,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here we look at some Chang'e 5 high-speed data decoded by Paul Marsh M0EYT.
The frames are CCSDS concatenated frames with a Reed-Solomon interleaving depth of 4. The frame size is 1024 bytes including the 32-bit ASM. We need to perform Reed-Solomon decoding with libfec first.
Step1: AOS frames
AOS frames come from spacecraft ID 26, virtual channels 38 and 63. Other combinations are most likely corrupted frames, despite the fact that the Reed-Solomon decoder was successful.
It's interesting that there are many frames that show a transfer frame version of 2, which looks somewhat anomalous. This deserves further study.
Step2: Virtual channel 63
Virtual channel 63 contains padding. Besides the AOS header, there is a 6 byte insert zone which contains 0xf0f0 and a timestamp (in the same format as the low rate telemetry). The remaining data is filled with 0xaa.
Step3: Virtual channel 38
The contents of virtual channel 38 are rather complicated due to the number of layers present. Here we peel the layers off one by one. The first layer consists of AOS frames with an insert zone just like in virtual channel 63.
The data field of the AOS frames contains a single Space Packet.
Step4: The timestamps of these packets seem to span too much time for this short recording. I think this has something to do with the underlying replay data.
Step5: All the space packets belong to APID 92 and have the same length.
Step6: The end of all the Space Packets is 0xaa padding.
Step7: The data field of the Space Packets contains two back-to-back AOS frames of 252 bytes each.
The AOS frames have the replay flag set and belong to spacecraft 197 virtual channel 52. The insert zone of these frames contains 3 unknown bytes followed by a 32 bit timestamp in the usual format.
Step8: These AOS frames have a CRC-16. All of the frames are OK.
Step9: The replay AOS frames contain Space Packets belonging to APIDs 301 and 2047 (the idle APID). | Python Code:
def load_frames(path):
frame_size = 1024
frames = np.fromfile(path, dtype = 'uint8')
frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))
# drop the 32-bit attached sync marker (ASM) at the start of each frame
frames = frames[:, 4:]
return frames
frames = np.concatenate([load_frames(f) for f in sorted(pathlib.Path('uhf_satcom_wideband').glob('*.frm'))[::-1]])
# deinterleave RS frames
frames = np.transpose(frames.reshape((-1,255,4)), axes = (0,2,1)).reshape((-1,255))
rs_ret = [libfec.decode_rs_ccsds(f.ctypes.data_as(ctypes.POINTER(ctypes.c_char)), 0, 0, 0) for f in frames]
frames = frames[:,:-32] # drop RS bytes
frames = np.transpose(frames.reshape((-1,4,223)), axes = (0,2,1)).reshape((-1, 223*4))
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(rs_ret)
plt.plot(np.convolve(rs_ret, 0.01*np.ones(100)))
plt.title('Reed-Solomon errors corrected')
plt.xlabel('Reed-Solomon codeword')
plt.ylabel('Byte errors')
plt.legend(['Per frame', '100 frame moving average']);
Explanation: Here we look at some Chang'e 5 high-speed data decoded by Paul Marsh M0EYT.
The frames are CCSDS concatenated frames with a Reed-Solomon interleaving depth of 4. The frame size is 1024 bytes including the 32-bit ASM. We need to perform Reed-Solomon decoding with libfec first.
End of explanation
aos = [CE5_AOSFrame.parse(f) for f in frames]
collections.Counter([a.primary_header.transfer_frame_version_number for a in aos])
collections.Counter([a.primary_header.spacecraft_id for a in aos
if a.primary_header.transfer_frame_version_number == 1])
collections.Counter([a.primary_header.virtual_channel_id for a in aos
if a.primary_header.transfer_frame_version_number == 1
and a.primary_header.spacecraft_id == 26])
Explanation: AOS frames
AOS frames come from spacecraft ID 26, virtual channels 38 and 63. Other combinations are most likely corrupted frames, despite the fact that the Reed-Solomon decoder was successful.
It's interesting that there are many frames that show a transfer frame version of 2, which looks somewhat anomalous. This deserves further study.
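A quick way to start that study is to pull the anomalous frames out for inspection; the sketch below simply reuses the aos and frames objects defined above.
```python
# Collect the frames whose transfer frame version number is not the expected 1.
anomalous = [(a, f) for a, f in zip(aos, frames)
             if a.primary_header.transfer_frame_version_number != 1]
print('%d of %d frames have an unexpected version number' % (len(anomalous), len(frames)))
```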
End of explanation
vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63
and a.primary_header.transfer_frame_version_number == 1
and a.primary_header.spacecraft_id == 26]
[a.primary_header for a in vc63[:10]]
frames_vc63 = frames[[a.primary_header.virtual_channel_id == 63
and a.primary_header.transfer_frame_version_number == 1
and a.primary_header.spacecraft_id == 26
for a in aos]]
np.unique(frames_vc63[:,12:])
hex(170)
{hex(a.insert_zone.unknown) for a in vc63}
hex(240)
vc63_timestamps = np.timedelta64(1,'s')*np.array([a.insert_zone.timestamp for a in vc63])\
+ np.datetime64('2012-08-01')
fc = [a.primary_header.virtual_channel_frame_count for a in vc63]
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(vc63_timestamps, fc, '.')
plt.title("Chang'e 5 spacecraft 26 virtual channel 63 timestamps")
plt.xlabel('AOS frame timestamp')
plt.ylabel('AOS virtual channel frame counter');
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(vc63_timestamps[1:], np.diff(fc)-1)
plt.ylim((-1,20))
plt.title("Chang'e 5 spacecraft 26 virtual channel 63 frame loss")
plt.xlabel('AOS frame timestamp')
plt.ylabel('Lost frames');
frames_per_second = (fc[-1]-fc[0])/(vc63_timestamps[-1] - vc63_timestamps[0]).astype('float')
frames_per_second
Explanation: Virtual channel 63
Virtual channel 63 contains padding. Besides the AOS header, there is a 6 byte insert zone which contains 0xf0f0 and a timestamp (in the same format as the low rate telemetry). The remaining data is filled with 0xaa.
End of explanation
vc38 = [a for a in aos if a.primary_header.virtual_channel_id == 38]
[a.primary_header for a in vc38]
Explanation: Virtual channel 38
The contents of virtual channel 38 are rather complicated due to the number of layers present. Here we peel the layers off one by one. The first layer consists of AOS frames with an insert zone just like in virtual channel 63.
The data field of the AOS frames contains a single Space Packet.
End of explanation
fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc38])
vc38_timestamps = np.timedelta64(1,'s')*np.array([a.insert_zone.timestamp for a in vc38])\
+ np.datetime64('2012-08-01')
good = vc38_timestamps <= np.datetime64('2020-12-01')
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(vc38_timestamps[good], fc[good], '.')
plt.title("Chang'e 5 spacecraft 26 virtual channel 38 timestamps")
plt.xlabel('AOS frame timestamp')
plt.ylabel('AOS virtual channel frame counter');
frames_per_second = (fc[-1]-fc[0])/(vc38_timestamps[-1] - vc38_timestamps[0]).astype('float')
frames_per_second
frames_vc38 = frames[[a.primary_header.virtual_channel_id == 38
and a.primary_header.transfer_frame_version_number == 1
and a.primary_header.spacecraft_id == 26
for a in aos]]
plt.figure(figsize = (15,15), facecolor = 'w')
plt.imshow(frames_vc38[:200], aspect = 1)
plt.title("Chang'e 5 spacecraft 26 virtual channel 38 AOS frames");
Explanation: The timestamps of these packets seem to span too much time for this short recording. I think this has something to do with the underlying replay data.
End of explanation
vc38_packet_headers = [ccsds.SpacePacketPrimaryHeader.parse(a.data_field) for a in vc38]
vc38_packet_headers[:10]
collections.Counter([h.APID for h in vc38_packet_headers])
collections.Counter([h.data_length for h in vc38_packet_headers])
data_slice = slice(12 + ccsds.SpacePacketPrimaryHeader.sizeof(),\
12 + ccsds.SpacePacketPrimaryHeader.sizeof() + vc38_packet_headers[0].data_length+1)
vc38_packet_data = frames_vc38[:, data_slice]
Explanation: All the space packets belong to APID 92 and have the same length.
End of explanation
np.unique(frames_vc38[:, data_slice.stop:])
Explanation: The end of all the Space Packets is 0xaa padding.
End of explanation
vc38_replay_aos = vc38_packet_data.reshape((vc38_packet_data.shape[0]*2, -1))
vc38_replay_aos_frames = [CE5_AOSReplayFrame.parse(f) for f in vc38_replay_aos]
[a.primary_header for a in vc38_replay_aos_frames[:10]]
{a.primary_header.virtual_channel_id for a in vc38_replay_aos_frames}
{a.primary_header.spacecraft_id for a in vc38_replay_aos_frames}
vc38_replay_timestamps = np.timedelta64(1,'s')*np.array([a.insert_zone.timestamp for a in vc38_replay_aos_frames])\
+ np.datetime64('2012-08-01')
fc = [a.primary_header.virtual_channel_frame_count for a in vc38_replay_aos_frames]
plt.figure(figsize = (10,6), facecolor = 'w')
plt.plot(vc38_replay_timestamps, fc, '.')
plt.title("Chang'e 5 spacecraft 197 virtual channel 52 timestamps (replay)")
plt.xlabel('AOS frame timestamp')
plt.ylabel('AOS virtual channel frame counter');
Explanation: The data field of the Space Packets contains two back-to-back AOS frames of 252 bytes each.
The AOS frames have the replay flag set and belong to spacecraft 197 virtual channel 52. The insert zone of these frames contains 3 unknown bytes followed by a 32 bit timestamp in the usual format.
End of explanation
crc_ok = np.array([crc16_ccitt_false(f) for f in vc38_replay_aos]) == 0
np.all(crc_ok)
Explanation: These AOS frames have a CRC-16. All of the frames are OK.
End of explanation
vc38_replay_packets = list(ccsds.extract_space_packets(vc38_replay_aos_frames, 197, 52))
vc38_replay_packets_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc38_replay_packets]
vc38_replay_apids = collections.Counter([p.APID for p in vc38_replay_packets_headers])
vc38_replay_apids
vc38_replay_by_apid = {apid : [p for h,p in zip(vc38_replay_packets_headers, vc38_replay_packets)
if h.APID == apid] for apid in vc38_replay_apids}
plot_apids(vc38_replay_by_apid, None, 52)
Explanation: The replay AOS frames contain Space Packets belonging to APIDs 301 and 2047 (the idle APID).
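To keep only the useful payload, the idle packets can simply be dropped; a sketch using the dictionary built above:
```python
# APID 2047 is the CCSDS idle APID; discard it and keep the real telemetry.
payload_by_apid = {apid: pkts for apid, pkts in vc38_replay_by_apid.items() if apid != 2047}
print({apid: len(pkts) for apid, pkts in payload_by_apid.items()})
```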
End of explanation |
10,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GRIB Data Example
GRIB format is commonly used to disseminate atmospheric model data. With Xarray and the cfgrib engine, GRIB data can easily be analyzed and visualized.
Step1: To read GRIB data, you can use xarray.load_dataset. The only extra code you need is to specify the engine as cfgrib.
Step2: Let's create a simple plot of 2-m air temperature in degrees Celsius
Step3: With CartoPy, we can create a more detailed plot, using built-in shapefiles to help provide geographic context
Step4: Finally, we can also pull out a time series for a given location easily | Python Code:
import xarray as xr
import matplotlib.pyplot as plt
Explanation: GRIB Data Example
GRIB format is commonly used to disseminate atmospheric model data. With Xarray and the cfgrib engine, GRIB data can easily be analyzed and visualized.
End of explanation
ds = xr.tutorial.load_dataset('era5-2mt-2019-03-uk.grib', engine='cfgrib')
Explanation: To read GRIB data, you can use xarray.load_dataset. The only extra code you need is to specify the engine as cfgrib.
End of explanation
ds = ds - 273.15
ds.t2m[0].plot(cmap=plt.cm.coolwarm)
Explanation: Let's create a simple plot of 2-m air temperature in degrees Celsius:
End of explanation
import cartopy.crs as ccrs
import cartopy
fig = plt.figure(figsize=(10,10))
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines(resolution='10m')
plot = ds.t2m[0].plot(cmap=plt.cm.coolwarm, transform=ccrs.PlateCarree(), cbar_kwargs={'shrink':0.6})
plt.title('ERA5 - 2m temperature British Isles March 2019')
Explanation: With CartoPy, we can create a more detailed plot, using built-in shapefiles to help provide geographic context:
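Other built-in features can be layered on in the same way; for example, country borders and gridlines (a small sketch, reusing the same ds):
```python
import cartopy.feature as cfeature

fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection=ccrs.Robinson())
ax.coastlines(resolution='10m')
ax.add_feature(cfeature.BORDERS, linestyle=':')  # national borders from Natural Earth
ax.gridlines()
ds.t2m[0].plot(cmap=plt.cm.coolwarm, transform=ccrs.PlateCarree(), cbar_kwargs={'shrink': 0.6})
```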
End of explanation
ds.t2m.sel(longitude=0,latitude=51.5).plot()
plt.title('ERA5 - London 2m temperature March 2019')
Explanation: Finally, we can also pull out a time series for a given location easily:
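Note that .sel() with exact coordinates only works when the requested point lies on the model grid; for an arbitrary location, selecting the nearest grid point is the safer pattern (sketch):
```python
# Pick the grid point closest to the requested location instead of requiring an exact match.
ds.t2m.sel(longitude=0.0, latitude=51.5, method='nearest').plot()
```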
End of explanation |
10,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Data Types
This section will cover the following advanced topics in data types
Collections
Collections
The collections module is a treasure trove: a built-in module that implements specialized container datatypes providing alternatives to Python's general purpose built-in containers.
This module implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers, dict, list, set, and tuple.
| Name | Description|
|:-------------:|---------------|
Step1: The child mappings are searched in the order they are passed to the constructor, so the value reported for the key 'c' comes from the a dictionary.
Reordering
The ChainMap stores the list of mappings over which it searches in a list in its maps attribute. This list is mutable, so it is possible to add new mappings directly or to change the order of the elements to control lookup and update behavior.
Step2: When the list of mappings is reversed, the value associated with 'c' changes.
Updating Values
A ChainMap does not cache the values in the child mappings. Thus, if their contents are modified, the results are reflected when the ChainMap is accessed.
Step3: Changing the values associated with existing keys and adding new elements works the same way.
It is also possible to set values through the ChainMap directly, although only the first mapping in the chain is actually modified.
Step4: When the new value is stored using m, the a mapping is updated.
ChainMap provides a convenience method for creating a new instance with one extra mapping at the front of the maps list to make it easy to avoid modifying the existing underlying data structures.
This stacking behavior is what makes it convenient to use ChainMap instances as template or application contexts. Specifically, it is easy to add or update values in one iteration, then discard the changes for the next iteration.
Step5: For situations where the new context is known or built in advance, it is also possible to pass a mapping to new_child().
Step6: Counter
Counter is a dict subclass that helps count hashable objects. It stores elements as dictionary keys and their counts as values. In other words, it is a container that keeps track of how many times equivalent values are present.
For example
Step7: Some of the ways in which Counter can be used
Step8: Counter with Strings
Step9: Counter methods
Step10: Default dict
The standard dictionary includes the method setdefault() for retrieving a value and establishing a default if the value does not exist. By contrast, defaultdict lets the caller specify the default up front when the container is initialized.
Step11: deque — Double-Ended Queue
A double-ended queue, or deque, supports adding and removing elements from either end of the queue. The more commonly used stacks and queues are degenerate forms of deques, where the inputs and outputs are restricted to a single end.
Step12: Adding
Step13: Consuming
OrderedDict
It is a dictionary subclass that remembers the order in which its contents are added.
Let's start with a normal dictionary
Step14: namedtuple
Named tuples helps to have meaning of each position in a tuple and allow us to code with better readability and self-documenting code. You can use them in any place where you are using tuples. In the example we will create a namedtuple to show hold information for points. | Python Code:
import collections
# from collections import ChainMap
a = {'a': 'A', 'c': 'C'}
b = {'b': 'B', 'c': 'D'}
m = collections.ChainMap(a, b)
print('Individual Values')
print('a = {}'.format(m['a']))
print('b = {}'.format(m['b']))
print('c = {}'.format(m['c']))
print("-"*20)
print(type(m.keys()))
print('Keys = {}'.format(list(m.keys())))
print('Values = {}'.format(list(m.values())))
print("-"*20)
print('Items:')
for k, v in m.items():
print('{} = {}'.format(k, v))
print("-"*20)
print('"d" in m: {}'.format(('d' in m)))
a = {'a': 'A', 'c': 'C'}
b = {'b': 'B', 'c': 'D'}
m = collections.ChainMap(a, b)
lst = []
for v in m.keys():
lst.append(v)
for v in m.values():
lst.append(v)
print(lst)
Explanation: Advanced Data Types
This section will cover the following advanced topics in data types
Collections
Collections
The collections module is a treasure trove: a built-in module that implements specialized container datatypes providing alternatives to Python's general purpose built-in containers.
This module implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers, dict, list, set, and tuple.
| Name | Description|
|:-------------:|---------------|
|namedtuple() | factory function for creating tuple subclasses with named fields|
|deque |list-like container with fast appends and pops on either end|
|ChainMap |dict-like class for creating a single view of multiple mappings|
|Counter | dict subclass for counting hashable objects|
|OrderedDict | dict subclass that remembers the order entries were added|
|defaultdict | dict subclass that calls a factory function to supply missing values|
|UserDict | wrapper around dictionary objects for easier dict subclassing|
|UserList |wrapper around list objects for easier list subclassing|
|UserString | wrapper around string objects for easier string subclassing|
## ChainMap — Search Multiple Dictionaries
The ChainMap class manages a list of dictionaries, and can be used to search through them in the order they are added to find values for associated keys.
It makes a good "context" container, as it can be visualised as a stack for which changes happen as soon as the stack grows, with these changes being discarded again as soon as the stack shrinks.
Treat it like a view in a database: the actual values are still stored in their respective tables, and we can still perform all the usual operations on them.
Accessing Values
The ChainMap supports the same API as a regular dictionary for accessing existing values.
End of explanation
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print(cm.maps)
print('c = {}\n'.format(cm['c']))
# reverse the list
cm.maps = list(reversed(cm.maps)) # m = collections.ChainMap(b, a)
print(cm.maps)
print('c = {}'.format(cm['c']))
Explanation: The child mappings are searched in the order they are passed to the constructor, so the value reported for the key 'c' comes from the a dictionary.
Reordering
The ChainMap stores the list of mappings over which it searches in a list in its maps attribute. This list is mutable, so it is possible to add new mappings directly or to change the order of the elements to control lookup and update behavior.
End of explanation
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
m = collections.ChainMap(a, b)
print('Before: {}'.format(m['c']))
a['c'] = '3.3'
print('After : {}'.format(m['c']))
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(b, a)
print(cm.maps)
print('Before: {}'.format(cm['c']))
a['c'] = '3.3'
print('After : {}'.format(cm['c']))
Explanation: When the list of mappings is reversed, the value associated with 'c' changes.
Updating Values
A ChainMap does not cache the values in the child mappings. Thus, if their contents are modified, the results are reflected when the ChainMap is accessed.
End of explanation
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print('Before: {}'.format(cm['c']))
cm['c'] = '3.3'
print('After : {}'.format(cm['c']))
print(a['c'])
print(b['c'])
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(b, a)
print('Before: {}'.format(cm['c']))
cm['c'] = '3.3'
print('After : {}'.format(cm['c']))
print(a['c'])
print(b['c'])
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print('Before: {}'.format(cm['c']))
cm['d'] = '3.3'
print('After : {}'.format(cm['c']))
print(cm.maps)
print(a)
print(b)
Explanation: Changing the values associated with existing keys and adding new elements works the same way.
It is also possible to set values through the ChainMap directly, although only the first mapping in the chain is actually modified.
End of explanation
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
m1 = collections.ChainMap(a, b)
m2 = m1.new_child()
print('m1 before:', m1)
print('m2 before:', m2)
m2['c'] = '3.3'
print('m1 after:', m1)
print('m2 after:', m2)
Explanation: When the new value is stored using m, the a mapping is updated.
ChainMap provides a convenience method for creating a new instance with one extra mapping at the front of the maps list to make it easy to avoid modifying the existing underlying data structures.
This stacking behavior is what makes it convenient to use ChainMap instances as template or application contexts. Specifically, it is easy to add or update values in one iteration, then discard the changes for the next iteration.
End of explanation
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
c = {'c': '333'}
m1 = collections.ChainMap(a, b)
m2 = m1.new_child(c)
print('m1["c"] = {}'.format(m1['c']))
print('m2["c"] = {}'.format(m2['c']))
print(m2)
#This is the equivalent of
m2_1 = collections.ChainMap(c, *m1.maps)
print(m2_1)
Explanation: For situations where the new context is known or built in advance, it is also possible to pass a mapping to new_child().
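The complement of new_child() is the parents property, which returns a ChainMap over everything except the first map — handy for popping a temporary context back off. A small sketch, reusing m2 from above:
```python
m3 = m2.parents          # drops the extra mapping c, leaving a ChainMap over a and b
print(m3.maps)
print('c =>', m3['c'])   # 'c' now resolves from the a dictionary again
```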
End of explanation
# Tally occurrences of words in a list
from collections import Counter
cnt = Counter()
for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
cnt[word] += 1
print(cnt)  # Counter({'blue': 3, 'red': 2, 'green': 1})
# Find the ten most common words in Hamlet
import re
words = re.findall(r'\w+', open('hamlet.txt').read().lower())
Counter(words).most_common(10)
Explanation: Counter
Counter is a dict subclass that helps count hashable objects. It stores elements as dictionary keys and their counts as values. In other words, it is a container that keeps track of how many times equivalent values are present.
For example:
End of explanation
l = [1, 23, 23, 44, 4, 44, 55, 555, 44, 32, 23, 44, 56, 64, 2, 1]
lstCounter = Counter(l)
print(lstCounter)
print(lstCounter.most_common(4))
Explanation: Some of the ways in which Counter can be used:
Counter() with lists
End of explanation
sentence = "The collections module is a treasure trove: a built-in module that implements " + \
"specialized container datatypes providing alternatives to Python's general purpose " + \
"built-in containers."
wordList = sentence.split(" ")
Counter(wordList).most_common(3)
Explanation: Counter with Strings
End of explanation
# find the most common words
# Methods with Counter()
c = Counter(wordList)
print(c.most_common(4))
print(c.items())
Explanation: Counter methods
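Counter has a few more methods worth knowing — elements(), subtract(), and the arithmetic operators. A short sketch with fresh counters:
```python
c1 = Counter('abracadabra')
c2 = Counter('alakazam')
print(list(c1.elements()))          # repeats each element as many times as its count
c1.subtract(c2)                     # in-place subtraction; counts may go negative
print(c1)
print(Counter('abracadabra') + c2)  # '+' adds counts and drops non-positive results
```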
End of explanation
d = {"a": 1, "b": 2}
print(d)
print(d['a'])
print(d['d'])
from collections import defaultdict
dd = defaultdict(object)
print(dd)
print(dd['one'])
print(dd)
dd['Two'] = 2
print(dd)
for d in dd:
print(d)
print(dd[d])
help(defaultdict)
# Initializing with a non-callable default value raises a TypeError:
# the first argument to defaultdict must be callable (or None).
dd = defaultdict(1)
print(dd)
print(dd['one'])
print(dd)
dd['Two'] = 2
print(dd)
for d in dd:
print(d)
print(dd[d])
# Using factory function
import collections
def default_factory():
return 'default value'
d = collections.defaultdict(default_factory, india='new delhi')
print('d:', d)
print('india =>', d['india'])
print('bar =>', d['bar'])
print(d)
# Using factory function
import collections
def default_factory():
return 'Bhopal'
d = collections.defaultdict(default_factory,
{"india": 'new delhi',
"karnataka":"Bangaluru"})
print('d:', d)
print('india =>', d['india'])
print('MP =>', d['MP'])
print(d)
# Using factory function
# ---------------------------------------------------
# TODO: How can I pass a value to the default factory? (see the sketch below)
# ---------------------------------------------------
import collections
def default_factory():
return 'default value'
d = collections.defaultdict(default_factory, foo='bar')
print('d:', d)
print('foo =>', d['foo'])
print('bar =>', d['bar'])
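# One answer to the TODO above: bind an argument to the factory with
# functools.partial (a lambda works just as well). This is a supplementary
# sketch, not part of the original notebook.
import functools

def constant_factory(value):
    return value

d_param = collections.defaultdict(functools.partial(constant_factory, 'N/A'))
# equivalently: d_param = collections.defaultdict(lambda: 'N/A')
print('missing =>', d_param['missing'])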
# Using list as the default_factory, it is easy to group a sequence of key-value pairs into a dictionary of lists:
from collections import defaultdict
countryList = [("India", "New Delhi"), ("Iceland", "Reykjavik"),
("Indonesia", "Jakarta"), ("Ireland", "Dublin"),
("Israel", "Jerusalem"), ("Italy", "Rome")]
d = defaultdict(list)
for country, capital in countryList:
d[country].append(capital)
print(d.items())
# Setting the default_factory to int makes the defaultdict useful for counting
quote = 'Vande Mataram'
dd = defaultdict(int)
print(dd)
for chars in quote:
dd[chars] += 1
print(dd.items())
print(dd['T'])
Explanation: Default dict
The standard dictionary includes the method setdefault() for retrieving a value and establishing a default if the value does not exist. By contrast, defaultdict lets the caller specify the default up front when the container is initialized.
End of explanation
import collections
d = collections.deque('Vande Mataram')
print('Deque:', d)
print('Length:', len(d))
print('Left end:', d[0])
print('Right end:', d[-1])
d.remove('e')
print('remove(e):', d)
Explanation: deque — Double-Ended Queue
A double-ended queue, or deque, supports adding and removing elements from either end of the queue. The more commonly used stacks and queues are degenerate forms of deques, where the inputs and outputs are restricted to a single end.
End of explanation
import collections
# Add to the right
d1 = collections.deque()
d1.extend('Vande')
print('extend :', d1)
for a in " Mataram":
d1.append(a)
d1.extend(" !!!")
print('append :', d1)
d1.extendleft(" #!* ")
print('append :', d1)
# Add to the left
d2 = collections.deque()
d2.extendleft(range(6))
print('extendleft:', d2)
d2.appendleft(6)
print('appendleft:', d2)
Explanation: Adding
End of explanation
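The "Consuming" heading further below has no accompanying snippet in this notebook, so here is a minimal sketch of consuming from both ends of a deque (and rotating it):
```python
import collections

d = collections.deque('abcdefg')
print('right end :', d.pop())      # removes and returns 'g'
print('left end  :', d.popleft())  # removes and returns 'a'
d.rotate(2)                        # rotate the remaining items two places to the right
print('rotated   :', d)
```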
fruitsCount = {}
fruitsCount["apple"] = 10
fruitsCount["grapes"] = 120
fruitsCount["mango"] = 200
fruitsCount["kiwi"] = 2000
fruitsCount["leeche"] = 20
print(fruitsCount)
for fruit in fruitsCount:
print(fruit)
# Now lets try this with OrderedDict
from collections import OrderedDict as OD
fruitsCount = OD()
fruitsCount["apple"] = 10
fruitsCount["grapes"] = 120
fruitsCount["mango"] = 200
fruitsCount["kiwi"] = 2000
fruitsCount["leeche"] = 20
print(fruitsCount)
for fruit in fruitsCount:
print(fruit)
Explanation: Consuming
OrderedDict
It is a dictionary subclass that remembers the order in which its contents are added.
Let's start with a normal dictionary:
End of explanation
from collections import namedtuple
Point = namedtuple("India", ['x', 'y', "z"]) # Defining the namedtuple
p = Point(10, y=20, z = 30) # Creating an object
print(p)
print(p.x + p.y + p.z)
p[0] + p[1] # Accessing the values in normal way
x, y, z = p # Unpacking the tuple
print(x)
print(y)
Explanation: namedtuple
Named tuples give meaning to each position in a tuple and allow us to write more readable, self-documenting code. You can use them in any place where you are using tuples. In the example we will create a namedtuple to hold information for points.
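namedtuple instances also carry a few helper attributes — _fields, _replace() and _asdict() — shown in this small sketch, reusing the Point class and the p instance defined above:
```python
print(Point._fields)      # ('x', 'y', 'z')
p2 = p._replace(z=99)     # returns a new tuple with one field changed
print(p2)
print(p._asdict())        # the fields as an (ordered) dict
```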
End of explanation |
10,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a bqplot recreation of Mike Bostock's Wealth of Nations. This was also done by Gapminder. It is originally based on a TED Talk by Hans Rosling.
Step1: Cleaning and Formatting JSON Data
Step2: Creating the Tooltip to display the required fields
bqplot's native Tooltip allows us to simply display the data fields we require on a mouse-interaction.
Step3: Creating the Label to display the year
Staying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.
Step4: Defining Axes and Scales
The inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.
Step5: Creating the Scatter Mark with the appropriate size and color parameters passed
To generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.
Step6: Creating the Figure
Step7: Using a Slider to allow the user to change the year and a button for animation
Here we see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.
Step8: When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.
Step9: On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.
Step10: Add an animation button
Step11: Displaying the GUI | Python Code:
import pandas as pd
import numpy as np
import os
from bqplot import (
LogScale, LinearScale, OrdinalColorScale, ColorAxis,
Axis, Scatter, Lines, CATEGORY10, Label, Figure, Tooltip
)
from ipywidgets import HBox, VBox, IntSlider, Play, jslink
initial_year = 1800
Explanation: This is a bqplot recreation of Mike Bostock's Wealth of Nations. This was also done by Gapminder. It is originally based on a TED Talk by Hans Rosling.
End of explanation
data = pd.read_json(os.path.abspath('../data_files/nations.json'))
def clean_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data = data.drop(data[data[column].apply(len) <= 4].index)
return data
def extrap_interp(data):
data = np.array(data)
x_range = np.arange(1800, 2009, 1.)
y_range = np.interp(x_range, data[:, 0], data[:, 1])
return y_range
def extrap_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data[column] = data[column].apply(extrap_interp)
return data
data = clean_data(data)
data = extrap_data(data)
income_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max))
life_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max))
pop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max))
def get_data(year):
year_index = year - 1800
income = data['income'].apply(lambda x: x[year_index])
life_exp = data['lifeExpectancy'].apply(lambda x: x[year_index])
pop = data['population'].apply(lambda x: x[year_index])
return income, life_exp, pop
Explanation: Cleaning and Formatting JSON Data
End of explanation
tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])
Explanation: Creating the Tooltip to display the required fields
bqplot's native Tooltip allows us to simply display the data fields we require on a mouse-interaction.
End of explanation
year_label = Label(x=[0.75], y=[0.10], default_size=46, font_weight='bolder', colors=['orange'],
text=[str(initial_year)], enable_move=True)
Explanation: Creating the Label to display the year
Staying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.
End of explanation
x_sc = LogScale(min=min(200, income_min), max=income_max)
y_sc = LinearScale(min=life_exp_min, max=life_exp_max)
c_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])
size_sc = LinearScale(min=pop_min, max=pop_max)
ax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='left', grid_lines='solid')
ticks = [2, 4, 6, 8, 10]
income_ticks = [t*100 for t in ticks] + [t*1000 for t in ticks] + [t*10000 for t in ticks]
ax_x = Axis(label='Income per Capita', scale=x_sc, grid_lines='solid', tick_format='~s', tick_values=income_ticks)
Explanation: Defining Axes and Scales
The inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.
End of explanation
# Start with the first year's data
cap_income, life_exp, pop = get_data(initial_year)
wealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,
names=data['name'], display_names=False,
scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},
default_size=4112, tooltip=tt, animate=True, stroke='Black',
unhovered_style={'opacity': 0.5})
nation_line = Lines(x=data['income'][0], y=data['lifeExpectancy'][0], colors=['Gray'],
scales={'x': x_sc, 'y': y_sc}, visible=False)
Explanation: Creating the Scatter Mark with the appropriate size and color parameters passed
To generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.
End of explanation
time_interval = 10
fig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],
title='Health and Wealth of Nations', animation_duration=time_interval)
Explanation: Creating the Figure
End of explanation
year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)
Explanation: Using a Slider to allow the user to change the year and a button for animation
Here we see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.
End of explanation
def hover_changed(change):
if change.new is not None:
nation_line.x = data[data['name'] == wealth_scat.names[change.new]]['income'].values[0]
nation_line.y = data[data['name'] == wealth_scat.names[change.new]]['lifeExpectancy'].values[0]
nation_line.visible = True
else:
nation_line.visible = False
wealth_scat.observe(hover_changed, 'hovered_point')
Explanation: When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.
End of explanation
def year_changed(change):
wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)
year_label.text = [str(year_slider.value)]
year_slider.observe(year_changed, 'value')
Explanation: On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.
End of explanation
play_button = Play(min=1800, max=2008, interval=time_interval)
jslink((play_button, 'value'), (year_slider, 'value'))
Explanation: Add an animation button
End of explanation
VBox([HBox([play_button, year_slider]), fig])
Explanation: Displaying the GUI
End of explanation |
10,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Introduction
Doppler cooling is one of the most important experimental techniques in cold atom science. Perhaps it's indicative of the impact of this technology that at least five of the names associated with the early days of laser cooling (Wineland, Hänsch, Chu, Cohen-Tannoudji, and Phillips) went on to earn Nobel Prizes in Physics, most of them directly for laser cooling and others for their lifetime contributions to cold atom science. William D. Phillips' Nobel lecture gives a good overview of some of the ways in which laser cooling has had an impact in low energy physics and other areas of physics research.
In this notebook we explore how Doppler cooling, the simplest form of laser cooling, can be modeled in the coldatoms library.
Central to laser cooling is the radiation pressure force generated by photons resonantly scattering off of atoms. For a two level atom the scattering rate is given by
$$
\dot{n} = S\frac{\gamma_0}{2\pi}\frac{\left(\gamma_0/2\right)^2}{(\gamma_0/2)^2(1+2S)+\Delta^2(\mathbf{v})}
$$
In this equation, $S$ is the intensity of the laser in units of the saturation intensity, $2\pi/\gamma_0$ is the excited state lifetime, and $\Delta$ is the detuning of the laser frequency from the atomic resonance frequency.
Each time the atom absorbs and reemits a photon it receives a recoil kick. Every absorbed photon comes out of the beam and therefore has a well defined momentum ($\hbar \mathbf{k}$ where $\mathbf{k}$ is the wavevector of the laser). The emitted photons travel in more or less random directions and the force due to them therefore averages approximately to zero. The net result is a force on the atom along the direction of propagation of the laser beam and a fluctuating force that is more or less isotropic.
Now comes the important part that allows us to use this force for cooling
Step3: The following figure shows a contour plot of such a beam originating at $\mathbf{x}_0=(1,1,0)^T$ and propagating in the $\mathbf{k}=(1, 1, 0)^T$ direction.
Step4: Besides the intensity we also need to tell RadiationPressure what the laser-atom detuning is. Here we assume that we only need to account for Doppler shifts and thus we have
$$
\Delta(\mathbf{x}, \mathbf{v}) = \Delta_0-\mathbf{k}\cdot\mathbf{v}.
$$
The frequency $\Delta_0$ is the detuning between laser and atomic transition when the atom is at rest. Here is a class to represent this detuning
Step5: One dimensional laser cooling
As a first example we consider a single atom being laser cooled along the $x$ dimension. We take ${}^{87}\rm{Rb}$
Step6: To represent the 1D MOT we need to add two radiation pressure forces. One for the laser propagating along the $+x$ direction and one propagating along the $-x$ direction. We use an intensity of $S=0.1S_0$ and a beam width of $\sigma = 1{\rm mm}$
Step7: We have the atom start with a velocity of $v_x = 5\rm{m}/\rm{s}$ and we simulate its evolution with three different time step sizes to check if our simulation is converged
Step8: The following plot shows the evolution of the particle's velocity.
Step9: After about $5\rm{ms}$ the particle has been brought to an almost complete stop. The non-exponential shape of the velocity evolution is due to the finite capture range of the cooling lasers. When the atomic velocity is too large the laser is too far detuned from the atomic transition. The atom then doesn't scatter any photons and hence does not get decelerated.
The three simulation traces agree well with one another indicating that the numerical integration is converged.
3D optical molasses
Next we consider fully three dimensional laser cooling as is often used in magneto-optical traps. To obtain cooling in all three dimensions we need three pairs of lasers
Step10: The integration of the atoms' equations of motion is virtually unchanged. We merely have to use the forces due to the three dimensional mot
Step11: When we now consider the $x$ component of the atomic velocity we see that the residual velocity fluctuations are larger. This is because noise from two additional pairs of lasers now feeds into the $x$ component
Step12: The residual velocity fluctuations correspond to the so-called Doppler temperature. They are caused by the atoms absorbing a photon from the "wrong" beam. We find for the standard deviation of the $x$-component of the velocity
Step13: We can compare that with the value that theory predicts based on the Doppler temperature $\sqrt{\langle v_x^2\rangle}$ | Python Code:
class GaussianBeam(object):
A laser beam with a Gaussian intensity profile.
def __init__(self, S0, x0, k, sigma):
Construct a Gaussian laser beam from position, direction, and width.
S0 -- Peak intensity (in units of the saturation intensity).
x0 -- A location on the center of the beam.
k -- Propagation direction of the beam (need not be normalized).
sigma -- 1/e width of the beam.
self.S0 = S0
self.x0 = np.copy(x0)
self.k_hat = k / np.linalg.norm(k)
self.sigma = sigma
def intensities(self, x):
xp = x - self.x0
xperp = xp - np.outer(xp.dot(self.k_hat[:, np.newaxis]), self.k_hat)
return self.S0 * np.exp(-np.linalg.norm(xperp, axis=1)**2/self.sigma**2)
Explanation: Introduction
Doppler cooling is one of the most important experimental techniques in cold atom science. Perhaps it's indicative of the impact of this technology that at least five of the names associated with the early days of laser cooling (Wineland, Hänsch, Chu, Cohen-Tannoudji, and Phillips) went on to earn Nobel Prizes in Physics, most of them directly for laser cooling and others for their lifetime contributions to cold atom science. William D. Phillips' Nobel lecture gives a good overview of some of the ways in which laser cooling has had an impact in low energy physics and other areas of physics research.
In this notebook we explore how Doppler cooling, the simplest form of laser cooling, can be modeled in the coldatoms library.
Central to laser cooling is the radiation pressure force generated by photons resonantly scattering off of atoms. For a two level atom the scattering rate is given by
$$
\dot{n} = S\frac{\gamma_0}{2\pi}\frac{\left(\gamma_0/2\right)^2}{(\gamma_0/2)^2(1+2S)+\Delta^2(\mathbf{v})}
$$
In this equation, $S$ is the intensity of the laser in units of the saturation intensity, $2\pi/\gamma_0$ is the excited state lifetime, and $\Delta$ is the detuning of the laser frequency from the atomic resonance frequency.
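To get a feel for how sharply this rate falls off with detuning (and hence, through the Doppler shift, with velocity), here is a quick illustrative sketch that simply plots the formula above; the values of $S$ and $\gamma_0$ are arbitrary and only serve to show the Lorentzian shape.
```python
import numpy as np
import matplotlib.pyplot as plt

gamma_0 = 1.0          # work in units where gamma_0 = 1
S = 0.1                # illustrative saturation parameter
Delta = np.linspace(-5.0, 5.0, 400) * gamma_0

rate = S * (gamma_0 / (2.0 * np.pi)) * (gamma_0 / 2.0)**2 / (
    (gamma_0 / 2.0)**2 * (1.0 + 2.0 * S) + Delta**2)

plt.plot(Delta / gamma_0, rate)
plt.xlabel(r'$\Delta/\gamma_0$')
plt.ylabel(r'$\dot{n}$ (arb. units)')
```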
Each time the atom absorbs and reemits a photon it receives a recoil kick. Every absorbed photon comes out of the beam and therefore has a well defined momentum ($\hbar \mathbf{k}$ where $\mathbf{k}$ is the wavevector of the laser). The emitted photons travel in more or less random directions and the force due to them therefore averages approximately to zero. The net result is a force on the atom along the direction of propagation of the laser beam and a fluctuating force that is more or less isotropic.
Now comes the important part that allows us to use this force for cooling: The detuning $\Delta$ of the laser from the atomic transition is velocity dependent due to the Doppler shift. In free space we have
$$
\Delta(\mathbf{v}) = \omega_L - \omega_A - \mathbf{k}\cdot\mathbf{v}
$$
If we then "red detune" the laser i.e. we choose a laser frequency such that $\omega_L<\omega_A$, it is easy to see that atoms moving towards the laser with $\mathbf{k}\cdot\mathbf{v}<0$, will scatter more photons than atoms moving away from the laser. They will hence experience a decelerating force if they're moving towards the laser beam.
"Wait a second" you say. "That's all good and fine. The atoms are slowed down if they're moving in the opposite direction as the laser's propagation direction. But what if they move in the direction of propagation of the laser. Won't they get accelerated in that case?"
You are correct! One laser beam can only slow down the atoms if they're going one way. To slow them down going either direction we need a second laser that is propagating in the opposite direction. By combining six such lasers, a pair for each Cartesian direction, we can achieve cooling of the atoms' motion in all three spatial directions.
We have neglected the fluctuating force due to the emitted photons so far. Unfortunately these recoil kicks never cancel completely because they are uncorrelated with one another. The recoil kicks make the atoms undergo a random walk in momentum space. They are a heating mechanism.
The balance between the cooling rate due to the coherent friction force from absorption and the heating due to the incoherent emission events corresponds to the lowest temperature that can be achieved by Doppler cooling. This temperature is called the Doppler temperature. We will determine it computationally later in this notebook.
It may be worth mentioning that many laser cooling schemes exist that are able to achieve temperatures lower than the Doppler limit. We will not consider these so called sub-Doppler schemes here.
Doppler cooling in coldatoms
So how do we simulate Doppler cooling using the coldatoms library? The answer is that we use the RadiationPressure force. This force mimics the momentum recoil due to absorption and emission of photons in resonance fluorescence. The RadiationPressure force is completely determined by the excited state decay rate $\gamma_0$, the driving laser's wavevector $\mathbf{k}$, the laser intensity, and the detuning of the laser from the atomic transition frequency. The intensity is a function of the atomic position because the laser intensity varies throughout space. The detuning depends on the atomic position and velocity. It depends on position because external fields may lead to shifts of the atomic transition frequency (e.g. magnetic fields leading to Zeeman shifts) and it depends on velocity via the Doppler shift.
In this notebook we consider a well collimated Gaussian laser beam. It has an intensity profile given by
$$
S(\mathbf{x})=S_0e^{-x_\perp^2/\sigma^2}
$$
where $S_0$ is the peak intensity of the beam, $\sigma$ is the width of the beam, and $x_\perp=\left|\mathbf{x}-\mathbf{x}_0 - \left((\mathbf{x}-\mathbf{x}_0)\cdot \hat{\mathbf{k}}\right)\hat{\mathbf{k}}\right|$ is the distance of the atom from the center line of the beam. We represent the intensity by the following Python class:
End of explanation
beam = GaussianBeam(1.0, np.array([1.0,1.0,0.0]),np.array([1.0, 1.0, 0.0]), 1.0)
num_pts = 10
x_min = -3
x_max = 3
y_min = -3
y_max = 3
x = np.linspace(x_min, x_max, num_pts)
y = np.linspace(x_min, x_max, num_pts)
pts = np.array([[x[i], y[j], 0] for i in range(num_pts) for j in range(num_pts)])
intensities = beam.intensities(pts).reshape(num_pts, num_pts)
xx, yy = np.meshgrid(x, y)
plt.contour(xx, yy, intensities)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
Explanation: The following figure shows a contour plot of such a beam originating at $\mathbf{x}_0=(1,1,0)^T$ and propagating in the $\mathbf{k}=(1, 1, 0)^T$ direction.
End of explanation
class DopplerDetuning(object):
def __init__(self, Delta0, k):
self.Delta0 = Delta0
self.k = np.copy(k)
def detunings(self, x, v):
return self.Delta0 - np.inner(self.k, v)
detuning = DopplerDetuning(-0.5, (1, 0, 0))
Explanation: Besides the intensity we also need to tell RadiationPressure what the laser-atom detuning is. Here we assume that we only need to account for Doppler shifts and thus we have
$$
\Delta(\mathbf{x}, \mathbf{v}) = \Delta_0-\mathbf{k}\cdot\mathbf{v}.
$$
The frequency $\Delta_0$ is the detuning between laser and atomic transition when the atom is at rest. Here is a class to represent this detuning:
End of explanation
ensemble = coldatoms.Ensemble(num_ptcls=1)
ensemble.ensemble_properties['mass'] = 87*1.67e-27
wavelength = 780.0e-9
k = 2.0 * np.pi / wavelength
gamma = 2.0 * np.pi * 6.1e6
hbar = 1.0e-34
sigma = 1.0e-3
Explanation: One dimensional laser cooling
As a first example we consider a single atom being laser cooled along the $x$ dimension. We take ${}^{87}\rm{Rb}$:
End of explanation
one_d_mot = [
coldatoms.RadiationPressure(gamma, np.array([hbar * k, 0.0, 0.0]),
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=np.array([k, 0.0, 0.0]), sigma=sigma),
DopplerDetuning(-0.5 * gamma, np.array([k, 0.0, 0.0]))),
coldatoms.RadiationPressure(gamma, np.array([-hbar * k, 0.0, 0.0]),
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=np.array([-k, 0.0, 0.0]), sigma=sigma),
DopplerDetuning(-0.5 * gamma, np.array([-k, 0.0, 0.0])))]
Explanation: To represent the 1D MOT we need to add two radiation pressure forces. One for the laser propagating along the $+x$ direction and one propagating along the $-x$ direction. We use an intensity of $S=0.1S_0$ and a beam width of $\sigma = 1{\rm mm}$:
End of explanation
velocities = []
times = []
# Loop over time step sizes.
for i in range(3):
# Reset positions and velocities to initial conditions.
ensemble.x *= 0.0
ensemble.v *= 0.0
ensemble.v[0, 0] = 5.0e0
# The time step size.
dt = 1.0e-5 / 2**i
v = []
t = []
# Now do the time integration and record velocities and times.
for i in range(2000 * 2**i):
v.append(ensemble.v[0, 0])
t.append(i * dt)
coldatoms.drift_kick(dt=dt, ensemble=ensemble, forces=one_d_mot)
velocities.append(v)
times.append(t)
Explanation: We have the atom start with a velocity of $v_x = 5\rm{m}/\rm{s}$ and we simulate its evolution with three different time step sizes to check if our simulation is converged:
End of explanation
plt.figure()
for i in range(3):
plt.plot(1.0e3 * np.array(times[i]), velocities[i])
plt.xlabel(r'$t/\rm{ms}$')
plt.ylabel(r'$v_x/(\rm{m}/\rm{s})$')
Explanation: The following plot shows the evolution of the particle's velocity.
End of explanation
three_d_mot = [
coldatoms.RadiationPressure(gamma, hbar * kp,
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=kp, sigma=1.0e-3),
DopplerDetuning(-0.5 * gamma, kp)) for kp in [
np.array([k, 0.0, 0.0]),
np.array([-k, 0.0, 0.0]),
np.array([0.0, k, 0.0]),
np.array([0.0, -k, 0.0]),
np.array([0.0, 0.0, k]),
np.array([0.0, 0.0, -k]),
]]
Explanation: After about $5\rm{ms}$ the particle has been brought to an almost complete stop. The non-exponential shape of the velocity evolution is due to the finite capture range of the cooling lasers. When the atomic velocity is too large the laser is too far detuned from the atomic transition. The atom then doesn't scatter any photons and hence does not get decelerated.
The three simulation traces agree well with one another indicating that the numerical integration is converged.
3D optical molasses
Next we consider fully three dimensional laser cooling as is often used in magneto-optical traps. To obtain cooling in all three dimensions we need three pairs of lasers:
End of explanation
velocities_3d = []
times_3d = []
# Loop over time step sizes.
for i in range(3):
    # Reset positions and velocities to initial conditions.
    ensemble.x *= 0.0
    ensemble.v *= 0.0
    ensemble.v[0, 0] = 5.0e0
    # The time step size.
    dt = 1.0e-5 / 2**i
    v = []
    t = []
    # Now do the time integration and record velocities and times.
    for j in range(3000 * 2**i):
        v.append(ensemble.v[0, 0])
        t.append(j * dt)
        coldatoms.drift_kick(dt=dt, ensemble=ensemble, forces=three_d_mot)
    velocities_3d.append(v)
    times_3d.append(t)
Explanation: The integration of the atoms' equations of motion is virtually unchanged. We merely have to use the forces due to the three dimensional mot:
End of explanation
plt.figure()
for i in range(3):
plt.plot(1.0e3 * np.array(times_3d[i]), velocities_3d[i])
plt.xlabel(r'$t/\rm{ms}$')
plt.ylabel(r'$v_x/(\rm{m}/\rm{s})$')
Explanation: When we now consider the $x$ component of the atomic velocity we see that the residual velocity fluctuations are larger. This is because now noise from two additional pairs of lasers feed into the $x$ component:
End of explanation
tmin = [500, 1000, 2000]
for i in range(3):
print(np.std(np.array(velocities_3d[i][tmin[i]:])))
Explanation: The residual velocity fluctuations correspond to the so-called Doppler temperature. They are caused by the atoms absorbing a photon from the "wrong" beam. We find for the standard deviation of the $x$-component of the velocity:
End of explanation
np.sqrt(hbar * gamma / ensemble.ensemble_properties['mass'] / 2.0 / 3.0)
Explanation: We can compare that with the value that theory predicts based on the Doppler temperature $\sqrt{\langle v_x^2\rangle}$:
End of explanation |
10,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data analysis tools - Pearson correlation coefficient
For Week 3 assignment I'm testing association between average country income per person and average oil usage rate.
Step1: Following block draws the diagram
Step2: The diagram clearly indicates that the association between income per person and oil usage per person is positive but does not look very strong. | Python Code:
%matplotlib inline
import pandas
import numpy
import seaborn
import scipy.stats
import matplotlib.pyplot as plt
data = pandas.read_csv('gapminder.csv', low_memory=False)
data['oilperperson'] = pandas.to_numeric(data['oilperperson'], errors='coerce')
data['incomeperperson'] = pandas.to_numeric(data['incomeperperson'], errors='coerce')
data['incomeperperson']=data['incomeperperson'].replace(' ', numpy.nan)
Explanation: Data analysis tools - Pearson correlation coefficient
For Week 3 assignment I'm testing association between average country income per person and average oil usage rate.
End of explanation
scat1 = seaborn.regplot(x="incomeperperson", y="oilperperson", fit_reg=True, data=data)
plt.xlabel('Income per person')
plt.ylabel('Oil usage per person')
plt.title('Scatterplot for the Association Between "Income per person" and "Oil per person"')
plt.show()
Explanation: Following block draws the diagram
End of explanation
data_clean=data.dropna()
pearson_coeff, p_value = scipy.stats.pearsonr(data_clean['incomeperperson'], data_clean['oilperperson'])
print ('Association between "Income per person" and "Oil per person" values')
print "Pearson correlation coefficient: ", pearson_coeff
print "P_Value: ", p_value
Explanation: The diagram clearly indicates that the association between income per person and oil usage per person is positive but does not look very strong.
End of explanation |
10,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 6 – Decision Trees
This notebook contains all the sample code and solutions to the exercises in chapter 6.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Training and visualizing
Step2: Predicting classes and class probabilities
Step3: Sensitivity to training set details
Step4: Regression trees | Python Code:
#Advanced: Using other libs...
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "decision_trees"
def image_path(fig_id):
return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id)
def save_fig(fig_id, tight_layout=True):
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300)
Explanation: Chapter 6 – Decision Trees
This notebook contains all the sample code and solutions to the exercises in chapter 6.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz
iris = load_iris()
X = iris.data[:, 2:] # petal length and width
y = iris.target
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)
export_graphviz(
tree_clf,
out_file=image_path("iris_tree.dot"),
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
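# (Added note) The .dot file exported above can be rendered to an image with the
# Graphviz command-line tool, assuming it is installed, e.g.:
#   dot -Tpng images/decision_trees/iris_tree.dot -o images/decision_trees/iris_tree.png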
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10)
if not iris:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
if plot_training:
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
plt.axis(axes)
if iris:
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
else:
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
if legend:
plt.legend(loc="lower right", fontsize=14)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf, X, y)
plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2)
plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2)
plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2)
plt.text(1.40, 1.0, "Depth=0", fontsize=15)
plt.text(3.2, 1.80, "Depth=1", fontsize=13)
plt.text(4.05, 0.5, "(Depth=2)", fontsize=11)
save_fig("decision_tree_decision_boundaries_plot")
plt.show()
Explanation: Training and visualizing
End of explanation
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
Explanation: Predicting classes and class probabilities
End of explanation
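# Illustrative aside (added; not in the original notebook): the probabilities
# returned by predict_proba above are simply the class fractions of the
# training instances in the leaf that the sample [5, 1.5] falls into. We can
# verify this directly through scikit-learn's fitted tree_ attribute.
leaf_id = tree_clf.apply([[5, 1.5]])[0]
leaf_counts = tree_clf.tree_.value[leaf_id][0]
print(leaf_counts / leaf_counts.sum())  # should match tree_clf.predict_proba([[5, 1.5]])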
X[(X[:, 1]==X[:, 1][y==1].max()) & (y==1)] # widest Iris-Versicolor flower
not_widest_versicolor = (X[:, 1]!=1.8) | (y==2)
X_tweaked = X[not_widest_versicolor]
y_tweaked = y[not_widest_versicolor]
tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40)
tree_clf_tweaked.fit(X_tweaked, y_tweaked)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False)
plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2)
plt.plot([0, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.text(1.0, 0.9, "Depth=0", fontsize=15)
plt.text(1.0, 1.80, "Depth=1", fontsize=13)
save_fig("decision_tree_instability_plot")
plt.show()
from sklearn.datasets import make_moons
Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53)
deep_tree_clf1 = DecisionTreeClassifier(random_state=42)
deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42)
deep_tree_clf1.fit(Xm, ym)
deep_tree_clf2.fit(Xm, ym)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("No restrictions", fontsize=16)
plt.subplot(122)
plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14)
save_fig("min_samples_leaf_plot")
plt.show()
angle = np.pi / 180 * 20
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xr = X.dot(rotation_matrix)
tree_clf_r = DecisionTreeClassifier(random_state=42)
tree_clf_r.fit(Xr, y)
plt.figure(figsize=(8, 3))
plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False)
plt.show()
rnd.seed(6)
Xs = rnd.rand(100, 2) - 0.5
ys = (Xs[:, 0] > 0).astype(np.float32) * 2
angle = np.pi / 4
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xsr = Xs.dot(rotation_matrix)
tree_clf_s = DecisionTreeClassifier(random_state=42)
tree_clf_s.fit(Xs, ys)
tree_clf_sr = DecisionTreeClassifier(random_state=42)
tree_clf_sr.fit(Xsr, ys)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
plt.subplot(122)
plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
save_fig("sensitivity_to_rotation_plot")
plt.show()
Explanation: Sensitivity to training set details
End of explanation
from sklearn.tree import DecisionTreeRegressor
# Quadratic training set + noise
rnd.seed(42)
m = 200
X = rnd.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + rnd.randn(m, 1) / 10
tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
y_pred = tree_reg.predict(x1)
plt.axis(axes)
plt.xlabel("$x_1$", fontsize=18)
if ylabel:
plt.ylabel(ylabel, fontsize=18, rotation=0)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)
plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)
save_fig("tree_regression_plot")
plt.show()
export_graphviz(
tree_reg1,
out_file=image_path("regression_tree.dot"),
feature_names=["x1"],
rounded=True,
filled=True
)
tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)
plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)
save_fig("tree_regression_regularization_plot")
plt.show()
Explanation: Regression trees
End of explanation |
10,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes on the LRIS Blue reduction
Step1: Detectors
Note
Step2: Display Raw LRIS image in Ginga | Python Code:
# imports
import os, sys
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
sys.path.append(os.path.abspath('/Users/xavier/local/Python/PYPIT/src'))
import arload as pyp_arload
import ario as pyp_ario
Explanation: Notes on the LRIS Blue reduction
End of explanation
fil = '/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2033.fits.gz'
hdu = fits.open(fil)
hdu.info()
head0 = hdu[0].header
head0['OBSTYPE']
head0
#head0['DATE']
plt.clf()
plt.imshow(hdu[1].data)
plt.show()
Explanation: Detectors
Note: LRISb has employed different detectors. We may need to
make PYPIT backwards compatible.
FITS file
End of explanation
### Need to port readmhdufits
head0
reload(pyp_ario)
img, head = pyp_ario.read_lris('/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2070.fits',TRIM=True)
xdb.ximshow(img)
import subprocess
subprocess.call(["touch", "dum.fil"])
b = 'as'
'{0:s}'.format(b)
range(1,5)
tmp = np.ones((10,20))
tmp[0:1,:].shape
Explanation: Display Raw LRIS image in Ginga
End of explanation |
10,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning
LA Team Submission 3
Lukas Mosser, Alfredo De la Fuente
In this Python notebook we explore a facies classification model using Deep Neural Networks that take spatial dependencies into account, aiming to outperform the prediction model proposed in the facies prediction from well logs challenge.
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are
Step1: We load the training and testing data to preprocess it for further analysis.
Step2: We fill the missing data values in the PE field with zero and proceed to normalize the data that will be fed into our model.
Step3: To start the training stage, we format the data using a sliding window over the depth component, so that a given set of features can be classified at a specific depth for each well in the training set.
Step4: Data Analysis
We will experiment with an ensemble of classification models to improve the accuracy of facies prediction. As input we consider a set of features extracted by padding a depth interval segment, so that spatial dependencies are taken into account. As output we obtain a vector of values in the range [0-8] that indicates the facies present at each depth.
Step5: In order to evaluate our classification model accuracy we will use the following metrics, based on the confusion matrix once the classification is performed. The first metric only considers misclassification error and the second one takes into account the fact that facies could be misclassified if they belong to the same group with similar geological characteristics.
Step6: Model 1 - Deep Neural Network
Our model is a deep neural network composed of an input layer, two hidden layers, and an output layer.
Step7: Once the set of parameters are fixed, the training stage of our model begins. We perform a Cross Validation routine to evaluate the performance of the model.
Step8: Prediction
We obtain the predictions for test data. | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
Explanation: Facies classification using Machine Learning
LA Team Submission 3
Lukas Mosser, Alfredo De la Fuente
In this Python notebook we explore a facies classification model using Deep Neural Networks that take spatial dependencies into account, aiming to outperform the prediction model proposed in the facies prediction from well logs challenge.
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Data Preprocessing
Let's import all the libraries that will be particularly needed for the analysis.
End of explanation
filename = 'train_test_data.csv'
data = pd.read_csv(filename)
data.head(10)
Explanation: We load the training and testing data to preprocess it for further analysis.
End of explanation
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Fill missing values and normalize for 'PE' field
data['PE'] = data['PE'].fillna(value=0)
mean_pe = data['PE'].mean()
std_pe = data['PE'].std()
data['PE'] = (data['PE']-mean_pe)/std_pe
# Normalize the rest of fields (GR, ILD_log10, DelthaPHI, PHIND,NM_M,RELPOS)
correct_facies_labels = data['Facies'].values
feature_vectors = data.drop(['Formation', 'Depth'], axis=1)
well_labels = data[['Well Name', 'Facies']].values
data_vectors = feature_vectors.drop(['Well Name', 'Facies'], axis=1).values
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
data_out = np.hstack([well_labels, scaled_features])
Explanation: We fill the missing data values in the PE field with zero and proceed to normalize the data that will be fed into our model.
End of explanation
def preprocess(data_out):
data = data_out
well_data = {}
well_names = list(set(data[:, 0]))
for name in well_names:
well_data[name] = [[], []]
for row in data:
well_data[row[0]][1].append(row[1])
well_data[row[0]][0].append(list(row[2::]))
# Sliding window
positive_lag = 0
negative_lag = 0
chunks = []
chunks_test = []
chunk_length = positive_lag+negative_lag+1
chunks_facies = []
for name in well_names:
if name not in ['STUART', 'CRAWFORD']:
test_well_data = well_data[name]
log_values = np.array(test_well_data[0])
facies_values = np.array(test_well_data[1])
for i in range(log_values.shape[0]):
chunks.append(log_values[i:i+1, :])
chunks_facies.append(facies_values[i])
else:
test_well_data = well_data[name]
log_values = np.array(test_well_data[0])
facies_values = np.array(test_well_data[1])
for i in range(log_values.shape[0]):
chunks_test.append(log_values[i:i+1, :])
chunks_facies = np.array(chunks_facies, dtype=np.int32)-1
X_ = np.array(chunks)
X = np.zeros((len(X_),len(X_[0][0]) * len(X_[0])))
for i in range(len(X_)):
X[i,:] = X_[i].flatten()
X_test = np.array(chunks_test)
X_test_out = np.zeros((len(X_test),len(X_test[0][0]) * len(X_test[0])))
for i in range(len(X_test)):
X_test_out[i,:] = X_test[i].flatten()
y = np_utils.to_categorical(chunks_facies)
return X, y, X_test_out
Explanation: To start the training stage, we format the data using a sliding window over the depth component, so that a given set of features can be classified at a specific depth for each well in the training set.
End of explanation
# Reproducibility
np.random.seed(7)
# Load data
X_train, y_train, X_test = preprocess(data_out)
# Obtain labels
y_labels = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_labels[i] = np.argmax(y_train[i])
y_labels = y_labels.astype(int)
Explanation: Data Analysis
We will experiment with an ensemble of classification models to improve the accuracy of facies prediction. As input we consider a set of features extracted by padding a depth interval segment, so that spatial dependencies are taken into account. As output we obtain a vector of values in the range [0-8] that indicates the facies present at each depth.
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
Explanation: In order to evaluate our classification model accuracy we will use the following metrics, based on the confusion matrix once the classification is performed. The first metric only considers misclassification error and the second one takes into account the fact that facies could be misclassified if they belong to the same group with similar geological characteristics.
End of explanation
# Set parameters
input_dim = 77
hidden_dim_1 = 200
hidden_dim_2 = 50
output_dim = 9
batch_size = 32
nb_epoch = 10
def dnn_model():
# Define the model
model = Sequential()
model.add(Dense(200, input_dim=7, init='normal', activation='relu'))
model.add(Dense(50, input_dim=200, init='normal', activation='relu'))
model.add(Dense(9, init='normal', activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
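# Illustrative sanity check (not part of the original submission): print the
# layer shapes and parameter counts of the network defined above.
dnn_model().summary()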
Explanation: Model 1 - Deep Neural Network
Our model is a deep neural network composed of an input layer, two hidden layers, and an output layer.
End of explanation
# Cross Validation
estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=10, batch_size=1, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))
print(' Cross Validation Results')
print( results_dnn )
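# Added summary (illustrative): report the cross-validation scores as mean +/- std.
print('CV accuracy: %.4f +/- %.4f' % (results_dnn.mean(), results_dnn.std()))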
# Load the model
model_dnn = dnn_model()
#Train model
model_dnn.fit(X_train, y_train, nb_epoch=10, verbose=2)
# Predict Values on Training set
y_predicted = model_dnn.predict( X_train , batch_size=1, verbose=2)
# Print Report
# Format output [0 - 8 ]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
Explanation: Once the set of parameters are fixed, the training stage of our model begins. We perform a Cross Validation routine to evaluate the performance of the model.
End of explanation
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=100, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_3.csv')
Explanation: Prediction
We obtain the predictions for test data.
End of explanation |
10,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with MNE/dSPM/sLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA on evoked/raw/epochs data.
Step1: Process MEG data
Step2: Compute regularized noise covariance
For more details see tut_compute_covariance.
Step3: Compute the evoked response
Step4: Inverse modeling
Step5: Compute inverse solution
Step6: Visualization
View activation time-series
Step7: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
Step8: Morph data to average brain
Step9: Dipole orientations
The pick_ori parameter of the | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
write_inverse_operator)
# sphinx_gallery_thumbnail_number = 9
Explanation: Source localization with MNE/dSPM/sLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA on evoked/raw/epochs data.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_r=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
exclude='bads')
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, reject=reject)
Explanation: Process MEG data
End of explanation
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'])
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
Explanation: Compute regularized noise covariance
For more details see tut_compute_covariance.
End of explanation
evoked = epochs.average()
evoked.plot()
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag')
# Show whitening
evoked.plot_white(noise_cov)
Explanation: Compute the evoked response
End of explanation
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, surf_ori=True)
# Restrict forward solution as necessary for MEG
fwd = mne.pick_types_forward(fwd, meg=True, eeg=False)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
inverse_operator)
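# Illustrative check (not in the original tutorial): the operator written above
# can be read back from disk, confirming that the file round-trips.
from mne.minimum_norm import read_inverse_operator
inv_check = read_inverse_operator('sample_audvis-meg-oct-6-inv.fif')
print(inv_check)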
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
End of explanation
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None)
del fwd, epochs # to save memory
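# Quick look at the result (added for clarity): stc is a SourceEstimate whose
# data array has shape (n_sources, n_times).
print(stc)
print(stc.data.shape)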
Explanation: Compute inverse solution
End of explanation
plt.figure()
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
Explanation: Visualization
View activation time-series
End of explanation
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path + '/subjects'
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]),
initial_time=time_max, time_unit='s')
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6)
brain.show_view('lateral')
Explanation: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation
fs_vertices = [np.arange(10242)] * 2 # fsaverage is special this way
morph_mat = mne.compute_morph_matrix('sample', 'fsaverage', stc.vertices,
fs_vertices, smooth=None,
subjects_dir=subjects_dir)
stc_fsaverage = stc.morph_precomputed('fsaverage', fs_vertices, morph_mat)
brain_fsaverage = stc_fsaverage.plot(
surface='inflated', hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), initial_time=time_max,
time_unit='s', size=(800, 800), smoothing_steps=5)
brain_fsaverage.show_view('lateral')
Explanation: Morph data to average brain
End of explanation
stc_vec = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori='vector')
stc_vec.plot(hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]),
initial_time=time_max, time_unit='s')
Explanation: Dipole orientations
The pick_ori parameter of the
:func:mne.minimum_norm.apply_inverse function controls
the orientation of the dipoles. One useful setting is pick_ori='vector',
which will return an estimate that does not only contain the source power at
each dipole, but also the orientation of the dipoles.
End of explanation |
10,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hydrometrics
In this notebook, we show how to compute several hydrometric parameters based on the stream network produced by a model. The analysis relies on the flow (i.e. stream) files found in Badlands outputs. If you are interested in looking at morphometrics and stratigraphic analysis, there are other notebooks specially designed for that in the Badlands companion repository.
Hydrometrics here refers only to the quantitative description and analysis of the water surface; we don't consider groundwater analysis. We will show how you can extract a particular catchment from a given model and compute for this catchment a series of parameters such as
Step1: 1. Load catchments parameters
We first have to define the path to the Badlands outputs we want to analyse. In addition, Badlands creates several files for each processor that has been used; you need to specify this number in the ncpus variable.
We then need to provide a point coordinates (X,Y) contained in the catchment of interest. This point doesn't need to be the outlet of the catchment.
For more information regarding the function uncomment the following line.
Step2: 2. Extract particular catchment dataset
We now extract the data for a particular time step (timestep) and for the catchment of interest, which contains the point specified in the previous function.
Note
If you are interested in making some hydrometric comparisons between different time steps you can create multiple instances of the hydrometrics python class each of them associated to a given time step.
Step3: We can visualise the stream network using the viewNetwork function. The following parameters can be displayed
Step4: 3. Extract catchment main stream
We now extract the main stream for the considered catchment based on flow
discharge values.
Step5: As for the global stream network, you can use the viewStream function to visualise the main stream dataset.
Step6: 4. Compute main stream hydrometrics
Here, we compute the stream parameters using the distance from outlet and the Badlands simulation coefficients for the stream power law and the hillslope linear diffusion.
The formulation for the Peclet number is
Step7: The following combination of parameters can be visualised with the viewPlot function
Step8: 5. River profile through time
Using the same functions as before we can now create the river profile evolution through time and plot it on a single graph. | Python Code:
%matplotlib inline
from matplotlib import cm
# Import badlands grid generation toolbox
import pybadlands_companion.hydroGrid as hydr
# display plots in SVG format
%config InlineBackend.figure_format = 'svg'
Explanation: Hydrometrics
In this notebook, we show how to compute several hydrometric parameters based on the stream network produced by a model. The analysis relies on the flow (i.e. stream) files found in Badlands outputs. If you are interested in looking at morphometrics and stratigraphic analysis, there are other notebooks specially designed for that in the Badlands companion repository.
Hydrometrics here refers only to the quantitative description and analysis of the water surface; we don't consider groundwater analysis. We will show how you can extract a particular catchment from a given model and compute for this catchment a series of parameters such as:
river profile evolution based on main stream elevation and distance to outlet,
peclet number distribution which evaluates the dominant processes shaping the landscape,
$\chi$ parameter that characterizes rivers system evolution based on terrain steepness and the arrangement of tributaries,
discharge profiles
End of explanation
#help(hydr.hydroGrid.__init__)
hydro1 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
hydro2 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
Explanation: 1. Load catchments parameters
We first have to define the path to the Badlands outputs we want to analyse. In addition, Badlands creates several files for each processor that has been used; you need to specify this number in the ncpus variable.
We then need to provide a point coordinates (X,Y) contained in the catchment of interest. This point doesn't need to be the outlet of the catchment.
For more information regarding the function uncomment the following line.
End of explanation
#help(hydro.getCatchment)
hydro1.getCatchment(timestep=200)
hydro2.getCatchment(timestep=200)
Explanation: 2. Extract particular catchment dataset
We now extract the data for a particular time step (timestep) and for the catchment of interest, which contains the point specified in the previous function.
Note
If you are interested in making some hydrometric comparisons between different time steps you can create multiple instances of the hydrometrics python class each of them associated to a given time step.
End of explanation
#help(hydro.viewNetwork)
hydro1.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 2')
hydro1.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 2')
Explanation: We can visualise the stream network using the viewNetwork function. The following parameters can be displayed:
- $\chi$ parameter 'chi',
- elevation 'Z',
- discharge 'FA' (logarithmic values)
End of explanation
#help(hydro.extractMainstream)
hydro1.extractMainstream()
hydro2.extractMainstream()
Explanation: 3. Extract catchment main stream
We now extract the main stream for the considered catchment based on flow
discharge values.
End of explanation
#help(hydro.viewStream)
hydro1.viewStream(linePlot = False, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewStream(linePlot = True, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 2')
Explanation: As for the global stream network, you can use the viewStream function to visualise the main stream dataset.
End of explanation
hydro1.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
hydro2.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
Explanation: 4. Compute main stream hydrometrics
Here, we compute the stream parameters using the distance from outlet and the Badlands simulation coefficients for the stream power law and the hillslope linear diffusion.
The formulation for the Peclet number is:
$$Pe =\frac {\kappa_{c}l^{2(m+1)-n}}{\kappa_{d}z^{1-n}}$$
where $\kappa_{c}$ is the erodibility coefficient, $\kappa_{d}$ the hillslope diffusion coefficient and m, n the exponents from the stream power law equation. Their values are defined in your model input file.
The formulation for the $\chi$ parameter follows:
$$\chi = \int_{x_b}^x \left( \frac{A_o}{A(x')} \right)^{m/n} dx' $$
where $A_o$ is an arbitrary scaling area, and the integration is performed upstream from base level to location $x$.
In addition the function computeParams requires an additional parameter num which is the number of samples to generate along the main stream profile for linear interpolation.
End of explanation
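# Illustrative only (not part of the original notebook): evaluating the Peclet
# number formula above with the coefficients passed to computeParams and a
# hypothetical length scale l and relief z.
kd, kc, m, n = 8.e-1, 5.e-6, 0.5, 1.
l, z = 1.e4, 1.e2  # hypothetical values: 10 km length scale, 100 m relief
Pe = kc * l**(2*(m+1)-n) / (kd * z**(1-n))
print('Peclet number for l=%.0f m, z=%.0f m: %.1f' % (l, z, Pe))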
#help(hydro1.viewPlot)
hydro1.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'black', colorMarker = 'black',
opacity = 0.2, title = 'Chi vs distance to outlet')
hydro2.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'orange', colorMarker = 'purple',
opacity = 0.2, title = 'Chi vs distance to outlet')
Explanation: The following combination of parameters can be visualised with the viewPlot function:
- 'dist': distance from catchment outlet
- 'FA': flow discharge (logarithmic)
- 'Pe': Peclet number
- 'Chi': $\chi$ parameter
- 'Z': elevation from outlet.
End of explanation
#help(hydro.timeProfiles)
hydro0 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro0.getCatchment(timestep=timeStp[t])
hydro0.extractMainstream()
hydro0.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=1000)
dist.append(hydro0.dist)
elev.append(hydro0.Zdata)
hydro0.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
hydro00 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro00.getCatchment(timestep=timeStp[t])
hydro00.extractMainstream()
hydro00.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=50)
dist.append(hydro00.dist)
elev.append(hydro00.Zdata)
hydro00.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
Explanation: 5. River profile through time
Using the same functions as before we can now create the river profile evolution through time and plot it on a single graph.
End of explanation |
10,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--NAVIGATION-->
< Account Information | Contents | Trade Management >
Order Management
OANDA REST-V20 API wrapper doc on Order
OANDA API Getting Started
OANDA API Order
Create an Order for an Account
Step1: Get a List of Orders for an Account
Step2: List all Pending Orders in an Account
Step3: Get Details for a Single Order in an Account
Step4: Replace an Order in an Account by simultaneously cancelling it and creating a replacement Order.
Step5: Cancel a pending Order in an Account.
Step6: MKT Order | Python Code:
import pandas as pd
import oandapyV20
import oandapyV20.endpoints.orders as orders
import configparser
config = configparser.ConfigParser()
config.read('../config/config_v20.ini')
accountID = config['oanda']['account_id']
access_token = config['oanda']['api_key']
client = oandapyV20.API(access_token=access_token)
data = {
"order": {
"price": "1.2",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.22"
},
"timeInForce": "GTC",
"instrument": "EUR_USD",
"units": "-100",
"type": "LIMIT",
"positionFill": "DEFAULT"
}
}
r = orders.OrderCreate(accountID, data=data)
client.request(r)
print(r.response)
pd.Series(r.response['orderCreateTransaction'])
Explanation: <!--NAVIGATION-->
< Account Information | Contents | Trade Management >
Order Management
OANDA REST-V20 API wrapper doc on Order
OANDA API Getting Started
OANDA API Order
Create an Order for an Account
End of explanation
r = orders.OrderList(accountID)
client.request(r)
print(r.response)
pd.Series(r.response['orders'][0])
Explanation: Get a List of Orders for an Account
End of explanation
r = orders.OrdersPending(accountID)
client.request(r)
print(r.response)
res = r.response['orders']
print(res)
last_order_id = res[0]['id']
pd.Series(r.response['orders'][0])
Explanation: List all Pending Orders in an Account
End of explanation
r = orders.OrderDetails(accountID=accountID, orderID=last_order_id)
client.request(r)
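# For consistency with the other requests in this notebook, inspect the response.
print(r.response)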
Explanation: Get Details for a Single Order in an Account
End of explanation
data = {
"order": {
"units": "-500000",
"instrument": "EUR_USD",
"price": "1.25000",
"type": "LIMIT"
}
}
r = orders.OrderReplace(accountID=accountID, orderID=last_order_id, data=data)
client.request(r)
print(r.response)
req_id = r.response['lastTransactionID']
Explanation: Replace an Order in an Account by simultaneously cancelling it and creating a replacement Order.
End of explanation
r = orders.OrderCancel(accountID=accountID, orderID=req_id)
client.request(r)
print(r.response)
last_order_id
Explanation: Cancel a pending Order in an Account.
End of explanation
data = {"order":
{"units": "100",
"instrument": "GBP_USD",
"timeInForce": "FOK",
"type": "MARKET",
"positionFill": "DEFAULT"
},
}
r = orders.OrderCreate(accountID, data=data)
client.request(r)
print(r.response)
pd.Series(r.response['orderCreateTransaction'])
pd.Series(r.response['orderFillTransaction'])
Explanation: MKT Order
End of explanation |
10,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plots for SFR-Stellar Mass-Size Paper
This is an outline of the results that will go into the first paper. The first paper will focus on the $$SFR-M_{*}-Size$$ relation, where $$Size \equiv R_e(24)/R_e(r).$$
Step1: As of 1/6/16, need to make one more pass through the sample and remove galaxies that are blended with nearby companion. Not sure if people think the numbers in each panel are useful.
Galaxies that are blended with a nearby companion are
Step2: RESULT
Step3: SFR-Mass-Size for Blue Galaxies only
Step4: RESULT
Step5: The Impact of Coma
Step6: RESULT
Step7: RESULT
Step8: RESULT
Step9: Take away
used two measures of SF
Step10: where are small galaxies in SFR-M* plane
Step11: constraint on disk shrinking time
use sample toy model from paper | Python Code:
import numpy as np
from pylab import *
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Plots for SFR-Stellar Mass-Size Paper
This is an outline of the results that will go into the first paper. The first paper will focus on the $$SFR-M_{*}-Size$$ relation, where $$Size \equiv R_e(24)/R_e(r).$$
End of explanation
# Now moving on to the SFR-M*-Size analysis
%run ~/Dropbox/pythonCode/LCSanalyzeblue.py
# using John Moustakas's stellar mass estimates
figure()
plot(s.s.ABSMAG[:,4][s.blueflag2],s.logstellarmass[s.blueflag2],'bo')
xlabel('$M_r$')
ylabel('$ log_{10}(M_*) $')
# r-band limit
rlim=17.7
# distance modulus to Hercules, the furthest cluster
mM=35.97
# absolute mag limit corresponding to r=17.7
Mr=rlim-mM
axvline(x=Mr,ls='--',color='r')
axis([-20,-16,8.5,10.5])
axhline(y=minmass)
Explanation: As of 1/6/16, need to make one more pass through the sample and remove galaxies that are blended with nearby companion. Not sure if people think the numbers in each panel are useful.
Galaxies that are blended with a nearby companion are:
99056 - Hercules
103927 - Coma
103933 (AGN) - Coma
140160 - A1367
143485 - MKW11
146607 - Hercules
ALSO running this from ipython and using paperplots().
Converting SDSS Magnitude Limit to Stellar Mass Cut
End of explanation
s.plotsalimcolormag()
Explanation: RESULT: SDSS mag limit corresponds to a stellar mass cut of approximately $log_{10}(M_*) > 9.5$.
Select Blue Galaxies using NUV-r Color
I have limited the sample to blue galaxies only, using a NUV-r color cut:
self.NUVr=self.s.ABSMAG[:,1] - self.s.ABSMAG[:,4]
self.blueflag2=self.NUVr < 4.1
Some galaxies don't have GALEX data (JM is checking into why this is the case). For these, I require u-r < 1.8.
We have a total of 138 blue star-forming galaxies with successful GALFIT fits.
70 cluster members
51 near-field galaxies
17 field galaxies
End of explanation
# For blue galaxies only
s.plotSFRStellarmassSizeBlue(blueflag=True,plotbadfits=False)
# to compare size distributions
print 'comparing size ratios for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(s.s.SIZE_RATIO[f1],s.s.SIZE_RATIO[m1])
print 'mean of field = %5.2f +/- %5.2f'%(mean(s.s.SIZE_RATIO[f1]),std(s.s.SIZE_RATIO[f1])/sqrt(1.*sum(f1)))
print 'mean of clust = %5.2f +/- %5.2f'%(mean(s.s.SIZE_RATIO[m1]),std(s.s.SIZE_RATIO[m1])/sqrt(1.*sum(m1)))
s.printsizeblue()
Explanation: SFR-Mass-Size for Blue Galaxies only
End of explanation
s.calc_size_starburst()
Explanation: RESULT:
The ratio of Re(24)/Re(r) increases as galaxy environment transitions from cluster, to near field, and to field. The difference in size ratios is significant at the $3\sigma$ level.
Interestingly, the cluster and field galaxies are both consistent with the star-forming main sequence, even though the star-formation is more compact on average in the cluster galaxies. Thus, the size of the star-forming region is an important parameter to add to the SFR-Mass analysis when looking at galaxy properties as a function of environment.
Starburst Galaxies
End of explanation
# comparing sizes for sample with Coma removed
nc.plotSFRStellarmassSizeBlue(blueflag=True,plotbadfits=False)
nc.printsize()
ncf1=nc.bluesampleflag & ~nc.membflag & ~nc.agnflag
ncm1=nc.bluesampleflag & nc.membflag & ~nc.agnflag
t=ks(nc.s.SIZE_RATIO[ncf1],nc.s.SIZE_RATIO[ncm1])
Explanation: The Impact of Coma
End of explanation
# stellar mass
print 'comparing stellar mass for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(s.logstellarmass[f1],s.logstellarmass[m1])
# B/T
print ''
print 'comparing B/T for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag & s.gim2dflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag & s.gim2dflag
t=ks(s.s.B_T_r[f1],s.s.B_T_r[m1])
# ssfr
print ''
print 'comparing sSFR for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(log10(s.ssfr[f1]),log10(s.ssfr[m1]))
# B/A
print ''
print 'comparing B/A for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks((s.s.SERSIC_BA[f1]),(s.s.SERSIC_BA[m1]))
# ir surface brightness
print ''
print 'comparing $L{IR}/R_e(24)^2$ for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks(log10(s.sigma_ir[f1]),log10(s.sigma_ir[m1]))
# size
print ''
print 'comparing Re(24)/Re(r) for field vs cluster'
f1=s.bluesampleflag & ~s.membflag & ~s.agnflag
m1=s.bluesampleflag & s.membflag & ~s.agnflag
t=ks((s.s.SIZE_RATIO[f1]),(s.s.SIZE_RATIO[m1]))
Explanation: RESULT:
The difference in size ratios between cluster and field galaxies is not as significant once Coma is removed from the sample. The difference between the field and cluster galaxies is at the $2\sigma$ level. The conclusion is that Coma is important or unique among the clusters in the sample. Perhaps environmental effects are stronger in the more X-ray luminous environment, or maybe the sample size gets too small once Coma is removed (weak, I admit).
Checking for Other Hidden Systematics
Need to make sure that we are not seeing the effect of some
parameter that is linked with environment. For example, B/T
is strongly correlated with environments.
Check:
stellar mass
B/T
sSFR
B/A
End of explanation
figure(figsize=(12,5))
subplot(1,2,1)
subplots_adjust(wspace=.5)
pcolor=s.s.CLUSTER_LX
pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
#pcolor=s.s.B_T_r
#pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60)
f=s.bluesampleflag & ~s.agnflag & ~s.membflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
ylabel('$sSFR/Gyr$')
subplot(1,2,2)
#pcolor=s.s.CLUSTER_LX
#pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
pcolor=s.s.B_T_r
pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60)
f=s.bluesampleflag & ~s.agnflag & ~s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
#ylabel('$sSFR/Gyr$')
figure()
#pcolor=s.s.CLUSTER_LX
#pcolorlabel='$log_{10}(L_X)$'
#pcolor=s.s.SIGMA_5
#pcolor=sqrt(s.s.DR_R200**2 + s.s.DELTA_V**2)
#pcolor=10.**s.logstellarmass
#pcolor=s.massdensity
pcolor=s.s.B_T_r
pcolorlabel='$log_{10}(B/T)$'
f=s.bluesampleflag & ~s.agnflag & s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,label='Cluster')
f=s.bluesampleflag & ~s.agnflag & ~s.membflag & s.gim2dflag
scatter(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),c=log10(pcolor[f]),s=60,marker='^',label='Field')
#f=s.bluesampleflag & ~s.agnflag & s.fieldflag
#plot(log10(s.sigma_ir[f]),log10(s.ssfr[f]*1.e9),'ks',mfc='None',label='Field')
colorbar(fraction=.08,label=pcolorlabel)
xlabel('$log_{10}(\Sigma_{ir}) $')
ylabel('$log_{10}(sSFR/Gyr)$')
legend(scatterpoints=1,loc='upper left')
Explanation: RESULT:
Difference in size ratio between cluster and field galaxies can't be explained by systematic differences in stellar mass, B/T, sSFR, or B/A.
IR Surface Brightness
End of explanation
%run ~/github/LCS/python/Python3/LCS_MS_rf_plots.py
Explanation: RESULT:
Compactness of star-forming region is correlated with B/T. However, you would expect the two quantities to be correlated. The stellar mass density must increase as B/T increases, and stellar mass density is correlated with $\Sigma_{IR}$ because stellar mass correlated with SFR, and optical size correlates with IR size.
PAPER 2
Add SFRs to analysis. Focus on size, stellar mass, SFR relation
field/cluster samples are comparable in terms of stellar mass, B/T, SFR
cluster galaxies have smaller size
cluster and field galaxies are still consistent with the SF main sequence
RESULTS
blue galaxies in dense environments have sSFRs that are consistent with the SF main sequence despite having more centrally-concentrated SF disks.
starburst galaxies have more centrally-concentrated SF disks when compared with galaxies on the SF main sequence, and relative size of SF disk does not appear to vary with environment for these galaxies.
Coma has an unusually high fraction of starburst galaxies
Statistics for Comparing SFR between core and external galaxies
End of explanation
g.calcstats()
g.calcstats(allgals=False)
Explanation: Take away
We used two measures of SF: SFR vs. stellar mass (the main sequence) and sSFR.
For SFR vs. mass and sSFR vs. mass, I fit a first-order polynomial to the external galaxies with $9.5 < \log(M/M_\star) < 10.5$.
We then calculate the distance from the MS (parallel to the y axis) and the distance perpendicular to the MS.
I compare the difference in sSFR between the core and external samples.
End of explanation
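A compact sketch of the fit-and-offset step described above, with hypothetical array names standing in for whatever LCS_MS_rf_plots.py uses internally (the script run below is the authoritative implementation):
# fit the external sample over 9.5 < log M* < 10.5, then measure offsets from that line
fitflag = externalflag & (logmstar > 9.5) & (logmstar < 10.5)
slope, intercept = polyfit(logmstar[fitflag], logsfr[fitflag], 1)
dist_y = logsfr - (slope * logmstar + intercept)   # distance parallel to the y axis
dist_perp = dist_y * cos(arctan(slope))            # distance perpendicular to the MS line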
%run ~/github/LCS/python/Python3/LCS_MS_rf_plots.py
# plot distance from ms vs size
g.plotsizevsmsdist()
plt.yscale('log')
plt.legend()
plt.axvline(x=0)
plt.axhline(y=1)
Explanation: Where are the small galaxies in the SFR-M* plane?
End of explanation
g.s.columns
plt.figure()
plt.plot(g.s.fre1, g.s.fcre1,'k.')
plt.plot(g.s.fre1[g.sampleflag], g.s.fcre1[g.sampleflag],'bo')
plt.plot(g.s.fre1[g.agnflag], g.s.fcre1[g.agnflag],'g*')
plt.plot(g.s.fre1[~g.galfitflag], g.s.fcre1[~g.galfitflag],'co',markersize=9, mfc='None')
plt.xlabel('Re no PSF conv')
plt.ylabel('Re with PSF conv')
plt.axis([-.5,12,-.5,12])
#plt.xscale('log')
#plt.yscale('log')
plt.figure()
#plt.plot(g.s.fmag1, g.s.fcre1,'k.')
plt.plot(g.s.fmag1[g.sampleflag], g.s.fcmag1[g.sampleflag],'bo')
#plt.plot(g.s.fmag1[g.agnflag], g.s.fcmag1[g.agnflag],'g*')
plt.plot(g.s.fmag1[~g.galfitflag], g.s.fcmag1[~g.galfitflag],'co',markersize=6,alpha=.3)
plt.xlabel('mag no PSF conv')
plt.ylabel('mag with PSF conv')
#plt.axis([10,18,10,18])
#plt.xscale('log')
#plt.yscale('log')
plt.figure()
dr = g.s.fcre1 - g.s.fre1
dm = g.s.fcmag1 - g.s.fmag1
#plt.plot(g.s.fmag1, g.s.fcre1,'k.')
massflag = (g.logstellarmass > 9.7)
flag = g.lirflag & ~g.galfitflag & g.gim2dflag & massflag & ~g.agnflag & g.sizeflag
#plt.plot(g.s.fmag1[g.agnflag], g.s.fcmag1[g.agnflag],'g*')
plt.plot(dm[flag], dr[flag],'co',markersize=6,alpha=.3)
#plt.plot(dm[g.sampleflag], dr[g.sampleflag],'bo',alpha=.3)
plt.xlabel('fcmag1 - fmag1')
plt.ylabel('fcre1 - fre1')
plt.axis([-2,2,-15,15])
plt.axvline(x=0)
plt.axhline(y=0)
#plt.xscale('log')
#plt.yscale('log')
sum(flag)
Explanation: constraint on disk shrinking time
use sample toy model from paper
End of explanation |
10,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF.Text Metrics
Step2: ROUGE-L
The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).
Source
Step3: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.
Now we can call text.metrics.rouge_l and get our result back
Step4: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q tensorflow-text
import tensorflow as tf
import tensorflow_text as text
Explanation: TF.Text Metrics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_similarity"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow Text provides a collection of text-metrics-related classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, required for automatic evaluation of text generation models.
The benefit of using these ops in evaluating your models is that they are compatible with TPU evaluation and work nicely with TF streaming metric APIs.
Setup
End of explanation
hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
['this', 'concludes', 'the', 'transcript']])
Explanation: ROUGE-L
The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).
Source: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/
The TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair.
Consider the following hypothesis/reference pair:
End of explanation
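To make the definition above concrete, here is a plain-Python sketch of ROUGE-L for a single token pair (an illustration only; the library call below is the reference implementation). F is taken as the weighted harmonic mean $1/(\alpha/P + (1-\alpha)/R)$, consistent with the description of alpha later in this notebook.
def lcs_len(a, b):
    # classic dynamic-programming longest-common-subsequence length
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i-1][j-1] + 1 if x == y else max(table[i-1][j], table[i][j-1])
    return table[-1][-1]
def rouge_l_single(hyp, ref, alpha=0.5):
    lcs = lcs_len(hyp, ref)
    p, r = lcs / len(hyp), lcs / len(ref)
    f = 0.0 if p == 0 or r == 0 else 1.0 / (alpha / p + (1 - alpha) / r)
    return f, p, r
print(rouge_l_single(['captain', 'of', 'the', 'delta', 'flight'],
                     ['delta', 'air', 'lines', 'flight']))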
result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
Explanation: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.
Now we can call text.metrics.rouge_l and get our result back:
End of explanation
# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)
# Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypotheses, references, alpha=1)
print('F-Measure (alpha=1): %s' % result.f_measure)
print('P-Measure (alpha=1): %s' % result.p_measure)
print('R-Measure (alpha=1): %s' % result.r_measure)
Explanation: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.
End of explanation |
10,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
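For example, following the call signature shown in the cell above (illustrative placeholder values only):
DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical author, not a real attribution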
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
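As an illustration only — the strings must be taken verbatim from the Valid Choices list in the cell above, and, if the usual convention of these generated notebooks applies, a cardinality 1.N property takes one DOC.set_value call per selected choice:
DOC.set_value("troposhere")    # spelling exactly as listed in the controlled vocabulary
DOC.set_value("stratosphere")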
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
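Illustrative only — the real value is model-specific:
DOC.set_value(63)  # hypothetical number of advected tracers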
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric cehmistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
10,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Datasets
TFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.
It handles downloading and preparing the data deterministically and constructing a tf.data.Dataset (or np.array).
Note
Step1: Find available datasets
All dataset builders are subclass of tfds.core.DatasetBuilder. To get the list of available builders, use tfds.list_builders() or look at our catalog.
Step2: Load a dataset
tfds.load
The easiest way of loading a dataset is tfds.load. It will
Step3: Some common arguments
Step4: tfds build CLI
If you want to generate a specific dataset, you can use the tfds command line. For example
Step5: To find out the dict key names and structure, look at the dataset documentation in our catalog. For example
Step6: As numpy (tfds.as_numpy)
Uses tfds.as_numpy to convert
Step7: As batched tf.Tensor (batch_size=-1)
By using batch_size=-1, you can load the full dataset in a single batch.
This can be combined with as_supervised=True and tfds.as_numpy to get the data as (np.array, np.array)
Step8: Be careful that your dataset can fit in memory, and that all examples have the same shape.
Benchmark your datasets
Benchmarking a dataset is a simple tfds.benchmark call on any iterable (e.g. tf.data.Dataset, tfds.as_numpy,...).
Step9: Do not forget to normalize the results per batch size with the batch_size= kwarg.
In the summary, the first warmup batch is separated from the other ones to capture tf.data.Dataset extra setup time (e.g. buffers initialization,...).
Notice how the second iteration is much faster due to TFDS auto-caching.
tfds.benchmark returns a tfds.core.BenchmarkResult which can be inspected for further analysis.
Build end-to-end pipeline
To go further, you can look
Step10: tfds.show_examples
tfds.show_examples returns a matplotlib.figure.Figure (only image datasets supported now)
Step11: Access the dataset metadata
All builders include a tfds.core.DatasetInfo object containing the dataset metadata.
It can be accessed through
Step12: The tfds.core.DatasetBuilder API
Step13: The dataset info contains additional informations about the dataset (version, citation, homepage, description,...).
Step14: Features metadata (label names, image shape,...)
Access the tfds.features.FeatureDict
Step15: Number of classes, label names
Step16: Shapes, dtypes
Step17: Split metadata (e.g. split names, number of examples,...)
Access the tfds.core.SplitDict
Step18: Available splits
Step19: Get info on individual split
Step20: It also works with the subsplit API | Python Code:
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: TensorFlow Datasets
TFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.
It handles downloading and preparing the data deterministically and constructing a tf.data.Dataset (or np.array).
Note: Do not confuse TFDS (this library) with tf.data (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around tf.data. If you're not familiar with this API, we encourage you to read the official tf.data guide first.
Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/datasets/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/datasets/blob/master/docs/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/datasets/docs/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Installation
TFDS exists in two packages:
pip install tensorflow-datasets: The stable version, released every few months.
pip install tfds-nightly: Released every day, contains the last versions of the datasets.
This colab uses tfds-nightly:
End of explanation
tfds.list_builders()
Explanation: Find available datasets
All dataset builders are subclass of tfds.core.DatasetBuilder. To get the list of available builders, use tfds.list_builders() or look at our catalog.
End of explanation
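# Optional: the builder list is long, so a quick substring filter can help locate a
# dataset. A small sketch (the 'mnist' substring is just an example):
print([name for name in tfds.list_builders() if 'mnist' in name])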
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
Explanation: Load a dataset
tfds.load
The easiest way of loading a dataset is tfds.load. It will:
Download the data and save it as tfrecord files.
Load the tfrecord and create the tf.data.Dataset.
End of explanation
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
Explanation: Some common arguments:
split=: Which split to read (e.g. 'train', ['train', 'test'], 'train[80%:]',...). See our split API guide.
shuffle_files=: Control whether to shuffle the files between each epoch (TFDS store big datasets in multiple smaller files).
data_dir=: Location where the dataset is saved (defaults to ~/tensorflow_datasets/)
with_info=True: Returns the tfds.core.DatasetInfo containing dataset metadata
download=False: Disable download
tfds.builder
tfds.load is a thin wrapper around tfds.core.DatasetBuilder. You can get the same output using the tfds.core.DatasetBuilder API:
End of explanation
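# A hedged sketch combining the common arguments described above; the 80%/20%
# train/eval split values and the extra eval split are illustrative assumptions,
# not part of the original notebook.
(ds_train, ds_eval), ds_info = tfds.load(
    'mnist',
    split=['train[:80%]', 'train[80%:]'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)
print(ds_info.name, ds_train.element_spec)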
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
Explanation: tfds build CLI
If you want to generate a specific dataset, you can use the tfds command line. For example:
sh
tfds build mnist
See the doc for available flags.
Iterate over a dataset
As dict
By default, the tf.data.Dataset object contains a dict of tf.Tensors:
End of explanation
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
Explanation: To find out the dict key names and structure, look at the dataset documentation in our catalog. For example: mnist documentation.
As tuple (as_supervised=True)
By using as_supervised=True, you can get a tuple (features, label) instead for supervised datasets.
End of explanation
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
Explanation: As numpy (tfds.as_numpy)
Use tfds.as_numpy to convert:
tf.Tensor -> np.array
tf.data.Dataset -> Iterator[Tree[np.array]] (Tree can be arbitrary nested Dict, Tuple)
End of explanation
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
Explanation: As batched tf.Tensor (batch_size=-1)
By using batch_size=-1, you can load the full dataset in a single batch.
This can be combined with as_supervised=True and tfds.as_numpy to get the data as (np.array, np.array):
End of explanation
ds = tfds.load('mnist', split='train')
ds = ds.batch(32).prefetch(1)
tfds.benchmark(ds, batch_size=32)
tfds.benchmark(ds, batch_size=32) # Second epoch much faster due to auto-caching
Explanation: Be careful that your dataset can fit in memory, and that all examples have the same shape.
Benchmark your datasets
Benchmarking a dataset is a simple tfds.benchmark call on any iterable (e.g. tf.data.Dataset, tfds.as_numpy,...).
End of explanation
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
Explanation: Do not forget to normalize the results per batch size with the batch_size= kwarg.
In the summary, the first warmup batch is separated from the other ones to capture tf.data.Dataset extra setup time (e.g. buffers initialization,...).
Notice how the second iteration is much faster due to TFDS auto-caching.
tfds.benchmark returns a tfds.core.BenchmarkResult which can be inspected for further analysis.
Build end-to-end pipeline
To go further, you can look:
Our end-to-end Keras example to see a full training pipeline (with batching, shuffling,...).
Our performance guide to improve the speed of your pipelines (tip: use tfds.benchmark(ds) to benchmark your datasets).
Visualization
tfds.as_dataframe
tf.data.Dataset objects can be converted to pandas.DataFrame with tfds.as_dataframe to be visualized on Colab.
Add the tfds.core.DatasetInfo as second argument of tfds.as_dataframe to visualize images, audio, texts, videos,...
Use ds.take(x) to only display the first x examples. pandas.DataFrame will load the full dataset in-memory, and can be very expensive to display.
End of explanation
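# A minimal end-to-end pipeline sketch in the spirit of the Keras example referenced
# above. Hyperparameters (batch size, layer sizes, epochs) are illustrative assumptions.
def normalize_img(image, label):
    # Scale uint8 pixel values to [0, 1] floats.
    return tf.cast(image, tf.float32) / 255.0, label

ds_train = tfds.load('mnist', split='train', as_supervised=True)
ds_train = (ds_train
            .map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(10_000)
            .batch(128)
            .prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
model.fit(ds_train, epochs=1)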
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
Explanation: tfds.show_examples
tfds.show_examples returns a matplotlib.figure.Figure (only image datasets supported now):
End of explanation
ds, info = tfds.load('mnist', with_info=True)
Explanation: Access the dataset metadata
All builders include a tfds.core.DatasetInfo object containing the dataset metadata.
It can be accessed through:
The tfds.load API:
End of explanation
builder = tfds.builder('mnist')
info = builder.info
Explanation: The tfds.core.DatasetBuilder API:
End of explanation
print(info)
Explanation: The dataset info contains additional informations about the dataset (version, citation, homepage, description,...).
End of explanation
info.features
Explanation: Features metadata (label names, image shape,...)
Access the tfds.features.FeatureDict:
End of explanation
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
Explanation: Number of classes, label names:
End of explanation
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
Explanation: Shapes, dtypes:
End of explanation
print(info.splits)
Explanation: Split metadata (e.g. split names, number of examples,...)
Access the tfds.core.SplitDict:
End of explanation
print(list(info.splits.keys()))
Explanation: Available splits:
End of explanation
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
Explanation: Get info on individual split:
End of explanation
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
Explanation: It also works with the subsplit API:
End of explanation |
10,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction with SVMs over SLM.
In this notebook I confirm that the good performance that I get with the letters does not depend on some quirk involved with the fact that the letters (that is, the targets) are represented as text and not as numbers
Libraries and files
Step1: Now we transform the letters into numbers with a dictionary
Step2: Load nexa with its parameters
Step3: Now we make the predictions | Python Code:
import numpy as np
import h5py
from sklearn import svm, cross_validation, preprocessing
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
run_name = '/low-resolution'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
Explanation: Prediction with SVMs over SLM.
In this notebook I confirm that the good performance that I get with the letters does not depend on some quirk involved with the fact that the letters (that is, the targets) are represented as text and not as numbers
Libraries and files
End of explanation
symbol_to_number = {}
for number, symbol in enumerate(symbols):
symbol_to_number[symbol] = number
letters_sequence = [symbol_to_number[letter] for letter in letters_sequence]
Explanation: Now we transform the letters into numbers with a dictionary
End of explanation
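# A hedged alternative: the same symbol-to-integer mapping can be done with
# scikit-learn's LabelEncoder (illustrative only; the dictionary above is what
# the rest of the notebook actually uses).
_demo_encoder = preprocessing.LabelEncoder()
print(_demo_encoder.fit_transform(['a', 'b', 'a', 'c']))  # toy input -> [0 1 0 2]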
# Nexa parameters
Nspatial_clusters = 5
Ntime_clusters = 15
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
Explanation: Load nexa with its parameters
End of explanation
delay = 4
N = 5000
cache_size = 1000
# Exctrat and normalized SLM
SLM = np.array(f[run_name]['SLM'])
print('Standarized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
print('Not standarized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
Explanation: Now we make the predictions
End of explanation |
10,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally used your count_letters function to solve the original question. | Python Code:
x = ["one", "two", "three"]
x
def number_to_words(n):
    """Given a number n between 1-1000 inclusive return a list of words for the number."""
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
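# A hedged reference sketch of one possible implementation (the graded cell above is
# intentionally left as an exercise); names prefixed with "_" are helpers added here.
_ones = ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
         'ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen',
         'seventeen', 'eighteen', 'nineteen']
_tens = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']

def _number_to_words_sketch(n):
    """Return the list of words for n between 1 and 1000 (British usage with 'and')."""
    if n == 1000:
        return ['one', 'thousand']
    words = []
    if n >= 100:
        words += [_ones[n // 100], 'hundred']
        if n % 100:
            words.append('and')
        n %= 100
    if n >= 20:
        words.append(_tens[n // 10])
        n %= 10
    if n:
        words.append(_ones[n])
    return words

assert _number_to_words_sketch(342) == ['three', 'hundred', 'and', 'forty', 'two']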
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
    # YOUR CODE HERE
    raise NotImplementedError()
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
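# Companion sketch for count_letters, built on the helper sketch above (illustrative
# only; the words never contain spaces or hyphens, so summing word lengths suffices).
def _count_letters_sketch(n):
    return sum(len(word)
               for i in range(1, n + 1)
               for word in _number_to_words_sketch(i))

assert _count_letters_sketch(5) == 19  # sanity check from the problem statement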
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for gradig the answer to the original question.
Explanation: Finally used your count_letters function to solve the original question.
End of explanation |
10,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About arithmetic accuracy in Python
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Integers" data-toc-modified-id="Integers-1"><span class="toc-item-num">1 </span><a href="https
Step1: Floats
Python uses (hardware) 754 double precision representation for floats. This means that some floats can be only represented approximately.
Using string format to see the precision limitation of doubles in Python. For example, it is impossible to represent exactly the number 0.1
Step2: This can give us surprises
Step3: For "infinite" precision float arithmetic you can use decimal or mpmath
Step4: Getting 30 digits of 1/7
Step5: We can see how many digits are true of 1/7 using doubles
Step6: Decimal arithmetic produces decimal objects
Step7: Decimal objects can be printed with format
Step8: A more complex example | Python Code:
x = 7**273
print(x)
print(type(x))
Explanation: About arithmetic accuracy in Python
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Integers" data-toc-modified-id="Integers-1"><span class="toc-item-num">1 </span><a href="https://docs.python.org/3/c-api/long.html" target="_blank">Integers</a></a></span></li><li><span><a href="#Floats" data-toc-modified-id="Floats-2"><span class="toc-item-num">2 </span><a href="https://docs.python.org/3/tutorial/floatingpoint.html" target="_blank">Floats</a></a></span></li></ul></div>
Integers
In python, integers have arbitrary precision and therefore we can represent an arbitrarily large range of integers (only limited by the available memory).
End of explanation
format(0.1, '.80f')
Explanation: Floats
Python uses (hardware) 754 double precision representation for floats. This means that some floats can be only represented approximately.
Using string format to see the precision limitation of doubles in Python. For example, it is impossible to represent exactly the number 0.1:
End of explanation
.1 + .1 + .1 == .3
.1 + .1 == .2
Explanation: This can give us surprises:
End of explanation
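# A common remedy (a sketch): compare floats with a tolerance instead of ==.
import math
print(math.isclose(.1 + .1 + .1, .3))  # True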
from decimal import Decimal, getcontext
Explanation: For "infinite" precision float arithmetic you can use decimal or mpmath:
End of explanation
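# A hedged sketch of the mpmath route mentioned above (assumes `pip install mpmath`);
# the rest of this notebook uses decimal.
from mpmath import mp
mp.dps = 50            # 50 significant decimal digits
print(mp.mpf(1) / 7)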
getcontext().prec=80
format(Decimal(1)/Decimal(7), '.80f')
Explanation: Getting 30 digits of 1/7:
End of explanation
format(1/7, '.80f')
#12345678901234567 (17 digits)
Explanation: We can see how many digits are true of 1/7 using doubles:
End of explanation
Decimal(1)/Decimal(7)
Explanation: Decimal arithmetic produces decimal objects:
End of explanation
print('{:.50f}'.format(Decimal(1)/Decimal(7)))
Explanation: Decimal objects can be printed with format:
End of explanation
# https://stackoverflow.com/questions/28284996/python-pi-calculation
from decimal import Decimal, getcontext
getcontext().prec=1000
my_pi= sum(1/Decimal(16)**k *
(Decimal(4)/(8*k+1) -
Decimal(2)/(8*k+4) -
Decimal(1)/(8*k+5) -
Decimal(1)/(8*k+6)) for k in range(1000))
'{:.1000f}'.format(my_pi)
Explanation: A more complex example: lets compute 1000 digits of the $\pi$ number using the Bailey–Borwein–Plouffe formula:
$$
\pi = \sum_{k = 0}^{\infty}\Bigg[ \frac{1}{16^k} \left( \frac{4}{8k + 1} - \frac{2}{8k + 4} - \frac{1}{8k + 5} - \frac{1}{8k + 6} \right) \Bigg]
$$
End of explanation |
10,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reusable components
This tutorial describes the manual way of writing a full component program (in any language) and a component definition for it. Below is a summary of the steps involved in creating and using a component
Step1: Create client
If you run this notebook outside of a Kubeflow cluster, run the following command
Step2: Writing the program code
The following cell creates a file app.py that contains a Python script. The script downloads MNIST dataset, trains a Neural Network based classification model, writes the training log and exports the trained model to Google Cloud Storage.
Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
Step3: Create a Docker container
Create your own container image that includes your program.
Creating a Dockerfile
Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.
Step4: Build docker image
Now that we have created our Dockerfile for creating our Docker image. Then we need to build the image and push to a registry to host the image. There are three possible options
Step5: If you want to use docker to build the image
Run the following in a cell
```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
Create script to build docker image and push it.
cat > ./tmp/reuse_components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}
Step6: Writing your component definition file
To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.
For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section
Step7: Create your workflow as a Python function
Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
Step8: Submit a pipeline run | Python Code:
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
Explanation: Reusable components
This tutorial describes the manual way of writing a full component program (in any language) and a component definition for it. Below is a summary of the steps involved in creating and using a component:
Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
Containerize the program.
Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.
Note: Ensure that you have Docker installed, if you want to build the image locally, by running the following command:
which docker
The result should be something like:
/usr/bin/docker
End of explanation
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'
# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'
# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token
# Create kfp client
in_cluster = True
try:
k8s.config.load_incluster_config()
except:
in_cluster = False
pass
if in_cluster:
client = kfp.Client()
else:
if HOST.endswith('googleusercontent.com'):
CLIENT_ID = None
OTHER_CLIENT_ID = None
OTHER_CLIENT_SECRET = None
client = kfp.Client(host=HOST,
client_id=CLIENT_ID,
other_client_id=OTHER_CLIENT_ID,
other_client_secret=OTHER_CLIENT_SECRET)
Explanation: Create client
If you run this notebook outside of a Kubeflow cluster, run the following command:
- host: The URL of your Kubeflow Pipelines instance, for example "https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline"
- client_id: The client ID used by Identity-Aware Proxy
- other_client_id: The client ID used to obtain the auth codes and refresh tokens.
- other_client_secret: The client secret used to obtain the auth codes and refresh tokens.
python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
If you run this notebook within a Kubeflow cluster, run the following command:
python
client = kfp.Client()
You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials
End of explanation
%%bash
# Create folders if they don't exist.
mkdir -p tmp/reuse_components/mnist_training
# Create the Python file that lists GCS blobs.
cat > ./tmp/reuse_components/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_file', type=str, required=True, help='Name of the model file.')
parser.add_argument(
'--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()
bucket=args.bucket
model_file=args.model_file
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.summary())
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
# Interrupt training if val_loss stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
model.save(model_file)
from tensorflow import gfile
gcs_path = bucket + "/" + model_file
if gfile.Exists(gcs_path):
gfile.Remove(gcs_path)
gfile.Copy(model_file, gcs_path)
with open('/output.txt', 'w') as f:
f.write(gcs_path)
HERE
Explanation: Writing the program code
The following cell creates a file app.py that contains a Python script. The script downloads MNIST dataset, trains a Neural Network based classification model, writes the training log and exports the trained model to Google Cloud Storage.
Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
End of explanation
%%bash
# Create Dockerfile.
cat > ./tmp/reuse_components/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.15.0-py3
WORKDIR /app
COPY . /app
EOF
Explanation: Create a Docker container
Create your own container image that includes your program.
Creating a Dockerfile
Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.
End of explanation
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
PROJECT_ID=PROJECT_ID,
IMAGE_NAME=IMAGE_NAME,
TAG=TAG
)
APP_FOLDER='./tmp/reuse_components/mnist_training/'
# In the following, for the purpose of demonstration
# Cloud Build is choosen for 'AI Platform Pipelines'
# kaniko is choosen for 'full Kubeflow deployment'
if HOST.endswith('googleusercontent.com'):
# kaniko is not pre-installed with 'AI Platform Pipelines'
import subprocess
# ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
print(build_log)
else:
if kfp.__version__ <= '0.1.36':
# kfp with version 0.1.36+ introduce broken change that will make the following code not working
import subprocess
builder = kfp.containers._container_builder.ContainerBuilder(
gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
)
kfp.containers.build_image_from_working_dir(
image_name=GCR_IMAGE,
working_dir=APP_FOLDER,
builder=builder
)
else:
raise("Please build the docker image use either [Docker] or [Cloud Build]")
Explanation: Build docker image
Now that we have created our Dockerfile for creating our Docker image. Then we need to build the image and push to a registry to host the image. There are three possible options:
- Use the kfp.containers.build_image_from_working_dir to build the image and push to the Container Registry (GCR). This requires kaniko, which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'.
- Use Cloud Build, which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build.
- Use Docker installed locally and push to e.g. GCR.
Note:
If you run this notebook within Kubeflow cluster, with Kubeflow version >= 0.7 and exploring kaniko option, you need to ensure that valid credentials are created within your notebook's namespace.
- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating notebook through Configurations, which doesn't work properly at the time of creating this notebook.
- You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account.
- The following cell demonstrates how to copy the default secret to your own namespace.
```bash
%%bash
NAMESPACE=<your notebook name space>
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
End of explanation
image_name = GCR_IMAGE
Explanation: If you want to use docker to build the image
Run the following in a cell
```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
Create script to build docker image and push it.
cat > ./tmp/reuse_components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE
cd tmp/reuse_components/mnist_training
bash build_image.sh
```
End of explanation
%%bash -s "{image_name}"
GCR_IMAGE="${1}"
echo ${GCR_IMAGE}
# Create Yaml
# the image uri should be changed according to the above docker image push output
cat > mnist_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
- name: model_file
description: 'Name of the model file.'
type: String
- name: bucket
description: 'GCS bucket name.'
type: String
outputs:
- name: model_path
description: 'Trained model path.'
type: GCSPath
implementation:
container:
image: ${GCR_IMAGE}
command: [
python, /app/app.py,
--model_file, {inputValue: model_file},
--bucket, {inputValue: bucket},
]
fileOutputs:
model_path: /output.txt
HERE
Explanation: Writing your component definition file
To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.
For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:
End of explanation
import os
mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_component.yaml'))
mnist_train_op.component_spec
# Define the pipeline
@dsl.pipeline(
name='Mnist pipeline',
description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_pipeline(
model_file: str = 'mnist_model.h5',
bucket: str = GCS_BUCKET
):
mnist_train_op(model_file=model_file, bucket=bucket).apply(gcp.use_gcp_secret('user-gcp-sa'))
return True
Explanation: Create your workflow as a Python function
Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
End of explanation
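# Optional explicit compilation step described above (a sketch): this writes a pipeline
# package to disk. The filename is an assumption; the next cell submits the pipeline
# function directly, so this file is not strictly required.
compiler.Compiler().compile(mnist_reuse_component_pipeline,
                            'mnist_reuse_component_pipeline.zip')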
pipeline_func = mnist_reuse_component_pipeline
experiment_name = 'minist_kubeflow'
arguments = {"model_file":"mnist_model.h5",
"bucket":GCS_BUCKET}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
Explanation: Submit a pipeline run
End of explanation |
10,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now that you are familiar with the coding environment, it's time to learn how to make your own charts!
In this tutorial, you'll learn just enough Python to create professional looking line charts. Then, in the following exercise, you'll put your new skills to work with a real-world dataset.
Set up the notebook
We begin by setting up the coding environment. (This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right.)
Step1: Select a dataset
The dataset for this tutorial tracks global daily streams on the music streaming service Spotify. We focus on five popular songs from 2017 and 2018
Step2: The end result of running both lines of code above is that we can now access the dataset by using spotify_data.
Examine the data
We can print the first five rows of the dataset by using the head command that you learned about in the previous tutorial.
Step3: Check now that the first five rows agree with the image of the dataset (from when we saw what it would look like in Excel) above.
Empty entries will appear as NaN, which is short for "Not a Number".
We can also take a look at the last five rows of the data by making only one small change (where .head() becomes .tail())
Step4: Thankfully, everything looks about right, with millions of daily global streams for each song, and we can proceed to plotting the data!
Plot the data
Now that the dataset is loaded into the notebook, we need only one line of code to make a line chart!
Step5: As you can see above, the line of code is relatively short and has two main components
Step6: The first line of code sets the size of the figure to 14 inches (in width) by 6 inches (in height). To set the size of any figure, you need only copy the same line of code as it appears. Then, if you'd like to use a custom size, change the provided values of 14 and 6 to the desired width and height.
The second line of code sets the title of the figure. Note that the title must always be enclosed in quotation marks ("...")!
Plot a subset of the data
So far, you've learned how to plot a line for every column in the dataset. In this section, you'll learn how to plot a subset of the columns.
We'll begin by printing the names of all columns. This is done with one line of code and can be adapted for any dataset by just swapping out the name of the dataset (in this case, spotify_data).
Step7: In the next code cell, we plot the lines corresponding to the first two columns in the dataset. | Python Code:
#$HIDE$
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
Explanation: Now that you are familiar with the coding environment, it's time to learn how to make your own charts!
In this tutorial, you'll learn just enough Python to create professional looking line charts. Then, in the following exercise, you'll put your new skills to work with a real-world dataset.
Set up the notebook
We begin by setting up the coding environment. (This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right.)
End of explanation
# Path of the file to read
spotify_filepath = "../input/spotify.csv"
# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
Explanation: Select a dataset
The dataset for this tutorial tracks global daily streams on the music streaming service Spotify. We focus on five popular songs from 2017 and 2018:
1. "Shape of You", by Ed Sheeran (link)
2. "Despacito", by Luis Fonzi (link)
3. "Something Just Like This", by The Chainsmokers and Coldplay (link)
4. "HUMBLE.", by Kendrick Lamar (link)
5. "Unforgettable", by French Montana (link)
Notice that the first date that appears is January 6, 2017, corresponding to the release date of "Shape of You", by Ed Sheeran. And, using the table, you can see that "Shape of You" was streamed 12,287,078 times globally on the day of its release.
Load the data
As you learned in the previous tutorial, we load the dataset using the pd.read_csv command.
End of explanation
# Print the first 5 rows of the data
spotify_data.head()
Explanation: The end result of running both lines of code above is that we can now access the dataset by using spotify_data.
Examine the data
We can print the first five rows of the dataset by using the head command that you learned about in the previous tutorial.
End of explanation
# Print the last five rows of the data
spotify_data.tail()
Explanation: Check now that the first five rows agree with the image of the dataset (from when we saw what it would look like in Excel) above.
Empty entries will appear as NaN, which is short for "Not a Number".
We can also take a look at the last five rows of the data by making only one small change (where .head() becomes .tail()):
End of explanation
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
Explanation: Thankfully, everything looks about right, with millions of daily global streams for each song, and we can proceed to plotting the data!
Plot the data
Now that the dataset is loaded into the notebook, we need only one line of code to make a line chart!
End of explanation
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
Explanation: As you can see above, the line of code is relatively short and has two main components:
- sns.lineplot tells the notebook that we want to create a line chart.
- Every command that you learn about in this course will start with sns, which indicates that the command comes from the seaborn package. For instance, we use sns.lineplot to make line charts. Soon, you'll learn that we use sns.barplot and sns.heatmap to make bar charts and heatmaps, respectively.
- data=spotify_data selects the data that will be used to create the chart.
Note that you will always use this same format when you create a line chart, and the only thing that changes with a new dataset is the name of the dataset. So, if you were working with a different dataset named financial_data, for instance, the line of code would appear as follows:
sns.lineplot(data=financial_data)
Sometimes there are additional details we'd like to modify, like the size of the figure and the title of the chart. Each of these options can easily be set with a single line of code.
End of explanation
list(spotify_data.columns)
Explanation: The first line of code sets the size of the figure to 14 inches (in width) by 6 inches (in height). To set the size of any figure, you need only copy the same line of code as it appears. Then, if you'd like to use a custom size, change the provided values of 14 and 6 to the desired width and height.
The second line of code sets the title of the figure. Note that the title must always be enclosed in quotation marks ("...")!
Plot a subset of the data
So far, you've learned how to plot a line for every column in the dataset. In this section, you'll learn how to plot a subset of the columns.
We'll begin by printing the names of all columns. This is done with one line of code and can be adapted for any dataset by just swapping out the name of the dataset (in this case, spotify_data).
End of explanation
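# An equivalent subset plot in a single call, passing a two-column DataFrame
# (a sketch; the next cell plots the same two songs with explicit labels instead).
sns.lineplot(data=spotify_data[['Shape of You', 'Despacito']])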
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of 'Shape of You'
sns.lineplot(data=spotify_data['Shape of You'], label="Shape of You")
# Line chart showing daily global streams of 'Despacito'
sns.lineplot(data=spotify_data['Despacito'], label="Despacito")
# Add label for horizontal axis
plt.xlabel("Date")
Explanation: In the next code cell, we plot the lines corresponding to the first two columns in the dataset.
End of explanation |
10,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Let's make our system so that the boosting effects will be quite noticeable.
Step3: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
Step4: Relevant Parameters
Step5: Influence on Light Curves (fluxes)
Step6: Influence on Radial Velocities
Step7: Influence on Meshes | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b['requiv@primary'] = 1.8
b['requiv@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
Explanation: Let's make our system so that the boosting effects will be quite noticeable.
End of explanation
times = np.linspace(0,1,101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01', columns=['boost_factors@lc01'])
Explanation: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
End of explanation
b.set_value('irrad_method', 'none')
print(b['boosting_method@compute'])
print(b['boosting_method@compute'].choices)
Explanation: Relevant Parameters
End of explanation
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')
afig, mplfig = b['lc01'].plot(show=True, legend=True)
afig, mplfig = b['lc01'].plot(ylim=(1.01,1.03), show=True, legend=True)
Explanation: Influence on Light Curves (fluxes)
End of explanation
afig, mplfig = b['rv01@model'].plot(show=True, legend=True)
Explanation: Influence on Radial Velocities
End of explanation
afig, mplfig = b['mesh@boosting_none'].plot(time=0.6, fc='boost_factors', ec='none', show=True)
afig, mplfig = b['mesh@boosting_linear'].plot(time=0.6, fc='boost_factors', ec='none', show=True)
Explanation: Influence on Meshes
End of explanation |
10,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Data Generation
Data is generated from a 2D mixture of Gaussians.
Step2: Plotting
Step3: Models and Training
A multilayer perceptron with the ReLU activation function.
Step4: The loss function for the discriminator is
Step5: The loss function for the generator is
Step6: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
Step7: Plot Results
Plot the data and the examples generated by the generator. | Python Code:
!pip install -q flax
from typing import Sequence
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax.training import train_state
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
import functools
import scipy as sp
import scipy.stats  # ensure the stats submodule is loaded for sp.stats.gaussian_kde below
import math
rng = jax.random.PRNGKey(0)
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gan_mixture_of_gaussians.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook implements a Generative Adversarial Network to fit a synthetic dataset generated from a mixture of Gaussians in 2D.
The code was adapted from the ODEGAN code here: https://github.com/deepmind/deepmind-research/blob/master/ode_gan/odegan_mog16.ipynb. The original notebook was created by Chongli Qin.
Some modifications made by Mihaela Rosca here were also incorporated.
Imports
End of explanation
@functools.partial(jax.jit, static_argnums=(1,))
def real_data(rng, batch_size):
mog_mean = jnp.array(
[
[1.50, 1.50],
[1.50, 0.50],
[1.50, -0.50],
[1.50, -1.50],
[0.50, 1.50],
[0.50, 0.50],
[0.50, -0.50],
[0.50, -1.50],
[-1.50, 1.50],
[-1.50, 0.50],
[-1.50, -0.50],
[-1.50, -1.50],
[-0.50, 1.50],
[-0.50, 0.50],
[-0.50, -0.50],
[-0.50, -1.50],
]
)
temp = jnp.tile(mog_mean, (batch_size // 16 + 1, 1))
mus = temp[0:batch_size, :]
return mus + 0.02 * jax.random.normal(rng, shape=(batch_size, 2))
Explanation: Data Generation
Data is generated from a 2D mixture of Gaussians.
End of explanation
def plot_on_ax(ax, values, contours=None, bbox=None, xlabel="", ylabel="", title="", cmap="Blues"):
kernel = sp.stats.gaussian_kde(values.T)
ax.axis(bbox)
ax.set_aspect(abs(bbox[1] - bbox[0]) / abs(bbox[3] - bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks([])
ax.set_yticks([])
xx, yy = jnp.mgrid[bbox[0] : bbox[1] : 300j, bbox[2] : bbox[3] : 300j]
positions = jnp.vstack([xx.ravel(), yy.ravel()])
f = jnp.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap=cmap)
if contours is not None:
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
cx, cy = jnp.meshgrid(x, y)
new_set = ax.contour(
cx, cy, contours.squeeze().reshape(cx.shape), levels=20, colors="k", linewidths=0.8, alpha=0.5
)
ax.set_title(title)
Explanation: Plotting
End of explanation
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, x):
for feat in self.features[:-1]:
x = jax.nn.relu(nn.Dense(features=feat)(x))
x = nn.Dense(features=self.features[-1])(x)
return x
Explanation: Models and Training
A multilayer perceptron with the ReLU activation function.
End of explanation
@jax.jit
def discriminator_step(disc_state, gen_state, latents, real_examples):
def loss_fn(disc_params):
fake_examples = gen_state.apply_fn(gen_state.params, latents)
real_logits = disc_state.apply_fn(disc_params, real_examples)
fake_logits = disc_state.apply_fn(disc_params, fake_examples)
disc_real = -jax.nn.log_sigmoid(real_logits)
# log(1 - sigmoid(x)) = log_sigmoid(-x)
disc_fake = -jax.nn.log_sigmoid(-fake_logits)
return jnp.mean(disc_real + disc_fake)
disc_loss, disc_grad = jax.value_and_grad(loss_fn)(disc_state.params)
disc_state = disc_state.apply_gradients(grads=disc_grad)
return disc_state, disc_loss
Explanation: The loss function for the discriminator is:
$$L_D(\phi, \theta) = \mathbb{E}_{p^*(x)}\, g(D_\phi(x)) + \mathbb{E}_{q(z)}\, h(D_\phi(G_\theta(z)))$$
where $g(t) = -\log t$, $h(t) = -\log(1 - t)$ as in the original GAN.
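As a quick check of the identity used in the code comment above (writing $\sigma$ for the sigmoid): $1 - \sigma(x) = \frac{e^{-x}}{1 + e^{-x}} = \sigma(-x)$, so $\log(1 - \sigma(x)) = \log \sigma(-x)$, which is why disc_fake is computed as -jax.nn.log_sigmoid(-fake_logits).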
End of explanation
@jax.jit
def generator_step(disc_state, gen_state, latents):
def loss_fn(gen_params):
fake_examples = gen_state.apply_fn(gen_params, latents)
fake_logits = disc_state.apply_fn(disc_state.params, fake_examples)
disc_fake = -jax.nn.log_sigmoid(fake_logits)
return jnp.mean(disc_fake)
gen_loss, gen_grad = jax.value_and_grad(loss_fn)(gen_state.params)
gen_state = gen_state.apply_gradients(grads=gen_grad)
return gen_state, gen_loss
Explanation: The loss function for the generator is:
$$L_G(\phi, \theta) = \mathbb{E}_{q(z)}\, l(D_\phi(G_\theta(z)))$$
where $l(t) = -\log t$ for the non-saturating generator loss.
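For comparison, the saturating alternative would minimize $\mathbb{E}_{q(z)} \log(1 - D_\phi(G_\theta(z)))$ instead; the non-saturating form used here gives the generator stronger gradients early in training, when the discriminator can easily reject the generated samples.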
End of explanation
@jax.jit
def train_step(disc_state, gen_state, latents, real_examples):
disc_state, disc_loss = discriminator_step(disc_state, gen_state, latents, real_examples)
gen_state, gen_loss = generator_step(disc_state, gen_state, latents)
return disc_state, gen_state, disc_loss, gen_loss
batch_size = 512
latent_size = 32
discriminator = MLP(features=[25, 25, 1])
generator = MLP(features=[25, 25, 2])
# Initialize parameters for the discriminator and the generator
latents = jax.random.normal(rng, shape=(batch_size, latent_size))
real_examples = real_data(rng, batch_size)
disc_params = discriminator.init(rng, real_examples)
gen_params = generator.init(rng, latents)
# Plot real examples
bbox = [-2, 2, -2, 2]
plot_on_ax(plt.gca(), real_examples, bbox=bbox, title="Data")
plt.tight_layout()
plt.savefig("gan_gmm_data.pdf")
plt.show()
# Create train states for the discriminator and the generator
lr = 0.05
disc_state = train_state.TrainState.create(
apply_fn=discriminator.apply, params=disc_params, tx=optax.sgd(learning_rate=lr)
)
gen_state = train_state.TrainState.create(apply_fn=generator.apply, params=gen_params, tx=optax.sgd(learning_rate=lr))
# x and y grid for plotting discriminator contours
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
X, Y = jnp.meshgrid(x, y)
pairs = jnp.stack((X, Y), axis=-1)
pairs = jnp.reshape(pairs, (-1, 2))
# Latents for testing generator
test_latents = jax.random.normal(rng, shape=(batch_size * 10, latent_size))
num_iters = 20001
n_save = 2000
draw_contours = False
history = []
for i in range(num_iters):
rng_iter = jax.random.fold_in(rng, i)
data_rng, latent_rng = jax.random.split(rng_iter)
# Sample minibatch of examples
real_examples = real_data(data_rng, batch_size)
# Sample minibatch of latents
latents = jax.random.normal(latent_rng, shape=(batch_size, latent_size))
# Update both the generator
disc_state, gen_state, disc_loss, gen_loss = train_step(disc_state, gen_state, latents, real_examples)
if i % n_save == 0:
print(f"i = {i}, Discriminator Loss = {disc_loss}, " + f"Generator Loss = {gen_loss}")
# Generate examples using the test latents
fake_examples = gen_state.apply_fn(gen_state.params, test_latents)
if draw_contours:
real_logits = disc_state.apply_fn(disc_state.params, pairs)
disc_contour = -real_logits + jax.nn.log_sigmoid(real_logits)
else:
disc_contour = None
history.append((i, fake_examples, disc_contour, disc_loss, gen_loss))
Explanation: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
End of explanation
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
plt.gca(),
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
plt.tight_layout()
plt.savefig(f"gan_gmm_iter_{iter}.pdf")
plt.show()
cols = 3
rows = math.ceil((len(history) + 1) / cols)
bbox = [-2, 2, -2, 2]
fig, axs = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3), dpi=200)
axs = axs.flatten()
# Plot real examples
plot_on_ax(axs[0], real_examples, bbox=bbox, title="Data")
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
axs[i + 1],
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
# Remove extra plots from the figure
for i in range(len(history) + 1, len(axs)):
axs[i].remove()
plt.tight_layout()
plt.show()
Explanation: Plot Results
Plot the data and the examples generated by the generator.
End of explanation |
10,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing the Hyperparameter tuning
Learning Objectives
1. Learn how to use cloudml-hypertune to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the .yaml file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to finding the ideal hyperparameters
Step1: Note
Step2: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes
Step3: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
Step4: Make the validation dataset be 1/10 the size of the training dataset.
Step5: Export the tables as CSV files
Step6: If all ran smoothly, you should be able to list the data bucket by running the following command
Step7: Move code into python package
Here, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the taxifare/trainer directory
Step8: To use hyperparameter tuning in your training job you must perform the following steps
Step9: Modify task.py
Step10: Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named metric1
Step11: Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
TensorFlow with a runtime version
If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions.
You may need to install cloudml-hypertune on your machine to run this code locally.
Step12: Kindly ignore, if you get the version warnings related to pip install command.
Step13: The below hyperparameter training job step will take upto 1 hour to complete. | Python Code:
# Use the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Installing the latest version of the package
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Performing the Hyperparameter tuning
Learning Objectives
1. Learn how to use cloudml-hypertune to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the .yaml file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to find the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
1. Manual
Traditionally, hyperparameter tuning is a manual trial and error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point, then they observe the result and use that information to try a new set of hyperparameters to try to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
2. Grid Search
On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
3. Random Search
Alternatively define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
4. Bayesian Optimization
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works here.
Pros
- Picks values intelligently based on results from past trials
- Less expensive because requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
AI Platform HyperTune
AI Platform HyperTune, powered by Google Vizier, uses Bayesian Optimization by default, but also supports Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
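As a rough sketch of the difference between grid and random search (illustrative only, not part of this lab's code; the hyperparameter names and ranges below are made up):
import itertools, random
# Grid search: try every combination of a discrete set of values (9 trials here).
grid = {'lr': [0.001, 0.01, 0.1], 'batch_size': [32, 64, 128]}
grid_trials = [dict(zip(grid, combo)) for combo in itertools.product(*grid.values())]
# Random search: each trial samples independently from a range or set.
random_trials = [{'lr': 10 ** random.uniform(-3, -1), 'batch_size': random.choice([32, 64, 128])}
                 for _ in range(9)]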
End of explanation
# Importing the necessary module
import os
from google.cloud import bigquery
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
REGION = "<YOUR REGION>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = "2.6"
%%bash
# Setting up cloud SDK properties
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
# Creating a dataset
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
Explanation: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
Upload data to Google Cloud Storage
Move code into a trainer Python package
Submit training job with gcloud to train on AI Platform
Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
Create BigQuery tables
If you have not already created a BigQuery dataset for our data, run the following cell:
End of explanation
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Let's create a table with 1 million examples.
Note that the order of columns is exactly what was in our CSV files.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Explanation: Make the validation dataset be 1/10 the size of the training dataset.
End of explanation
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
# List the files of the bucket
gsutil ls -l $OUTDIR
# Here, it shows the short header for each object
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
Explanation: Export the tables as CSV files
End of explanation
# List the files of the bucket
!gsutil ls gs://$BUCKET/taxifare/data
Explanation: If all ran smoothly, you should be able to list the data bucket by running the following command:
End of explanation
# It will list all the files in the mentioned directory with a long listing format
!ls -la taxifare/trainer
Explanation: Move code into python package
Here, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the taxifare/trainer directory:
- __init__.py
- model.py
- task.py
End of explanation
%%writefile ./taxifare/trainer/model.py
# Importing the necessary modules
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# Splits features and labels from feature dictionary
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# Loads dataset using the tf.data API from CSV files
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
# Prefetch overlaps the preprocessing and model execution of a training step
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
# Parse a string and return a datetime.datetime
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
# Here, tf.sqrt Computes element-wise square root of the input tensor
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
# Timestamp.weekday() function return the day of the week represented by the date in the given Timestamp object
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
# It wraps a python function into a TensorFlow op that executes it eagerly
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
    # Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
# Here, tf.sqrt Computes element-wise square root of the input tensor
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
# Define train and evaluate method to evaluate performance of the model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = history.history['val_rmse'][num_evals-1]
# TODO 1
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=num_evals
)
return history
Explanation: To use hyperparameter tuning in your training job you must perform the following steps:
Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.
Include the following code in your training application:
Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.
Add your hyperparameter metric to the summary for your graph.
To submit a hyperparameter tuning job, we must modify model.py and task.py to expose any variables we want to tune as command line arguments.
Modify model.py
End of explanation
%%writefile taxifare/trainer/task.py
# Importing the necessary module
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
Explanation: Modify task.py
End of explanation
%%writefile hptuning_config.yaml
# Setting parameters for hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10 # TODO 2
maxParallelTrials: 2 # TODO 2
hyperparameterMetricTag: rmse # TODO 2
enableTrialEarlyStopping: True
params:
- parameterName: lr
# TODO 2
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
# TODO 2
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size
# TODO 2
type: DISCRETE
discreteValues:
- 15
- 30
- 50
Explanation: Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The configuration above does this for the metric named rmse.
End of explanation
# Installing the latest version of the package
!pip install cloudml-hypertune
Explanation: Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
TensorFlow with a runtime version
If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions.
You may need to install cloudml-hypertune on your machine to run this code locally.
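For reference, the reporting call itself is tiny. A minimal sketch with placeholder values (the real call appears in model.py below; the metric name must match the hyperparameterMetricTag in the config):
```python
# Minimal sketch: report a metric value to AI Platform Training (placeholder values).
import hypertune

hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='rmse',  # must match hyperparameterMetricTag in the config
    metric_value=3.21,                 # placeholder metric value
    global_step=1)
```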
End of explanation
%%bash
# Testing our training code locally
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
ls taxifare-model/tensorboard
Explanation: You can safely ignore any version warnings from the pip install command.
End of explanation
%%bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
TFVERSION="2.4"
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO 3
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--config=hptuning_config.yaml \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
Explanation: The hyperparameter tuning job below can take up to an hour to complete.
End of explanation |
10,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple data representations
Before we delve into learnable data representations, feature crosses, etc., let’s look at simpler data representations. We can think of these simple data representations as common idioms in machine learning -- not quite patterns, but commonly employed solutions nevertheless.
Scaling helps
Models trained with scaled data converge faster and are therefore faster/cheaper to train.
Step1: Numerical inputs
One key predictor of the weight of a baby is the mother's age. We can verify this by looking at the average weight of a baby born to mothers with different ages. Since the dataset is large enough, we will do the computation in BigQuery
Step2: Looking at the distribution (histogram) of the raw mother's age makes the weird behavior at the edges clear. We don't have enough data for mothers in their low-teens and in their fifties. In statistical terms, these are outliers.
Step5: Let's look at the data after applying different forms of scaling.
Step6: Skewed data
For an example of highly skewed data, assume that we are building a model to predict the likely sales of a non-fiction book. One of the inputs to the model is the popularity of the Wikipedia page corresponding to the topic. The number of views of pages in Wikipedia is highly skewed. | Python Code:
from sklearn import datasets, linear_model
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)
raw = diabetes_X[:, None, 2]
max_raw = max(raw)
min_raw = min(raw)
scaled = (2*raw - max_raw - min_raw)/(max_raw - min_raw)
def train_raw():
linear_model.LinearRegression().fit(raw, diabetes_y)
def train_scaled():
linear_model.LinearRegression().fit(scaled, diabetes_y)
import timeit
raw_time = timeit.timeit(train_raw, number=1000)
scaled_time = timeit.timeit(train_scaled, number=1000)
print('Raw: {:.4f}s, Scaled: {:.4f}s, Improvement: {:2f}%'
.format(raw_time, scaled_time, 100*(raw_time-scaled_time)/raw_time))
Explanation: Simple data representations
Before we delve into learnable data representations, feature crosses, etc., let’s look at simpler data representations. We can think of these simple data representations as common idioms in machine learning -- not quite patterns, but commonly employed solutions nevertheless.
Scaling helps
Models trained with scaled data converge faster and are therefore faster/cheaper to train.
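For comparison, the same kinds of transforms are available off the shelf; a minimal sketch using scikit-learn scalers (not part of the timing experiment above):
```python
# Sketch: library equivalents of the hand-rolled min-max scaling above.
from sklearn import datasets
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, _ = datasets.load_diabetes(return_X_y=True)
col = X[:, [2]]                                   # same single feature as above
minmax = MinMaxScaler(feature_range=(-1, 1)).fit_transform(col)
zscore = StandardScaler().fit_transform(col)      # (x - mean) / std
```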
End of explanation
%%bigquery df
SELECT
mother_age,
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY mother_age
ORDER BY mother_age
df.plot(x='mother_age', y='avg_wt');
Explanation: Numerical inputs
One key predictor of the weight of a baby is the mother's age. We can verify this by looking at the average weight of a baby born to mothers with different ages. Since the dataset is large enough, we will do the computation in BigQuery:
End of explanation
df.plot(x='mother_age', y='num_babies');
Explanation: Looking at the distribution (histogram) of the raw mother's age makes the weird behavior at the edges clear. We don't have enough data for mothers in their low-teens and in their fifties. In statistical terms, these are outliers.
End of explanation
base_sql = """
CREATE TEMPORARY FUNCTION CLIP_LESS(x FLOAT64, a FLOAT64) AS (
IF (x < a, a, x)
);
CREATE TEMPORARY FUNCTION CLIP_GT(x FLOAT64, b FLOAT64) AS (
IF (x > b, b, x)
);
CREATE TEMPORARY FUNCTION CLIP(x FLOAT64, a FLOAT64, b FLOAT64) AS (
CLIP_GT(CLIP_LESS(x, a), b)
);
WITH stats AS (
SELECT
MIN(mother_age) AS min_age,
MAX(mother_age) AS max_age,
AVG(mother_age) AS avg_age,
STDDEV(mother_age) AS stddev_age,
APPROX_QUANTILES(mother_age, 100)[OFFSET(1)] AS percentile_1,
APPROX_QUANTILES(mother_age, 100)[OFFSET(99)] AS percentile_99
FROM
publicdata.samples.natality
WHERE
year > 2000
),
scaling AS (
SELECT
mother_age,
weight_pounds,
SAFE_DIVIDE(2*mother_age - max_age - min_age, max_age - min_age) AS minmax_scaled,
CLIP( (mother_age - 30)/15, -1, 1 ) AS clipped,
SAFE_DIVIDE(mother_age - avg_age, stddev_age) AS zscore,
CLIP(mother_age, percentile_1, percentile_99) AS winsorized_1_99,
SAFE_DIVIDE(2*CLIP(mother_age, percentile_1, percentile_99) - percentile_1 - percentile_99, percentile_99 - percentile_1) AS winsorized_scaled
FROM
publicdata.samples.natality, stats
)
"""
def scaled_stats(age_col):
    sql = base_sql + """
SELECT
{0},
AVG(weight_pounds) AS avg_wt,
COUNT(1) AS num_babies
FROM
scaling
GROUP BY {0}
ORDER BY {0}
    """.format(age_col)
from google.cloud import bigquery
return bigquery.Client().query(sql).to_dataframe()
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [15, 15]
plt.rcParams.update({'font.size': 15})
fig, axs = plt.subplots(3, 2);
scaled_stats('mother_age').plot(x='mother_age', y='num_babies', ax=axs[0, 0]);
scaled_stats('minmax_scaled').plot(x='minmax_scaled', y='num_babies', ax=axs[0, 1]);
scaled_stats('clipped').plot(x='clipped', y='num_babies', ax=axs[1, 0]);
scaled_stats('zscore').plot(x='zscore', y='num_babies', ax=axs[1, 1], xlim=[-2, 2]);
scaled_stats('winsorized_1_99').plot(x='winsorized_1_99', y='num_babies', ax=axs[2, 0]);
scaled_stats('winsorized_scaled').plot(x='winsorized_scaled', y='num_babies', ax=axs[2, 1]);
fig.savefig('scaling.png')
plt.close(fig)
Explanation: Let's look at the data after applying different forms of scaling.
End of explanation
%%bigquery df
WITH bypage AS (
SELECT
title,
SUM(views) AS num_views
FROM `bigquery-samples.wikipedia_benchmark.Wiki1M`
WHERE language = 'en'
GROUP BY title
HAVING num_views > 10 # non-niche
ORDER by num_views desc
),
percentile AS (
SELECT
APPROX_QUANTILES(num_views, 100) AS bins
FROM
bypage
)
SELECT
title,
num_views,
(ROUND(POW(LOG(num_views), 0.25), 1) - 1.3) AS fourthroot_log_views,
CAST(REPLACE(ML.BUCKETIZE(num_views, bins), 'bin_', '') AS int64) AS bin,
FROM
percentile, bypage
from scipy import stats
data, est_lambda = stats.boxcox(df['num_views'])
df['boxcox'] = data
df
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [15, 10]
plt.rcParams.update({'font.size': 15})
fig, axs = plt.subplots(1, 4);
for axno, name in enumerate('num_views,fourthroot_log_views,bin,boxcox'.split(',')):
df.hist(histtype='bar', bins=20, column=name, ax=axs[axno]);
fig.savefig('skew_log.png')
plt.close(fig)
Explanation: Skewed data
For an example of highly skewed data, assume that we are building a model to predict the likely sales of a non-fiction book. One of the inputs to the model is the popularity of the Wikipedia page corresponding to the topic. The number of views of pages in Wikipedia is highly skewed.
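When Box-Cox or bucketizing is more than needed, a simple log transform is often enough for heavy right skew; a small illustrative sketch (hypothetical counts, not the Wikipedia data):
```python
# Sketch: log1p compresses the long right tail of skewed counts.
import numpy as np
import pandas as pd

views = pd.Series([10, 25, 300, 12000, 2500000])  # hypothetical page-view counts
log_views = np.log1p(views)
```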
End of explanation |
10,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First example
Step1: Note that we also chose a grid to solve the equation on. The $x$ and $y$ coordinates can be obtained by
Step2: To integrate in time we need an initial state. Equations instances have a random_state method that generates a state. The distribution of these initial conditions, when sampled from different seeds, will define the training set for later. Let's sample one random initial state and plot it
Step3: The state of an equation is a dict object that contains all relevant fields needed for integrating in time. For advection diffusion these are concentration, x_velocity, and y_velocity
Step4: To perform the actual integration we need to choose a method with which to estimate the spatial derivatives of the concentration $c$. The object which estimates the derivatives is called a model and there are various models defined in models.py. Here we will use a finite difference estimation. Lastly, we need to choose a timestep, which we can ask the equation instance to supply.
Step5: The result is a dict object. The concentration member of the dict is a tensor whose first axis corresponds to the times at which the solution was evaluated. Here we save the result as an xarray.DataArray, which makes it easy to plot.
Step8: Defining a new equation
In this section we learn how to define a new equation. We will look at coupled reaction diffusion equations, aka the Turing Equation. They describe the evolution of two fields, $A$ and $B$, according to
Step9: Now we can generate a random state and evolve it in time | Python Code:
equation = pde.advection.equations.FiniteVolumeAdvectionDiffusion(diffusion_coefficient=0.01)
grid = grids.Grid.from_period(size=256, length=2*np.pi)
Explanation: First example: Advection diffusion
In this example we'll see how to integrate in time a pre-defined equation. Here we deal with the Advection-Diffusion equation, which describes the time evolution of the concentration $c(x,y,t)$ when it is advected by the velocity field $\vec v(x,y)=(v_x(x,y), v_y(x,y))$ and also undergoes diffusion. The equation reads
$$\frac{\partial c}{\partial t}+\vec{v}\cdot\vec{\nabla}c= D \nabla^2 c$$
where $D$ is the diffusion coefficient. The equation is implemented in various forms in the folder advection/equations. Here we choose the Finite Volume formulation.
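To make the numerics concrete, here is a plain-NumPy sketch of one explicit finite-difference step of this equation on a periodic grid. This is illustration only and is not the datadrivenpdes API used in the cell above:
```python
# Illustration only: one explicit step of dc/dt = -v . grad(c) + D * laplacian(c),
# with central differences and periodic boundaries.
import numpy as np

def step(c, vx, vy, D, dx, dt):
    ddx = (np.roll(c, -1, 0) - np.roll(c, 1, 0)) / (2 * dx)   # d c / d x
    ddy = (np.roll(c, -1, 1) - np.roll(c, 1, 1)) / (2 * dx)   # d c / d y
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (-vx * ddx - vy * ddy + D * lap)
```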
End of explanation
x, y = grid.get_mesh()
Explanation: Note that we also chose a grid to solve the equation on. The $x$ and $y$ coordinates can be obtained by
End of explanation
initial_state = equation.random_state(grid, seed=7109179)
fig, axs = plt.subplots(1,2, figsize=(8,4))
axs[0].pcolor(grid.get_mesh()[1],
grid.get_mesh()[0],
initial_state['concentration'])
axs[0].set_title('initial concentration')
axs[1].streamplot(grid.get_mesh()[1],
grid.get_mesh()[0],
initial_state['y_velocity'],initial_state['x_velocity'],
density=2)
axs[1].set_title('velocity field');
Explanation: To integrate in time we need an initial state. Equations instances have a random_state method that generates a state. The distribution of these initial conditions, when sampled from different seeds, will define the training set for later. Let's sample one random initial state and plot it:
End of explanation
print(initial_state.keys())
Explanation: The state of an equation is a dict object that contains all relevant fields needed for integrating in time. For advection diffusion these are concentration, x_velocity, and y_velocity:
End of explanation
time_step = equation.get_time_step(grid)
times = time_step*np.arange(400)
results = pde.core.integrate.integrate_times(
model=pde.core.models.FiniteDifferenceModel(equation,grid),
state=initial_state,
times=times, axis=0)
Explanation: To perform the actual integration we need to choose a method with which to estimate the spatial derivatives of the concentration $c$. The object which estimates the derivatives is called a model and there are various models defined in models.py. Here we will use a finite difference estimation. Lastly, we need to choose a timestep, which we can ask the equation instance to supply.
End of explanation
conc=xr.DataArray(results['concentration'].numpy(),
dims=['time', 'x','y'],
coords={'time':times, 'x': x[:,0], 'y': y[0]}
)
conc[::99].plot(col='time', robust=True, aspect=1)
Explanation: The result is a dict object. The concentration member of the dict is a tensor whose first axis corresponds to the times at which the solution was evaluated. Here we save the result as an xarray.DataArray, which makes it easy to plot.
End of explanation
from datadrivenpdes.core import equations
from datadrivenpdes.core import grids
from datadrivenpdes.core import polynomials
from datadrivenpdes.core import states
import scipy as sp
def smooth_random_field(N, amp=0.1, np_random_state=None):
    """Generates a random field of shape (N, 1) and smooths it a bit."""
if np_random_state is None:
np_random_state = np.random.RandomState()
noise=np_random_state.randn(N)
kernel=np.exp(-np.linspace(-6,6,N)**2)
return amp*sp.ndimage.convolve(noise, kernel, mode='wrap')[:,np.newaxis]
class TuringEquation(equations.Equation):
DISCRETIZATION_NAME = 'finite_difference'
METHOD = polynomials.Method.FINITE_DIFFERENCE
MONOTONIC = False
CONTINUOUS_EQUATION_NAME = 'Turing'
key_definitions = {
'A': states.StateDefinition(name='A',
tensor_indices=(),
derivative_orders=(0,0,0),
offset=(0,0)),
'A_xx': states.StateDefinition(name='A',
tensor_indices=(),
derivative_orders=(2, 0, 0),
offset=(0, 0)),
'B': states.StateDefinition(name='B',
tensor_indices=(),
derivative_orders=(0, 0, 0),
offset=(0, 0)),
'B_xx': states.StateDefinition(name='B',
tensor_indices=(),
derivative_orders=(2, 0, 0),
offset=(0, 0)),
'Source' : states.StateDefinition(name='Source',
tensor_indices=(),
derivative_orders=(0, 0, 0),
offset=(0, 0)),
}
evolving_keys = {'A', 'B'}
constant_keys = {'Source'}
def __init__(self, alpha, beta, D_A, D_B, timestep=1e-4):
self.alpha = alpha
self.beta = beta
self.D_A = D_A
self.D_B = D_B
self._timestep = timestep
super().__init__()
def time_derivative(
self, grid, A, A_xx, B, B_xx, Source):
    """See base class."""
rA = self.reaction_A(A, B)
rB = self.reaction_B(A, B)
diff_A = self.D_A * A_xx
diff_B = self.D_B * B_xx
return {'A': rA + diff_A + Source,
'B': rB + diff_B,}
def reaction_A(self, A, B):
return A - (A ** 3) - B + self.alpha
def reaction_B(self, A, B):
return (A - B) * self.beta
def get_time_step(self, grid):
return self._timestep
def random_state(self, grid, seed=None, dtype=tf.float32):
if seed is None:
R = np.random.RandomState()
else:
R = np.random.RandomState(seed=seed)
state = {
'A': smooth_random_field(N=grid.size_x, np_random_state=R),
'B': smooth_random_field(N=grid.size_x, np_random_state=R),
'Source': smooth_random_field(N=grid.size_x, np_random_state=R),
}
state = {k: tf.cast(v, dtype) for k, v in state.items()}
return state
Explanation: Defining a new equation
In this section we learn how to define a new equation. We will look at coupled reaction diffusion equations, aka the Turing Equation. They describe the evolution of two fields, $A$ and $B$, according to:
$$\begin{align}
\frac{\partial A}{\partial t} &= D_A\nabla^2 A + R_A(A,B) + S \\
\frac{\partial B}{\partial t} &= D_B\nabla^2 B + R_B(A,B)
\end{align}$$
$D_{A,B}$ are the diffusion constants of $A$ and $B$, $R_{A,B}$ are nonlinear reaction terms and $S$ is some constant source term. For example, we'll take
$$\begin{align}
R_A&=A(1-A^2)-\alpha B &
R_B&=\beta(A-B)
\end{align}$$
where $\alpha$ and $\beta$ are model parameters. For simplicity, we'll implement the equation in one spatial dimension.
Equation Keys
Because the computational framework is fully differentiable, defining an equation requires specifying in advance which quantities are used in calculating the time derivatives. These are called keys and are stored in the equation attribute key_definitions. In our case, to calculate the time evolution we need $A, B, \partial_{xx}A, \partial_{xx}B $ and $S$.
The auxiliary function states.StateDefinition defines these keys. Its input arguments are:
* name - The base name of the field. For example, the field $\partial_{xx} A$ is derived from the base field A.
* tensor_indices - In 2D and above, specify whether a field is a component of a tensor (like $v_x$ and $v_y$ in the advection example).
* derivative_orders - Specifies whether a key is a spatial derivative of a different key.
* offset - Specifies whether a field is evaluated off the center point of a grid (useful for staggered grids, e.g. finite volume schemes)
For example, in our case the key_definitions for $A$ and $\partial_{xx}A$ are
```python
key_definitions = {
'A': states.StateDefinition(name='A',
                           tensor_indices=(), # Not used in one-dimensional equations
derivative_orders=(0,0,0), # A is not a derivative of anything else
offset=(0,0)), # A is evaluated on the centerpoints of the grid
    'A_xx': states.StateDefinition(name='A', # A_xx is derived from A
tensor_indices=(),
derivative_orders=(2, 0, 0), # Two derivatives on the x axis
offset=(0, 0)),
}
```
There are two types of keys: those that evolve in time, in our case $A$ and $B$, and constant ones, in our case $S$ (and in the Advection Diffusion example - the velocity field $v$). When defining the equation we need to set the attributes `evolving_keys` and `constant_keys`, which are both python `set`s.
The different keys of an Equation instance can be inspected with
```python
equation.all_keys # in our case: {'A', 'A_xx', 'B', 'B_xx', 'Source'}
equation.base_keys # in our case: {'A', 'B', 'Source'}
equation.evolving_keys # in our case: {'A', 'B'}
equation.constant_keys # in our case: {'Source'}
```
Defining the equation
Here is a full definition of the equation:
End of explanation
eq = TuringEquation(alpha=-0.0001, beta=10, D_A=1, D_B=30)
NX=100
NY=1 # 1D can be obtained by having a y dimension of size 1
LX=200
grid = grids.Grid(NX, NY, step=LX/NX)
x, y=grid.get_mesh()
initial_state = eq.random_state(grid=grid, seed=12345)
times = eq._timestep*np.arange(0, 1000, 20)
model = pde.core.models.FiniteDifferenceModel(eq,grid)
res = pde.core.integrate.integrate_times(
model=model,
state=initial_state,
times=times, axis=0)
fig, axs=plt.subplots(1,2, figsize=(10,5), sharey=True)
for ax, k in zip(axs, ['A','B']):
ax.pcolormesh(x.flat, times, res[k].numpy()[...,0], cmap='RdBu')
ax.set_title(k)
ax.set_xlabel('x')
axs[0].set_ylabel('time')
fig.tight_layout()
Explanation: Now we can generate a random state and evolve it in time
End of explanation |
10,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A half-baked tutorial on ensemble methods
<center>by Ivan Nazarov</center>
This tutorial covers both introductory level theory underpinning
each ensemble method, as well as the tools available in Scikit-Learn
and XGBoost. We also cover the topic of Stacking.
Materials
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning
Step1: Toy data from HTF, p. 339
Step2: Fix the RNG
Step3: Generate four samples
Step4: <hr/>
Ensemble methods
In general any ensemble methods can be broken down into the following two
stages, possibly overlapping
Step5: Bagging
Bagging is a meta-algorithm that aims at constructing an estimator by averaging
many noisy, but approximately unbiased models. The general idea is that averaging
a set of unbiased estimates, yields an estimate with much reduced variance
(provided the base estimates are uncorrelated).
Bagging works poorly on models, that linearly depend on the data (like linear
regression), and best performs on nonlinear base estimators (like trees). In
other terms bagging succeeds in building a better combined estimator, if the
base estimator is unstable. Indeed, if the learning procedure is stable, and
random perturbation of the train dataset do not affect it by much, the bagging
estimator will not differ much from a single predictor, and may even weaken its
performance somewhat.
Bootstrapping
Consider a train sample $Z = (X, y) = (x_i, y_i)_{i=1}^n \in \mathcal{X}\times \mathcal{Y}$,
sampled from a distribution $P$.
A bootstrap sample $Z^ = (z^i){i=1}^n$ is a subsample of $Z = (z_j){j=1}^n$ with
each element drawn with replacement from $Z$. More technically, a bootstrap sample
of size $l$ is a sample from the empirical distribution of the training data $Z$, denoted
by $\hat{P}$. So $Z^\sim \hat{P}^l$ means that $(z^_i){i=1}^l \sim \hat{P}$ iid, or,
similarly,
$$ z^*_i = \bigl{ z_j \text{ w. prob. } \frac{1}{n}\,,\, j=1, \ldots, n\bigr.\,. $$
An interesting property of a bootstraped sample, is that on average $36.79\%$ of
the original sample are left out of each $Z^{b}$. Indeed, the probability that a
given sample is present in $Z^$ is
$$ 1 - \bigl(1 - \frac{1}{n}\bigr)^n = 1 - e^{-1} + o(n) \approx 63.21\%\,. $$
This means that the observations not selected for the $b$-th bootstrap sample $Z^{b}$,
denoted by $Z\setminus Z^{b}$, $b=1,\ldots,B$, can be used as an independent test set.
The out-of-bag sample, $Z\setminus Z^{b}$, and for estimating the generalization
error, and for defining an OOB*-predictor.
For a given collection of bootstrap samples $(Z^{b})_{b=1}^B$ define the set of samples
the $i$-th observation does not belong to as $\Gamma_i = {b=1,\ldots, n\,
Step6: Both Bagging Classifier and Regressor have similar parameters
Step7: <hr/>
Random Forest
Essentially, a random forest is a bagging ensemble constructed from a large collection
of decorrelated regression/decision trees. The algorithm specifically modifies
the tree induction procedure to produce trees with as low correlation as possible.
1. for $b=1,\ldots, B$ do
Step8: As with Bagging, Random Forest Classifier and Regressor accept similar parameters
Step9: <hr/>
Boosting
Classification
The underlying idea of boosting is to combine a collection of weak predictors,
into one strong powerful committee model. Most commonly a dictionary of nonlinear
base predictors, like decision trees (regression/classification), is used as weak
predictors in boosting.
Consider the following classification problem
Step10: Common parameters
Step11: <hr/>
Gradient boosting
In certain circumstances in order to minimize a convex twice-differentiable
function $f
Step12: Both Gradient boosting ensembles in scikit accept the following parameters
Step13: Large ensemble, small learning rate
Step14: <hr/>
XGBoost
Briefly, XGBoost is a highly streamlined open-source gradient boosting library, which
supports many useful loss functions and uses second order loss approximation both
to increase the ensemble accuracy and speed of convergence
Step15: Scikit-Learn interface
Step16: Internally XGBoost relies heavily on a custom dataset format DMatrix.
The interface, which is exposed into python has three capabilities
Step17: The same XGBoost classifier as in the Scikit-learn example.
Step18: Both the sklearn-compatible and basic python interfaces have the similar
parameters. Except they are passed slightly differently.
Gradient boosting parameters
Step20: <hr/>
Other methods
Stacking
Every ensemble method comprises essentially two phases
Step21: Examples
Combining base classifiers using Logistic Regression is a typical example of how
first level features $x\in \mathcal{X}$ are transformed by $\hat{f}m
Step22: Define the first-level predictors.
Step23: Create meta features for the train set
Step24: Now using the whole train, create test set meta features
Step25: The prediction error of each individual classifier (trained on the whole train dataset).
Step26: Now using $10$-fold cross validation on the train dataset $(\hat{p}i, y_i){i=1}^n$,
find the best $L_1$ regularization coefficient $C$.
Step27: The weights chosen by logistic regression are
Step28: Let's see how well the final model works on the test set
Step29: and the best model
Step30: <hr/>
Voting Classifier
This is a very basic method of constructing an aggregated classifier from a finite dictionary.
Let $\mathcal{V}$ be the set of classifiers (voters), with each classifier's class probabilities
given by $\hat{f}_v
Step31: VotingClassifier options
Step32: Let's use LASSO Least Angle Regression (LARS, HTF p. 73) to select weights of the
base classifiers.
Step33: Show the RMSE of lars, and the error rates of the base classifiers.
Step34: Let's see if there is improvement.
Step35: Indeed, this illustrates that clever selection of classifier weights might be profitable.
Another example of the Voting Classifier (from the Scikit-learn guide)
Step36: Get a train set, a test set, and a $2$-d mesh for plotting.
Step37: Make a dictionary of simple classifiers
Step38: Show the decision boundary.
Step39: Let's see if this simple soft-voting ensemble improved the test error.
Step40: <hr/>
Example from HTF pp. 339 - 340
Now let's inspect the test error as a function of the size of the ensemble
Step41: Get the prediction as a function of the members in the ensemble.
Step42: Plot the test error. | Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.utils import check_random_state
Explanation: A half-baked tutorial on ensemble methods
<center>by Ivan Nazarov</center>
This tutorial covers both introductory level theory underpinning
each ensemble method, as well as the tools available in Scikit-Learn
and XGBoost. We also cover the topic of Stacking.
Materials
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer New York, 2013.
Bagging: ch. 8.7;
Random Forests: ch. 15;
Boosting: ch. 10;
Stacking: ch 8.8;
Ensemble methods: ch. 16;
A. J. Izenman. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. Springer Texts in Statistics. Springer New York, 2009.
Committee methods: ch. 14, pp. 506-510, 530-532;
Import the necessary modules and fix the RNG.
End of explanation
def htf_p339(n_samples=2000, p=10, random_state=None):
random_state=check_random_state(random_state)
## Inputs
X = random_state.normal(size=(n_samples, max(10, p)))
## Response: \chi^2_10 0.5-prob outliers
y = (np.sum(X[:, :10]**2, axis=1) > 9.34).astype(int).reshape(-1)
return X, y
Explanation: Toy data from HTF, p. 339
End of explanation
random_state = np.random.RandomState(0xC01DC0DE)
Explanation: Fix the RNG
End of explanation
X_train, y_train = htf_p339(2000, 10, random_state)
X_test, y_test = htf_p339(10000, 10, random_state)
X_valid_1, y_valid_1 = htf_p339(2000, 10, random_state)
X_valid_2, y_valid_2 = htf_p339(2000, 10, random_state)
Explanation: Generate four samples
End of explanation
from sklearn.tree import DecisionTreeClassifier
clf1_ = DecisionTreeClassifier(max_depth=1,
random_state=random_state).fit(X_train, y_train)
clf2_ = DecisionTreeClassifier(max_depth=3,
random_state=random_state).fit(X_train, y_train)
clf3_ = DecisionTreeClassifier(max_depth=7,
random_state=random_state).fit(X_train, y_train)
clf4_ = DecisionTreeClassifier(max_depth=None,
random_state=random_state).fit(X_train, y_train)
print "Decision tree (1 levels) error:", 1 - clf1_.score(X_test, y_test)
print "Decision tree (3 levels) error:", 1 - clf2_.score(X_test, y_test)
print "Decision tree (7 levels) error:", 1 - clf3_.score(X_test, y_test)
print "Decision tree (max levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: <hr/>
Ensemble methods
In general any ensemble methods can be broken down into the following two
stages, possibly overlapping:
1. Populate a dictionary of base learners;
2. Combine them to get a composite predictor.
Many ML estimators can be considered ensemble methods:
1. Regression is a linear ensemble of basis functions: predictors $x\in \mathbb{R}^{p\times 1}$;
2. Any model with additive structure, like regression/classification trees;
3. Feedforward Neural network is a bunch of layers of nonlinear predictors stacked one atop
the other, in a specific DAG-like manner;
Trees
A regression tree is a piecewise constant function $T:\mathcal{X} \mapsto \mathbb{R}$
having the following expression
$$ T(x) = \sum_{j=1}^J w_j 1_{R_j}(x) \,, $$
where $(R_j)_{j=1}^J$, $J\geq 1$, is a tree-partition of the input space,
and $(w_j)_{j=1}^J$ are estimated values at terminal nodes.
In a multiclass problem, a classification tree is a composition of a majority
voting decision function
$$ \mathtt{MAJ}(y) = \mathop{\text{argmax}}_{k=1\,\ldots, K} y_k \,, $$
with a scoring function $T:\mathcal{X} \mapsto \mathbb{R}^K$ of similar structure
as in the regression case
$$ T(x) = \sum_{j=1}^J w_j 1_{R_j}(x) \,, $$
where $(w_j)_{j=1}^J\in\mathbb{R}^K$ are vectors of class likelihoods (probabilities)
at the terminal nodes.
The tree-partition $(R_j)_{j=1}^J$ and node values $(w_j)_{j=1}^J$ result from running
a variant of the standard greedy top-down tree-induction algorithm (CART, C4.5, etc.).
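A tiny sketch of this piecewise-constant form for a one-dimensional input (the partition and weights here are made up for illustration):
```python
# T(x) = sum_j w_j * 1_{R_j}(x) for a toy 1-d partition (-inf,0], (0,1], (1,inf).
import numpy as np

def tree_predict(x, cuts=(0.0, 1.0), w=(-1.0, 0.5, 2.0)):
    j = np.searchsorted(cuts, x)   # index of the region containing x
    return np.asarray(w)[j]

tree_predict(np.array([-3.0, 0.5, 7.0]))   # -> array([-1. ,  0.5,  2. ])
```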
End of explanation
from sklearn.ensemble import BaggingClassifier, BaggingRegressor
Explanation: Bagging
Bagging is a meta-algorithm that aims at constructing an estimator by averaging
many noisy, but approximately unbiased models. The general idea is that averaging
a set of unbiased estimates yields an estimate with much reduced variance
(provided the base estimates are uncorrelated).
Bagging works poorly on models that depend linearly on the data (like linear
regression), and performs best on nonlinear base estimators (like trees). In
other terms, bagging succeeds in building a better combined estimator if the
base estimator is unstable. Indeed, if the learning procedure is stable, and
random perturbations of the train dataset do not affect it by much, the bagging
estimator will not differ much from a single predictor, and may even weaken its
performance somewhat.
Bootstrapping
Consider a train sample $Z = (X, y) = (x_i, y_i)_{i=1}^n \in \mathcal{X}\times \mathcal{Y}$,
sampled from a distribution $P$.
A bootstrap sample $Z^* = (z^*_i)_{i=1}^n$ is a subsample of $Z = (z_j)_{j=1}^n$ with
each element drawn with replacement from $Z$. More technically, a bootstrap sample
of size $l$ is a sample from the empirical distribution of the training data $Z$, denoted
by $\hat{P}$. So $Z^*\sim \hat{P}^l$ means that $(z^*_i)_{i=1}^l \sim \hat{P}$ iid, or,
similarly,
$$ z^*_i = \bigl\{ z_j \text{ w. prob. } \frac{1}{n}\,,\, j=1, \ldots, n \bigr.\,. $$
An interesting property of a bootstrapped sample is that on average $36.79\%$ of
the original sample are left out of each $Z^{*b}$. Indeed, the probability that a
given sample is present in $Z^*$ is
$$ 1 - \bigl(1 - \frac{1}{n}\bigr)^n = 1 - e^{-1} + o(1) \approx 63.21\%\,. $$
This means that the observations not selected for the $b$-th bootstrap sample $Z^{*b}$,
denoted by $Z\setminus Z^{*b}$, $b=1,\ldots,B$, can be used as an independent test set.
The out-of-bag sample, $Z\setminus Z^{*b}$, can be used both for estimating the generalization
error and for defining an OOB-predictor.
For a given collection of bootstrap samples $(Z^{*b})_{b=1}^B$ define the set of samples
the $i$-th observation does not belong to as $\Gamma_i = \{b=1,\ldots, B\,:\, z_i \notin Z^{*b} \}$,
$i=1,\ldots, n$. For a fixed observation $i$ the set $\Gamma_i$ is empty, meaning that
$z_i$ is never out-of-bag, occurs with probability $\bigl(1 - (1-n^{-1})^n\bigr)^B
\approx (1-e^{-1})^B$, which is negligible for $B \geq 65$.
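The $1-e^{-1}$ figure is easy to verify empirically; a small simulation sketch (plain NumPy, illustration only):
```python
# Quick empirical check of the ~36.8% out-of-bag share.
import numpy as np

rng = np.random.RandomState(0)
n, B = 2000, 200
oob_share = np.mean([
    len(np.setdiff1d(np.arange(n), rng.randint(0, n, size=n))) / float(n)
    for _ in range(B)])
print(oob_share)   # close to 1/e ~ 0.3679
```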
Regression
Let $\mathcal{A}$ be a learning algorithm, taking a learning sample, that learns
regression models $\hat{f}:\mathcal{X} \mapsto \mathbb{R}$, like a Regression Tree,
$k$-NN, a multi-layer neural network, etc. The bagged regression estimator is constructed
as follows:
1. Draw $B$ independent bootstrap samples $(Z^{*b})_{b=1}^B$;
2. On each bootstrap sample $Z^{*b}$ learn an estimator $\hat{f}^{*b} = \hat{f}^{*b}(\cdot; Z^{*b})
= \mathcal{A}(Z^{*b})(\cdot)$;
3. Construct the bagged estimator:
$$ \hat{f}^{\text{bag}}_B(x) = B^{-1} \sum_{b=1}^B \hat{f}^{*b}(x) \,. $$
The bagged estimator $\hat{f}^{\text{bag}}_B$ is different from the original-sample
estimator $\hat{f}=\hat{f}(\cdot; Z)$ if the ML algorithm is nonlinear on the data,
or adaptive. The bagged estimator $\hat{f}^{\text{bag}}_B$ is a Monte-Carlo approximation
of the ideal bagging estimator, given by the function
$$ \hat{f}^{\text{bag}}(x) = \mathop{\mathbb{E}}\nolimits_{Z^*} \hat{f}^*(x; Z^*) \,.$$
By the law of large numbers we have $\hat{f}^{\text{bag}}_B \to \hat{f}^{\text{bag}}$
with probability one (over the empirical distribution $\hat{P}$) as $B\to \infty$.
OOB samples can be used to construct the OOB-predictor -- an estimator, defined only
for the training samples:
$$\hat{f}^{\text{oob}}_B (x_i) = \frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} \hat{f}^{*b}(x_i) \,, $$
and based on it the OOB mean squared error:
$$ \text{oob-MSE} = n^{-1} \sum_{i=1}^n \bigl(y_i - \hat{f}^{\text{oob}}_B(x_i)\bigr)^2 \,, $$
where observations with $\Gamma_i=\emptyset$ are omitted.
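Scikit-learn exposes this OOB machinery directly; a minimal sketch on synthetic data (regression shown for brevity, not the toy classification data used elsewhere in this notebook):
```python
# Sketch: OOB estimates come for free from a bagged ensemble.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=500)
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                       oob_score=True, random_state=rng).fit(X, y)
print(bag.oob_score_)                               # R^2 estimated on OOB samples
oob_mse = np.mean((y - bag.oob_prediction_) ** 2)   # the oob-MSE defined above
```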
Classification
In case of classification the bagging estimator is constructed similarly, but there
are important caveats. In this case the ML algorithm learns a class-score function
$\hat{f}:\mathcal{X} \mapsto \mathbb{R}^K$, and then the class label is predicted
by $\mathtt{MAJ}$ (majority voting) on $\hat{f}(x)$.
The majority vote over $K$ candidates with weights $(w_k)_{k=1}^K\in \mathbb{R}$ is defined as
$$ \mathtt{MAJ}(w) = \mathop{\text{argmax}}_{k=1\,\ldots, K} w_k \,. $$
One option is to define the bagged estimator as
$$ \hat{g}^{\text{bag}}_B(x)
= \mathtt{MAJ}\Bigl( B^{-1}\sum_{b=1}^B e_{k^{*b}(x)} \Bigr)
\,, $$
where $e_k$ is the $k$-th unit vector in $\{0,1\}^{K\times 1}$, and
$k^{*b}(x)=\mathtt{MAJ}\bigl(\hat{f}^{*b}(x)\bigr)$.
Basically, this ensemble classifies according to voting proportions of the population
of bootstrapped classifiers. However, when most classifiers within the population
classify some class correctly, then its voting proportion will overestimate the
class probability.
A better option, especially for well-calibrated classifiers, is to use their scores directly:
$$ \hat{g}^{\text{bag}}_B(x)
= \mathtt{MAJ}\bigl( B^{-1}\sum_{b=1}^B \hat{f}^{*b}(x) \bigr)
\,. $$
One can construct an OOB-classifier (or generally an OOB-predictor) using the following
idea:
$$ \hat{g}^{\text{oob}}_B(x_i)
= \mathtt{MAJ}\Bigl(
\frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} e_{k^{*b}(x_i)}
\Bigr)\,, $$
or
$$ \hat{g}^{\text{oob}}_B(x_i)
= \mathtt{MAJ}\Bigl(
\frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} \hat{f}^{*b}(x_i)
\Bigr)\,. $$
Obviously, this classifier is defined only for the observed samples data, and for only those
examples, for which $\Gamma_i\neq\emptyset$.
Bagging a good classifier (one with misclassification rate less than $0.5$) can
improve its accuracy, while bagging a poor one (with higher than $0.5$ error rate)
can seriously degrade predictive accuracy.
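The two aggregation options are easy to compare in code; a short sketch using the toy X_train/y_train from above (illustration only, names of the intermediate variables are made up):
```python
# Sketch: voting proportions vs. averaged class scores for a bagged classifier.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bag = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                        n_estimators=50, random_state=0).fit(X_train, y_train)
votes = np.mean([est.predict(X_test) for est in bag.estimators_], axis=0)  # vote proportions
proba = bag.predict_proba(X_test)[:, 1]                                    # averaged scores
```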
Usage
End of explanation
clf1_ = BaggingClassifier(n_estimators=10,
base_estimator=DecisionTreeClassifier(max_depth=3),
random_state=random_state).fit(X_train, y_train)
clf2_ = BaggingClassifier(n_estimators=10,
base_estimator=DecisionTreeClassifier(max_depth=None),
random_state=random_state).fit(X_train, y_train)
clf3_ = BaggingClassifier(n_estimators=100,
base_estimator=DecisionTreeClassifier(max_depth=3),
random_state=random_state).fit(X_train, y_train)
clf4_ = BaggingClassifier(n_estimators=100,
base_estimator=DecisionTreeClassifier(max_depth=None),
random_state=random_state).fit(X_train, y_train)
print "Bagged (10) decision tree (3 levels) error:", 1 - clf1_.score(X_test, y_test)
print "Bagged (10) decision tree (max levels) error:", 1 - clf2_.score(X_test, y_test)
print "Bagged (100) decision tree (3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "Bagged (100) decision tree (max levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: Both Bagging Classifier and Regressor have similar parameters:
- n_estimators -- the number of estimators in the ensemble;
- base_estimator -- the base estimator from which the bagged ensemble is
built;
- max_samples -- the fraction of samples to be used to train each
individual base estimator. Choosing max_samples < 1.0 leads to a reduction
of variance and an increase in bias.
- max_features -- The number of features to draw from X to train each
base estimator;
- bootstrap -- determines whether samples are drawn with replacement;
- bootstrap_features -- determines whether features are drawn with replacement;
- oob_score -- determines whether to use out-of-bag samples to estimate
the generalization error;
Example
End of explanation
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
Explanation: <hr/>
Random Forest
Essentially, a random forest is a bagging ensemble constructed from a large collection
of decorrelated regression/decision trees. The algorithm specifically modifies
the tree induction procedure to produce trees with as low correlation as possible.
1. for $b=1,\ldots, B$ do:
   1. Draw a bootstrap sample $Z^{*b} = (z^{*b}_i)_{i=1}^P$, of size $P = \lfloor \eta n\rfloor$ from $Z$;
   2. Grow a tree $T^{*b}$ in a specialized manner: the greedy recursive algorithm
is the same, but each time split candidates are chosen from a random subset of
features, and the tree is grown until a minimum node size is reached;
2. Take the tree ensemble $(\hat{T}^{*b})_{b=1}^B$, and return the bagged estimator;
Trees benefit the most from bagging and random forest ensembles due to their high nonlinearity.
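A short sketch of the pieces that come for free with the forest, using the toy X_train/y_train from above (illustrative settings):
```python
# Sketch: OOB error estimate and impurity-based feature importances.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            oob_score=True, random_state=0).fit(X_train, y_train)
print(1 - rf.oob_score_)          # OOB estimate of the misclassification error
print(rf.feature_importances_)    # impurity-based variable importances
```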
Usage
End of explanation
clf1_ = RandomForestClassifier(n_estimators=10, max_depth=3,
random_state=random_state).fit(X_train, y_train)
clf2_ = RandomForestClassifier(n_estimators=100, max_depth=3,
random_state=random_state).fit(X_train, y_train)
clf3_ = RandomForestClassifier(n_estimators=10, max_depth=None,
random_state=random_state).fit(X_train, y_train)
clf4_ = RandomForestClassifier(n_estimators=100, max_depth=None,
random_state=random_state).fit(X_train, y_train)
print "Random Forest (10, 3 levels) error:", 1 - clf1_.score(X_test, y_test)
print "Random Forest (100, 3 levels) error:", 1 - clf2_.score(X_test, y_test)
print "Random Forest (10, max levels) error:", 1 - clf3_.score(X_test, y_test)
print "Random Forest (100, max levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: As with Bagging, Random Forest Classifier and Regressor accept similar parameters:
- criterion -- the function to measure the quality of a split. Supported criteria
are:
* "gini" -- Gini impurity (classification only);
* "entropy" -- the information gain (classification only);
* "mse" -- mean squared error (regression only);
- max_features -- The number of features to consider when looking for the
best split: sqrt, log2 and share in $(0,1]$ are accepted (choosing max_features < n_features
leads to a reduction of variance and an increase in bias);
- max_depth -- maximum depth of the individual regression tree estimators
(the maximum depth limits the number of nodes in the tree, the best value depends
on the interaction of the input variables);
- min_samples_split -- The minimum number of samples required to split an
internal node;
- min_samples_leaf -- The minimum number of samples required to be at a
leaf node;
- min_weight_fraction_leaf -- The minimum weighted fraction of the input
samples required to be at a leaf node;
- max_leaf_nodes -- Grow trees with max_leaf_nodes in best-first
fashion, determined by the relative reduction in impurity;
- bootstrap -- determines whether samples are drawn with replacement;
- oob_score -- determines whether to use out-of-bag samples to estimate
the generalization error.
Note that in Scikit-learn the bootstrap sample size is the same as the original
sample ($\eta=1$).
RandomForestClassifier also handles imbalanced classification problems via
the class_weight parameter:
class_weight -- weights associated with classes given in the form of a
dictionary with elements {class_label: weight}, or a rebalancing mode:
"balanced" -- mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data;
"balanced__subsample" -- mode is the same as "balanced", except that
weights are re-computed based on the bootstrap sample for every tree grown.
These weights will be used to adjust the sample weight (passed
through the fit method).
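A minimal sketch of both ways to pass class weights (toy X_train/y_train from above; the explicit weights are made up):
```python
# Sketch: rebalancing an imbalanced problem via class_weight.
from sklearn.ensemble import RandomForestClassifier

rf_bal = RandomForestClassifier(n_estimators=100,
                                class_weight="balanced_subsample",
                                random_state=0).fit(X_train, y_train)
# or pass explicit weights, e.g. class_weight={0: 1.0, 1: 5.0}
```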
Example
End of explanation
from sklearn.ensemble import AdaBoostClassifier, AdaBoostRegressor
Explanation: <hr/>
Boosting
Classification
The underlying idea of boosting is to combine a collection of weak predictors,
into one strong powerful committee model. Most commonly a dictionary of nonlinear
base predictors, like decision trees (regression/classification), is used as weak
predictors in boosting.
Consider the following classification problem: learn a hypothesis (algorithm)
$h:\mathcal{X}\mapsto \{-1,+1\}$ that is able to generalize well beyond the given
learning sample $Z = (X, y) = (x_i, y_i)_{i=1}^n \in \mathcal{X}\times \{-1, +1\}$.
The empirical risk is the sample average loss
$$ \hat{\mathcal{R}}_Z(h(\cdot))
= n^{-1} \sum_{i=1}^n L(h(x_i), y_i)
= \mathbb{E}_{(x,y)\sim Z} L(h(x), y) \,, $$
where $\mathbb{E}_Z$ denotes the expectation over the empirical measure induced
by $Z$.
Theoretically, it would be great to learn such a classifier $g:\mathcal{X}\mapsto\{-1,+1\}$,
that minimizes the theoretical risk
$$ \mathcal{R}(h(\cdot)) = \mathbb{E}_{(x, y)\sim D} 1_{\{y\neq h(x)\}} \,, $$
where $D$ is the true unknown distribution on $\mathcal{X} \times \{-1, +1\}$ of
the data. The ideal classifier is given by the Bayes classifier $g^*(x) = \mathbb{P}_D(y=1|X=x)$.
However, this functional is unavailable in real life, and thus we have to get
by minimizing the empirical risk, which is known to be an approximation of the
theoretical risk due to the Law of Large Numbers. We do this, hoping that
$$ \hat{h} \in
\mathop{\text{argmin}}_{g\in \mathcal{F}}
\hat{\mathcal{R}}_Z(g(\cdot)) \,, $$
also more-or-less minimizes the theoretical risk.
Furthermore for a general class of hypotheses $h:\mathcal{X}\mapsto {-1,+1}$, the
empirical risk minimization problem cannot be solved efficiently due to non-convexity
of the objective function.
FSAM
Forward Stagewise Additive Modelling is a general greedy approach
to modelling additive ensembles (generalized additive models). The basic idea
of this approach is to construct a suboptimal model incrementally in a greedy fashion.
The goal is to minimize $\sum_{i=1}^n L(y_i, f(x_i)) + \Omega(f)$ over some class
$f\in \mathcal{F}$, where $\Omega(\cdot)$ is an additive complexity regularizer.
Algorithm:
1. set $F_0 = 0$;
2. for $k = 1,\ldots, K$ do:
1. using some efficient method find at least a good approximation to the following:
$$ f_k
\leftarrow \mathop{\mathtt{argmin}}\limits_{f\in \mathcal{F}}
\sum_{i=1}^n L\bigl( y_i, F_{k-1}(x_i) + f(x_i)\bigr)
+ \Omega(F_{k-1}) + \Omega(f)
\,; $$
2. set $ F_k = F_{k-1} + f_k$;
3. Return $F_K$.
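For squared loss and no penalty term $\Omega$, step 2.A reduces to fitting the current residuals; a minimal sketch of that special case with regression stumps, reusing the toy X_train/y_train from above (illustration only):
```python
# Sketch of FSAM with squared loss: each stage fits the residuals y - F and is added.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fsam_l2(X, y, K=50):
    F = np.zeros(len(y))
    members = []
    for _ in range(K):
        f = DecisionTreeRegressor(max_depth=1).fit(X, y - F)  # approximate argmin step
        F += f.predict(X)
        members.append(f)
    return members

ensemble = fsam_l2(X_train, 2.0 * y_train - 1.0)   # regress on +/-1 labels for illustration
```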
AdaBoost
The AdaBoost algorithm is based on the Forward-Stagewise Additive Modelling
approach, which implements a greedy strategy of constructing an additive model, such as
an ensemble (or even a tree), from a rich dictionary of basis functions. In classification,
it is a particular example of a convex relaxation of the empirical risk minimization problem:
AdaBoost dominates the $0-1$ loss $(y, p)\mapsto 1_{y p < 0}$ with exp-loss $(y,p)\mapsto e^{-yp}$
and minimizes a convex upper bound of the classification error.
AdaBoost.M1
initialize $\omega_{1i} \leftarrow \frac{1}{n}$, $i=1\ldots, n$;
for $m=1,\ldots, M$ do:
fit a classifier $\hat{g}_m$ to $(X, y)$ with sample weights $(\omega_{mi})_{i=1}^n$;
get the misclassification error $\epsilon_m = W_m^{-1} \sum_{i\,:\,y_i\neq \hat{g}_m(x_i)} \omega_{mi}$,
for $W_m = \sum_{i=1}^n \omega_{mi}$;
compute the log-odds ratio $\alpha_m = \log \frac{1-\epsilon_m}{\epsilon_m}$;
update the weights:
$\omega_{m+1,i} \leftarrow \omega_{mi} \exp\bigl( \alpha_m 1_{\{i\,:\,y_i\neq \hat{g}_m(x_i)\}} \bigr)$;
Output the ensemble $\hat{g} = \mathop{\text{sign}}\bigl\{\sum_{m=1}^M \alpha_m \hat{g}_m\bigr\}$;
The AdaBoost.M1 algorithm employs an adversarial teaching approach to strengthen
the ensemble. As is visible from the algorithm, the teacher tries to maximize the
classification error of the learner by amplifying the weights of the difficult to
classify examples.
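The algorithm above fits in a few lines of NumPy; a compact sketch with decision stumps, using the toy X_train/y_train from above (this is an illustration, not scikit-learn's implementation):
```python
# Compact sketch of AdaBoost.M1 with decision stumps; labels are expected in {-1, +1}.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_m1(X, y, M=100):
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(M):
        g = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = g.predict(X) != y
        eps = np.sum(w[miss]) / np.sum(w)
        if eps <= 0 or eps >= 0.5:          # stop on a perfect or useless weak learner
            break
        alpha = np.log((1 - eps) / eps)
        w *= np.exp(alpha * miss)           # amplify the hard examples
        stumps.append(g)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    return np.sign(sum(a * g.predict(X) for g, a in zip(stumps, alphas)))

stumps, alphas = adaboost_m1(X_train, 2 * y_train - 1, M=100)
```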
The size of the ensemble $M$ serves as a regularization parameter, since
the greater the $M$, the more boosting overfits. An optimal $M$ can be
chosen by cross-validation (preferably on a single common validation set).
A recent development, called DeepBoost (Mohri et al., 2014),
proposes a new ensemble learning algorithm, that is similar in spirit to
AdaBoost. Its key feature is that the algorithm incorporates a complexity
penalty for convex combinations of models into the convex relaxation of
the loss criterion. This enables selection of better hypotheses that minimize
the upper bound on the theoretical risk.
Usage
End of explanation
clf1_ = AdaBoostClassifier(n_estimators=10,
base_estimator=DecisionTreeClassifier(max_depth=1),
random_state=random_state).fit(X_train, y_train)
clf2_ = AdaBoostClassifier(n_estimators=100,
base_estimator=DecisionTreeClassifier(max_depth=1),
random_state=random_state).fit(X_train, y_train)
clf3_ = AdaBoostClassifier(n_estimators=10,
base_estimator=DecisionTreeClassifier(max_depth=3),
random_state=random_state).fit(X_train, y_train)
clf4_ = AdaBoostClassifier(n_estimators=100,
base_estimator=DecisionTreeClassifier(max_depth=3),
random_state=random_state).fit(X_train, y_train)
print "AdaBoost.M1 (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "AdaBoost.M1 (100, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "AdaBoost.M1 (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "AdaBoost.M1 (100, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: Common parameters:
- n_estimators -- the maximum number of estimators at which boosting is
terminated (in case of perfect fit, the learning procedure is stopped early);
- base_estimator -- the base estimator, which supports sample weighting,
from which the boosted ensemble is built;
- learning_rate -- learning rate shrinks the contribution of each classifier
by learning_rate.
AdaBoostClassifier only:
- algorithm -- the AdaBoost version to use:
* "SAMME.R" -- the SAMME.R real boosting algorithm;
* "SAMME" -- the SAMME (M1) discrete boosting algorithm;
The SAMME.R algorithm typically converges faster than SAMME,
achieving a lower test error with fewer boosting iterations.
AdaBoostRegressor only:
- loss -- the loss function to use when updating the weights after each
boosting iteration:
* "linear" -- absolute loss $L(y, p) = |y-p|$;
* "square" -- squared loss $L(y, p) = |y-p|^2$;
* "exponential" -- Exponential loss $L(y, p) = 1-e^{-|y-p|}$.
Examples
End of explanation
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
Explanation: <hr/>
Gradient boosting
In certain circumstances in order to minimize a convex twice-differentiable
function $f:\mathbb{R}^p \mapsto \mathbb{R}$ one uses Newton-Raphson iterative
procedure, which repeats until convergence this update step:
$$ x_{m+1} \leftarrow x_m - \bigl(\nabla^2 f(x_m)\bigr)^{-1} \nabla f(x_m) \,, $$
where $\nabla^2 f(x_m)$ is the hessian of $f$ at $x_m$ and $\nabla f(x_m)$ is its
gradient.
In a more general setting, if the function is not twice differentiable, or if
the hessian is expensive to compute, then one resorts to a gradient descent
procedure, which moves in the direction of the steepest descent and updates
according to
$$ x_{m+1} \leftarrow x_m - \eta \nabla f(x_m) \,,$$
for some step $\eta > 0$.
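A tiny toy example of this update rule (purely illustrative) on a quadratic:
import numpy as np

def grad_descent(grad, x0, eta=0.1, n_steps=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - eta * grad(x)     # x_{m+1} <- x_m - eta * grad f(x_m)
    return x

# minimize f(x) = ||x - 3||^2, whose gradient is 2 (x - 3)
x_star = grad_descent(lambda x: 2 * (x - 3.0), x0=[0.0, 0.0])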
Gradient Boosting is, to a certain extent, a gradient descent procedure aimed at
minimizing an expected loss functional $\mathcal{L}: \mathcal{F}\mapsto \mathbb{R}$
on some function space $\mathcal{F} \subset \mathbb{R}^{\mathcal{X}}$. In particular,
if the underlying distribution of the data were known, Gradient Boosting would
attempt to find a minimizer $x\mapsto \hat{f}(x)$ such that for all $x\in \mathcal{X}$
$$ \hat{f}(x)
= \mathop{\text{argmin}}_{f\in\mathcal{F}}
\mathbb{E}_{y \sim P|x} L(y, f(x)) \,. $$
At each iteration it would update the current estimate of the minimizer $\hat{f}_m$
in the direction of the steepest descent towards $\hat{f}$:
$$ \hat{f}_{m+1} \leftarrow \hat{f}_m - \rho \hat{g}_m \,, $$
where $\hat{g}_m \in \mathcal{F}$ is given by
$$ \hat{g}_m(x) = \biggl. \frac{\partial}{\partial f(x)}
\Bigl( \mathbb{E}_{y \sim P|x} L\bigl(y, f(x)\bigr) \Bigr)
\biggr\rvert_{f=\hat{f}_m}
= \biggl.
\mathbb{E}_{y \sim P|x} \frac{\partial}{\partial f(x)}L\bigl(y, f(x)\bigr)
\biggr\rvert_{f=\hat{f}_m} \,, $$
(under some regularity conditions it is possible to interchange the expectation
and differentiation operations). In turn $\rho$ is determined by
$$ \rho = \mathop{\text{argmin}}_\rho \mathbb{E}_{(x,y) \sim P} L(y, \hat{f}_m(x) - \rho \hat{g}_m(x)) \,. $$
Since in practice the expectaions are not known, one approximates them with their
empirical counterparts, which makes the gradient undefined outside the observed
sample points. That is why one needs a class of basis functions, which can
generalize the gradient from a point to its neighbourhood.
Gradient Boosting procedure
1. Initialize the ensemble with $\hat{f}_0 \leftarrow \mathop{\text{argmin}}_\gamma \sum_{i=1}^n L(y_i, \gamma)$;
2. for $m=1,\ldots, M$ do:
1. Gradient approximation: Compute the current sample descent direction (negative
gradient) using the current ensmeble:
$$ r_{mi} = \biggl.
- \frac{\partial}{\partial f(x_i)} L\bigl(y_i, f(x_i)\bigr)
\biggr\rvert_{f=f_{m-1}} \,, $$
this can be thought of as a finte-dimensional approximation of a functional
gradient $\delta \mathcal{L}$ of the loss functional $\mathcal{L}$;
    2. Fit an MSE minimizing parametric basis function $h(x;\theta)$ to the approximation
of the gradient $(r_{mi})_{i=1}^n$:
$$ (\theta_m, \beta_m)
\leftarrow \mathop{\text{argmin}}_{\theta, \beta}
\sum_{i=1}^n \bigl(r_{mi} - \beta h(x_i;\theta) \bigr)^2\,; $$
basically we hope that $h(\cdot;\theta)$ approximates the functional gradient well
enough and extrapolates beyond the point estimates to their immediate neighbourhoods;
    3. Line search: determine the optimal step in the direction of the functional
gradient that minimizes the loss functional:
$$ \gamma_m \leftarrow \mathop{\text{argmin}}_\gamma
\sum_{i=1}^n L\bigl(y_i, f_{m-1}(x_i) + \gamma h(x_i;\theta_m)\bigr)\,;$$
4. Update the ensemble $f_m = f_{m-1} + \eta \, \gamma_m h(\cdot;\theta_m)$;
3. Return $\hat{f}(x) = f_M(x)$;
Here $\eta > 0$ is the learning rate.
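A bare-bones version of this procedure for binary classification with log-loss (a sketch: regression trees approximate the negative gradient and the line search is done on a coarse grid rather than solved exactly):
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y, F):
    return np.mean(np.log(1.0 + np.exp(-y * F)))    # y in {-1, +1}, F raw scores

def gb_fit(X, y, n_stages=50, eta=0.1, max_depth=3):
    F = np.zeros(X.shape[0])
    trees, gammas = [], []
    for _ in range(n_stages):
        residual = y * sigmoid(-y * F)              # negative gradient of the log-loss
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        hx = h.predict(X)
        # crude grid-based line search instead of the exact argmin
        gamma = min(np.linspace(0.1, 2.0, 20), key=lambda g: log_loss(y, F + g * hx))
        F += eta * gamma * hx
        trees.append(h)
        gammas.append(gamma)
    return trees, gammas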
Gradient Boosted Regression Trees
Gradient Boost algorithm uses basis functions $h(\cdot; \theta)$ from some class
to approximate the gradient. For example, one can use regression splines, or more
generally fit a kernel ridge regression for gradient interpolation, or use regression
trees. Regression trees do not assume a predetermined parametric form, and instead
are constructed according to information derived from the data.
Algorithm
With a given tree-partition structure $(R_j)_{j=1}^J$, it is really straightforward
to find the optimal estimates $(w_j)_{j=1}^J\in \mathbb{R}^J$.
Now finding an optimal partition $(R_j)_{j=1}^J$ is entirely different matter: exhaustive
search is out of question, so the algorithm to go is the greedy top-down recursive
partitioning procedure.
Boosted trees is an ensemble $\hat{f}(x) = \sum_{m=1}^M \hat{f}_m(x)$, with weights
incorporated in each base estimator.
GBRT
1. Initialize the ensemble with $\hat{f}_0 \leftarrow \mathop{\text{argmin}}_\gamma \sum_{i=1}^n L(y_i, \gamma)$;
2. for $m=1,\ldots, M$ do:
1. Compute the current sample descent direction (negative gradient) using the current ensmeble:
$$ r_{mi} = \biggl.
- \frac{\partial}{\partial f(x_i)} L\bigl(y_i, f(x_i)\bigr)
\biggr\rvert_{f=f_{m-1}} \,, $$
this is a finte-dimensional version of the first variation $\delta J$
of a functional $J:\mathbb{R}^{\mathcal{X}}\mapsto \mathbb{R}$ on some
function space;
    2. Fit an MSE minimizing regression tree $\hat{T}_m = \sum_{j=1}^J \beta_j 1_{R_{mj}}(x)$
to the current gradient $(r_{mi})_{i=1}^n$ and keep its partition structure;
basically, we want to generalize the point estimates of the variation to
some neighbourhood of each sample point (here the neighbourhoods are the tree
partitions);
    3. Line search: determine the optimal node-weights
$$w_{mj} \leftarrow \mathop{\text{argmin}}_w
\sum_{i\,:\,x_i\in R_{mj}} L(y_i, f_{m-1}(x_i) + w)\,;$$
4. Update the ensemble $f_m = f_{m-1} + \sum_{j=1}^J w_{mj} 1_{R_{mj}}$;
3. Return $\hat{f}(x) = f_M(x)$;
Usage
End of explanation
clf1_ = GradientBoostingClassifier(n_estimators=10,
max_depth=1, learning_rate=0.75,
random_state=random_state).fit(X_train, y_train)
clf2_ = GradientBoostingClassifier(n_estimators=100,
max_depth=1, learning_rate=0.75,
random_state=random_state).fit(X_train, y_train)
clf3_ = GradientBoostingClassifier(n_estimators=10,
max_depth=3, learning_rate=0.75,
random_state=random_state).fit(X_train, y_train)
clf4_ = GradientBoostingClassifier(n_estimators=100,
max_depth=3, learning_rate=0.75,
random_state=random_state).fit(X_train, y_train)
print "GBRT (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "GBRT (100, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "GBRT (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "GBRT (100, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: Both gradient boosting ensembles in scikit-learn accept the following parameters:
- loss -- loss function to be optimized:
* Classification:
* 'deviance' -- refers logistic regression with probabilistic outputs;
* 'exponential' -- gradient boosting recovers the AdaBoost algorithm;
* Regression:
* 'ls' -- refers to least squares regression;
* 'lad' -- (least absolute deviation) is a highly robust loss function solely
based on order information of the input variables;
* 'huber' -- is a combination of the two;
* 'quantile' -- allows quantile regression (use alpha to specify the
quantile);
- learning_rate -- learning rate shrinks the contribution of each tree
by learning_rate;
- n_estimators -- The number of boosting stages to perform. Gradient boosting
is fairly robust to over-fitting so a large number usually results in better performance;
- max_depth -- maximum depth of the individual regression tree estimators (the
maximum depth limits the number of nodes in the tree, the best value depends on the
interaction of the input variables);
- min_samples_split -- The minimum number of samples required to split an
internal node;
- min_samples_leaf -- The minimum number of samples required to be at a
leaf node;
- min_weight_fraction_leaf -- The minimum weighted fraction of the input
samples required to be at a leaf node;
- subsample -- The fraction of samples to be used for fitting the individual
base learners (choosing subsample < 1.0 results in Stochastic Gradient Boosting
and leads to a reduction of variance and an increase in bias);
- max_features -- The number of features to consider when looking for the
best split: sqrt, log2 and share in $(0,1]$ are accepted (choosing max_features < n_features
leads to a reduction of variance and an increase in bias);
- max_leaf_nodes -- Grow trees with max_leaf_nodes in best-first fashion,
with best nodes are defined as relative reduction in impurity;
- alpha -- the alpha-quantile of the huber loss function and the quantile
loss function (only if loss='huber' or loss='quantile').
Examples
High learning Rate, small ensemble
End of explanation
clf1_ = GradientBoostingClassifier(n_estimators=100,
max_depth=1, learning_rate=0.1,
random_state=random_state).fit(X_train, y_train)
clf2_ = GradientBoostingClassifier(n_estimators=1000,
max_depth=1, learning_rate=0.1,
random_state=random_state).fit(X_train, y_train)
clf3_ = GradientBoostingClassifier(n_estimators=100,
max_depth=3, learning_rate=0.1,
random_state=random_state).fit(X_train, y_train)
clf4_ = GradientBoostingClassifier(n_estimators=1000,
max_depth=3, learning_rate=0.1,
random_state=random_state).fit(X_train, y_train)
print "GBRT (100, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "GBRT (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "GBRT (100, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "GBRT (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
Explanation: Large ensemble, small learning rate
End of explanation
import xgboost as xg
seed = random_state.randint(0x7FFFFFFF)
Explanation: <hr/>
XGBoost
Briefly, XGBoost is a highly streamlined open-source gradient boosting library, which
supports many useful loss functions and uses a second order loss approximation both
to increase the ensemble accuracy and the speed of convergence:
1. a learning rate $\eta>0$ to regulate the convergence;
2. $l_1$ and $l_2$ regularization on the node-weights, for bias-variance tradeoff and sparsity;
3. cost-complexity pruning of the grown trees;
4. specialized regression and classification tree growth algorithms
with random projections, and bagging;
It is important to note that XGBoost implements binary trees, which does not restrict
the model in any way. However this adds the need for an extra preprocessing step for
categorical features. Specifically the binary structure requires that such features
be $0-1$ encoded, which is likely to use excessive volumes of memory, especially
when the set of possible categories is of the order of thousands.
In order to permit the use of arbitrary convex loss functions --
$$ \sum_{i=1}^n L( y_i, \hat{y}_i ) + \sum_{k=1}^K \Omega(f_k)
\rightarrow \mathop{\mathtt{min}}_{f_k\in\mathcal{M} } \,,$$
with prediction $\hat{y}_i = \sum_{k=1}^K f_k(x_i)$, the loss $L(y, \hat{y})$,
and the additive complexity regularizer $\Omega(\cdot)$ -- and still achieve
high performance during learning, the author of XGBoost implemented a clever
trick: he uses the general FSAM approach, but the minimization with respect to the
increment $f(\cdot)$ is performed on the second order Taylor series approximation
of the loss $L$ at $(x_i, y_i)$ and $F(\cdot)$. In particular the minimization
over $f(\cdot)$ is done on a quadratic approximation
$$ q_{y, x}
= L(y, F(x))
+ \frac{\partial L}{\partial \hat{y}}\bigg\vert_{(y,F(x))} f(x)
+ \frac{1}{2} \frac{\partial^2 L}{\partial \hat{y}^2}\bigg\vert_{(y,F(x))} f(x)^2
\,, $$
rather than $L(y, F(x) + f(x))$.
Since $\Omega(F_{k-1})$ and $L( y_i, F_{k-1}(x_i) )$ are unaffected by the
choice of $f\in\mathcal{F}$ at iteration $k$, the greedy step can be reduced
to:
$$ f_k
\leftarrow \mathop{\mathtt{argmin}}\limits_{f\in \mathcal{F}}
\sum_{i=1}^n g^{k-1}_i f(x_i) + \frac{1}{2} h^{k-1}_i f(x_i)^2 + \Omega(f)
\,, $$
where $g^{k-1}_i = \frac{\partial L(y, \hat{y})}{\partial \hat{y}}$ and
$h^{k-1}_i = \frac{\partial^2 L(y, \hat{y})}{\partial \hat{y}^2}$ evaluated
at $y=y_i$ and $\hat{y}=F_{k-1}(x_i)$.
The values $g^{k-1}_i$ and $h^{k-1}_i$ are the gradient and hessian statistics
on the $i$-th observation, respectively. These statistics have to be recomputed
at each stage for the new $\hat{y}$. The statistics $g^{0}_i$ and $h^{0}_i$ are
initialized to values of the first and second derivatives of $L(y_i, c)$ for some
fixed $c$ at each $i=1,\ldots, n$ ($c$ is the sample average in the case of
regression, or the log-odds of the class ratio).
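For instance, with the logistic loss $L(y, \hat{y}) = \log(1 + e^{-y\hat{y}})$ and the $\pm 1$ label convention used earlier in this notebook, the per-observation statistics can be computed as in this sketch:
import numpy as np

def grad_hess_logistic(y, y_hat):
    p = 1.0 / (1.0 + np.exp(-y * y_hat))
    g = -y * (1.0 - p)          # first derivative w.r.t. the raw score y_hat
    h = p * (1.0 - p)           # second derivative, always positive
    return g, h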
Optimizing the objective
XGBoost uses criteria derived from the objective function that permit automatic
tree-pruning. Consider some tree $f$ with structure
$$ f = \sum_{j=1}^J w_j 1_{R_j} \,,$$
where $(R_j)_{j=1}^J\subseteq \mathcal{X}$ is its partition and $w\in\mathbb{R}^J$
-- leaf predicted values. For this tree the complexity regularization is
$$ \Omega(f) = \gamma J + \frac{\lambda}{2} \sum_{j=1}^J w_j^2 + \alpha \sum_{j=1}^J \bigl|w_j\bigr| \,. $$
As one can see both excessively large leaf values and tree depths are
penalized.
stage $k\geq 1$
Using the map $x\mapsto j(x)$, which gives the unique leaf index $j=1,\ldots,J$ such
that $x\in R_j$, the objective function minimized at each stage $k\geq 1$ is given by
\begin{align}
\mathtt{Obj}_k(R, w)
&= \sum_{i=1}^n \bigl( g^{k-1}_i w_{j(x_i)} + \frac{1}{2} h^{k-1}_i w_{j(x_i)}^2 \bigr)
+ \frac{\lambda}{2} \sum_{j=1}^J w_j^2 + \alpha \sum_{j=1}^J \bigl|w_j\bigr| + \gamma J \\
&= \sum_{j=1}^J \bigl( w_j G_{k-1}(R_j) + \frac{1}{2} \bigl( H_{k-1}(R_j) + \lambda \bigr) w_j^2
+ \alpha \bigl|w_j\bigr| + \gamma \bigr) \,,
\end{align}
where for any $P\subseteq \mathcal{X}$, the values $G_{k-1}(P) = \sum_{i\,:\,x_i\in P} g^{k-1}_i$
and $H_{k-1}(P) = \sum_{i\,:\,x_i\in P} h^{k-1}_i$ are called the first and the
second order gradient scores respectively. When $P = R_j$ these are the $j$-th leaf
gradient statistics, which depend only on the ensemble $F_{k-1}$ and are constant
relative to the increment $f$.
The structural score of an XGBoost regression tree is the minimal value of the
objective function for a fixed partition structure $R = (R_j)_{j=1}^J$:
$$ \mathtt{Obj}^*(R)
= \min_{w_j} \mathtt{Obj}_k(R, w)
= \min_{w_j} \sum_{i=1}^n \bigl( g^{k-1}_i w_{j(x_i)} + \frac{1}{2} h^{k-1}_i w_{j(x_i)}^2 \bigr)
+ \frac{\lambda}{2} \sum_{j=1}^J w_j^2 + \alpha \sum_{j=1}^J \bigl|w_j\bigr| + \gamma J
\,. $$
This is not an intermediate value of the objective function, but rather its difference
against $\sum_{i=1}^n l(y_i, F_{k-1}(x_i))$.
It is worth noting, that since there are no cross interactions between scores $w_j$
for different leaves, this minimization problem equivalently reduces to $J$ univariate
optimization problems:
$$ w_j G_{k-1}(R_j) + \frac{1}{2} \bigl( H_{k-1}(R_j) + \lambda \bigr) w_j^2
+ \alpha \bigl|w_j\bigr| + \gamma \to \min_{w_j}\,,$$
for $j=1,\ldots, J$. Let's assume that $H_{k-1}(R_j) + \lambda > 0$, since otherwise
this problem has no solution.
The optimal leaf value $w_j^*$ in the general case is given by
$$ w^*_j = - \frac{1}{H_{k-1}(R_j) + \lambda}\begin{cases}
G_{k-1}(R_j) + \alpha & \text{ if } G_{k-1}(R_j) \leq -\alpha\\
0 & \text{ if } G_{k-1}(R_j) \in [-\alpha, \alpha]\\
G_{k-1}(R_j) - \alpha & \text{ if } G_{k-1}(R_j) \geq \alpha
\end{cases} \,. $$
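This soft-thresholding takes a couple of lines in numpy (an illustrative helper, not XGBoost's own code):
import numpy as np

def optimal_leaf_weight(G, H, reg_lambda=1.0, reg_alpha=0.0):
    # shrink G towards zero by alpha, then apply the usual -G / (H + lambda)
    G_shrunk = np.sign(G) * np.maximum(np.abs(G) - reg_alpha, 0.0)
    return -G_shrunk / (H + reg_lambda)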
Tree construction process
Trees in XGBoost employ a greedy algorithm for recursive tree construction, outlined below:
1. every region $R_j$ in the partition $R$ is probed for the optimal binary split
$R_j\to R_{j_1}\!\| R_{j_2}$ according to the structural gain score
$$ \mathtt{Gain}\bigl( R_j\to R_{j_1}\!\| R_{j_2} \bigr) = \mathtt{Obj}^*( R ) - \mathtt{Obj}^*( R' ) \,, $$
where the partition $R'$ is constructed from $R$ by splitting $R_j\to R_{j_1}\|R_{j_2}$;
2. the region $R_j$ with the highest gain from the optimal split is split into $R_{j_1}$ and $R_{j_2}$;
3. the tree growth process continues until no more splits are possible.
The first step is the most computationally intensive, since it requires $O( J d n\log n )$
operations. This is the step which XGBoost performs in parallel, since FSAM and tree-induction
are serial by nature.
Tree growth gain
For simplicity, let's consider the case when $\alpha = 0$ ($L^2$ regularization only).
In this case the following weights give the optimal leaf scores
$$ w^*_j = -\frac{G_{k-1}(R_j)}{H_{k-1}(R_j) + \lambda}\,.$$
The structural score becomes
$$ \mathtt{Obj}^*(R) = \gamma J - \frac{1}{2}\sum_{j=1}^J \frac{G_{k-1}^2(R_j)}{H_{k-1}(R_j) + \lambda} \,. $$
Any split $R_j \rightarrow R_{j_1}\!\| R_{j_2}$ yields the following gain:
$$ \mathtt{Gain} = \frac{1}{2}\Biggl(
\frac{G_{k-1}^2(R_{j_1})}{H_{k-1}(R_{j_1}) + \lambda}
+ \frac{G_{k-1}^2(R_{j_2})}{H_{k-1}(R_{j_2}) + \lambda}
- \frac{G_{k-1}^2(R_j)}{ H_{k-1}(R_j) + \lambda}
\Biggr) - \gamma\,.$$
Note that $G_{k-1}(\cdot)$ and $H_{k-1}(\cdot)$ are additive by construction:
$$G_{k-1}(R_j) = G_{k-1}(R_{j_1}) + G_{k-1}(R_{j_2}) \,,$$
and
$$H_{k-1}(R_j) = H_{k-1}(R_{j_1}) + H_{k-1}(R_{j_2}) \,.$$
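The $L^2$-only gain written as a small helper (sketch); G_left/H_left etc. are the summed gradient and hessian statistics of the two candidate children:
def split_gain(G_left, H_left, G_right, H_right, reg_lambda=1.0, gamma=0.0):
    def score(G, H):
        return G * G / (H + reg_lambda)
    return 0.5 * (score(G_left, H_left)
                  + score(G_right, H_right)
                  - score(G_left + G_right, H_left + H_right)) - gamma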
Usage
End of explanation
clf_ = xg.XGBClassifier(
## Boosting:
n_estimators=50,
learning_rate=0.1,
objective="binary:logistic",
base_score=0.5,
## Regularization: tree growth
max_depth=3,
gamma=0.5,
min_child_weight=1.0,
max_delta_step=0.0,
subsample=1.0,
colsample_bytree=1.0,
colsample_bylevel=1.0,
## Regularization: leaf weights
reg_alpha=0.0,
reg_lambda=1.0,
## Class balancing
scale_pos_weight=1.0,
## Service parameters: missing=None, makes use np.nan as missing.
seed=seed,
missing=None,
nthread=2,
silent=False)
clf_.fit(
X_train, y_train,
early_stopping_rounds=5,
eval_set=[(X_valid_1, y_valid_1),
(X_valid_2, y_valid_2),])
y_pred_ = clf_.predict(X_test)
Explanation: Scikit-Learn interface
End of explanation
dtrain = xg.DMatrix(X_train, label=y_train, missing=np.nan)
dtest = xg.DMatrix(X_test, missing=np.nan)
dvalid1 = xg.DMatrix(X_valid_1, label=y_valid_1, missing=np.nan)
dvalid2 = xg.DMatrix(X_valid_2, label=y_valid_2, missing=np.nan)
Explanation: Internally XGBoost relies heavily on a custom dataset format DMatrix.
The interface, which is exposed into python has three capabilities:
- load datasets in libSVM compatible format;
- load SciPy's sparse matrices;
- load Numpy's ndarrays.
The DMatrix class is constructed with the following parameters:
- data : Data source of DMatrix. When data is string type, it represents
the path libsvm format txt file, or binary file that xgboost can read from,
or a matrix of observed features $X$ in a numpy or scipy matrix;
- label : the observation labels $y$ (could be categorical or numeric);
- missing : a vector of values that encode missing observations, if None defaults to np.nan;
- feature_names : the columns names of $X$;
- feature_types : defines the python types of each column of $X$, in case of heterogeneous data;
- weight : the vector of nonnegative weights of each observation in the dataset.
End of explanation
param = dict(
## Boosting:
eta=0.1,
objective="binary:logistic",
base_score=0.5,
## Regularization: tree growth
max_depth=3,
gamma=0.5,
min_child_weight=1.0,
max_delta_step=0.0,
subsample=1.0,
colsample_bytree=1.0,
colsample_bylevel=1.0,
## Regularization: leaf weights
reg_alpha=0.0,
reg_lambda=1.0,
## Class balancing
scale_pos_weight=1.0,
## Service parameters:
seed=seed,
nthread=2,
silent=1)
evals_result = dict()
xgb_ = xg.train(
## XGboost settings
param,
## Train dataset
dtrain,
## The size of the ensemble
num_boost_round=50,
## Early-stopping
early_stopping_rounds=5,
evals=[(dvalid1, "v1"),
(dvalid2, "v2"),],
evals_result=evals_result)
pred_ = xgb_.predict(dtest)
Explanation: The same XGBoost classifier as in the Scikit-learn example.
End of explanation
clf1_ = xg.XGBClassifier(n_estimators=10,
max_depth=1, learning_rate=0.1,
seed=seed).fit(X_train, y_train)
clf2_ = xg.XGBClassifier(n_estimators=1000,
max_depth=1, learning_rate=0.1,
seed=seed).fit(X_train, y_train)
clf3_ = xg.XGBClassifier(n_estimators=10,
max_depth=3, learning_rate=0.1,
seed=seed).fit(X_train, y_train)
clf4_ = xg.XGBClassifier(n_estimators=1000,
max_depth=3, learning_rate=0.1,
seed=seed).fit(X_train, y_train)
print "XGBoost (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "XGBoost (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "XGBoost (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "XGBoost (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
clf1_ = xg.XGBClassifier(n_estimators=10,
max_depth=1, learning_rate=0.5,
seed=seed).fit(X_train, y_train)
clf2_ = xg.XGBClassifier(n_estimators=1000,
max_depth=1, learning_rate=0.5,
seed=seed).fit(X_train, y_train)
clf3_ = xg.XGBClassifier(n_estimators=10,
max_depth=3, learning_rate=0.5,
seed=seed).fit(X_train, y_train)
clf4_ = xg.XGBClassifier(n_estimators=1000,
max_depth=3, learning_rate=0.5,
seed=seed).fit(X_train, y_train)
print "XGBoost (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "XGBoost (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "XGBoost (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "XGBoost (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
clf1_ = xg.XGBClassifier(n_estimators=1000,
max_depth=1, learning_rate=0.5,
seed=seed).fit(X_train, y_train,
early_stopping_rounds=20,
eval_set=[(X_valid_1, y_valid_1),
(X_valid_2, y_valid_2),])
Explanation: Both the sklearn-compatible and the basic python interfaces have similar
parameters, except that they are passed slightly differently.
Gradient boosting parameters:
- eta, learning_rate ($\eta$) -- step size shirinkage factor;
- n_estimators, num_boost_round ($M$) -- the size of the ensemble, number of boosting rounds;
- objective -- objective functions:
* "reg:linear" -- Linear regression: $(x_i, y_i){i=1}^n \in \mathcal{X} \times \mathbb{R}$,
$\hat{p}:\mathcal{X} \mapsto \mathbb{R}$;
* "reg:logistic" -- Logistic regression for probability regression task: $(x_i, y_i){i=1}^n
\in \mathcal{X} \times [0, 1]$, $\hat{p}:\mathcal{X} \mapsto [0, 1]$;
* "binary:logistic" -- Logistic regression for binary classification task: $(x_i, y_i){i=1}^n
\in \mathcal{X} \times {0, 1}$, $\hat{p}:\mathcal{X} \mapsto {0, 1}$;
* "binary:logitraw" -- Logistic regression for binary classification, output score
before logistic transformation: $\hat{p}:\mathcal{X} \mapsto \mathbb{R}$;
* "multi:softmax" -- Softmax for multi-class classification, output class index:
$\hat{p}:\mathcal{X} \mapsto {1,\ldots,K}$;
* "multi:softprob" -- Softmax for multi-class classification, output probability
distribution: $\hat{p}:\mathcal{X} \mapsto {\omega\in [0,1]^K\,:\, \sum{k=1}^K \omega_k = 1 }$;
- base_score -- global bias of the model: in linear regression ("reg:linear") sets
the bias of the regression function, in binary classification ("reg:logistic",
"binary:logistic" and "binary:logitraw") sets the base class ratio (transformed to log-odds
and added to logistic score).
Regularization - related to tree growth and decorrelation:
- max_depth -- this parameters limits the size of the tree, by setting a
hard bound on the number of tree layers (limits the recursion depth);
- min_child_weight -- the minimal value of the hessian statistic of a leaf
required for it to be considered a candidate for splitting;
- gamma ($\gamma$) -- the complexity cost parameter, imposes minimal structural
score gain for splitting a leaf of the currnt tree;
- subsample -- the share of the training data to use for growing a tree:
determines the size bootstrap smaples $Z^{b}$;
- colsample_bytree -- the size of the random subset of features, that
cam be used in the growth of the whole tree (accessible features);
- colsample_bylevel -- subsample ratio of features when considering a split:
determines the size of the random subset of accessible features considered as
candidates for node splitting at each level of every tree.
Regularization - tree leaf weights:
- reg_alpha ($\alpha$) -- the importance of the $L^1$ regularizer;
- reg_lambda ($\lambda$) -- the weight of the $L^2$ regularization term;
- max_delta_step -- clips the absolute value of each leaf's score, thereby
making the tree growth step more conservative.
Class balancing (not used in multiclass problems as of commit c9a73fe2a99300aec3041371675a8fa6bc6a8a72):
- scale_pos_weight -- a uniform upscale/downscale factor for the weights of
positive examples ($y=+1$); Useful in imbalanced binary classification problems.
Early-stopping
- early_stopping_rounds -- the validation error on the last validation dataset needs
to decrease at least every early_stopping_rounds round(s) to continue training; If
equal to None, then early stopping is deactivated.
- eval_set -- validation datasets given as a list of tuples (DMatrix, name);
- evals_result -- a dictionary to store the validation results; the keys are the names
of the validation datasets, and values are the dictionaries of key-values pairs:
loss -- list of scores.
Examples
End of explanation
from sklearn.base import clone
from sklearn.cross_validation import KFold
def kfold_stack(estimators, X, y=None, predict_method="predict",
n_folds=3, shuffle=False, random_state=None,
                return_map=False):
    """
Splits the dataset into `n_folds` (K) consecutive folds (without shuffling
by default). Predictions are made on each fold while the K - 1 remaining folds
form the training set for the predictor.
Parameters
----------
estimators : list of estimators
The dictionary of estimators used to construct meta-features on
the dataset (X, y). A cloned copy of each estimator is fitted on
remainind data of each fold.
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples], optional
Target values.
predict_method : string, default="predict"
The method of each estimator, to be used for predictiong the
meta features.
n_folds : int, default=3
Number of folds. Must be at least 2.
shuffle : boolean, optional
Whether to shuffle the data before splitting into batches.
random_state : None, int or RandomState
When shuffle=True, pseudo-random number generator state used for
shuffling. If None, use default numpy RNG for shuffling.
Returns
----------
meta : array-like, shape = [n_samples, ...]
Computed meta-features of each estimator.
map : array-like
The map, identifying which estimator each column of `meta`
came from.
    """
    stacked_, index_ = list(), list()
folds_ = KFold(X.shape[0], n_folds=n_folds,
shuffle=shuffle, random_state=random_state)
for rest_, fold_ in folds_:
fitted_ = [clone(est_).fit(X[rest_], y[rest_])
for est_ in estimators]
predicted_ = [getattr(fit_, predict_method)(X[fold_])
for fit_ in fitted_]
stacked_.append(np.stack(predicted_, axis=1))
index_.append(fold_)
stacked_ = np.concatenate(stacked_, axis=0)
    # undo the fold ordering so rows line up with the original X (matters when shuffle=True)
    meta_ = stacked_[np.argsort(np.concatenate(index_, axis=0))]
if not return_map:
return meta_
map_ = np.repeat(np.arange(len(estimators)),
[pred_.shape[1] for pred_ in predicted_])
return meta_, map_
Explanation: <hr/>
Other methods
Stacking
Every ensemble method comprises essentially two phases:
1. population of a dictionary of base learners (models, like classification
trees in AdaBoost, or regression trees in GBRT);
2. aggregation of the dictionary into a single estimator;
These phases are not necessarily separated: in Bagging and Random Forests
they are (and so can be run in parallel), in GBRT and AdaBoost they
are not. In the latter, the procedure is path dependent (serial), i.e. the
dictionary is populated sequentially, so that each successive base estimator
is learnt conditional on the current dictionary.
Stacking is a method which allows one to correctly construct second-level meta
features using ML models atop the first level inputs. By correctly we
mostly mean that there is little train-test leakage: the resulting meta-
features, though not i.i.d., can still to a certain degree comply with the
standard ML assumptions, and allow us to focus on the aggregation step of
ensemble methods.
General pipeline
Let $Z = (X, y) = (x_i, y_i)_{i=1}^n$ be a dataset. The model construction
and verification pipeline goes as follows:
1. Split the dataset into nonoverlapping train and test datasets:
$Z^{\text{train}}$ and $Z^{\text{test}}$;
2. Apply stacking to get meta features, $\mathcal{P}^{\text{train}}$ (it is
possible to include the first-level features as well);
3. Split the meta-features, $\mathcal{P}^{\text{train}}$, into train and validation
sets: fit on the former, test and select models on the latter;
4. Use regularization at each stage to choose the best strategy against
overfitting;
For the final prediction:
1. learn a regularized model on the whole $Z^{\text{train}}$;
2. get the meta-features, $\mathcal{P}^{\text{test}}$, on the inputs of $Z^{\text{test}}$;
3. fit a regularized aggregaton model on the whole train sample of
meta-fetatures $\mathcal{P}^{\text{train}}$;
4. use the fitted aggregation model to compute final prediction on
the $\mathcal{P}^{\text{test}}$.
Leave-one-out stacking
The idea is to compute the meta feature of each example based on
a base estimator learnt on the sample with that observation knocked out.
Let $\hat{f}^{-i}_m$ be the $m$-th base estimator learnt on the sample $Z_{-i}$
(without observation $z_i$). Then the meta-features $(\hat{p}_{mi})_{i=1}^n$
are given by
$$ \hat{p}_{mi} = \hat{f}^{-i}_m(x_i) \,.$$
$K$-fold stacking
Leave-one-out stacking is in general computationally intensive, unless the
base estimator is linear in the targets, in which case this can be done
quite fast. A possible solution to this is inspired by the $K$-fold cross
validation technique.
Let $C_k\subset\{1,\ldots, n\}$ be the $k$-th fold in $K$-fold, and let
$C_{-k}$ be the rest of the dataset $Z$: $C_{-k} = \{i\,:\,i\notin C_k\}$.
$C_k$ has approximately $\frac{n}{K}$ observations. The dataset is randomly
shuffled before being partitioned into $K$ folds.
Define $\hat{f}^{-k}_m$ as the $m$-th base estimator learnt on $Z^{-k}$
given by $(z_i)_{i\in C_{-k}}$. Then the meta-features are computed using
$$ \hat{p}_{mi} = \hat{f}^{-k_i}_m(x_i) \,, $$
where $k_i$ is the unique index $k$ in the $K$-fold such that $i\in C_k$.
Basically we use the data outside the $k$-th fold, $C_{-k}$, to construct
the meta-features inside the $k$-th fold, $C_k$.
Using the meta-features
For example, if we want to compute a linear combination of the regression
estimators, we must solve the following optimization problem (unrestricted
LS):
$$ \sum_{i=1}^n \bigl(y_i - \beta'\hat{p}_i \bigr)^2\,, $$
where $\hat{p}_i = (\hat{p}_{mi})_{m=1}^M$ and $\beta, \hat{p}_i \in \mathbb{R}^{M\times1}$
for all $i$.
If a convex combination is required ($\beta_m\geq 0$ and $\sum_{m=1}^M\beta_m = 1$),
one solves a constrained optimization problem. If pruning is desirable,
then one should use either lasso ($L_1$ regularization), or subset-selection
methods.
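One way to fit such a convex combination is a simplex-constrained least squares, sketched below with scipy's SLSQP solver (the notebook itself uses LogisticRegression and Lars for the aggregation step later):
import numpy as np
from scipy.optimize import minimize

def fit_convex_weights(P, y):
    # P: (n_samples, M) meta-features, y: targets; returns nonnegative weights summing to 1
    M = P.shape[1]
    objective = lambda b: np.mean((y - P.dot(b)) ** 2)
    res = minimize(objective, x0=np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M,
                   constraints=[{"type": "eq", "fun": lambda b: b.sum() - 1.0}])
    return res.x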
Usage
Below is a simple $K$-fold stacking procedure. It estimates each model
on the $K-1$ remaining folds and predicts (with the specified method) on the $K$-th
fold.
End of explanation
seed = random_state.randint(0x7FFFFFFF)
Explanation: Examples
Combining base classifiers using Logistic Regression is a typical example of how
first level features $x\in \mathcal{X}$ are transformed by $\hat{f}_m:\mathcal{X}\mapsto \mathbb{R}$
into second-level meta features $(\hat{f}_m(x))_{m=1}^M \in \mathbb{R}^M$, that
are finally fed into a logistic regression, which does the ultimate prediction.
Here $K$-fold stacking allows proper estimation of the second-level model for a
classification task.
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
estimators_ = [
RandomForestClassifier(n_estimators=200, max_features=0.5, n_jobs=-1, random_state=seed),
GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.75, random_state=seed),
BaggingClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier(max_depth=None),
max_samples=0.5, n_jobs=-1, random_state=seed),
xg.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.5, nthread=-1, seed=seed),
## Both SVM and AdaBoost (on stumps) are very good here
SVC(kernel="rbf", C=1.0, probability=True, gamma=1.0),
AdaBoostClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier(max_depth=1),
random_state=seed),
]
estimator_names_ = [est_.__class__.__name__ for est_ in estimators_]
Explanation: Define the first-level predictors.
End of explanation
meta_train_ = kfold_stack(estimators_, X_train, y_train,
n_folds=5, predict_method="predict_proba")[..., 1]
Explanation: Create meta features for the train set: using $K$-fold stacking estimate the class-1
probabilities $\hat{p}_i = (\hat{p}_{mi})_{m=1}^M = (\hat{f}^{-k_i}_m(x_i))_{m=1}^M$
for every $i=1,\ldots, n$.
End of explanation
fitted_ = [clone(est_).fit(X_train, y_train) for est_ in estimators_]
meta_test_ = np.stack([fit_.predict_proba(X_test) for fit_ in fitted_], axis=1)[..., 1]
Explanation: Now using the whole train set, create the test set meta features: $p_j = (\hat{f}_m(x_j))_{m=1}^M$
for $j=1,\ldots, n_{\text{test}}$. Each $\hat{f}_m$ is estimated on the whole train set.
End of explanation
base_scores_ = pd.Series([1 - fit_.score(X_test, y_test) for fit_ in fitted_],
index=estimator_names_)
base_scores_
Explanation: The prediction error of each individual classifier (trained on the whole train dataset).
End of explanation
from sklearn.grid_search import GridSearchCV
grid_cv_ = GridSearchCV(LogisticRegression(penalty="l1"),
param_grid=dict(C=np.logspace(-3, 3, num=7)),
n_jobs=-1, cv=5).fit(meta_train_, y_train)
log_ = grid_cv_.best_estimator_
grid_cv_.grid_scores_
Explanation: Now using $5$-fold cross validation on the train dataset $(\hat{p}_i, y_i)_{i=1}^n$,
find the best $L_1$ regularization coefficient $C$.
End of explanation
from math import exp
print "Intercept:", log_.intercept_, "\nBase probability:", 1.0/(1+exp(-log_.intercept_))
pd.Series(log_.coef_[0], index=estimator_names_)
Explanation: The weights chosen by logistic regression are:
End of explanation
print "Logistic Regression (l1) error:", 1 - log_.score(meta_test_, y_test)
Explanation: Let's see how well the final model works on the test set:
End of explanation
log_
Explanation: and the best model
End of explanation
from sklearn.ensemble import VotingClassifier
Explanation: <hr/>
Voting Classifier
This is a very basic method of constructing an aggregated classifier from a finite dictionary.
Let $\mathcal{V}$ be the set of classifiers (voters), with each calssifier's class probablilites
given by $\hat{f}_v:\mathcal{X}\mapsto\mathbb{[0,1]}^K$ and prediction
$\hat{g}_v(x) = \mathtt{MAJ}(\hat{f}_v(x))$.
The majority vote over $K$ candidates with weights $(w_k){k=1}^K\in \mathbb{R}$ is defined as
$$ \mathtt{MAJ}(w) = \mathop{\text{argmax}}{k=1\,\ldots, K} w_k \,. $$
Hard voting collects label-prediction of each voter, counts the voting proportions and,
then predict the label with the most votes. Mathematically the following aggregation is
used:
$$ \hat{g}^{\text{maj}}_\mathcal{V}(x)
= \mathtt{MAJ}\Bigl( W^{-1} \sum_{v\in \mathcal{V}} w_v e_{\hat{g}_v(x)} \Bigr)
\,, $$
where $e_k$ is the $k$-th unit vector in $\{0,1\}^{K\times 1}$, and $W = \sum_{v\in \mathcal{V}} w_v$.
Soft voting uses the class-probabilities functions directly: it computes the weighted
average probability of each class over all voters, and then selects the class with the
highest posterior probability. Namely,
$$ \hat{g}^{\text{maj}}_\mathcal{V}(x)
= \mathtt{MAJ}\bigl( W^{-1} \sum_{v\in \mathcal{V}} w_v \hat{f}_v(x) \bigr)
\,. $$
As in Bagging, if the base classifiers are well calibrated, then hard voting
will overestimate probabilities.
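The soft-voting aggregation above is only a weighted average followed by an argmax; a few lines of numpy make this explicit (illustrative sketch, where `probas` is a list of (n_samples, K) class-probability arrays, one per voter):
import numpy as np

def soft_vote(probas, weights=None):
    P = np.stack(probas, axis=0)                      # (n_voters, n_samples, K)
    w = np.ones(P.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    avg = np.tensordot(w / w.sum(), P, axes=1)        # weighted average over voters
    return avg.argmax(axis=1)                         # MAJ: the most probable class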
Usage
End of explanation
clf1_ = VotingClassifier(list(zip(estimator_names_, estimators_)),
voting="hard", weights=None).fit(X_train, y_train)
clf2_ = VotingClassifier(list(zip(estimator_names_, estimators_)),
voting="soft", weights=None).fit(X_train, y_train)
print "Hard voting classifier error:", 1 - clf1_.score(X_test, y_test)
print "Soft voting classifier error:", 1 - clf2_.score(X_test, y_test)
Explanation: VotingClassifier options:
- estimators -- The list of classifiers;
- voting -- Vote aggregation strategy:
* "hard" -- use predicted class labels for majority voting;
* "soft" -- use sums of the predicted probalities for determine
the most likely class;
- weights -- weight the occurrences of predicted class labels (hard voting)
or class probabilities while averaging (soft voting);
Examples
Combine the estimators from the stacking example
End of explanation
from sklearn.linear_model import Lars
lars_ = Lars(fit_intercept=False, positive=True).fit(meta_train_, y_train)
weights_ = lars_.coef_
pd.Series(weights_, index=estimator_names_)
Explanation: Let's use LASSO Least Angle Regression (LARS, HTF p. 73) to select weights of the
base classifiers.
End of explanation
print "LARS prediction R2: %.5g"%(lars_.score(meta_test_, y_test),)
base_scores_
Explanation: Show the test $R^2$ of LARS, and the error rates of the base classifiers.
End of explanation
clf1_ = VotingClassifier(list(zip(estimator_names_, estimators_)),
voting="soft", weights=weights_.tolist()).fit(X_train, y_train)
print "Soft voting ensemble with LARS weights:", 1 - clf1_.score(X_test, y_test)
Explanation: Let's see if there is improvement.
End of explanation
from sklearn.datasets import make_gaussian_quantiles
def scikit_example(n_samples, random_state=None):
X1, y1 = make_gaussian_quantiles(cov=2., n_samples=int(0.4*n_samples),
n_features=2, n_classes=2,
random_state=random_state)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=int(0.6*n_samples),
n_features=2, n_classes=2,
random_state=random_state)
return np.concatenate((X1, X2)), np.concatenate((y1, 1 - y2))
Explanation: Indeed, this illustrates that clever selection of classifier weights might be profitable.
Another example of the Voting Classifier (from the scikit-learn guide)
End of explanation
from sklearn.cross_validation import train_test_split
X2, y2 = scikit_example(n_samples=1500, random_state=random_state)
X2_train, X2_test, y2_train, y2_test = \
train_test_split(X2, y2, test_size=1000, random_state=random_state)
min_, max_ = np.min(X2, axis=0) - 1, np.max(X2, axis=0) + 1
xx, yy = np.meshgrid(np.linspace(min_[0], max_[0], num=51),
np.linspace(min_[1], max_[1], num=51))
Explanation: Get a train set, a test set, and a $2$-d mesh for plotting.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
classifiers_ = [
("AdaBoost (100) DTree (3 levels)",
AdaBoostClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=3),
random_state=random_state)),
("KNN (k=3)", KNeighborsClassifier(n_neighbors=3)),
("Kernel SVM", SVC(kernel='rbf', C=1.0, gamma=1.0,
probability=True)),]
estimators_ = classifiers_ + [("Soft-voting ensemble",
VotingClassifier(estimators=classifiers_,
voting="soft",
weights=[2,1,2])),]
Explanation: Make a dictionary of simple classifiers
End of explanation
from itertools import product
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
for i, (name_, clf_) in zip(product([0, 1], [0, 1]), estimators_):
clf_.fit(X2_train, y2_train)
prob_ = clf_.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape)
axes[i[0], i[1]].contourf(xx, yy, prob_, alpha=0.4, cmap=plt.cm.coolwarm_r,
levels=np.linspace(0,1, num=51), lw=0)
axes[i[0], i[1]].scatter(X2_train[:, 0], X2_train[:, 1], c=y2_train, alpha=0.8, lw=0)
axes[i[0], i[1]].set_title(name_)
plt.show()
Explanation: Show the decision boundary.
End of explanation
for name_, clf_ in estimators_:
print name_, " error:", 1-clf_.score(X2_test, y2_test)
Explanation: Let's see if this simple soft-voting ensemble improved the test error.
End of explanation
stump_ = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)
t224_ = DecisionTreeClassifier(max_depth=None, max_leaf_nodes=224).fit(X_train, y_train)
ada_ = AdaBoostClassifier(n_estimators=400, random_state=random_state).fit(X_train, y_train)
bag_ = BaggingClassifier(n_estimators=400, random_state=random_state,
n_jobs=-1).fit(X_train, y_train)
rdf_ = RandomForestClassifier(n_estimators=400, random_state=random_state,
n_jobs=-1).fit(X_train, y_train)
Explanation: <hr/>
Example from HTF pp. 339 - 340
Now let's inspect the test error as a function of the size of the ensemble
End of explanation
def get_staged_accuracy(ensemble, X, y):
prob_ = np.stack([est_.predict_proba(X)
for est_ in ensemble.estimators_],
axis=1).astype(float)
pred_ = np.cumsum(prob_[..., 1] > 0.5, axis=1).astype(float)
pred_ /= 1 + np.arange(ensemble.n_estimators).reshape((1, -1))
return np.mean((pred_ > .5).astype(int) == y[:, np.newaxis], axis=0)
bag_scores_ = get_staged_accuracy(bag_, X_test, y_test)
rdf_scores_ = get_staged_accuracy(rdf_, X_test, y_test)
ada_scores_ = np.array(list(ada_.staged_score(X_test, y_test)))
Explanation: Get the prediction as a function of the number of members in the ensemble.
End of explanation
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.set_ylim(0, 0.50) ; ax.set_xlim(-10, ada_.n_estimators)
ax.plot(1+np.arange(ada_.n_estimators), 1-ada_scores_, c="k", label="AdaBoost")
ax.plot(1+np.arange(bag_.n_estimators), 1-bag_scores_, c="m", label="Bagged DT")
ax.plot(1+np.arange(bag_.n_estimators), 1-rdf_scores_, c="c", label="RF")
ax.axhline(y=1 - stump_.score(X_test, y_test), c="r", linestyle="--", label="stump")
ax.axhline(y=1 - t224_.score(X_test, y_test), c="b", linestyle="--", label="DT $J=224$")
ax.legend(loc="best")
ax.set_xlabel("Iterations")
ax.set_ylabel("Test error")
Explanation: Plot the test error.
End of explanation |
10,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Is Unique
Is Unique
Step1: Check Permutation
Check Permutation
Step2: Palindrome Permutation
Step3: Palindrome Permutation
Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palindrome does not need to be limited to just dictionary words.
Step4: One Away
Step5: String Compression
Step6: Rotate Matrix | Python Code:
import random
#STR = random.uniform(('a').encode('ascii'), int('Z'))
#print(ord('A'))
#print(ord('z'))
lowercase = [ chr(char) for char in range(ord('a'), ord('z') + 1)]
uppercase = [ chr(char) for char in range(ord('A'), ord('Z') + 1)]
string_seed = lowercase + uppercase
#print(string_seed)
def gen_randstr():
size = random.randint(0, 100)
return "".join((random.choice(string_seed) for _ in range(size)))
random_string = gen_randstr()
print(random_string)
def isUnique(input_string):
marks = [False for _ in range(256)]
    for i in range(len(input_string)):
char = ord(input_string[i])
if marks[char] == True:
print("not unique {} at {}".format(input_string[i], i))
return False
else:
marks[char] = True
return True
print(isUnique(random_string))
Explanation: Is Unique
Is Unique: Implement an algorithm to determine if a string has all unique characters. What if you cannot use additional data structures?
End of explanation
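For the follow-up question (no additional data structures), a single integer can serve as a bit vector; the sketch below assumes the input is restricted to lowercase ASCII letters:
def is_unique_bits(input_string):
    seen = 0
    for ch in input_string:
        bit = 1 << (ord(ch) - ord('a'))
        if seen & bit:
            return False
        seen |= bit
    return True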
def str_shuffle(str):
str_list = list(str)
random.shuffle(str_list)
return "".join(str_list)
str0 = gen_randstr()
str1 = gen_randstr()
str2 = gen_randstr()
str3 = str_shuffle(str2[:])
print(str2)
print(str3)
def check_permutation(str0, str1):
str0_ = "".join(sorted(str0))
str1_ = "".join(sorted(str1))
print(str0_)
print(str1_)
if len(str0_) != len(str1_):
return False
for i in range(len(str0_)):
if str0_[i] != str1_[i]:
print(i)
return False
return True
print("Test#1: %s"%("Pass" if False == check_permutation(str0, str1) else "Fail"))
print("Test#2: %s"%("Pass" if True == check_permutation(str2, str3) else 'Fail'))
Explanation: Check Permutation
Check Permutation: Given two strings, write a method to decide if one is a permutation of the other.
End of explanation
def check_palindrome(str0):
size = len(str0)
for i in range(size):
if str0[i] != str0[size - 1 - i]:
return False
return True
str0 = "AAAABBBBAAAA"
str1 = "AAAAAAAAABBB"
print("Test#1: %s"%("Pass" if True == check_palindrome(str0) else "Fail"))
print("Test#2: %s"%("Pass" if False == check_palindrome(str1) else "Fail"))
Explanation: Palindrome Permutation:
Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palindrome does not need to be limited to just dictionary words.
End of explanation
def check_palindrome_permutation(str0):
str0 = str0.lower()
histogram = {}
for ch in str0:
if ch != ' ':
histogram[ch] = histogram.get(ch, 0) + 1
# check one odd entries
found_odd = False
for ch, value in histogram.items():
if value%2 == 1:
if found_odd:
return False
found_odd = True
return True
print("Test#1: %s"%("Pass" if True == check_palindrome_permutation('Tact Coa') else "Fail"))
print("Test#2: %s"%("Pass" if False == check_palindrome_permutation('AAABBBCCC') else "Fail"))
Explanation: Palindrome Permutation
Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palindrome does not need to be limited to just dictionary words.
End of explanation
def check_same(str0, str1):
if len(str0) != len(str1):
return False
for i in range(len(str0)):
if str0[i] != str1[i]:
return False
return True
def check_oneaway(str0, str1):
for i in range(len(str0)):
if (i < len(str1) and str0[i] != str1[i]) or i > (len(str1) - 1):
if check_same(str0[i + 1:], str1[i:]):
# remove
return True
if check_same(str0[i:],str1[i + 1:]):
# insert
return True
if check_same(str0[i+1:], str1[i+1:]):
# replace
return True
return False
return False
print("Test#1: %s"%("Pass" if True == check_oneaway("pale", "ple") else "Fail"))
print("Test#2: %s"%("Pass" if True == check_oneaway("pales", "pale") else "Fail"))
print("Test#3: %s"%("Pass" if True == check_oneaway("pale", "bale") else "Fail"))
print("Test#4: %s"%("Pass" if False == check_oneaway("pale", "bake") else "Fail"))
print("Test#4: %s"%("Pass" if False == check_oneaway("AAAAABBBBBBBBB", "AAAAABBBB") else "Fail"))
Explanation: One Away:
There are three types of edits that can be performed on strings: insert a character,remove a character, or replace a character. Given two strings, write a function to check if they are one edit (or zero edits) away.
End of explanation
def _string_compression(str0):
dest = ""
cur_ch = str0[0]
count = 0
for i in range(len(str0)):
if cur_ch == str0[i]:
count += 1
else:
dest += cur_ch
dest += str(count)
cur_ch = str0[i]
count = 1
dest += cur_ch
dest += str(count)
cur_ch = str0[i]
count = 0
return dest
def string_compression(str0):
str_ = _string_compression(str0)
if len(str_) < len(str0):
return str_
else:
return str0
print("Test#1: %s"%("Pass" if "A4B5C3" == string_compression("AAAABBBBBCCC") else "Fail"))
print("Test#2: %s"%("Pass" if "ABCDEF" == string_compression("ABCDEF") else "Fail"))
print("Test#3: %s"%("Pass" if "a2b1c5a3" == string_compression("aabcccccaaa") else "Fail"))
Explanation: String Compression:
Implement a method to perform basic string compression using the counts of repeated characters. For example, the string aabcccccaaa would become a2b1c5a3. If the "compressed" string would not become smaller than the original string, your method should return the original string. You can assume the string has only uppercase and lowercase letters (a - z).
End of explanation
import numpy as np
matrix = np.random.randint(0, 20, (10,10))
print(matrix)
def set_matrix_zero(mat):
zero_rows = []
zero_cols = []
h, w = mat.shape
for i in range(h):
for j in range(w):
if mat[i][j] == 0:
zero_rows.append(i)
zero_cols.append(j)
for row in zero_rows:
mat[row, :] = 0
for col in zero_cols:
mat[:, col] = 0
return mat
print(set_matrix_zero(matrix))
Explanation: Rotate Matrix:
Given an image represented by an NxN matrix, where each pixel in the image is 4 bytes, write a method to rotate the image by 90 degrees. Can you do this in place?
Zero Matrix:
Write an algorithm such that if an element in an MxN matrix is 0, its entire row and column are set to 0.
End of explanation |
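The Rotate Matrix task described above is not implemented in the cells; a possible in-place sketch (rotating one layer at a time, clockwise, assuming a square list of lists) could look like this:
def rotate_matrix(mat):
    n = len(mat)
    for layer in range(n // 2):
        first, last = layer, n - 1 - layer
        for i in range(first, last):
            offset = i - first
            top = mat[first][i]                                   # save top
            mat[first][i] = mat[last - offset][first]             # left -> top
            mat[last - offset][first] = mat[last][last - offset]  # bottom -> left
            mat[last][last - offset] = mat[i][last]               # right -> bottom
            mat[i][last] = top                                    # top -> right
    return mat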
10,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create all the columns of the dataframe as series
Step2: Create a dictionary variable that assigns variable names
Step3: Create a dataframe and set the order of the columns using the columns attribute
Step4: Set the dataframe's index to be year
Step5: View the horsekick dataframe
Step6: Count the number of times each number of deaths occurs in each regiment
Step7: Count the number of times each monthly death total appears in guardCorps
Step8: List all the unique values in guardCorps | Python Code:
import pandas as pd
Explanation: Title: Count Values In Pandas Dataframe
Slug: pandas_dataframe_count_values
Summary: Count Values In Pandas Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Import the pandas module
End of explanation
year = pd.Series([1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884,
1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894])
guardCorps = pd.Series([0,2,2,1,0,0,1,1,0,3,0,2,1,0,0,1,0,1,0,1])
corps1 = pd.Series([0,0,0,2,0,3,0,2,0,0,0,1,1,1,0,2,0,3,1,0])
corps2 = pd.Series([0,0,0,2,0,2,0,0,1,1,0,0,2,1,1,0,0,2,0,0])
corps3 = pd.Series([0,0,0,1,1,1,2,0,2,0,0,0,1,0,1,2,1,0,0,0])
corps4 = pd.Series([0,1,0,1,1,1,1,0,0,0,0,1,0,0,0,0,1,1,0,0])
corps5 = pd.Series([0,0,0,0,2,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0])
corps6 = pd.Series([0,0,1,0,2,0,0,1,2,0,1,1,3,1,1,1,0,3,0,0])
corps7 = pd.Series([1,0,1,0,0,0,1,0,1,1,0,0,2,0,0,2,1,0,2,0])
corps8 = pd.Series([1,0,0,0,1,0,0,1,0,0,0,0,1,0,0,0,1,1,0,1])
corps9 = pd.Series([0,0,0,0,0,2,1,1,1,0,2,1,1,0,1,2,0,1,0,0])
corps10 = pd.Series([0,0,1,1,0,1,0,2,0,2,0,0,0,0,2,1,3,0,1,1])
corps11 = pd.Series([0,0,0,0,2,4,0,1,3,0,1,1,1,1,2,1,3,1,3,1])
corps14 = pd.Series([ 1,1,2,1,1,3,0,4,0,1,0,3,2,1,0,2,1,1,0,0])
corps15 = pd.Series([0,1,0,0,0,0,0,1,0,1,1,0,0,0,2,2,0,0,0,0])
Explanation: Create all the columns of the dataframe as series
End of explanation
variables = dict(guardCorps = guardCorps, corps1 = corps1,
corps2 = corps2, corps3 = corps3, corps4 = corps4,
corps5 = corps5, corps6 = corps6, corps7 = corps7,
corps8 = corps8, corps9 = corps9, corps10 = corps10,
corps11 = corps11 , corps14 = corps14, corps15 = corps15)
Explanation: Create a dictionary variable that assigns variable names
End of explanation
horsekick = pd.DataFrame(variables, columns = ['guardCorps',
'corps1', 'corps2',
'corps3', 'corps4',
'corps5', 'corps6',
'corps7', 'corps8',
'corps9', 'corps10',
'corps11', 'corps14',
'corps15'])
Explanation: Create a dataframe and set the order of the columns using the columns attribute
End of explanation
horsekick.index = [1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884,
1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894]
Explanation: Set the dataframe's index to be year
End of explanation
horsekick
Explanation: View the horsekick dataframe
End of explanation
result = horsekick.apply(pd.value_counts).fillna(0); result
Explanation: Count the number of times each number of deaths occurs in each regiment
End of explanation
pd.value_counts(horsekick['guardCorps'].values, sort=False)
Explanation: Count the number of times each monthly death total appears in guardCorps
End of explanation
horsekick['guardCorps'].unique()
Explanation: List all the unique values in guardCorps
End of explanation |
10,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Activity
Step1: Add a cube next to the mini_dof
Step2: The goal is to push the cube away along the y axis.
The team that moves the cube the farthest wins!
First method: trial and error
Step3: The goto_position method, which sends a motor to an angle between 0 and 150
Step4: To get your score, let's look at the cube's coordinates along the y axis
Step5: restart the simulation
Step6: Second method: by computation
Step7: Study the following figure to find the motors' starting angles directly (note that the robot's arm segments form an equilateral triangle)
Step8: Compute the final angles that push the cube as far away as possible
Step9: To get your score, let's look at the cube's coordinates along the y axis
Step10: Experiments on the real robot
Now you can run your programs on a real mini_dof. Careful: the cube must be placed 10 cm from the edge of the support.
Step11: Put the robot in the starting position, with all motors at zero.
Step12: Make sure the robot is positioned correctly with respect to the cube, i.e. facing the same way as in the simulator, with the cube 10 cm from the edge of the robot's support. Also make sure the support is held down so it cannot tip over.
You can then copy the code that worked in the simulator to test it on the real robot.
The trial-and-error code
Step13: Measure how far the cube moved and compare it with the distance obtained in the simulation.
Put the robot back in the starting position
Don't forget to put the cube back!
Now with the computed angles | Python Code:
from poppy.creatures import Poppy4dofArmMini
mini_dof = Poppy4dofArmMini(simulator='vrep')
Explanation: Activity: moving an object with a robotic arm
Skills targeted by this activity:
Solve a problem through experimentation. Compare the experimental approach with the computational approach.
Get introduced to robotics and the notion of movement in space.
Use geometry theorems and compute angles.
Links to the school curricula, see:
For middle school: http://www.poppy-prof.fr/?page_id=4&id=63 <br>
For the ICN course in the first year of high school: http://www.poppy-prof.fr/?page_id=4&id=62
Instantiate the arm, which we will call mini_dof:
End of explanation
io = mini_dof._controllers[0].io
name = 'cube'
position = [0, 0, 0] # X, Y, Z
sizes = [0.07, 0.07, 0.07] # in meters
mass = 0.01 # in kg
io.add_cube(name, position, sizes, mass)
Explanation: Add a cube next to the mini_dof:
End of explanation
mini_dof.motors
Explanation: The goal is to push the cube away along the y axis.
The team that moves the cube the farthest wins!
First method: trial and error
The list of motors:
End of explanation
# The code below is meant to be written by the students;
# this solution is given for reference only
mini_dof.m3.goto_position(90,2,wait='True')
mini_dof.m1.goto_position(90,3,wait='True')
mini_dof.m1.goto_position(-90,3,wait='True')
mini_dof.m3.goto_position(125,3,wait='True')
mini_dof.m2.goto_position(20,3,wait='True')
mini_dof.m4.goto_position(30,3,wait='True')
mini_dof.m4.goto_position(-30,3,wait='True')
mini_dof.m4.goto_position(-40,3,wait='True')
mini_dof.m4.goto_position(-60,3,wait='True')
mini_dof.m2.goto_position(30,3,wait='True')
mini_dof.m2.goto_position(40,3,wait='True')
mini_dof.m3.goto_position(110,3,wait='True')
mini_dof.m2.goto_position(50,3,wait='True')
mini_dof.m3.goto_position(90,3,wait='True')
mini_dof.m2.goto_position(60,3,wait='True')
mini_dof.m3.goto_position(70,3,wait='True')
Explanation: La méthode goto_position qui permet de donner un angle entre 0 et 150 à un moteur :
nom_du_robot.nom_du_moteur.goto_position(angle,durée en seconde)
End of explanation
d = io.get_object_position('cube')
dy = d[1]*100
print "La déplacement du cube sur l'axe y a été de %.2f cm" % dy
Explanation: Pour obtenir votre score, regardons les coordonnées du cube sur l'axe y :
End of explanation
mini_dof.close()
mini_dof = Poppy4dofArmMini(simulator='vrep')
Explanation: recommencer la simulation :
End of explanation
name = 'cube'
position = [0, 0, 0] # X, Y, Z
sizes = [0.07, 0.07, 0.07] # in meters
mass = 0.001 # in kg
io.add_cube(name, position, sizes, mass)
Explanation: Seconde méthode, par le calcul :
Replacer le cube :
End of explanation
# Le code ci-dessous devrait être écrit par les élèves
# la correction est donnée à titre indicatif
mini_dof.m1.goto_position(-90,4)
mini_dof.m2.goto_position(30,3,)
mini_dof.m3.goto_position(120,3,)
mini_dof.m4.goto_position(-60,3,wait='True')
Explanation: Analyser la figure suivante pour trouver directement les angles de départ des moteurs (notez bien que les bras du robot forment un triangle équilatéral) :
<img src="./images/triangle.jpg" alt="mini_dof" style="height: 500px;"/>
End of explanation
# Le code ci-dessous devrait être écrit par les élèves
# la correction est donnée à titre indicatif
mini_dof.m2.goto_position(90,3,)
mini_dof.m3.goto_position(0,3,)
mini_dof.m4.goto_position(0,3,wait='True')
Explanation: Calculer les angles finaux pour éloigner au maximum le cube :
End of explanation
d = io.get_object_position('cube')
dy = (d[1])*100
print "La déplacement du cube sur l'axe y a été de %.2f cm" % dy
mini_dof.close()
Explanation: Pour obtenir votre score, regardons les coordonnées du cube sur l'axe y :
End of explanation
mini_dof=Poppy4dofArmMini()
Explanation: Expérimentations appliquées sur le robot réel
Maintenant, vous pouvez mettre en oeuvre vos programmes sur un véritable mini_dof. Attention, il faut placer le cube à 10cm du bord du support.
End of explanation
for m in mini_dof.motors :
m.compliant=False
m.goto_position(0,2)
Explanation: Positionner le robot dans la position de départ, tous les moteurs à zéro.
End of explanation
# Le code ci-dessous devrait être écrit par les élèves
# la correction est donnée à titre indicatif
mini_dof.m3.goto_position(90,2,wait='True')
mini_dof.m1.goto_position(90,3,wait='True')
mini_dof.m1.goto_position(-90,3,wait='True')
mini_dof.m3.goto_position(125,3,wait='True')
mini_dof.m2.goto_position(20,3,wait='True')
mini_dof.m4.goto_position(30,3,wait='True')
mini_dof.m4.goto_position(-30,3,wait='True')
mini_dof.m4.goto_position(-40,3,wait='True')
mini_dof.m4.goto_position(-60,3,wait='True')
mini_dof.m2.goto_position(30,3,wait='True')
mini_dof.m2.goto_position(40,3,wait='True')
mini_dof.m3.goto_position(110,3,wait='True')
mini_dof.m2.goto_position(50,3,wait='True')
mini_dof.m3.goto_position(90,3,wait='True')
mini_dof.m2.goto_position(60,3,wait='True')
mini_dof.m3.goto_position(70,3,wait='True')
Explanation: Veillez à bien placer le robot par rapport au cube. C'est à dire dans le même sens que dans le simulateur, avec le cube à 10 cm du bord du support du robot. Il faut également veiller à ce que le support soit immobilisé pour ne pas basculer.
Vous pouvez ensuite recopier le code qui fonctionné dans le simulateur pour le tester avec le vrai robot.
Le code de la méthode par tatonnement :
End of explanation
for m in mini_dof.motors :
m.goto_position(0,2)
Explanation: Mesurer le déplacement du cube et comparez-le avec l'éloignement obtenu lors de la simulation.
Remettez le robot en position de départ :
End of explanation
# Le code ci-dessous devrait être écrit par les élèves
# la correction est donnée à titre indicatif
mini_dof.m2.goto_position(30,3,)
mini_dof.m3.goto_position(120,3,)
mini_dof.m4.goto_position(-60,3)
mini_dof.m1.goto_position(-90,4,wait='True')
mini_dof.m2.goto_position(90,3,)
mini_dof.m3.goto_position(0,3,)
mini_dof.m4.goto_position(0,3,wait='True')
mini_dof.close()
Explanation: N'oubliez pas de replacer le cube !
Maintenant avec les angles calculés :
End of explanation |
10,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating Synthetic Data
In data analysis, it is important that we have the ability to test our assumptions. One powerful tool to enable these tests is simulation. In 3ML, we have several ways to generate synthetic data sets both from models and from fits.
Synthetic data from spectra
Genertating data from models
Most of the current plugins support the ability to generate synthetic data directly from a model. This can be very useful to assertain the detectability of a source/component/line or simply to see how models look once they are transformed into data. Below we will demonstrate how different plugins transform a model into synthetic data.
XYLike
In many of the examples, the basic XYLike plugin has been used to generate synthetic data. Here, we will revisit the plugin for completeness.
Step1: SpectrumLike
Generating synthetic spectra from SpectrumLike (non-energy dispersed count spectra) can take many forms with different inputs.
First, let's set the energy bins we will use for all generated spectra
Step2: Now, let's use a blackbody for the source spectrum.
Step3: Poisson spectrum with no background
Step4: Gaussian spectrum with no background
Step5: Poisson spectrum with Poisson Background
Step6: Poisson spectrum with Gaussian background
Step7: DispersionSpectrumLike
DispersionSpectrumLike behaves in the same fashion as SpectrumLike except that a 3ML Instrument response must be set which means that the energy bins do not need to be specified as they are derived from the response
Let's grab a response from an instrument.
Step8: Generating spectra from fitted models
When performing goodness of fit tests, likelihood ratio tests (both automatic in 3ML) or posterior predictive checks, we need to generate synthetic data from our fitted models. Therefore, we proved methods to do this for most current plugins.
XYLike
Let's load some example, generic XY data and fit it with a power law.
Step9: Once our fit has been finished, we can produce simulated data sets from those model parameters.
Step10: SpectrumLike and DispersionSpectrumLike (OGIPLike)
Both spectrum plugins work in the same way when generating data from a fit. They both keep track of the statistical properties of the likelihoods in the plugin so that the simulated datasets have the appropriate statistical properties. Additionally, background, responsses, etc. are simulated and/or kept track of as well.
Let's fit an example energy dispersed spectrum.
Step11: Now we can now generate synthetic datasets from the fitted model. This will include the background sampled properly from the profile likelihood. The instrument response is automatically passed to the new plugin. | Python Code:
from threeML import *
import numpy as np
%matplotlib inline
jtplot.style(context="talk", fscale=1, ticks=True, grid=False)
import matplotlib.pyplot as plt
plt.style.use("mike")
import warnings
warnings.simplefilter("ignore")
# Select an astromodels function to from which to simualte
generating_function = Powerlaw(K=1.0, index=-2, piv=10.0)
# set up the x grig points
x_points = np.logspace(0, 2, 50)
# call the from_function classmethod
xyl_generator = XYLike.from_function(
"sim_data",
function=generating_function,
x=x_points,
yerr=0.3 * generating_function(x_points),
)
xyl_generator.plot(x_scale="log", y_scale="log")
Explanation: Generating Synthetic Data
In data analysis, it is important that we have the ability to test our assumptions. One powerful tool to enable these tests is simulation. In 3ML, we have several ways to generate synthetic data sets both from models and from fits.
Synthetic data from spectra
Genertating data from models
Most of the current plugins support the ability to generate synthetic data directly from a model. This can be very useful to assertain the detectability of a source/component/line or simply to see how models look once they are transformed into data. Below we will demonstrate how different plugins transform a model into synthetic data.
XYLike
In many of the examples, the basic XYLike plugin has been used to generate synthetic data. Here, we will revisit the plugin for completeness.
End of explanation
energies = np.logspace(0,2,51)
# create the low and high energy bin edges
low_edge = energies[:-1]
high_edge = energies[1:]
Explanation: SpectrumLike
Generating synthetic spectra from SpectrumLike (non-energy dispersed count spectra) can take many forms with different inputs.
First, let's set the energy bins we will use for all generated spectra
End of explanation
# get a BPL source function
source_function = Blackbody(K=1, kT = 5.)
Explanation: Now, let's use a blackbody for the source spectrum.
End of explanation
spectrum_generator = SpectrumLike.from_function('fake',
source_function=source_function,
energy_min=low_edge,
energy_max=high_edge)
spectrum_generator.view_count_spectrum()
Explanation: Poisson spectrum with no background
End of explanation
spectrum_generator = SpectrumLike.from_function('fake',
source_function=source_function,
source_errors= 0.5 * source_function(low_edge),
energy_min=low_edge,
energy_max=high_edge)
spectrum_generator.view_count_spectrum()
Explanation: Gaussian spectrum with no background
End of explanation
# power law background function
background_function = Powerlaw(K=.7,index=-1.5, piv=10.)
spectrum_generator = SpectrumLike.from_function('fake',
source_function=source_function,
background_function=background_function,
energy_min=low_edge,
energy_max=high_edge)
spectrum_generator.view_count_spectrum()
Explanation: Poisson spectrum with Poisson Background
End of explanation
spectrum_generator = SpectrumLike.from_function('fake',
source_function=source_function,
background_function=background_function,
background_errors= 0.1 * background_function(low_edge),
energy_min=low_edge,
energy_max=high_edge)
spectrum_generator.view_count_spectrum()
Explanation: Poisson spectrum with Gaussian background
End of explanation
from threeML.io.package_data import get_path_of_data_file
from threeML.utils.OGIP.response import OGIPResponse
# we will use a demo response
response = OGIPResponse(get_path_of_data_file("datasets/ogip_powerlaw.rsp"))
# rescale the functions for the response
source_function = Blackbody(K=1e-7, kT=500.0)
background_function = Powerlaw(K=1, index=-1.5, piv=1.0e3)
spectrum_generator = DispersionSpectrumLike.from_function(
"fake",
source_function=source_function,
background_function=background_function,
response=response,
)
spectrum_generator.view_count_spectrum();
Explanation: DispersionSpectrumLike
DispersionSpectrumLike behaves in the same fashion as SpectrumLike except that a 3ML Instrument response must be set which means that the energy bins do not need to be specified as they are derived from the response
Let's grab a response from an instrument.
End of explanation
data_path = get_path_of_data_file("datasets/xy_powerlaw.txt")
xyl = XYLike.from_text_file("xyl", data_path)
fit_function = Powerlaw()
xyl.fit(fit_function)
xyl.plot(x_scale="log", y_scale="log")
Explanation: Generating spectra from fitted models
When performing goodness of fit tests, likelihood ratio tests (both automatic in 3ML) or posterior predictive checks, we need to generate synthetic data from our fitted models. Therefore, we proved methods to do this for most current plugins.
XYLike
Let's load some example, generic XY data and fit it with a power law.
End of explanation
synthetic_xyl = xyl.get_simulated_dataset()
synthetic_xyl.plot(x_scale="log", y_scale="log")
Explanation: Once our fit has been finished, we can produce simulated data sets from those model parameters.
End of explanation
ogip_data = OGIPLike(
"ogip",
observation=get_path_of_data_file("datasets/ogip_powerlaw.pha"),
background=get_path_of_data_file("datasets/ogip_powerlaw.bak"),
response=get_path_of_data_file("datasets/ogip_powerlaw.rsp"),
)
ogip_data.view_count_spectrum()
# define the function
fit_function = Cutoff_powerlaw()
# define the point source
point_source = PointSource("ps", 0, 0, spectral_shape=fit_function)
# define the model
model = Model(point_source)
# create a data list
datalist = DataList(ogip_data)
# make the joint likelihood
jl = JointLikelihood(model, datalist)
# fit
jl.fit()
Explanation: SpectrumLike and DispersionSpectrumLike (OGIPLike)
Both spectrum plugins work in the same way when generating data from a fit. They both keep track of the statistical properties of the likelihoods in the plugin so that the simulated datasets have the appropriate statistical properties. Additionally, background, responsses, etc. are simulated and/or kept track of as well.
Let's fit an example energy dispersed spectrum.
End of explanation
synthetic_ogip = ogip_data.get_simulated_dataset()
synthetic_ogip.view_count_spectrum()
Explanation: Now we can now generate synthetic datasets from the fitted model. This will include the background sampled properly from the profile likelihood. The instrument response is automatically passed to the new plugin.
End of explanation |
10,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everithing is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step14: Donor Leakage fit
Half-Sample Mode
Fit peak usng the mode computed with the half-sample algorithm (Bickel 2005).
Step15: Gaussian Fit
Fit the histogram with a gaussian
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "22d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:34:36 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everithing is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
def hsm_mode(s):
Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
Explanation: Donor Leakage fit
Half-Sample Mode
Fit peak usng the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
Explanation: KDE maximum
End of explanation
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
Explanation: Leakage summary
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst size distribution
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
10,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learn About Kernels
Do some SVM Classification
Step1: Can try it with Outliers if we have time
Let's look at some spectra
Step2: Notice that these training sets are unbalanced
Step3: Does this seem to be too good to be true? | Python Code:
from sklearn.svm import SVC
### SVC wants a 1d array, not a column vector
Targets = np.ravel(TargetOutputs)
InitSVM = SVC()
InitSVM
TrainedSVM = InitSVM.fit(AllSamps, Targets)
y = TrainedSVM.predict(AllSamps)
plt.figure(1)
plt.plot(y)
plt.show()
d = TrainedSVM.decision_function(AllSamps)
plt.figure(1)
plt.plot(d)
plt.show()
Explanation: Learn About Kernels
Do some SVM Classification
End of explanation
### Look at some Pine and Oak spectra from
### NEON Site D03 Ordway-Swisher Biological Station
### at UF
### Pinus palustris
### Quercus virginiana
InFile1 = 'Pines.mat'
InFile2 = 'Oaks.mat'
C1Dict = io.loadmat(InFile1)
C2Dict = io.loadmat(InFile2)
Pines = C1Dict['Pines']
Oaks = C2Dict['Oaks']
WvFile = 'NEONWvsNBB.mat'
WvDict = io.loadmat(WvFile)
Wv = WvDict['NEONWvsNBB']
Pines.shape
Oaks.shape
NBands=Wv.shape[0]
print(NBands)
Explanation: Can try it with Outliers if we have time
Let's look at some spectra
End of explanation
NTrainSampsClass = 600
NTestSampsClass = 200
Targets = np.ones((1200,1))
Targets[range(600)] = -Targets[range(600)]
Targets = np.ravel(Targets)
print(Targets.shape)
plt.figure(111)
plt.plot(Targets)
plt.show()
TrainPines = Pines[0:600,:]
TrainOaks = Oaks[0:600,:]
#TrainSet = np.concatenate?
TrainSet = np.concatenate((TrainPines, TrainOaks), axis=0)
print(TrainSet.shape)
plt.figure(3)
### Plot Pine Training Spectra ###
plt.subplot(121)
plt.plot(Wv, TrainPines.T)
plt.ylim((0.0,0.8))
plt.xlim((Wv[1], Wv[NBands-1]))
### Plot Oak Training Spectra ###
plt.subplot(122)
plt.plot(Wv, TrainOaks.T)
plt.ylim((0.0,0.8))
plt.xlim((Wv[1], Wv[NBands-1]))
plt.show()
InitSVM= SVC()
TrainedSVM=InitSVM.fit(TrainSet, Targets)
plt.figure(4)
plt.plot(d)
plt.show()
Explanation: Notice that these training sets are unbalanced
End of explanation
TestPines = Pines[600:800,:]
TestOaks = Oaks[600:800,:]
TestSet = np.concatenate((TestPines, TestOaks), axis=0)
print(TestSet.shape)
dtest = TrainedSVM.decision_function(TestSet)
plt.figure(5)
plt.plot(dtest)
plt.show()
Explanation: Does this seem to be too good to be true?
End of explanation |
10,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apache Drill - Hansard Demo
Download and install Apache Drill.
Start Apache Drill in the Apache Drill directory
Step1: Make things faster
We can get a speed up on querying the CSV file by converting it to the parquet format.
In the Apache Drill terminal, run something like the following (change the path to the CSV file as required)
Step2: The Hansard data gives the date of each speech but not the session. To search for speeches within a particular session, we need the session dates. We can get these from the Parliament data API.
Step3: The data is currently in a long (tidy) format. To make it easier to plot, we can reshape it (unmelt it) by casting it into a wide format, with one row per session and and the gender averages arranged by column.
Step4: Now we can plot the data - the session axis should sort in an appropriate way (alphanumerically).
Step5: We can generalise the approach to look at a count of split by party.
Step6: Make a function out of that, as we did before.
Step7: We can write another query to look by gender and party.
Step8: Automating insight...
We can automate some of the observations we might want to make, such as years when M speak more, on average, than F, within a party. | Python Code:
#Download data file
!wget -P /Users/ajh59/Documents/parlidata/ https://zenodo.org/record/579712/files/senti_post_v2.csv
#Install some dependencies
!pip3 install pydrill
!pip3 install pandas
!pip3 install matplotlib
#Import necessary packages
import pandas as pd
from pydrill.client import PyDrill
#Set the notebooks up for inline plotting
%matplotlib inline
#Get a connection to the Apache Drill server
drill = PyDrill(host='localhost', port=8047)
Explanation: Apache Drill - Hansard Demo
Download and install Apache Drill.
Start Apache Drill in the Apache Drill directory: bin/drill-embedded
Tweak the settings as per Querying Large CSV Files With Apache Drill so you can query against column names.
End of explanation
#Test the setup
drill.query(''' SELECT * from dfs.tmp.`/senti_post_v2.parquet` LIMIT 3''').to_dataframe()
Explanation: Make things faster
We can get a speed up on querying the CSV file by converting it to the parquet format.
In the Apache Drill terminal, run something like the following (change the path to the CSV file as required):
CREATE TABLE dfs.tmp.`/senti_post_v2.parquet` AS SELECT * FROM dfs.`/Users/ajh59/Documents/parlidata/senti_post_v2.csv`;
(Running the command from the notebook suffers a timeout?)
End of explanation
#Get Parliament session dates from Parliament API
psd=pd.read_csv('http://lda.data.parliament.uk/sessions.csv?_view=Sessions&_pageSize=50')
psd
def getParliamentDate(session):
start=psd[psd['display name']==session]['start date'].iloc[0]
end=psd[psd['display name']==session]['end date'].iloc[0]
return start, end
getParliamentDate('2015-2016')
#Check the columns in the Hansard dataset, along with example values
df=drill.query(''' SELECT * from dfs.tmp.`/senti_post_v2.parquet` LIMIT 1''').to_dataframe()
print(df.columns.tolist())
df.iloc[0]
# Example of count of speeches by person in the dataset as a whole
q='''
SELECT proper_name, COUNT(*) AS number
FROM dfs.tmp.`/senti_post_v2.parquet`
GROUP BY proper_name
'''
df=drill.query(q).to_dataframe()
df.head()
# Example of count of speeches by gender in the dataset as a whole
q="SELECT gender, count(*) AS `Number of Speeches` FROM dfs.tmp.`/senti_post_v2.parquet` GROUP BY gender"
drill.query(q).to_dataframe()
#Query within session
session='2015-2016'
start,end=getParliamentDate(session)
q='''
SELECT '{session}' AS session, gender, count(*) AS `Number of Speeches`
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY gender
'''.format(session=session, start=start, end=end)
drill.query(q).to_dataframe()
#Count number of speeches per person
start,end=getParliamentDate(session)
q='''
SELECT '{session}' AS session, gender, mnis_id, count(*) AS `Number of Speeches`
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, gender
'''.format(session=session, start=start, end=end)
drill.query(q).to_dataframe().head()
# Example of finding the average number of speeches per person by gender in a particular session
q='''
SELECT AVG(gcount) AS average, gender, session
FROM (SELECT '{session}' AS session, gender, mnis_id, count(*) AS gcount
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, gender)
GROUP BY gender, session
'''.format(session=session, start=start, end=end)
drill.query(q).to_dataframe()
#Note - the average is returned as a string not a numeric
#We can package that query up in a Python function
def avBySession(session):
start,end=getParliamentDate(session)
q='''SELECT AVG(gcount) AS average, gender, session FROM (SELECT '{session}' AS session, gender, mnis_id, count(*) AS gcount
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, gender) GROUP BY gender, session
'''.format(session=session, start=start, end=end)
dq=drill.query(q).to_dataframe()
#Make the average a numeric type...
dq['average']=dq['average'].astype(float)
return dq
avBySession(session)
#Loop through sessions and create a dataframe containing gender based averages for each one
overall=pd.DataFrame()
for session in psd['display name']:
overall=pd.concat([overall,avBySession(session)])
#Tidy up the index
overall=overall.reset_index(drop=True)
overall.head()
Explanation: The Hansard data gives the date of each speech but not the session. To search for speeches within a particular session, we need the session dates. We can get these from the Parliament data API.
End of explanation
#Reshape the dataset
overall_wide = overall.pivot(index='session', columns='gender')
#Flatten the column names
overall_wide.columns = overall_wide.columns.get_level_values(1)
overall_wide
Explanation: The data is currently in a long (tidy) format. To make it easier to plot, we can reshape it (unmelt it) by casting it into a wide format, with one row per session and and the gender averages arranged by column.
End of explanation
overall_wide.plot(kind='barh');
overall_wide.plot();
Explanation: Now we can plot the data - the session axis should sort in an appropriate way (alphanumerically).
End of explanation
# Example of finding the average number of speeches per person by party in a particular session
# Simply tweak the query we used for gender...
q='''
SELECT AVG(gcount) AS average, party, session
FROM (SELECT '{session}' AS session, party, mnis_id, count(*) AS gcount
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, party)
GROUP BY party, session
'''.format(session=session, start=start, end=end)
drill.query(q).to_dataframe()
Explanation: We can generalise the approach to look at a count of split by party.
End of explanation
def avByType(session,typ):
start,end=getParliamentDate(session)
q='''SELECT AVG(gcount) AS average, {typ}, session
FROM (SELECT '{session}' AS session, {typ}, mnis_id, count(*) AS gcount
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, {typ})
GROUP BY {typ}, session
'''.format(session=session, start=start, end=end, typ=typ)
dq=drill.query(q).to_dataframe()
#Make the average a numeric type...
dq['average']=dq['average'].astype(float)
return dq
def avByParty(session):
return avByType(session,'party')
avByParty(session)
# Create a function to loop through sessions and create a dataframe containing specified averages for each one
# Note that this just generalises and packages up the code we had previously
def pivotAndFlatten(overall,typ):
#Tidy up the index
overall=overall.reset_index(drop=True)
overall_wide = overall.pivot(index='session', columns=typ)
#Flatten the column names
overall_wide.columns = overall_wide.columns.get_level_values(1)
return overall_wide
def getOverall(typ):
overall=pd.DataFrame()
for session in psd['display name']:
overall=pd.concat([overall,avByType(session,typ)])
return pivotAndFlatten(overall,typ)
overallParty=getOverall('party')
overallParty.head()
#Note that the function means it's now just as easy to query on another single column
getOverall('party_group')
overallParty.plot(kind='barh', figsize=(20,20));
parties=['Conservative','Labour']
overallParty[parties].plot();
Explanation: Make a function out of that, as we did before.
End of explanation
def avByGenderAndParty(session):
start,end=getParliamentDate(session)
q='''SELECT AVG(gcount) AS average, gender, party, session
FROM (SELECT '{session}' AS session, gender, party, mnis_id, count(*) AS gcount
FROM dfs.tmp.`/senti_post_v2.parquet`
WHERE speech_date>='{start}' AND speech_date<='{end}'
GROUP BY mnis_id, gender, party)
GROUP BY gender, party, session
'''.format(session=session, start=start, end=end)
dq=drill.query(q).to_dataframe()
#Make the average a numeric type...
dq['average']=dq['average'].astype(float)
return dq
gp=avByGenderAndParty(session)
gp
gp_overall=pd.DataFrame()
for session in psd['display name']:
gp_overall=pd.concat([gp_overall,avByGenderAndParty(session)])
#Pivot table it more robust than pivot - missing entries handled with NA
#Also limit what parties we are interested in
gp_wide = gp_overall[gp_overall['party'].isin(parties)].pivot_table(index='session', columns=['party','gender'])
#Flatten column names
gp_wide.columns = gp_wide.columns.droplevel(0)
gp_wide
gp_wide.plot(figsize=(20,10));
gp_wide.plot(kind='barh', figsize=(20,10));
Explanation: We can write another query to look by gender and party.
End of explanation
# Go back to the full dataset, not filtered by party
gp_wide = gp_overall.pivot_table(index='session', columns=['party','gender'])
#Flatten column names
gp_wide.columns = gp_wide.columns.droplevel(0)
gp_wide.head()
sp_wide = gp_wide.reset_index().melt(id_vars=['session']).pivot_table(index=['session','party'], columns=['gender'])
#Flatten column names
sp_wide.columns = sp_wide.columns.droplevel(0)
sp_wide#.dropna(how='all')
#Sessions when F spoke more, on average, then M
#Recall, this data has been previously filtered to limit data to Con and Lab
#Tweak the precision of the display
pd.set_option('precision',3)
sp_wide[sp_wide['Female'].fillna(0) > sp_wide['Male'].fillna(0) ]
Explanation: Automating insight...
We can automate some of the observations we might want to make, such as years when M speak more, on average, than F, within a party.
End of explanation |
10,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Step1: Step 2
Step2: Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.
Let's try to do the following
Step3: Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!
Step4: Step 3
Step5: Step 4
Step6: Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.
We know that a line has the equation
Step8: Step 5
Step9: Since the root mean square error (RMSE) corresponds approximately to the standard deviation we can now say that the price of a house won't vary more than 2 times the RMSE 95% of the time. Note
Step11: The functions we will be using are
Step12: Step 7
Step13: Step 8
Step14: It looks like our mean square error between our training and testing was pretty close.
Step 9 | Python Code:
# Standard imports
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# scikit learn
import sklearn
from sklearn.datasets import load_boston
# Load the housing data sets
boston = load_boston()
print boston.DESCR
Explanation: Supervised Learning: Linear Regression
We'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston. We'll start off with a single variable linear regression using numpy and then move on to using scikit learn. We'll do an overview of the mathematics behind the method we're using, but mostly we'll dive deeper into pratical "hands-on" coding lessons.
In this section we will be working through linear regression with the following steps:
Step 1: Getting and setting up the data.
Step 2: Visualizing current data.
Step 3: The mathematics behind the Least Squares Method.
Step 4: Using Numpy for a Univariate Linear Regression.
Step 5: Getting the error.
Step 6: Using scikit learn to implement a multivariate regression.
Step 7: Using Training and Validation.
Step 8: Predicting Prices
Step 9 : Residual Plots
Step 1: Getting and setting up the data.
We'll start by looking a an example of a dataset from scikit-learn. First we'll import our usual data analysis imports, then sklearn's built-in boston dataset.
End of explanation
# histogram of prices
plt.hist(boston.target, bins=50)
plt.xlabel('Prices in 1000$')
plt.ylabel('Number of houses')
plt.title('Prices Vs Houses')
plt.savefig('house_vs_price.png')
plt.scatter(boston.data[:,5], boston.target)
#label
plt.ylabel('Price in $1000s')
plt.xlabel('Number of rooms')
Explanation: Step 2: Visualizing current data
You should always try to do a quick visualization fo the data you have. Let's go ahead an make a histogram of the prices.
End of explanation
# converting into dataFrame
boston_df = DataFrame(boston.data)
boston_df.columns= boston.feature_names
boston_df.head()
# Creating a price column in dataFrame
boston_df['PRICE'] = boston.target
boston_df.head()
Explanation: Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.
Let's try to do the following:
1.) Use pandas to transform the boston dataset into a DataFrame:
2.) Then use seaborn to perform an lmplot on that DataFrame to reproduce the scatter plot with a linear fit line.
End of explanation
# linear regression plot
sns.lmplot('RM', 'PRICE', data=boston_df)
Explanation: Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!
End of explanation
# Quick display of image form wikipedia
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Linear_least_squares_example2.svg/220px-Linear_least_squares_example2.svg.png'
Image(url)
Explanation: Step 3: The mathematics behind the Least Squares Method.
In this we'll use the least squares method as the way to estimate the coefficients. Here's a quick breakdown of how this method works mathematically:
Take a quick look at the plot we created above using seaborn. Now consider each point, and know that they each have a coordinate in the form (X,Y). Now draw an imaginary line between each point and our current "best-fit" line. We'll call the distanace between each point and our current best-fit line, D. To get a quick image of what we're currently trying to visualize, take a look at the picture below:
End of explanation
# Numpy linear algebra needs to have data in the form data and parameters
x = boston_df.RM
#print x.shape
x = np.vstack(boston_df.RM)
#print x.shape
y = boston_df.PRICE
Explanation: Step 4: Using Numpy for a Univariate Linear Regression
Numpy has a built in Least Square Method in its linear algebra library. We'll use this first for our Univariate regression and then move on to scikit learn for out Multi variate regression.
We will start by setting up the X and Y arrays for numpy to take in. An important note for the X array: Numpy expects a two-dimensional array, the first dimension is the different example values, and the second dimension is the attribute number. In this case we have our value as the mean number of rooms per house, and this is a single attribute so the second dimension of the array is just 1. So we'll need to create a (506,1) shape array. There are a few ways to do this, but an easy way to do this is by using numpy's built-in vertical stack tool, vstack.
End of explanation
# using list comprehension
x = np.array([[value, 1] for value in x])
# Now get out m and b values for our best fit line
m, b = np.linalg.lstsq(x, y)[0]
# Plotting the same lm plot that we plotted earlier using seaborn
plt.plot(boston_df.RM,boston_df.PRICE ,'o')
# plotting line
X = boston_df.RM
plt.plot(X,m*X + b,'red', label = 'Best Fit')
plt.savefig('bestfit.png')
Explanation: Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.
We know that a line has the equation:
y=mx+b
which we can rewrite using matrices:
y=Ap
where:
A=[x 1]
and
p=[m b]
This is the same as the first equation if you carry out the linear algebra. So we'll start by creating the A matrix using numpy. We'll do this by creating a matrix in the form [X 1], so we'll call every value in our original X using a list comprehension and then set up an array in the form [X 1]
End of explanation
Dependent variable always on y axis and independent variable on x axis while plotting.
result = np.linalg.lstsq(x,y)[1]
# Total error
total_error = np.sqrt(result/len(x))
print "The root mean square error is: {}" .format(float(total_error))
Explanation: Step 5: Getting the error
We've just completed a single variable regression using the least squares method with Python! Let's see if we can find the error in our fitted check the link. Checking out the documentation here, we see that the resulting array has the total squared error. For each element, it checks the the difference between the line and the true value (our original D value), squares it, and returns the sum of all these. This was the summed D^2 value we discussed earlier.
It's probably easier to understand the root mean squared error, which is similar to the standard deviation. In this case, to find the root mean square error we divide by the number of elements and then take the square root. There is also an issue of bias and an unbiased regression, but we'll delve into those topics later.
For now let's see how we can get the root mean squared error of the line we just fitted.
End of explanation
# sklearn imports
from sklearn.linear_model import LinearRegression
# Create a LinearRegression Object
lreg = LinearRegression()
Explanation: Since the root mean square error (RMSE) corresponds approximately to the standard deviation we can now say that the price of a house won't vary more than 2 times the RMSE 95% of the time. Note: Review the Normal Distribution Appendix lecture if this doesn't make sense to you or check out this link.
Thus we can reasonably expect a house price to be within $13,200 of our line fit.
Step 6: Using scikit learn to implement a multivariate regression
Now, we'll keep moving along with using scikit learn to do a multi variable regression. This will be a similar apporach to the above example, but sci kit learn will be able to take into account more than just a single data variable effecting the target!
We'll start by importing the linear regression library from the sklearn module.
The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods.
End of explanation
# In order to drop a coloumn we use '1'
x_multi = boston_df.drop('PRICE', 1)
y_target = boston_df.PRICE
# Implement Linear Regression
lreg.fit(x_multi, y_target)
print "The estimated intercept {}" .format(lreg.intercept_)
print "The number of coefficients used {}." .format(len(lreg.coef_))
coeff_df = DataFrame(boston_df.columns)
coeff_df.columns = ['Features']
coeff_df['Coefficient'] = Series(lreg.coef_)
These 13 coefficients are used to bild the line that is used as best fit line by
scikit learn
coeff_df
Explanation: The functions we will be using are:
lreg.fit() which fits a linear model
lreg.predict() which is used to predict Y using the linear model with estimated coefficients
lreg.score() which returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, learn more about it here
End of explanation
# Getting the tranning and testing data sets
X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(x, boston_df.PRICE)
# The outputs
print X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
Explanation: Step 7: Using Training and Validation
In a dataset a training set is implemented to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset to be part either the training or validation (also called test) set is randomly.
Fortunately, scikit learn has a built in function specifically for this called train_test_split.
The parameters passed are your X and Y, then optionally test_size parameter, representing the proportion of the dataset to include in the test split. As well a train_size parameter. ou can learn more about these parameters here
End of explanation
legr = LinearRegression()
legr.fit(X_train, Y_train)
pred_train = legr.predict(X_train)
pred_test = legr.predict(X_test)
print "Fit a model X_train, and calculate MSE with Y_train: {}" .format(np.mean((Y_train - pred_train)**2))
print "Fit a model X_train, and calculate MSE with X_test and Y_test: {}" .format(np.mean((Y_test - pred_test)**2))
Explanation: Step 8: Predicting Prices
Now that we have our training and testing sets, let's go ahead and try to use them to predict house prices. We'll use our training set for the prediction and then use our testing set for validation.
End of explanation
# Scater plot the training data
train = plt.scatter(pred_train, (pred_train - Y_train), c='b', alpha=0.8)
# Scatter plot the testing data
test = plt.scatter(pred_test, (pred_test - Y_test), c='r', alpha=0.6)
# Horizontal line
plt.hlines(y=0, xmin=-10, xmax=50)
#Labels
plt.legend((train,test),('Training','Test'),loc='lower left')
plt.title('Residual Plots')
plt.savefig('residualplot.png')
Explanation: It looks like our mean square error between our training and testing was pretty close.
Step 9 : Residual Plots
In regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that:
Residual=Observedvalue−Predictedvalue
You can think of these residuals in the same way as the D value we discussed earlier, in this case however, there were multiple data points considered.
A residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.
Residual plots are a good way to visualize the errors in your data. If you have done a good job then your data should be randomly scattered around line zero. If there is some strucutre or pattern, that means your model is not capturing some thing. There could be an interaction between 2 variables that you're not considering, or may be you are measuring time dependent data. If this is the case go back to your model and check your data set closely.
So now let's go ahead and create the residual plot. For more info on the residual plots check out this great link.
End of explanation |
10,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Step1: We first set the number of qubits used in the experiment, and the hidden integer $a$ to be found by the Bernstein-Vazirani algorithm. The hidden integer $a$ determines the circuit for the quantum oracle.
Step2: We then use Qiskit to program the Bernstein-Vazirani algorithm.
Step3: Experiment with Simulators
We can run the above circuit on the simulator.
Step4: We can see that the result of the measurement is the binary representation of the hidden integer $a$.
Experiment with Real Devices
We can run the circuit on the real device as below. | Python Code:
#initialization
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import available_backends, execute, register, get_backend
from qiskit.wrapper.jupyter import *
# import basic plot tools
from qiskit.tools.visualization import circuit_drawer, plot_histogram
# Load our saved IBMQ accounts.
IBMQ.load_accounts()
Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
The Bernstein-Vazirani Algorithm
In this tutorial, we introduce the Bernstein-Vazirani algorithm, one of the earliest algorithms demonstrating the power of quantum computing. Despite its simplicity, it is still widely used and has inspired many other quantum algorithms; it also underlies the power of short-depth quantum circuits, as in Bravyi et al., which uses its non-oracular version, and in Linke et al., which uses it to test the performance of quantum processors (see also the talk by Ken Brown at the ThinkQ 2017 conference). Here, we show an implementation of the Bernstein-Vazirani algorithm in Qiskit that, following Du et al., uses no entanglement, and we test it on IBM Q systems.
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
Contributors
Rudy Raymond
Introduction <a id='introduction'></a>
The Bernstein-Vazirani algorithm deals with finding a hidden integer $a \in \{0,1\}^n$ from an oracle $f_a$ that returns a bit $a \cdot x \equiv \sum_i a_i x_i \mod 2$ upon receiving an input $x \in \{0,1\}^n$. A classical oracle returns $f_a(x) = a \cdot x \mod 2$ given an input $x$. Meanwhile, a quantum oracle behaves similarly, but can be queried with a superposition of inputs $x$.
Classically, the hidden integer $a$ can be revealed by querying the oracle with $x = 1, 2, \ldots, 2^i, \ldots, 2^{n-1}$, where the query with $x = 2^i$ reveals the $i$-th bit of $a$ (i.e., $a_i$). For example, with $x=1$ one obtains the least significant bit of $a$, and so on. This turns out to be an optimal strategy; any classical algorithm that finds the hidden integer with high probability must query the oracle $\Omega(n)$ times. However, given a corresponding quantum oracle, the hidden integer can be found with only $1$ query using the Bernstein-Vazirani algorithm.
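To make the classical baseline concrete, here is a minimal illustrative sketch (the function names are ours, not part of the original tutorial) that models the classical oracle in Python and recovers $a$ with exactly $n$ queries $x = 2^i$:
def classical_oracle(a, x):
    # f_a(x) = a . x mod 2, i.e. the parity of the bitwise AND of a and x
    return bin(a & x).count("1") % 2

def recover_hidden_integer(a, n):
    recovered = 0
    for i in range(n):
        bit = classical_oracle(a, 1 << i)    # the query x = 2^i reveals bit a_i
        recovered |= bit << i
    return recovered

assert recover_hidden_integer(101, 7) == 101     # 101 = 0b1100101, found in 7 queries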
The Algorithm
The Bernstein-Vazirani algorithm to find the hidden integer is very simple: start from the $|0\rangle$ state, apply Hadamard gates, query the oracle, apply Hadamard gates again, and measure. The correctness of the algorithm is best explained by looking at the transformation of a quantum register $|a \rangle$ by $n$ Hadamard gates, one applied to each qubit of the register. It can be shown that
$$
|a\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in {0,1}^n} (-1)^{a\cdot x}|x\rangle.
$$
In particular, when we start with a quantum register $|0\rangle$ and apply $n$ Hadamard gates to it, we have the familiar quantum superposition as below
$$
|0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in {0,1}^n} |x\rangle,
$$
which is slightly different from the Hadamard transform of the register $|a \rangle$ by the phase $(-1)^{a\cdot x}$.
Now, the quantum oracle $f_a$ returns $1$ on input $x$ such that $a \cdot x \equiv 1 \mod 2$, and returns $0$ otherwise. This means we have the following transformation:
$$
|x \rangle \left(|0\rangle - |1\rangle \right) \xrightarrow{f_a} | x \rangle \left(|0 \oplus f_a(x) \rangle - |1 \oplus f_a(x) \rangle \right) = (-1)^{a\cdot x} |x \rangle \left(|0\rangle - |1\rangle \right).
$$
Notice that the second register $|0\rangle - |1\rangle$ in the above does not change and can be omitted for simplicity. In short, the oracle can be used to create $(-1)^{a\cdot x}|x\rangle$ from the input $|x \rangle$. In this tutorial, we follow Du et al. to generate a circuit for a quantum oracle without the need of an ancilla qubit (often used in the standard quantum oracle).
The algorithm to reveal the hidden integer follows naturally by querying the quantum oracle $f_a$ with the quantum superposition obtained from the Hadamard transformation of $|0\rangle$. Namely,
$$
|0\rangle \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}} \sum_{x\in {0,1}^n} |x\rangle \xrightarrow{f_a} \frac{1}{\sqrt{2^n}} \sum_{x\in {0,1}^n} (-1)^{a\cdot x}|x\rangle.
$$
Because the inverse of the $n$ Hadamard gates is again the $n$ Hadamard gates, we can obtain $a$ by
$$
\frac{1}{\sqrt{2^n}} \sum_{x\in {0,1}^n} (-1)^{a\cdot x}|x\rangle \xrightarrow{H^{\otimes n}} |a\rangle.
$$
The (Inner-Product) Oracle <a id='oracle'></a>
Here, we describe how to build the oracle used in the Bernstein-Vazirani algorithm. The oracle is also referred to as the inner-product oracle (while the oracle of the Grover search is known as the Equivalence, or EQ, oracle). Notice that it transforms $|x\rangle$ into $(-1)^{a\cdot x} |x\rangle$. Clearly, we can observe that
$$
(-1)^{a\cdot x} = (-1)^{a_1 x_1} \ldots (-1)^{a_ix_i} \ldots (-1)^{a_nx_n} = \prod_{i: a_i = 1} (-1)^{x_i}.
$$
Therefore, the inner-product oracle can be realized by the following unitary transformation, which is decomposable as single-qubit unitaries:
$$
O_{f_a} = O^1 \otimes O^2 \otimes \ldots \otimes O^i \otimes \ldots \otimes O^n,
$$
where $O^i = (1 - a_i)I + a_i Z$, where $Z$ is the Pauli $Z$ matrix and $I$ is the identity matrix for $a_i \in {0,1}$.
Notice that we start from a separable quantum state $|0\rangle$ and apply a series of transformations that are separable (i.e., can be described by unitaries acting on a single qubit): Hadamard gates to each qubit, followed by the call to the decomposable quantum oracle as Du et al., and another Hadamard gate. Hence, there is no entanglement created during the computation. This is in contrast with the circuit at Linke et al. that used CNOT gates to realize the oracle and an ancilla qubit to store the answer of the oracle.
The Circuit <a id="circuit"></a>
We now implement the Bernstein-Vazirani algorithm with Qiskit by first preparing the environment.
End of explanation
nQubits = 14 # number of physical qubits
a = 101 # the hidden integer whose bitstring is 1100101
# make sure that a can be represented with nQubits
a = a % 2**(nQubits)
Explanation: We first set the number of qubits used in the experiment, and the hidden integer $a$ to be found by the Bernstein-Vazirani algorithm. The hidden integer $a$ determines the circuit for the quantum oracle.
End of explanation
# Creating registers
# qubits for querying the oracle and finding the hidden integer
qr = QuantumRegister(nQubits)
# for recording the measurement on qr
cr = ClassicalRegister(nQubits)
circuitName = "BernsteinVazirani"
bvCircuit = QuantumCircuit(qr, cr)
# Apply Hadamard gates before querying the oracle
for i in range(nQubits):
bvCircuit.h(qr[i])
# Apply barrier so that it is not optimized by the compiler
bvCircuit.barrier()
# Apply the inner-product oracle
for i in range(nQubits):
if (a & (1 << i)):
bvCircuit.z(qr[i])
else:
bvCircuit.iden(qr[i])
# Apply barrier
bvCircuit.barrier()
#Apply Hadamard gates after querying the oracle
for i in range(nQubits):
bvCircuit.h(qr[i])
# Measurement
bvCircuit.barrier(qr)
bvCircuit.measure(qr, cr)
circuit_drawer(bvCircuit)
Explanation: We then use Qiskit to program the Bernstein-Vazirani algorithm.
End of explanation
# use local simulator
backend = Aer.get_backend('qasm_simulator')
shots = 1000
results = execute(bvCircuit, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
Explanation: Experiment with Simulators
We can run the above circuit on the simulator.
End of explanation
%%qiskit_job_status
backend = IBMQ.get_backend('ibmq_16_melbourne')
shots = 1000
job_exp = execute(bvCircuit, backend=backend, shots=shots)
results = job_exp.result()
answer = results.get_counts(bvCircuit)
threshold = int(0.03 * shots) #the threshold of plotting significant measurements
filteredAnswer = {k: v for k,v in answer.items() if v >= threshold} #filter the answer for better view of plots
removedCounts = np.sum([ v for k,v in answer.items() if v < threshold ]) #number of counts removed
filteredAnswer['other_bitstring'] = removedCounts #the removed counts is assigned to a new index
plot_histogram(filteredAnswer)
print(filteredAnswer)
Explanation: We can see that the result of the measurement is the binary representation of the hidden integer $a$.
Experiment with Real Devices
We can run the circuit on the real device as below.
End of explanation |
10,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using KNearestNeighbors (submission 2)
<a rel="license" href="https
Step1: Load training data
Step2: Build features
In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this
Step3: Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about.
Step4: Regress missing PE values
Step5: Apply regression model to missing PE values and merge back into dataframe
Step6: Compute UMAA for lithology model
Step7: Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite).
Step8: Here I use matrix inversion to "solve" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed.
Step9: Below I train the model using 1 to 599 n_neighbors and select a value for n_neighbors to use in my classifier with a high average on the LOGO test. In this case I will use 62. I recommend not running this cell as it takes a while to complete.
Step10: Fit KNearestNeighbors model and apply LeaveOneGroupOut test
There is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high as 80%, which is certainly spurious. For now I'll leave them in, since the validation wells may have rugose hole, too.
Step11: On average the scores are slightly worse than in my KNN_submission_1 model, but that is partially because this time I've included the CROSS H CATTLE well, which performs markedly worse than the other LOGO cases. I am hoping that since the scores for several of the wells have increased, the performance of this model against the validation data will improve.
Apply model to validation dataset
Load validation data (vd), build features, and use the classifier from above to predict facies. Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve proper (?). I use that instead of PE in the classifier so I need to compute it with the validation data. | Python Code:
import pandas as pd
import numpy as np
from sklearn import neighbors
from sklearn import preprocessing
from sklearn.model_selection import LeaveOneGroupOut
import inversion
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Facies classification using KNearestNeighbors (submission 2)
<a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License BY-SA" align="left" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png">
</a>
<br>
Dan Hallau
Here is another KNearestNeighbors solution to the facies classification contest described at https://github.com/seg/2016-ml-contest. A lot of sophisticated models have been submitted for the contest so far (deep neural nets, random forests, etc.) so I thought I'd try submitting a simpler model to see how it stacks up. In that spirit here's another KNearestNeighbors classifier.
Note: The main differences between my KNN Submission 1 and KNN Submission 2 are:
- In submission 2 I use a KNearestNeighborsRegressor to predict PE in records where there is no data. This gives me much more data with which to train the classifier.
- In submission 1 I excluded the CROSS H CATTLE well from the training set, but in submission 2 I include it.
- In submission 1 I excluded records where PHIND was greater than 40%, but in submission 2 I leave those records in the training set, in case rugose hole is an issue in the validation wells.
- In submission 2 I basically did a bootstrapped grid search to optimize the n_neighbors parameter.
I spend a few cells back-calculating some more standard logging curves (RHOB, NPHI, etc), use a KNN regressor to regress missing PE values from other logs, then create a log-based lithology model from a Umaa-Rhomaa plot. After training, I finish it up with a LeaveOneGroupOut test.
End of explanation
df = pd.read_csv('../facies_vectors.csv')
Explanation: Load training data
End of explanation
def estimate_dphi(df):
return ((4*(df['PHIND']**2) - (df['DeltaPHI']**2))**0.5 - df['DeltaPHI']) / 2
def estimate_rhob(df):
return (2.71 - (df['DPHI_EST']/100) * 1.71)
def estimate_nphi(df):
return df['DPHI_EST'] + df['DeltaPHI']
def compute_rhomaa(df):
return (df['RHOB_EST'] - (df['PHIND'] / 100)) / (1 - df['PHIND'] / 100)
def compute_umaa(df):
return ((df['PE'] * df['RHOB_EST']) - (df['PHIND']/100 * 0.398)) / (1 - df['PHIND'] / 100)
Explanation: Build features
In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this:
$$PHIND \approx \sqrt{\frac{NPHI^2 + DPHI^2}{2}}$$
and it is assumed here that DeltaPHI is:
$$DeltaPHI = NPHI - DPHI$$
The functions below use the relationships from the above equations (...two equations, two unknowns...) to estimate NPHI and DPHI (and consequently RHOB).
Once we have RHOB, we can use it combined with PE to estimate apparent grain density (RHOMAA) and apparent photoelectric capture cross-section (UMAA), which are useful in lithology estimations from well logs.
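For reference (a small added worked step), substituting $NPHI = DPHI + DeltaPHI$ into the sum-of-squares relation and solving the resulting quadratic for $DPHI$ gives the closed form used in estimate_dphi below:
$$2 \cdot PHIND^2 = (DPHI + DeltaPHI)^2 + DPHI^2$$
$$DPHI = \frac{\sqrt{4 \cdot PHIND^2 - DeltaPHI^2} - DeltaPHI}{2}$$
taking the positive root of the quadratic; $NPHI$ then follows as $DPHI + DeltaPHI$.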
End of explanation
df['DPHI_EST'] = df.apply(lambda x: estimate_dphi(x), axis=1).astype(float)
df['RHOB_EST'] = df.apply(lambda x: estimate_rhob(x), axis=1)
df['NPHI_EST'] = df.apply(lambda x: estimate_nphi(x), axis=1)
df['RHOMAA_EST'] = df.apply(lambda x: compute_rhomaa(x), axis=1)
Explanation: Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about.
End of explanation
pe = df.dropna()
PE = pe['PE'].values
wells = pe['Well Name'].values
drop_list_pe = ['Formation', 'Well Name', 'Facies', 'Depth', 'PE', 'RELPOS']
fv_pe = pe.drop(drop_list_pe, axis=1).values
X_pe = preprocessing.StandardScaler().fit(fv_pe).transform(fv_pe)
y_pe = PE
reg = neighbors.KNeighborsRegressor(n_neighbors=40, weights='distance')
logo = LeaveOneGroupOut()
f1knn_pe = []
for train, test in logo.split(X_pe, y_pe, groups=wells):
well_name = wells[test[0]]
reg.fit(X_pe[train], y_pe[train])
score = reg.fit(X_pe[train], y_pe[train]).score(X_pe[test], y_pe[test])
print("{:>20s} {:.3f}".format(well_name, score))
f1knn_pe.append(score)
print("-Average leave-one-well-out F1 Score: %6f" % (np.mean(f1knn_pe)))
Explanation: Regress missing PE values
End of explanation
reg.fit(X_pe, y_pe)
fv_apply = df.drop(drop_list_pe, axis=1).values
X_apply = preprocessing.StandardScaler().fit(fv_apply).transform(fv_apply)
df['PE_EST'] = reg.predict(X_apply)
df.PE = df.PE.combine_first(df.PE_EST)
Explanation: Apply regression model to missing PE values and merge back into dataframe:
End of explanation
df['UMAA_EST'] = df.apply(lambda x: compute_umaa(x), axis=1)
Explanation: Compute UMAA for lithology model
End of explanation
df[df.GR < 125].plot(kind='scatter', x='UMAA_EST', y='RHOMAA_EST', c='GR', figsize=(8,6))
plt.ylim(3.1, 2.2)
plt.xlim(0.0, 17.0)
plt.plot([4.8, 9.0, 13.8, 4.8], [2.65, 2.87, 2.71, 2.65], c='r')
plt.plot([4.8, 11.9, 13.8, 4.8], [2.65, 3.06, 2.71, 2.65], c='g')
plt.scatter([4.8], [2.65], s=50, c='r')
plt.scatter([9.0], [2.87], s=50, c='r')
plt.scatter([13.8], [2.71], s=50, c='r')
plt.scatter([11.9], [3.06], s=50, c='g')
plt.text(2.8, 2.65, 'Quartz', backgroundcolor='w')
plt.text(14.4, 2.71, 'Calcite', backgroundcolor='w')
plt.text(9.6, 2.87, 'Dolomite', backgroundcolor='w')
plt.text(12.5, 3.06, 'Illite', backgroundcolor='w')
plt.text(7.0, 2.55, "gas effect", ha="center", va="center", rotation=-55,
size=8, bbox=dict(boxstyle="larrow,pad=0.3", fc="pink", ec="red", lw=2))
plt.text(15.0, 2.78, "barite?", ha="center", va="center", rotation=0,
size=8, bbox=dict(boxstyle="rarrow,pad=0.3", fc="yellow", ec="orange", lw=2))
Explanation: Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite).
End of explanation
# QTZ-CAL-CLAY
ur1 = inversion.UmaaRhomaa()
ur1.set_dol_uma(11.9)
ur1.set_dol_rhoma(3.06)
# QTZ-CAL-DOL
ur2 = inversion.UmaaRhomaa()
df['UR_QTZ'] = np.nan
df['UR_CLY'] = np.nan
df['UR_CAL'] = np.nan
df['UR_DOL'] = np.nan
df.ix[df.GR >= 40, 'UR_QTZ'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_CLY'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_CAL'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR >= 40, 'UR_DOL'] = 0
df.ix[df.GR < 40, 'UR_QTZ'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_DOL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_CAL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
df.ix[df.GR < 40, 'UR_CLY'] = 0
Explanation: Here I use matrix inversion to "solve" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed.
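The inversion module used above is a separate helper that is not included in this notebook. As a rough sketch of the underlying idea (assumed here, with hypothetical function and variable names), each datapoint can be un-mixed by solving a small linear system whose columns are the endpoint values and whose last row forces the fractions to sum to one:
import numpy as np
def ternary_fractions(umaa, rhomaa, endpoints):
    # endpoints: three (UMAA, RHOMAA) pairs, e.g. quartz, calcite, dolomite
    A = np.array([[e[0] for e in endpoints],
                  [e[1] for e in endpoints],
                  [1.0, 1.0, 1.0]])
    b = np.array([umaa, rhomaa, 1.0])
    return np.linalg.solve(A, b)   # relative abundances of the three endpoints
# hypothetical usage with the endpoint values plotted above
qtz, cal, dol = (4.8, 2.65), (13.8, 2.71), (9.0, 2.87)
fractions = ternary_fractions(11.0, 2.80, [qtz, cal, dol])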
End of explanation
#score_list = []
#for i in range(1,600):
# clf = neighbors.KNeighborsClassifier(n_neighbors=i, weights='distance')
# f1knn = []
#
# for train, test in logo.split(X, y, groups=wells):
# well_name = wells[test[0]]
# clf.fit(X[train], y[train])
# score = clf.fit(X[train], y[train]).score(X[test], y[test])
# #print("{:>20s} {:.3f}".format(well_name, score))
# f1knn.append(score)
#
# score_list.append([i, np.mean(f1knn)])
#
#score_list
Explanation: Below I train the model using 1 to 599 n_neighbors and select a value for n_neighbors to use in my classifier with a high average on the LOGO test. In this case I will use 62. I recommend not running this cell as it takes a while to complete.
End of explanation
facies = df['Facies'].values
wells = df['Well Name'].values
drop_list = ['Formation', 'Well Name', 'Facies', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',
'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL', 'PE']
fv = df.drop(drop_list, axis=1).values
X = preprocessing.StandardScaler().fit(fv).transform(fv)
y = facies
clf = neighbors.KNeighborsClassifier(n_neighbors=62, weights='distance')
logo = LeaveOneGroupOut()
f1knn = []
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
clf.fit(X[train], y[train])
score = clf.fit(X[train], y[train]).score(X[test], y[test])
print("{:>20s} {:.3f}".format(well_name, score))
f1knn.append(score)
print("-Average leave-one-well-out F1 Score: %6f" % (np.mean(f1knn)))
f1knn.pop(7)
print("-Average leave-one-well-out F1 Score, no Recruit F1: %6f" % (np.mean(f1knn)))
Explanation: Fit KNearestNeighbors model and apply LeaveOneGroupOut test
There is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high as 80%, which is certainly spurious. For now I'll leave them in, since the validation wells may have rugose hole, too.
End of explanation
clf.fit(X, y)
vd = pd.read_csv('../validation_data_nofacies.csv')
vd['DPHI_EST'] = vd.apply(lambda x: estimate_dphi(x), axis=1).astype(float)
vd['RHOB_EST'] = vd.apply(lambda x: estimate_rhob(x), axis=1)
vd['NPHI_EST'] = vd.apply(lambda x: estimate_nphi(x), axis=1)
vd['RHOMAA_EST'] = vd.apply(lambda x: compute_rhomaa(x), axis=1)
drop_list_vd = ['Formation', 'Well Name', 'Depth', 'PE', 'RELPOS']
fv_vd = vd.drop(drop_list_vd, axis=1).values
X_vd = preprocessing.StandardScaler().fit(fv_vd).transform(fv_vd)
vd['PE_EST'] = reg.predict(X_vd)
vd.PE = vd.PE.combine_first(vd.PE_EST)
vd['UMAA_EST'] = vd.apply(lambda x: compute_umaa(x), axis=1)
vd['UR_QTZ'] = np.nan
vd['UR_CLY'] = np.nan
vd['UR_CAL'] = np.nan
vd['UR_DOL'] = np.nan
vd.ix[vd.GR >= 40, 'UR_QTZ'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR >= 40, 'UR_CLY'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR >= 40, 'UR_CAL'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR >= 40, 'UR_DOL'] = 0
vd.ix[vd.GR < 40, 'UR_QTZ'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR < 40, 'UR_DOL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR < 40, 'UR_CAL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)
vd.ix[vd.GR < 40, 'UR_CLY'] = 0
drop_list1 = ['Formation', 'Well Name', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',
'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL', 'PE']
fv_vd1 = vd.drop(drop_list1, axis=1).values
X_vd1 = preprocessing.StandardScaler().fit(fv_vd1).transform(fv_vd1)
vd_predicted_facies = clf.predict(X_vd1)
vd_predicted_facies
Explanation: On average the scores are slightly worse than in my KNN_submission_1 model, but that is partially because this time I've included the CROSS H CATTLE well, which performs markedly worse than the other LOGO cases. I am hoping that since the scores for several of the wells have increased, the performance of this model against the validation data will improve.
Apply model to validation dataset
Load validation data (vd), build features, and use the classifier from above to predict facies. Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve proper (?). I use that instead of PE in the classifier so I need to compute it with the validation data.
End of explanation |
10,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Milestone report
Instruction
You have proposed a project, collected a data set, cleaned up the data and explored it with descriptive and inferential statistics techniques. Now’s the time to take stock of what you’ve learned. The project milestone is an opportunity for you to practice your data story skills. Your milestone will be reached when you produce an early draft of your final Capstone report. This is a slightly longer (3-5 page) draft that should have the following
Step1: Introduction
Crowdfunding has become a new and exciting way to get capital and to invest. Lending club has jumped into the trend by offering loans with fixed interest rates and terms that the public can choose to invest in. Lending club screens the loans that are applied for and only 10% gets approved and is subsequently offered to the public. By investing a small proportion in many different loans investors can diversify their portfolio and in this way keep the default risk to a minimum (which is estimated by lending club to be 4%). For their services lending club asks a fee of 1%. For investors this is an interesting way to get profit on their investment since it supposedly gives more stable returns than the stock market and higher interest rates than a savings account. The profits depend on the interest rate and the default rate. Therefore it is interesting to see whether certain characteristics of the loan or the borrower give a bigger chance of default, and whether loans with higher interest rates have a bigger chance to default.
For this project the lending club loans dataset is used from Kaggle. (https
Step2: percentage charged off
The first question is what the percentage of 'charged off' loans actually is, so our investors know the risk. Lending club claims it's around 4%. But in the loans that went to full term we see that the percentage is a shocking 18%. So hopefully lending club's selection of the loans will become better in the future in order to get this risk down. This is a question that is left for the future, once the current loans have gone to full term.
Step3: features
There are 74 features in this dataset. They are displayed below. A couple have to do with the loan (32) and a couple have to do with the one that's asking for the loan (39). A few are about loans that were applied for by more than one borrower, namely 'annual_inc_joint', 'dti_joint' and 'verification_status_joint'. But in the loans that went to full term there is only one loan that is not an individual loan, hence these features are not interesting in this case. Also a lot of features have missing values. If we concentrate only on features that have less than 5% missing values, we are left with only 48 features.
Loan
- id
Step4: limitations
To answer the questions about the 'charged off' status and whether investing with lending club is profitable we use only the loans that went to full term. The terms the loans run are 3 or 5 years. And the latest loan information is from 2015. Hence the most recent loan we can look at is already from 2012 and the rest is even older. It might be that lending club has changed its protocols and the found results on this dataset might therefore not apply anymore to new loans. Also 1/3 of the features have so many missing values that they can't be used for analysis. There is one feature 'initial_list_status' where they do not explain what it means (values w/f), hence it cannot be used for interpretation. Some of the features are unique for different loans like 'desc', 'url', 'id', 'title' and are therefore not interesting for our analysis. It might be that there are other features about a borrower that might have an influence on the 'charged off' rate, for instance 'gender', 'age', 'nr-of-kids', 'nr-of-pets', 'marital status', 'political preference'. But we will not be able to investigate this, since we are restricted to features that lending club collected. Also some features might have been registered better for newer loans than older loans or in a different way (because protocols changed) and this might influence our results.
cleaning and wrangling
First comes the selection of only the loans that went to full term and of only the features with not too many missing values. In a later stage, we want to use features for prediction that are selected based on their ability to lead to insights for new investors. Since we work with sklearn, non-numerical features will have to be transformed to numerical features. Dates can be transformed into timestamps, and categorical features will be transformed as well as possible into numerical values. Ordering is important for most algorithms, hence it's important to find an order in the categorical features to keep during the transformation to numerical values. Also scaling/normalizing is important for some algorithms and we have to keep in mind that we have to use the exact same transformation for the test set as we did on the training set. Lastly, missing values, infinity and minus infinity values are not possible during prediction, so they also need to be transformed.
other datasets
The American government has a lot of other datasets available that can be used in combination with this dataset. For instance both zipcode and state information is available. Hence we might add a feature that describes what the political preference is of the state the person lives in. Secondly we might transform the state feature to 'north/west/south/east'. Also we might use the average income for a certain zipcode or state as an extra feature, or the average age.
## Explorations
features of the loans
We will look at a few interesting features to see what the loan characteristics look like. The funded amount turns out to be between 0 and 35,000. Hence it is more like an amount to buy a car than to buy a house. Lending club therefore competes with credit cards and consumer credits. The loans are either 3 or 5 years in length. Furthermore, the purpose of the loan could have something to do with the chance that someone will pay the loan back. If it's for debt consolidation, someone has more loans and therefore will probably be more likely to get into trouble. As it turns out almost all loans are for debt consolidation or credit card debt, which is practically the same thing. Hence it does not look like the most interesting feature to base your choice of investment on. Moreover, debt-to-income seems of course also a very interesting feature. But the difference between loans that were fully paid and loans that were charged off is only 16% debt-to-income versus 18% debt-to-income. Nevertheless, this difference is significant with a T-test. Lastly, people with a mortgage do seem to pay off their loans more often than people who rent. The order is mortgage (16% charged off), own (18% charged off) and rent (20% charged off).
Step5: grade
Lending club has made its own risk assessment of the loans and gives them categories, namely A-G, including subcategories like A1 etc. As we can see below, the proportion of loans that get charged off does increase nicely with the increase in risk category (grade). In the highest risk category still more than half gets fully paid. To compensate for the higher risk, investors in these higher risk loans get higher interest rates, although it's not completely linear.
Step6: To answer the question whether it's profitable to invest in the higher risk categories, one could calculate the charged off % and the average interest rate. But then you don't take into account that some loans might default very quickly and other loans might default right before the end, and this difference makes a huge difference in how much profit/loss one got on that loan. Hence it's important to know how much money came back in total per loan minus the money one put in, to see if it turned out to be profitable in the end. Therefore 'total_received_interest', 'total_received_principal', 'total_received_late_fee', 'recoveries', 'collections_12_mths_ex_med' will all be used as income from the loan. While 'funded_amount' is seen as what was put into the loan at the start and 'collection_recovery_fee' is what was paid to the person who collected the money that was recovered after the loan was charged off. This leads to the conclusion that if one had invested in all loans of each category, only the A-C categories were profitable and the higher interest rates of the riskier categories did not compensate for the loss of money due to charging off of the loans. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
sns.set_style('white')
Explanation: Milestone report
Instruction
You have proposed a project, collected a data set, cleaned up the data and explored it with descriptive and inferential statistics techniques. Now’s the time to take stock of what you’ve learned. The project milestone is an opportunity for you to practice your data story skills. Your milestone will be reached when you produce an early draft of your final Capstone report. This is a slightly longer (3-5 page) draft that should have the following:
An introduction to the problem: What is the problem? Who is the Client? (Feel free to reuse points 1-2 from your proposal document)
A deeper dive into the data set:
What important fields and information does the data set have?
What are its limitations i.e. what are some questions that you cannot answer with this data set?
What kind of cleaning and wrangling did you need to do?
Are there other datasets you can find, use and combine with, to answer the questions that matter?
Any preliminary exploration you’ve performed and your initial findings. Test the hypotheses one at a time. Often, the data story emerges as a result of a sequence of testing hypothesis e.g. You first tested if X was true, and because it wasn't, you tried Y, which turned out to be true.
Based on these findings, what approach are you going to take? How has your approach changed from what you initially proposed, if applicable?
Add your code and milestone report to the github repository. As before, once your mentor has approved your milestone document, please share the github repository URL on the community and ask the community for feedback.
While we require only one milestone report, we encourage you and your mentor to plan multiple milestones, especially for more complex projects.
End of explanation
loans = pd.read_csv('../data/loan.csv')
print(loans.shape)
closed_status = ['Fully Paid', 'Charged Off',
'Does not meet the credit policy. Status:Fully Paid',
'Does not meet the credit policy. Status:Charged Off']
closed_loans = loans[loans['loan_status'].isin(closed_status)]
print(closed_loans.shape)
sns.countplot(loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
sns.countplot(closed_loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
Explanation: Introduction
Crowdfunding has become a new and exciting way to get capital and to invest. Lending club has jumped into the trend by offering loans with fixed interest rates and terms that the public can choose to invest in. Lending club screens the loans that are applied for and only 10% gets approved and is subsequently offered to the public. By investing a small proportion in many different loans investors can diversify their portfolio and in this way keep the default risk to a minimum (which is estimated by lending club to be 4%). For their services lending club asks a fee of 1%. For investors this is an interesting way to get profit on their investment since it supposedly gives more stable returns than the stock market and higher interest rates than a savings account. The profits depend on the interest rate and the default rate. Therefore it is interesting to see whether certain characteristics of the loan or the borrower give a bigger chance of default, and whether loans with higher interest rates have a bigger chance to default.
For this project the lending club loans dataset is used from Kaggle. (https://www.kaggle.com/wendykan/lending-club-loan-data). Their file contains complete loans data for loans issued between 2007 and 2015. The client is the investor who wants to get the most profit on his portfolio of loans and wants to know whether investing with lending club is profitable. The problem is that some of the loans will not be fully paid, therefore interest rate is not the only interesting characteristic of the loan. We will therefore investigate the characteristics of the loans that have an effect on the chance a loan gets 'charged off'.
Data set
loan status
The complete dataset consists of 887,379 loans with 74 features. We select only the loans that went to full term, because we don't know whether the loans that are still ongoing will end in 'charged off' or 'fully paid'. Most loans are current loans, but there are four categories of loans that went to full term: 'Fully Paid', 'Charged Off', 'Does not meet the credit policy. Status:Fully Paid', 'Does not meet the credit policy. Status:Charged Off'. When selecting only those categories, 255,720 of the loans are left, of which most are 'fully paid'.
End of explanation
nr_charged_off = (len(closed_loans[closed_loans['loan_status']=='Charged Off']) +
len(closed_loans[closed_loans['loan_status']=='Does not meet the credit policy. Status:Charged Off']))
round(nr_charged_off / len(closed_loans) * 100)
Explanation: percentage charged off
The first question is what the percentage of 'charged off' loans actually is, so our investors know the risk. Lending club claims it's around 4%. But in the loans that went to full term we see that the percentage is a shocking 18%. So hopefully lending club's selection of the loans will become better in the future in order to get this risk down. This is a question that is left for the future, once the current loans have gone to full term.
End of explanation
nr_nulls = closed_loans.isnull().apply(sum, 0)
nr_nulls = nr_nulls[nr_nulls != 0]
print(nr_nulls.sort_values(ascending=False) / 255720)
print('nr of features having more than 5% missing values:', sum(nr_nulls.sort_values(ascending=False) / 255720 > 0.05))
Explanation: features
There are 74 features in this dataset. They are displayed below. A couple have to do with the loan (32) and a couple have to do with the one that's asking for the loan (39). A few are about loans that were applied for by more than one borrower, namely 'annual_inc_joint', 'dti_joint' and 'verification_status_joint'. But in the loans that went to full term there is only one loan that is not an individual loan, hence these features are not interesting in this case. Also a lot of features have missing values. If we concentrate only on features that have less than 5% missing values, we are left with only 48 features.
Loan
- id: loan
- loan_amnt: 1914 times is loan amount bigger than funded amount
- funded_amnt
- funded_amnt_inv
- term: 36 or 60 months
- int_rate: interest rates
- installment: height monthly pay
- grade: A-G, A low risk, G high risk
- sub_grade
- issue_d: month-year loan was funded
- loan_status
- pymnt_plan: n/y
- url
- desc: description provided by borrower
- purpose: 'credit_card', 'car', 'small_business', 'other', 'wedding', 'debt_consolidation', 'home_improvement', 'major_purchase', 'medical', 'moving', 'vacation', 'house', 'renewable_energy','educational'
- title: provided by borrower
- initial_list_status: w/f (what is this?)
- out_prncp: outstanding prinicipal --> still >0 in fully paid?!
- out_prncp_inv
- total_pymnt
- total_pymnt_inv
- total_rec_prncp
- total_rec_int: total recieved interest
- total_rec_late_fee
- recoveries: post charged off gross recovery
- collection_recovery_fee: post charged off collection fee
- last_pymnt_d
- last_pymnt_amnt
- next_pymnt_d
- collections_12_mths_ex_med: almost all 0
- policy_code: 1 publicly available, 2 not
- application_type (only 1 JOINT, rest INDIVIDUAL)
Borrower
- emp_title
- emp_length: 0-10 (10 stands for >=10)
- home_ownership: 'RENT', 'OWN', 'MORTGAGE', 'OTHER', 'NONE', 'ANY'
- member_id: person
- annual_inc (stated by borrower)
- verification_status: 'Verified', 'Source Verified', 'Not Verified' (income verified by LC?)
- zip_code
- addr_state
- dti: debt to income (without mortgage)
- delinq_2yrs: The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years
- mths_since_last_delinq
- mths_since_last_record
- pub_rec
- earliest_cr_line
- inq_last_6mths
- open_acc (nr of open credit lines)
- total_acc (nr of total credit lines in credit file)
- revol_bal
- last_credit_pull_d
- mths_since_last_major_derog: Months since most recent 90-day or worse rating
- acc_now_delinq: The number of accounts on which the borrower is now delinquent.
- tot_coll_amt: Total collection amounts ever owed
- tot_cur_bal: Total current balance of all accounts
- open_acc_6m: Number of open trades in last 6 months
- open_il_6m: Number of currently active installment trades
- open_il_12m: Number of installment accounts opened in past 12 months
- open_il_24m
- mths_since_rcnt_il: Months since most recent installment accounts opened
- total_bal_il: Total current balance of all installment accounts
- il_util: Ratio of total current balance to high credit/credit limit on all install acct
- open_rv_12m: Number of revolving trades opened in past 12 months
- open_rv_24m
- max_bal_bc: Maximum current balance owed on all revolving accounts
- all_util: Balance to credit limit on all trades
- total_rev_hi_lim: Total revolving high credit/credit limit
- inq_fi: Number of personal finance inquiries
- total_cu_tl: Number of finance trades
- inq_last_12m: Number of credit inquiries in past 12 months
Two borrowers (only in 1 case)
- annual_inc_joint
- dti_joint
- verification_status_joint
End of explanation
paid_status = ['Fully Paid', 'Does not meet the credit policy. Status:Fully Paid']
closed_loans['charged_off'] = [False if loan in paid_status else True for loan in closed_loans['loan_status']]
sns.distplot(closed_loans['funded_amnt'], kde=False, bins=50)
plt.show()
sns.countplot(closed_loans['term'], color='turquoise')
plt.show()
purpose_paid = closed_loans.groupby(['purpose', 'charged_off'])['id'].count()
sns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='charged_off')
plt.xticks(rotation=90)
plt.show()
sns.boxplot(data=closed_loans, x='charged_off', y='dti')
plt.show()
home_paid = closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()
sns.barplot(data=pd.DataFrame(home_paid).reset_index(), x='home_ownership', y='id', hue='charged_off')
plt.xticks(rotation=90)
plt.show()
from scipy.stats import ttest_ind
print(ttest_ind(closed_loans[closed_loans['charged_off']==True]['dti'], closed_loans[closed_loans['charged_off']==False]['dti']))
print((closed_loans[closed_loans['charged_off']==True]['dti']).mean())
print((closed_loans[closed_loans['charged_off']==False]['dti']).mean())
print(closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()[1:3])
print(closed_loans.groupby(['home_ownership', 'charged_off'])['id'].count()[7:11])
print('mortgage:', 20226/(105874+20226))
print('own:', 4074/(18098+4074))
print('rent:', 21663/(85557+21663))
Explanation: limitations
To answer the questions about the 'charged off' status and whether investing with lending club is profitable we use only the loans that went to full term. The terms the loans run are 3 or 5 years. And the latest loan information is from 2015. Hence the most recent loan we can look at is already from 2012 and the rest is even older. It might be that lending club has changed its protocols and the found results on this dataset might therefore not apply anymore to new loans. Also 1/3 of the features have so many missing values that they can't be used for analysis. There is one feature 'initial_list_status' where they do not explain what it means (values w/f), hence it cannot be used for interpretation. Some of the features are unique for different loans like 'desc', 'url', 'id', 'title' and are therefore not interesting for our analysis. It might be that there are other features about a borrower that might have an influence on the 'charged off' rate, for instance 'gender', 'age', 'nr-of-kids', 'nr-of-pets', 'marital status', 'political preference'. But we will not be able to investigate this, since we are restricted to features that lending club collected. Also some features might have been registered better for newer loans than older loans or in a different way (because protocols changed) and this might influence our results.
cleaning and wrangling
First comes the selection of only the loans that went to full term and of only the features with not too many missing values. In a later stage, we want to use features for prediction that are selected based on their ability to lead to insights for new investors. Since we work with sklearn, non-numerical features will have to be transformed to numerical features. Dates can be transformed into timestamps, and categorical features will be transformed as well as possible into numerical values. Ordering is important for most algorithms, hence it's important to find an order in the categorical features to keep during the transformation to numerical values. Also scaling/normalizing is important for some algorithms and we have to keep in mind that we have to use the exact same transformation for the test set as we did on the training set. Lastly, missing values, infinity and minus infinity values are not possible during prediction, so they also need to be transformed.
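As a rough illustration of these steps (an added sketch; the column arguments and the exact choices below are hypothetical, not the final pipeline), the transformations could look like this:
from sklearn.preprocessing import StandardScaler
# pandas as pd and numpy as np are already imported above
def preprocess(train, test, date_cols, ordered_cats):
    train, test = train.copy(), test.copy()
    for col in date_cols:
        # dates to numeric timestamps
        train[col] = pd.to_datetime(train[col]).astype(np.int64)
        test[col] = pd.to_datetime(test[col]).astype(np.int64)
    for col, order in ordered_cats.items():
        # ordered categories (e.g. grade A-G) to integer codes that keep the order
        mapping = {cat: i for i, cat in enumerate(order)}
        train[col] = train[col].map(mapping)
        test[col] = test[col].map(mapping)
    # replace +/- infinity and fill missing values with the training medians
    train = train.replace([np.inf, -np.inf], np.nan).fillna(train.median())
    test = test.replace([np.inf, -np.inf], np.nan).fillna(train.median())
    # fit the scaler on the training set only and reuse it for the test set
    scaler = StandardScaler().fit(train)
    return scaler.transform(train), scaler.transform(test)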
other datasets
The American government has a lot of other datasets available that can be used in combination with this dataset. For instance both zipcode and state information is available. Hence we might add a feature that describes what the political preference is of the state the person lives in. Secondly we might transform the state feature to 'north/west/south/east'. Also we might use the average income for a certain zipcode or state as an extra feature, or the average age.
## Explorations
features of the loans
We will look at a few interesting features to see what the loan characteristics look like. The funded amount turns out to be between 0 and 35,000. Hence it is more like an amount to buy a car than to buy a house. Lending club therefore competes with credit cards and consumer credits. The loans are either 3 or 5 years in length. Furthermore, the purpose of the loan could have something to do with the chance that someone will pay the loan back. If it's for debt consolidation, someone has more loans and therefore will probably be more likely to get into trouble. As it turns out almost all loans are for debt consolidation or credit card debt, which is practically the same thing. Hence it does not look like the most interesting feature to base your choice of investment on. Moreover, debt-to-income seems of course also a very interesting feature. But the difference between loans that were fully paid and loans that were charged off is only 16% debt-to-income versus 18% debt-to-income. Nevertheless, this difference is significant with a T-test. Lastly, people with a mortgage do seem to pay off their loans more often than people who rent. The order is mortgage (16% charged off), own (18% charged off) and rent (20% charged off).
End of explanation
grade_paid = closed_loans.groupby(['grade', 'charged_off'])['id'].count()
risk_grades = dict.fromkeys(closed_loans['grade'].unique())
for g in risk_grades.keys():
risk_grades[g] = grade_paid.loc[(g, True)] / (grade_paid.loc[(g, False)] + grade_paid.loc[(g, True)])
risk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans'])
sns.stripplot(data=risk_grades, color='darkgray', size=15)
closed_loans['grade'] = closed_loans['grade'].astype('category', ordered=True)
sns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')
Explanation: grade
Lending club has made its own risk assessment of the loans and gives them categories, namely A-G, including subcategories like A1 etc. As we can see below, the proportion of loans that get charged off does increase nicely with the increase in risk category (grade). In the highest risk category still more than half gets fully paid. To compensate for the higher risk, investors in these higher risk loans get higher interest rates, although it's not completely linear.
End of explanation
closed_loans['profit'] = (closed_loans['total_rec_int'] + closed_loans['total_rec_prncp'] + closed_loans['collections_12_mths_ex_med']
+ closed_loans['total_rec_late_fee'] + closed_loans['recoveries'] - closed_loans['funded_amnt']
- closed_loans['collection_recovery_fee'])
profits = closed_loans.groupby('grade')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='grade', y='profit', color='gray')
plt.show()
profits = closed_loans.groupby('charged_off')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='charged_off', y='profit')
plt.show()
profits = closed_loans.groupby(['grade', 'charged_off'])['profit'].sum()
sns.barplot(data=profits.reset_index(), x='profit', y='grade', hue='charged_off', orient='h')
plt.show()
Explanation: To answer the question whether it's profitable to invest in the higher risk categories, one could calculate the charged off % and the average interest rate. But then you don't take into account that some loans might default very quickly and other loans might default right before the end, and this difference makes a huge difference in how much profit/loss one got on that loan. Hence it's important to know how much money came back in total per loan minus the money one put in, to see if it turned out to be profitable in the end. Therefore 'total_received_interest', 'total_received_principal', 'total_received_late_fee', 'recoveries', 'collections_12_mths_ex_med' will all be used as income from the loan. While 'funded_amount' is seen as what was put into the loan at the start and 'collection_recovery_fee' is what was paid to the person who collected the money that was recovered after the loan was charged off. This leads to the conclusion that if one had invested in all loans of each category, only the A-C categories were profitable and the higher interest rates of the riskier categories did not compensate for the loss of money due to charging off of the loans.
End of explanation |
10,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='https
Step1: or...
Step2: For parallel computing in python, map is a key abstraction.
Step3: lambda
Anonymous function
Step4: reduce
Apply a function with two arguments cumulatively to the container.
Step5: filter
Constructs a new list for items where the applied function is True.
Step6: Spark Programming Model
Everything starts with a SparkContext
Step7: Create RDDs
RDD Documentation
The parallelize method is a utility for initializing RDDs.
NB
Step8: Transformations and Actions
Transformations return edges to a new vertex in the DAG (lazy evaluation; wide and narrow dependencies)
map, flatmap
reduceByKey
filter
glom
Actions return values- beware of memory limitations!
collect
reduce
take
count
What does this look like?
glom
Step9: map and Flatmap
Return a new RDD by first applying a function and then flattening the results.
Step10: Or I can flatten the results...
Step11: Or flatten the original results
Step12: Reduction
(Associative operation)
Step13: Reading HDF5 with PySpark
Example courtesy Freeman Lab
Step14: Now write it to a CSV (from stackoverflow user Daniel Darabos) | Python Code:
def square(x):
return x*x
numbers = [1,2,3]
def map_squares(nums):
res = []
for x in nums:
res.append( square(x) )
return res
map_squares(numbers)
Explanation: <img src='https://www.rc.colorado.edu/sites/all/themes/research/logo.png'>
Introduction to Spark
Many examples courtesy Monte Lunacek
Outline
Functional programming in Python
Spark's programming model
As many examples as we can get through!
Functional Python
<blockquote>
Python acquired lambda, reduce, filter and map, courtesy of a Lisp hacker who missed them and submitted working patches. -Guido van Rossum
</blockquote>
map
reduce
filter
lambda
And more: itertools, pytoolz
We will use these concepts (and more) in Spark
The map abstraction
For the category theory inclined: a functor over functions (morphisms)! Basically an association of functions.
End of explanation
results = map(square, numbers)
results
Explanation: or...
End of explanation
from multiprocessing import Pool
pool = Pool(5)
results = pool.map(square, numbers)
results
Explanation: For parallel computing in python, map is a key abstraction.
End of explanation
lambda_square = lambda x: x*x
map(lambda_square, range(10))
map(lambda x: x*x, range(10))
res = map(lambda x: x*x, range(10))
Explanation: lambda
Anonymous function: a function without a name, like inlining
End of explanation
def add_num(x1, x2):
return x1+x2
print reduce(add_num, res)
print reduce(lambda x,y: x+y, res)
Explanation: reduce
Apply a function with two arguments cumulatively to the container.
End of explanation
def less_than(x):
return x>10
filter(less_than, res)
filter(lambda x: x>10, res)
Explanation: filter
Constructs a new list for items where the applied function is True.
End of explanation
import findspark
import os
findspark.init() # you need that before import pyspark in Jupyter notebook
import pyspark
sc = pyspark.SparkContext()
Explanation: Spark Programming Model
Everything starts with a SparkContext
End of explanation
import numpy as np
rdd = sc.parallelize(np.arange(20), numSlices=5)
Explanation: Create RDDs
RDD Documentation
The parallelize method is a utility for initializing RDDs.
NB: parallelized structure must fit in driver memory!
End of explanation
for x in rdd.glom().collect():
print x
rdd = sc.parallelize(np.arange(20), numSlices=10)
for x in rdd.glom().collect():
print x
Explanation: Transformations and Actions
Transformations return edges to a new vertex in the DAG (lazy evaluation; wide and narrow dependencies)
map, flatmap
reduceByKey
filter
glom
Actions return values- beware of memory limitations!
collect
reduce
take
count
What does this look like?
glom: Return an RDD created by coalescing all elements within each partition into a list.
collect: Returns a list from all elements of an RDD.
End of explanation
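As a small added illustration of the lazy evaluation mentioned above (not part of the original notebook), a transformation only records lineage and nothing runs until an action is called:
lazy_rdd = sc.parallelize(range(1000))
squares = lazy_rdd.map(lambda x: x * x)   # transformation: nothing is computed yet
squares.take(5)                           # action: triggers a job, returns [0, 1, 4, 9, 16]
squares.count()                           # another action: 1000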
rdd = sc.parallelize([ [2, 3, 4],[0, 1],[5, 6, 7, 8] ])
rdd.collect()
rdd.map(lambda x: range(len(x))).collect()
Explanation: map and Flatmap
Return a new RDD by first applying a function and then flattening the results.
End of explanation
rdd.flatMap(lambda x: range(len(x))).collect()
Explanation: Or I can flatten the results...
End of explanation
rdd.flatMap(lambda x: x).collect()
Explanation: Or flatten the original results
End of explanation
rdd.flatMap(lambda x: x).reduce(lambda x,y: x+y)
rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 2)])
rdd.collect()
rdd.reduceByKey(lambda x,y: x+y).collect()
rdd = sc.parallelize([("hamlet", 1), ("claudius", 1), ("hamlet", 1)])
rdd.countByKey()
Explanation: Reduction
(Associative operation)
End of explanation
import h5py
h5file_path='../data/hdf5_ex.h5'
def readchunk(v):
chunk = h5py.File(h5file_path, 'r')
return chunk['/chunked'][v,:]
chunked_array = sc.parallelize(range(0,10)).map(lambda v: readchunk(v))
chunked_array.take(3)
Explanation: Reading HDF5 with PySpark
Example courtesy Freeman Lab: https://github.com/freeman-lab/hdf5-and-spark
End of explanation
def toCSV(data):
return ','.join(str(d) for d in data)
lines = chunked_array.map(toCSV).repartition(1)
lines.saveAsTextFile('hdf5_ex.csv')
Explanation: Now write it to a CSV (from stackoverflow user Daniel Darabos)
End of explanation |
10,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the NYC Subway Dataset
Intro to Data Science
Step1: Functions for Getting, Mapping, and Plotting Data
Step2: Function for Basic Statistics
Step3: Formulas Implemented
(i.e., not included in modules/packages)
Wendt's rank-biserial correlation $r$
$$r = 1 - \frac{2U}{n_{1}n_{2}}$$
Cohen's $d$ (and pooled standard deviation $s$)
$$d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s}$$
$$s = \sqrt{\frac{(n_{1} - 1)s_{1}^{2} + (n_{2} - 1)s_{2}^{2}}{n_{1} + n_{2} - 2}}$$
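A minimal sketch of how these two effect-size measures can be computed (illustrative only; the function names here are hypothetical and numpy is assumed to be imported as np, as in the code below):
def rank_biserial_r(U, n1, n2):
    # Wendt's rank-biserial correlation from the Mann-Whitney U statistic
    return 1.0 - (2.0 * U) / (n1 * n2)
def cohens_d(sample1, sample2):
    # Cohen's d with the pooled standard deviation
    n1, n2 = len(sample1), len(sample2)
    s1, s2 = np.std(sample1, ddof=1), np.std(sample2, ddof=1)
    s = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2.0))
    return (np.mean(sample1) - np.mean(sample2)) / s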
Class for Mann-Whitney U Test
Step4: Section 1. Statistical Test
<h3 id='1_1_a'>1.1.a Which statistical test did you use to analyse the NYC subway data?</h3>
The Mann-Whitney $U$ test was used to determine if there was a statistically significant difference between the number of reported entries on rainy and non-rainy occasions. This nonparametric test of the equality of two population medians from independent samples was used since the distribution of entries is non-normal (right-skewed) and their shape is the same, as seen visually via histograms, probability plots, and box plots, and as the result of the Shapiro-Wilk normality test (see <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb#prep-for-stats' target='_blank'>Preparation for Statistical Tests</a>). However, since the sample sizes are so large, the parametric Welch's $t$-test likely could have been used (and, it was implemented for confirmation purposes, along with the nonparametric Wilcoxon signed-rank test; both agreed with the Mann-Whitney $U$ test results).
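For reference, the tests mentioned here can be run directly with scipy.stats (a hedged sketch using the st alias and the rain_days / no_rain_days frames defined in the code below; return values and keyword arguments differ slightly between scipy versions):
rainy = rain_days['ENTRIESn_hourly'].values
not_rainy = no_rain_days['ENTRIESn_hourly'].values
U, p_mw = st.mannwhitneyu(rainy, not_rainy)                    # Mann-Whitney U test
t, p_welch = st.ttest_ind(rainy, not_rainy, equal_var=False)   # Welch's t-test
W, p_shapiro = st.shapiro(rainy[:5000])                        # Shapiro-Wilk normality check on a subsample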
Testing Average Values of $p$, $r$, and $d$ for Various Sample Sizes
Step5: As witnessed above, when rainy and non-rainy days from the data set are considered populations (as opposed to samples themselves), it takes significantly large sample sizes from each population (e.g., $n = 3000$, which is more than $30\%$ of the total number of rainy days in the data set) to attain low $p$-values<sup>1</sup> frequently enough to reject the null hypothesis of the Mann-Whitney $U$ test<sup>2</sup> with the critical values proposed below.
Moreover, using Wendt's rank-biserial correlation $r$ and Cohen's $d$ to measure effect size, the relatively low average value of $r$<sup>3</sup> and the low average value of $d$<sup>4</sup> both suggest that the difference between the two samples (and, thus, the two populations) is trivial, even though, according to the Mann-Whitney U test, the difference appears to be statistically signficant (and only then with extremely large samples)<sup>5</sup>. In other words, statistical significance $\neq$ practical significance.
Notes
<sup>1</sup> Identical samples would produce a large $p$ (e.g., $p \approx 0.49$); extremely different samples would produce a very small number (e.g., $p \approx 0$).
<sup>2</sup> Identical samples would produce $U = \frac{n^{2}}{2}$ (e.g., when $n = 450$, $U = 101250$); extremely different samples can produce a $U$ that is orders of magnitude smaller (e.g., when $n = 450$, possibly $U = 1293$).
<sup>3</sup> For very different samples, $r \rightarrow 1$; in the above tests, as $n$ increases, $r \rightarrow 0$.
<sup>4</sup> For very different samples, $d \rightarrow 1$; in the above tests, as $n$ increases, $d$ tends to remain constant, $d \approx 0.06$, even when the sample size is extremely large. $d$ is interpreted as the difference in the number of standard deviations.
<sup>5</sup> On the issue of $p$-values and large data sets, see Lin, M., Lucas, H.C., and Shmueli, G. Research Commentary—Too big to fail
Step6: The Mann-Whitney $U$ test is a nonparametric test of the null hypothesis that the distributions of two populations are the same.
To verify the assumption that the simple, randomly sampled values are independent, the sample sizes should be less than $5\%$ of the population sizes ($n \lt 0.05N$). Since the maximum number of rainy days is $9585$ ($N = 9585$), a reasonable sample size for each group would be $450$ ($n = 450$).
Null Hypothesis
$H_{0}$ | Python Code:
import inflect # for string manipulation
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as st
import matplotlib.pyplot as plt
%matplotlib inline
filename = '/Users/excalibur/py/nanodegree/intro_ds/final_project/improved-dataset/turnstile_weather_v2.csv'
# import data
data = pd.read_csv(filename)
Explanation: Analyzing the NYC Subway Dataset
Intro to Data Science: Final Project 1, Part 2
(Short Questions)
Section 1. Statistical Test
Austin J. Alexander
Import Directives and Initial DataFrame Creation
End of explanation
entries_hourly_by_row = data['ENTRIESn_hourly'].values
def map_column_to_entries_hourly(column):
    instances = column.values # e.g., longitude_instances = data['longitude'].values
    # reduce
    entries_hourly = {} # e.g., longitude_entries_hourly = {}
    for i in np.arange(len(instances)):
        if instances[i] in entries_hourly:
            entries_hourly[instances[i]] += float(entries_hourly_by_row[i])
        else:
            entries_hourly[instances[i]] = float(entries_hourly_by_row[i])
    return entries_hourly # e.g., longitudes, entries
def create_df(entries_hourly_dict, column1name):
    # e.g, longitude_df = pd.DataFrame(data=longitude_entries_hourly.items(), columns=['longitude','entries'])
    df = pd.DataFrame(data=entries_hourly_dict.items(), columns=[column1name,'entries'])
    return df # e.g, longitude_df
rain_entries_hourly = map_column_to_entries_hourly(data['rain'])
rain_df = create_df(rain_entries_hourly, 'rain')
rain_days = data[data['rain'] == 1]
no_rain_days = data[data['rain'] == 0]
def plot_box(sample1, sample2):
    plt.boxplot([sample2, sample1], vert=False)
    plt.title('NUMBER OF ENTRIES PER SAMPLE')
    plt.xlabel('ENTRIESn_hourly')
    plt.yticks([1, 2], ['Sample 2', 'Sample 1'])
    plt.show()
Explanation: Functions for Getting, Mapping, and Plotting Data
End of explanation
def describe_samples(sample1, sample2):
size1, min_max1, mean1, var1, skew1, kurt1 = st.describe(sample1)
size2, min_max2, mean2, var2, skew2, kurt2 = st.describe(sample2)
med1 = np.median(sample1)
med2 = np.median(sample2)
std1 = np.std(sample1)
std2 = np.std(sample2)
print "Sample 1 (rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max1[0], min_max1[1], mean1, med1, var1, std1)
print "Sample 2 (non-rainy days):\n min = {0}, max = {1},\n mean = {2:.2f}, median = {3}, var = {4:.2f}, std = {5:.2f}".format(min_max2[0], min_max2[1], mean2, med2, var2, std2)
Explanation: Function for Basic Statistics
End of explanation
class MannWhitneyU:
def __init__(self,n):
self.n = n
self.num_of_tests = 1000
self.sample1 = 0
self.sample2 = 0
def sample_and_test(self, plot, describe):
self.sample1 = np.random.choice(rain_days['ENTRIESn_hourly'], size=self.n, replace=False)
self.sample2 = np.random.choice(no_rain_days['ENTRIESn_hourly'], size=self.n, replace=False)
### the following two self.sample2 assignments are for testing purposes ###
#self.sample2 = self.sample1 # test when samples are same
#self.sample2 = np.random.choice(np.random.randn(self.n),self.n) # test for when samples are very different
if plot == True:
plot_box(self.sample1,self.sample2)
if describe == True:
describe_samples(self.sample1,self.sample2)
return st.mannwhitneyu(self.sample1, self.sample2)
def effect_sizes(self, U):
# Wendt's rank-biserial correlation
r = (1 - np.true_divide((2*U),(self.n*self.n)))
# Cohen's d
s = np.sqrt(np.true_divide((((self.n-1)*np.std(self.sample1)**2) + ((self.n-1)*np.std(self.sample2)**2)), (self.n+self.n-2)))
d = np.true_divide((np.mean(self.sample1) - np.mean(self.sample2)), s)
return r,d
def trial_series(self):
success = 0
U_values = []
p_values = []
d_values = []
r_values = []
for i in np.arange(self.num_of_tests):
U, p = self.sample_and_test(False, False)
r, d = self.effect_sizes(U)
U_values.append(U)
# scipy.stats.mannwhitneyu returns p for a one-sided hypothesis,
# so multiply by 2 for two-sided
p_values.append(p*2)
d_values.append(d)
r_values.append(r)
if p <= 0.05:
success += 1
print "n = {0}".format(self.n)
print "average U value: {0:.2f}".format(np.mean(U_values))
print "number of times p <= 0.05: {0}/{1} ({2}%)".format(success, self.num_of_tests, (np.true_divide(success,self.num_of_tests)*100))
print "average p value: {0:.2f}".format(np.mean(p_values))
print "average rank-biserial r value: {0:.2f}".format(np.mean(r_values))
print "average Cohen's d value: {0:.2f}".format(np.mean(d_values))
plt.hist(p_values, color='green', alpha=0.3)
plt.show()
Explanation: Formulas Implemented
(i.e., not included in modules/packages)
Wendt's rank-biserial correlation $r$
$$r = 1 - \frac{2U}{n_{1}n_{2}}$$
Cohen's $d$ (and pooled standard deviation $s$)
$$d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s}$$
$$s = \sqrt{\frac{(n_{1} - 1)s_{1}^{2} + (n_{2} - 1)s_{2}^{2}}{n_{1} + n_{2} - 2}}$$
Class for Mann-Whitney U Test
End of explanation
sample_sizes = [30, 100, 500, 1500, 3000, 5000, 9585]
for n in sample_sizes:
MannWhitneyU(n).trial_series()
Explanation: Section 1. Statistical Test
<h3 id='1_1_a'>1.1.a Which statistical test did you use to analyse the NYC subway data?</h3>
The Mann-Whitney $U$ test was used to determine if there was a statistically significant difference between the number of reported entries on rainy and non-rainy occasions. This nonparametric test of the equality of two population medians from independent samples was used since the distribution of entries is non-normal (right-skewed) and their shape is the same, as seen visually via histograms, probability plots, and box plots, and as the result of the Shapiro-Wilk normality test (see <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb#prep-for-stats' target='_blank'>Preparation for Statistical Tests</a>). However, since the sample sizes are so large, the parametric Welch's $t$-test likely could have been used (and, it was implemented for confirmation purposes, along with the nonparametric Wilcoxon signed-rank test; both agreed with the Mann-Whitney $U$ test results).
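For reference, the confirmation tests mentioned above can be run directly with scipy.stats (a sketch; the Wilcoxon signed-rank test is omitted here because it requires paired samples of equal length):
```python
rainy = rain_days['ENTRIESn_hourly']
not_rainy = no_rain_days['ENTRIESn_hourly']
t_stat, t_p = st.ttest_ind(rainy, not_rainy, equal_var=False)  # Welch's t-test
u_stat, u_p = st.mannwhitneyu(rainy, not_rainy)                # Mann-Whitney U (one-sided p)
```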
Testing Average Values of $p$, $r$, and $d$ for Various Sample Sizes
End of explanation
print "Shape of rainy-days data:" +str(rain_days.shape)
N = rain_days.shape[0]
print "N = " + str(N)
print "0.05 * N = " + str(0.05 * N)
Explanation: As witnessed above, when rainy and non-rainy days from the data set are considered populations (as opposed to samples themselves), it takes significantly large sample sizes from each population (e.g., $n = 3000$, which is more than $30\%$ of the total number of rainy days in the data set) to attain low $p$-values<sup>1</sup> frequently enough to reject the null hypothesis of the Mann-Whitney $U$ test<sup>2</sup> with the critical values proposed below.
Moreover, using Wendt's rank-biserial correlation $r$ and Cohen's $d$ to measure effect size, the relatively low average value of $r$<sup>3</sup> and the low average value of $d$<sup>4</sup> both suggest that the difference between the two samples (and, thus, the two populations) is trivial, even though, according to the Mann-Whitney U test, the difference appears to be statistically significant (and only then with extremely large samples)<sup>5</sup>. In other words, statistical significance $\neq$ practical significance.
Notes
<sup>1</sup> Identical samples would produce a large $p$ (e.g., $p \approx 0.49$); extremely different samples would produce a very small number (e.g., $p \approx 0$).
<sup>2</sup> Identical samples would produce $U = \frac{n^{2}}{2}$ (e.g., when $n = 450$, $U = 101250$); extremely different samples can produce a $U$ that is orders of magnitude smaller (e.g., when $n = 450$, possibly $U = 1293$).
<sup>3</sup> For very different samples, $r \rightarrow 1$; in the above tests, as $n$ increases, $r \rightarrow 0$.
<sup>4</sup> For very different samples, $d \rightarrow 1$; in the above tests, as $n$ increases, $d$ tends to remain constant, $d \approx 0.06$, even when the sample size is extremely large. $d$ is interpreted as the difference in the number of standard deviations.
<sup>5</sup> On the issue of $p$-values and large data sets, see Lin, M., Lucas, H.C., and Shmueli, G. Research Commentary—Too big to fail: Large samples and the P-value problem. Inf. Syst. Res. 2013; 24: 906–917. PDF <a href='http://www.galitshmueli.com/system/files/Print%20Version.pdf' target='_blank'>here</a>.
<h3 id='1_1_b'>1.1.b Did you use a one-tail or a two-tail P value?</h3>
A two-tail $p$-value was selected since an appropriate initial question, given the results of the <a href='IntroDS-ProjectOne-DataExploration-Supplement.ipynb#weather-related' target='_blank'>Weather-Related Data</a> section of the DataExploration supplement, is simply whether or not there is a statistically significant difference between the populations (i.e., not whether one population is statistically-significantly greater than another).
<h3 id='1_1_c'>1.1.c What is the null hypothesis?</h3>
End of explanation
n = 450
mwu = MannWhitneyU(n)
U, p = mwu.sample_and_test(True,True)
r, d = mwu.effect_sizes(U)
print "\nMann-Whitney U test results:"
print "n = {0}".format(n)
print "U = {0}".format(U)
print "p = {0:.2f}".format(np.mean(p))
print "rank-biserial r value: {0:.2f}".format(np.mean(r))
print "Cohen's d value: {0:.2f}".format(np.mean(d))
Explanation: The Mann-Whitney $U$ test is a nonparametric test of the null hypothesis that the distributions of two populations are the same.
To verify the assumption that the simple, randomly sampled values are independent, the sample sizes should be less than $5\%$ of the population sizes ($n \lt 0.05N$). Since the maximum number of rainy days is $9585$ ($N = 9585$), a reasonable sample size for each group would be $450$ ($n = 450$).
Null Hypothesis
$H_{0}$: $M_{1} = M_{2}$ or $H_{0}$: $M_{1} - M_{2} = 0$
Alternate Hypothesis
$H_{1}$: $M_{1} \neq M_{2}$ or $H_{1}$: $M_{1} - M_{2} \neq 0$
A $95\%$ level of confidence would suggest that $95\%$ of samples would produce similar statistical results.
For a $95\%$ level of confidence, the level of significance (i.e., the probability of making a Type I error) $\alpha = 1 - 0.95 = 0.05$.
<h3 id='1_1_d'>1.1.d What is your p-critical value?</h3>
$p \leq 0.05$
Gather New Samples and Perform Statistical Test
End of explanation |
10,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Surface Observations in Siphon and MetPy
What is METAR?
Surface observational data
Access via a URL constructed from a web form
Returns csv, xml, or NetCDF formatted data
http
Step1: Construct our request to the TDS using the expected base URL and our query string
Open your browser and go to http
Step2: What kind of access methods are available?
Step3: NetcdfSubset it is!
Step4: Ok...but what is NetcdfSubset
A web service for subsetting CDM scientific datasets
The subsetting is specified using earth coordinates
lat/lon or projection coordinates, bounding boxes, date ranges
<b>Not</b> index based!
Check out the details in your browser
Step5: Whaddya want?
What variables do we have available?
Step6: Let's say we want the past days worth of data...
...for "here" (i.e. the lat/lon)
...for the variables mean sea level pressure, air temperature, wind direction, and wind_speed
...and for fun, let's get the data back as a netCDF file
Step7: Let's get the data!
Step8: What did we get back?
That's right, a netcdf4-python dataset!
Is that what you expected?
Step9: What station did we get?
Step10: Notice anything funny?
the "b" means it's a byte encoded string
let's use something sane like, uhh, utf-8
Step11: Let's get the time (in seconds since 1970-01-01) into a datetime object
Step12: Now for the obligatory time series plot...
Step13: So we have dewpoint, can we do a time series of mixing ratio?
We can use MetPy's unit support and calculations for this.
Step14: Look at the docs for metpy.calc.mixing_ratio and metpy.calc.saturation_vapor_pressure.
So to get mixing ratio, we need the ambient partial pressure--we can get that by passing dewpoint temperature to metpy.calc.saturation_vapor_pressure().
Step15: So we can pass these to mixing ratio and get...
Step16: Hmmm....need to get this to reduce
Step17: Exercise
Time to make your own meteogram
Step18: Let's get the data!
Step19: Create the map using cartopy and MetPy!
Pull out the code to create the basic plot and add mapping features so we aren't repeating this everywhere.
Step20: Simple station plotting using plot methods
One way to create station plots with MetPy is to create an instance of StationPlot and call various plot methods, like plot_parameter, to plot arrays of data at locations relative to the center point.
Step21: In addition to plotting values, StationPlot has support for plotting text strings, symbols, and plotting values using custom formatting.
Plotting symbols involves mapping integer values to various custom font glyphs in our custom weather symbols font. MetPy provides mappings for converting WMO codes to their appropriate symbol. The sky_cover function below is one such mapping.
Below we also use a custom formatter to take the sea level pressure values and plot them in the prototypical 3 digit form.
Step22: Station plots using layouts
Station plots can also be created using layouts, which encapsulate the formatting elements for various data types (based on string name). Formatting elements include
Step23: Using the layout takes data arrays stored in a dictionary like object, where the keys represent data fields. (In an ideal world, you could just pass a netCDF4 Dataset instance with CF-compliant names...)
The layout will ignore any fields that aren't present. When given data, it will ignore any names (keys) that are not specified in the layout. It is designed to be as forgiving as possible.
Step24: You can also create your own layout
Step25: So we'll put data into a dictionary to display...
Step26: and create the plot.
Step27: MetPy has many more symbols for current weather and cloud types; we just lack the data to readily show them off at present. Exhaustive list
Step28: Or sky cover symbols | Python Code:
%matplotlib inline
import numpy as np
from datetime import datetime, timedelta
Explanation: Working with Surface Observations in Siphon and MetPy
What is METAR?
Surface observational data
Access via a URL constructed from a web form
Returns csv, xml, or NetCDF formatted data
http://thredds.ucar.edu/thredds/catalog/nws/metar/ncdecoded/catalog.html?dataset=nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr
End of explanation
from siphon.catalog import TDSCatalog
# copied from the browser url box
metar_cat_url = 'http://thredds.ucar.edu/thredds/catalog/nws/metar/ncdecoded/catalog.xml?dataset=nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr'
# parse the xml
metar_cat = TDSCatalog(metar_cat_url)
# what datasets are here? only one "dataset" in this catalog
dataset = list(metar_cat.datasets.values())[0]
print(dataset.name)
Explanation: Construct our request to the TDS using the expected base URL and our query string
Open your browser and go to http://thredds.ucar.edu/thredds/catalog.html
Find METR data under "Observation Data/"
Go to the METR dataset called "Feature Collection" - this is the THREDDS catalog in html form
In the url, change ".html" to ".xml" - this is actual THREDDS Catalog. The THREDDS catalog tells you what data is available and how it can be accessed.
We will use the Unidata python library "siphon" to read the catalog and access the actual METAR data.
End of explanation
print(list(dataset.access_urls))
Explanation: What kind of access methods are available?
End of explanation
ncss_url = dataset.access_urls["NetcdfSubset"]
Explanation: NetcdfSubset it is!
End of explanation
from siphon.ncss import NCSS
ncss = NCSS(ncss_url)
Explanation: Ok...but what is NetcdfSubset
A web service for subsetting CDM scientific datasets
The subsetting is specified using earth coordinates
lat/lon or projection coordinates, bounding boxes, date ranges
<b>Not</b> index based!
Check out the details in your browser: http://www.unidata.ucar.edu/software/thredds/v4.6/tds/reference/NetcdfSubsetServiceReference.html
Rather than construct the request "by hand", let's use siphon!
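For contrast, a hand-built point request is just the NCSS endpoint plus a query string, roughly like the sketch below (treat the parameter names as illustrative and check the NCSS reference above for the authoritative list):
```python
from urllib.parse import urlencode
params = {'var': 'air_temperature', 'latitude': 39.85, 'longitude': -104.66,
          'time': '2016-07-01T00:00:00Z', 'accept': 'netcdf'}
print(ncss_url + '?' + urlencode(params))
```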
End of explanation
ncss.variables
Explanation: Whaddya want?
What variables do we have available?
End of explanation
# get current date and time
now = datetime.utcnow() - timedelta(days=5)
now = datetime(now.year, now.month, now.day, now.hour)
# define the time range we are interested in
start_time = now - timedelta(days=1)
end_time = now
# build the query
query = ncss.query()
query.lonlat_point(-104.66, 39.85)
query.time_range(start_time, end_time)
query.variables('inches_ALTIM', 'air_temperature', 'dew_point_temperature',
'wind_from_direction', 'wind_speed')
query.accept('netcdf')
# what does the request url look like?
print(query)
Explanation: Let's say we want the past days worth of data...
...for "here" (i.e. the lat/lon)
...for the variables mean sea level pressure, air temperature, wind direction, and wind_speed
...and for fun, let's get the data back as a netCDF file
End of explanation
data = ncss.get_data(query)
Explanation: Let's get the data!
End of explanation
print(list(data.variables))
Explanation: What did we get back?
That's right, a netcdf4-python dataset!
Is that what you expected?
End of explanation
station_id = data['station_id'][:].tostring()
print(station_id)
Explanation: What station did we get?
End of explanation
station_id = station_id.decode("utf-8")
print(station_id)
Explanation: Notice anything funny?
the "b" means it's a byte encoded string
let's use something sane like, uhh, utf-8
End of explanation
time = [datetime.fromtimestamp(t) for t in data['time']]
Explanation: Let's get the time (in seconds since 1970-01-01) into a datetime object
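One caveat (an aside, not from the original notebook): datetime.fromtimestamp interprets the values in the machine's local time zone. If these really are seconds since 1970-01-01 UTC, the following keeps them in UTC instead:
```python
time_utc = [datetime.utcfromtimestamp(t) for t in data['time']]
```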
End of explanation
from matplotlib import pyplot as plt
from matplotlib.dates import HourLocator,DateFormatter, AutoDateLocator
fig, ax1 = plt.subplots(1, 1)
ax1.plot(time, data['air_temperature'], '*')
locator = AutoDateLocator()
hoursFmt = DateFormatter('%H')
ax1.xaxis.set_major_locator(locator)
ax1.xaxis.set_major_formatter(hoursFmt)
ax1.autoscale_view()
ax1.set_title('Site: {} Date: {}'.format(station_id, time[0].strftime('%Y/%m/%d')))
ax1.set_xlabel('Hour of day')
#ax1.set_ylabel(t_air_label)
fig.autofmt_xdate()
plt.show()
Explanation: Now for the obligatory time series plot...
End of explanation
import metpy.calc
from metpy.units import units
print(data.variables['dew_point_temperature'].units)
print(data.variables['inches_ALTIM'].units)
dewp = data.variables['dew_point_temperature'][:] * units('degC')
slp = data.variables['inches_ALTIM'][:] * units('inHg')
Explanation: So we have dewpoint, can we do a time series of mixing ratio?
We can use MetPy's unit support and calculations for this.
End of explanation
e = metpy.calc.saturation_vapor_pressure(dewp)
e
slp
Explanation: Look at the docs for metpy.calc.mixing_ratio and metpy.calc.saturation_vapor_pressure.
So to get mixing ratio, we need the ambient partial pressure--we can get that by passing dewpoint temperature to metpy.calc.saturation_vapor_pressure().
End of explanation
m = metpy.calc.mixing_ratio(e, slp)
m
Explanation: So we can pass these to mixing ratio and get...
End of explanation
# Could also to m.ito('dimensionless')
m.ito_base_units()
m
from matplotlib import pyplot as plt
from matplotlib.dates import HourLocator,DateFormatter, AutoDateLocator
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(time, data['air_temperature'], 'r*')
ax1.plot(time, data['dew_point_temperature'], 'go')
ax1.grid(True)
ax2.plot(time, m, 'x')
ax2.grid(True)
locator = AutoDateLocator()
hoursFmt = DateFormatter('%H')
ax1.xaxis.set_major_locator(locator)
ax1.xaxis.set_major_formatter(hoursFmt)
ax1.autoscale_view()
ax1.set_title('Site: {} Date: {}'.format(station_id, time[0].strftime('%Y/%m/%d')))
ax1.set_xlabel('Hour of day')
fig.autofmt_xdate()
plt.show()
Explanation: Hmmm....need to get this to reduce
End of explanation
bb = {'north' : 45,
'south' : 35,
'east' : -100,
'west' : -110}
query = ncss.query()
query.lonlat_box(north=bb['north'], south=bb['south'], east=bb['east'], west=bb['west'])
query.time(start_time)
query.variables('air_temperature', 'dew_point_temperature', 'inches_ALTIM',
'wind_speed', 'wind_from_direction', 'cloud_area_fraction')
query.accept('csv')
Explanation: Exercise
Time to make your own meteogram:
1. Calculate your own quantity/quantities of interest (e.g. relative humidity, wind chill, heat index); see the sketch after this list for one way to start.
2. Plot the values as a function of time.
3. Bonus points: Use more than one subplot
4. More bonus points: Explore time formatting
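For item 1, one possible starting point is relative humidity, sketched here using only functions already shown in this notebook (it assumes the point dataset `data` requested earlier is still in scope):
```python
tair_q = data.variables['air_temperature'][:] * units('degC')
dewp_q = data.variables['dew_point_temperature'][:] * units('degC')
# RH as the ratio of actual to saturation vapor pressure, expressed in percent
rh = 100 * (metpy.calc.saturation_vapor_pressure(dewp_q) /
            metpy.calc.saturation_vapor_pressure(tair_q))
```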
Now, let's request all stations within a bounding box for a given time and create a surface "station plot"
Make new NCSS query
Request data closest to "now"
This time, let's ask for the data in csv format
End of explanation
from metpy.calc import get_wind_components
data = ncss.get_data(query)
# Access is just like netcdf4-python
lats = data['latitude'][:]
lons = data['longitude'][:]
tair = data['air_temperature'][:]
dewp = data['dew_point_temperature'][:]
slp = (data['inches_ALTIM'][:] * units('inHg')).to('mbar')
# Convert wind to components
u, v = get_wind_components(data['wind_speed'], np.deg2rad(data['wind_from_direction']))
# Need to handle missing (NaN) and convert to proper code
cloud_cover = 8 * data['cloud_area_fraction']
cloud_cover[np.isnan(cloud_cover)] = 9
cloud_cover = cloud_cover.astype(np.int)
# For some reason these come back as bytes instead of strings
stid = [s.decode() for s in data['station']]
Explanation: Let's get the data!
End of explanation
import cartopy
def default_map(bb):
fig = plt.figure(figsize=(24, 12))
proj = cartopy.crs.Stereographic(central_longitude=-95, central_latitude=35)
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add map features
ax.add_feature(cartopy.feature.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lakes',
scale='50m',
facecolor='none'))
ax.add_feature(cartopy.feature.BORDERS)
ax.coastlines()
ax.gridlines()
# Set extent to match requested bounding box
ax.set_extent([bb['west'], bb['east'], bb['south'], bb['north']])
return ax
Explanation: Create the map using cartopy and MetPy!
Pull out the code to create the basic plot and add mapping features so we aren't repeating this everywhere.
End of explanation
from metpy.plots import StationPlot
from metpy.plots.wx_symbols import sky_cover
ax = default_map(bb)
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize.
stationplot = StationPlot(ax, lons, lats, transform=cartopy.crs.PlateCarree(),
fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', tair, color='red')
stationplot.plot_parameter('SW', dewp, color='darkgreen')
# Add wind barbs
stationplot.plot_barb(u, v)
Explanation: Simple station plotting using plot methods
One way to create station plots with MetPy is to create an instance of StationPlot and call various plot methods, like plot_parameter, to plot arrays of data at locations relative to the center point.
End of explanation
from metpy.plots.wx_symbols import sky_cover
ax = default_map(bb)
# Same as before
stationplot = StationPlot(ax, lons, lats, transform=cartopy.crs.PlateCarree(),
fontsize=12)
stationplot.plot_parameter('NW', tair, color='red')
stationplot.plot_parameter('SW', dewp, color='darkgreen')
stationplot.plot_barb(u, v)
# Plot the sky cover symbols in the center. We give it the integer code values that
# should be plotted, as well as a mapping class that can convert the integer values
# to the appropriate font glyph.
stationplot.plot_symbol('C', cloud_cover, sky_cover)
# Plot station id -- using an offset pair instead of a string location
stationplot.plot_text((2, 0), stid)
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', slp,
formatter=lambda v: format(10 * v.magnitude, '.0f')[-3:])
Explanation: In addition to plotting values, StationPlot has support for plotting text strings, symbols, and plotting values using custom formatting.
Plotting symbols involves mapping integer values to various custom font glyphs in our custom weather symbols font. MetPy provides mappings for converting WMO codes to their appropriate symbol. The sky_cover function below is one such mapping.
Below we also use a custom formatter to take the sea level pressure values and plot them in the prototypical 3 digit form.
End of explanation
from metpy.plots import simple_layout
simple_layout.names()
Explanation: Station plots using layouts
Station plots can also be created using layouts, which encapsulate the formatting elements for various data types (based on string name). Formatting elements include:
- Text format (e.g. color)
- Location (N, S, etc.)
- Custom string formatting
- Barbs
- units
- Symbol mapping (such as sky cover)
End of explanation
sfc_data = {'air_temperature': tair, 'dew_point_temperature': dewp, 'eastward_wind': u,
'northward_wind': v, 'cloud_coverage': cloud_cover,
'air_pressure_at_sea_level': slp}
ax = default_map(bb)
stationplot = StationPlot(ax, lons, lats, fontsize=12,
transform=cartopy.crs.PlateCarree())
simple_layout.plot(stationplot, sfc_data)
Explanation: Using the layout takes data arrays stored in a dictionary like object, where the keys represent data fields. (In an ideal world, you could just pass a netCDF4 Dataset instance with CF-compliant names...)
The layout will ignore any fields that aren't present. When given data, it will ignore any names (keys) that are not specified in the layout. It is designed to be as forgiving as possible.
End of explanation
from metpy.plots import StationPlotLayout
layout = StationPlotLayout()
layout.add_barb('u', 'v', 'knots')
layout.add_symbol('C', 'sky', sky_cover)
# These are wider fields, so we'll put them out wider
layout.add_text((2, 0), 'station', color='blue')
layout.add_value((-2, 0), 'temp', fmt='0.1f', units='degF', color='red')
Explanation: You can also create your own layout:
End of explanation
sfc_data = {'temp': tair * units('degC'), 'sky':cloud_cover, 'u': u, 'v': v,
'station': stid}
Explanation: So we'll put data into a dictionary to display...
End of explanation
ax = default_map(bb)
stationplot = StationPlot(ax, lons, lats, fontsize=12,
transform=cartopy.crs.PlateCarree())
layout.plot(stationplot, sfc_data)
Explanation: and create the plot.
End of explanation
# Get the mapper that converts a code for current weather (from WMO) to the appropriate
# unicode code point
from metpy.plots.wx_symbols import current_weather, wx_symbol_font
# Need to make a new copy of the font so we can make it bigger
big_font = wx_symbol_font.copy()
big_font.set_size(36)
# Create a plot to loop over all of the codes and display it
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
for i in range(100):
ax.text(i % 10, i // 10, current_weather(i), fontproperties=big_font)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10);
Explanation: MetPy has many more symbols for current weather and cloud types; we just lack the data to readily show them off at present. Exhaustive list:
- current_weather
- current_weather_auto (automated station)
- low_clouds
- mid_clouds
- high_clouds
- sky_cover
- pressure_tendency
These all assume WMO code values for these items, and they all can be passed as a mapper for a symbol field in the station plot code (either plot_symbol for StationPlot or add_symbol for StationPlotLayout)
Below we show code to display all of the weather symbols:
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8, 2))
for i in range(10):
ax.text(i, 0, sky_cover(i), fontproperties=big_font)
ax.set_xlim(0, 10)
ax.set_ylim(0, 1);
Explanation: Or sky cover symbols:
End of explanation |
10,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Analyzing Shreddit's Q2 Top 5 voting
This started out as a curiosity. I was interested in what I'd need to do to take a bunch of "Top X" lists, combine them and then ask questions to the data like, "What thing was number one the most?" or "If the votes are weighted, what does the actual top X look like?" I then remembered that Shreddit just did a voting. ;)
This isn't a scientifically accurate analysis rooted in best practices. But I'm also just getting started with data analysis. So there's that.
Step3: Equal Placement Ballots
The equal placement ballot assumes that any position on the ballot is equal to any other. And given that this is how the voting was designed, it makes the most sense to look at this first. There are some differences, but given that /u/kaptain_carbon was tallying by hand, and I manually copy-pasted ballots (regex is hard) and then had to manually massage some data (fixing names and the like), differences are to be expected. Another note: all the data in my set is lower cased in an effort to normalize it and make the data more accurate. My analysis also includes submissions from after voting was closed, mostly because I was too lazy to check dates.
I'm also playing fast and loose with items that end up with the same total, rather than doing the "right thing" and marking them at the same position. So, there's that.
Here's the top ten of the table in the post.
Step4: And here's the top ten from my computed tally
Step5: Weighted Tally Ballot
But that's boring. What if we pretended for a second that everyone submitted a ballot where the albums were actually ranked one through five. What would the top ten look like then? There's a few ways to figure this one out. Initially, my thought was to provide a number 1 to 5 based on position to each vote and then find the lowest sum. However, the problem is that an item that only appears once will be considered the most preferred. That won't work. But going backwards from five to one for each item and then finding the largest total probably would
Step6: This handles the situation where a ballot may not be full (five votes), which makes up a surprisingly non-trivial share of the ballots
Step7: Anyways, what does a top ten for weighted votes end up looking like?
Step9: Hm, it's not actually all the different. Some bands move around a little bit, Deathhammer moves into the top ten using this method. But overall, the general spread is pretty much the same.
It's also interesting to look at the difference in position from the weighted tally vs the way it's done in the thread. There's major differences between the two due to the voting difference and from including submissions from after voting expired. There's also a missing band.
Step10: What album appeared at number one most often?
Another question I've been pondering is, "How do you figure out what thing appears at number one most often?" Again, this is assuming everyone submitted a ballot with the intention of it being read as ranked. Turns out, doing this isn't that hard either
Step11: This paints a slightly different picture of the top ten. While the names are largely the same, Scar Sighted was thought of as the top album most often, despite being at two or three through the other methods. And Misþyrming is at four (okay, "2", again fast and loose with numbering) despite being the solid top choice for all other methods.
The Take Away
There are lots of different ways to look at the ballots and different ways to tally them. Weighted voting is certainly an interesting avenue to explore.
Originally, I had wondered if something along the lines of Instant Runoff Voting or data processing packages like Pandas, NumPy, or SciPy would be needed. But for basic prodding and poking, it turns out the stdlib is just fine.
Also | Python Code:
# set up all the data for the rest of the notebook
import json
from collections import Counter
from itertools import chain
from IPython.display import HTML
def vote_table(votes):
    """Render a crappy HTML table for easy display. I'd use Pandas, but that seems like
    complete overkill for this simple task."""
    base_table = """
    <table>
    <tr><td>Position</td><td>Album</td><td>Votes</td></tr>
    {}
    </table>
    """
    base_row = "<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>"
    vote_rows = [base_row.format(idx, name, vote) for idx, (name, vote) in enumerate(votes, 1)]
    return HTML(base_table.format('\n'.join(vote_rows)))

with open('shreddit_q2_votes.json', 'r') as fh:
    ballots = json.load(fh)
with open('tallied_votes.json', 'r') as fh:
    tallied = Counter(json.load(fh))
equal_placement_ballots = Counter(chain.from_iterable(ballots))
Explanation: Analyzing Shreddit's Q2 Top 5 voting
This started out as a curiosity. I was interested in what I'd need to do to take a bunch of "Top X" lists, combine them and then ask questions to the data like, "What thing was number one the most?" or "If the votes are weighted, what does the actual top X look like?" I then remembered that Shreddit just did a voting. ;)
This isn't a scientifically accurate analysis rooted in best practices. But I'm also just getting started with data analysis. So there's that.
End of explanation
vote_table(tallied.most_common(10))
Explanation: Equal Placement Ballots
The equal placement ballot assumes that any position on the ballot is equal to any other. And given that this is how the voting was designed, it makes the most sense to look at this first. There are some differences, but given that /u/kaptain_carbon was tallying by hand, and I manually copy-pasted ballots (regex is hard) and then had to manually massage some data (fixing names and the like), differences are to be expected. Another note: all the data in my set is lower cased in an effort to normalize it and make the data more accurate. My analysis also includes submissions from after voting was closed, mostly because I was too lazy to check dates.
I'm also playing fast and loose with items that end up with the same total, rather than doing the "right thing" and marking them at the same position. So, there's that.
Here's the top ten of the table in the post.
End of explanation
vote_table(equal_placement_ballots.most_common(10))
Explanation: And here's the top ten from my computed tally:
End of explanation
weighted_ballot = Counter()
for ballot in ballots:
    for item, weight in zip(ballot, range(5, 0, -1)):
        weighted_ballot[item] += weight
Explanation: Weighted Tally Ballot
But that's boring. What if we pretended for a second that everyone submitted a ballot where the albums were actually ranked one through five. What would the top ten look like then? There's a few ways to figure this one out. Initially, my thought was to provide a number 1 to 5 based on position to each vote and then find the lowest sum. However, the problem is that an item that only appears once will be considered the most preferred. That won't work. But going backwards from five to one for each item and then finding the largest total probably would:
End of explanation
sum(1 for _ in filter(lambda x: len(x) < 5, ballots)) / len(ballots)
Explanation: This handles the situation where a ballot may not be full (five votes), which makes up a surprisingly non-trivial share of the ballots:
End of explanation
vote_table(weighted_ballot.most_common(10))
Explanation: Anyways, what does a top ten for weighted votes end up looking like?
End of explanation
regular_tally_spots = {name.lower(): pos for pos, (name, _) in enumerate(tallied.most_common(), 1)}
base_table = """
<table>
<tr><td>Album</td><td>Regular Spot</td><td>Weighted Spot</td></tr>
{}
</table>
"""
base_row = "<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>"
rows = [base_row.format(name, regular_tally_spots[name], pos)
for pos, (name, _) in enumerate(weighted_ballot.most_common(), 1)
# some albums didn't make it, like Arcturian D:
if name in regular_tally_spots]
HTML(base_table.format('\n'.join(rows)))
Explanation: Hm, it's not actually all that different. Some bands move around a little bit, Deathhammer moves into the top ten using this method. But overall, the general spread is pretty much the same.
It's also interesting to look at the difference in position from the weighted tally vs the way it's done in the thread. There's major differences between the two due to the voting difference and from including submissions from after voting expired. There's also a missing band. :?
End of explanation
number_one = Counter([b[0] for b in ballots])
vote_table(number_one.most_common(10))
Explanation: What album appeared at number one most often?
Another question I've been pondering is, "How do you figure out what thing appears at number one most often?" Again, this is assuming everyone submitted a ballot with the intention of it being read as ranked. Turns out, doing this isn't that hard either:
End of explanation
#regular tallying
vote_table(equal_placement_ballots.most_common())
#weighted ballot
vote_table(weighted_ballot.most_common())
#number one count
vote_table(number_one.most_common())
Explanation: This paints a slightly different picture of the top ten. While the names are largely the same, Scar Sighted was thought of as the top album most often, despite being at two or three through the other methods. And Misþyrming is at four (okay, "2", again fast and loose with numbering) despite being the solid top choice for all other methods.
The Take Away
There are lots of different ways to look at the ballots and different ways to tally them. Weighted voting is certainly an interesting avenue to explore.
Originally, I had wondered if something along the lines of Instant Runoff Voting or data processing packages like Pandas, NumPy, or SciPy would be needed. But for basic prodding and poking, it turns out the stdlib is just fine.
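For the curious, here is a rough, simplified sketch of what an Instant Runoff count over these same ballots could look like (purely illustrative; ties and zero-vote candidates are handled naively):
```python
def instant_runoff(ballots):
    remaining = set(chain.from_iterable(ballots))
    while True:
        # count each ballot's highest-ranked surviving choice
        counts = Counter(next((c for c in b if c in remaining), None) for b in ballots)
        counts.pop(None, None)  # ballots with no surviving choices are exhausted
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > sum(counts.values()) or len(counts) == 1:
            return leader
        remaining.discard(min(counts, key=counts.get))

instant_runoff(ballots)
```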
Also: a lot of awesome music I haven't listened to at all this year (been tied up with Peace is the Mission the last few weeks, too, sorry guys).
The full tables
Because someone will ask for them, here's the full tables from my analysis:
End of explanation |
10,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyPanair Tutorial#2 Tapered Wing
In this tutorial we will perform an analysis of a tapered wing.
The wing is defined by five different wing sections at $\eta=0.000, 0.126, 0.400, 0.700, 1.000$.
Below are the wing planform and airfoil stack, respectively.
(The wing is based on the DLR-F4<sup>1</sup>)
Step1: 1.Defining the geometry
Just as we have done in tutorial 1, we will use the wgs_creator module to define the geometry of the wing.
First off, we initialize a LaWGS object.
Step2: Next, we create a Line object that defines the coordinates of the airfoil at the root of the wing.
To do so, we will read a csv file that contains the coordinates of the airfoil, using the read_airfoil function.
Five csv files, eta0000.csv, eta0126.csv, eta0400.csv, eta0700.csv, and eta1000.csv have been prepared for this tutorial.
Before creating the Line object, we will take a quick view at these files.
For example, eta0000.csv looks like ...
Step3: The first and second columns xup and zup represent the xz-coordinates of the upper surface of the airfoil.
The third and fourth columns xlow and zlow represent the xz-coordinates of the lower surface of the airfoil.
The csv file must follow four rules
Step4: The first variable specifies the name of the csv file.
The y_coordinate variable defines the y-coordinate of the points included in the Line.
Line objects for the remaining four wing sections can be created in the same way.
Step5: Next, we create four networks by linearly interpolating these wing sections.
Step6: Then, we concatenate the networks using the concat_row method.
Step7: The concatenated network is displayed below.
Step8: After creating the Network for the wing, we create networks for the wingtip and wake.
Step9: Next, the Networks will be registered to the wgs object.
Step10: Then, we create a stl file to check that there are no errors in the model.
Step11: Last, we create input files for panin
Step12: 2. Analysis
The analysis can be done in the same way as tutorial 1.
Place panair, panin, tapered_wing.aux, and tapered_wing.wgs in the same directory,
and run panin and panair.
bash
$ ./panin
Prepare input for PanAir
Version 1.0 (4Jan2000)
Ralph L. Carmichael, Public Domain Aeronautical Software
Enter the name of the auxiliary file
Step13: The ffmf file can be parsed using the read_ffmf and write_ffmf methods. | Python Code:
%matplotlib notebook
import matplotlib.pyplot as plt
from pyPanair.preprocess import wgs_creator
for eta in ("0000", "0126", "0400", "0700", "1000"):
af = wgs_creator.read_airfoil("eta{}.csv".format(eta))
plt.plot(af[:,0], af[:,2], "k-", lw=1.)
plt.plot((0.5049,), (0,), "ro", label="Center of rotation")
plt.legend(loc="best")
plt.xlabel("$x$ [m]")
plt.xlabel("$z$ [m]")
plt.show()
Explanation: pyPanair Tutorial#2 Tapered Wing
In this tutorial we will perform an analysis of a tapered wing.
The wing is defined by five different wing sections at $\eta=0.000, 0.126, 0.400, 0.700, 1.000$.
Below are the wing planform and airfoil stack, respectively.
(The wing is based on the DLR-F4<sup>1</sup>)
End of explanation
from pyPanair.preprocess import wgs_creator
wgs = wgs_creator.LaWGS("tapered_wing")
Explanation: 1.Defining the geometry
Just as we have done in tutorial 1, we will use the wgs_creator module to define the geometry of the wing.
First off, we initialize a LaWGS object.
End of explanation
import pandas as pd
pd.set_option("display.max_rows", 10)
pd.read_csv("eta0000.csv")
Explanation: Next, we create a Line object that defines the coordinates of the airfoil at the root of the wing.
To do so, we will read a csv file that contains the coordinates of the airfoil, using the read_airfoil function.
Five csv files, eta0000.csv, eta0126.csv, eta0400.csv, eta0700.csv, and eta1000.csv have been prepared for this tutorial.
Before creating the Line object, we will take a quick view at these files.
For example, eta0000.csv looks like ...
End of explanation
wingsection1 = wgs_creator.read_airfoil("eta0000.csv", y_coordinate=0.)
Explanation: The first and second columns xup and zup represent the xz-coordinates of the upper surface of the airfoil.
The third and fourth columns xlow and zlow represent the xz-coordinates of the lower surface of the airfoil.
The csv file must follow four rules:
1. Data in the first row correspond to the xz-coordinates of the leading edge of the airfoil
2. Data in the last row correspond to the xz-coordinates of the trailing edge of the airfoil
3. For the first row, the coordinates (xup, zup) and (xlow, zlow) are the same
4. For the last row, the coordinates (xup, zup) and (xlow, zlow) are the same (i.e. the airfoil has a sharp TE)
Now we shall create a Line object for the root of the wing.
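If you want to check rules 3 and 4 programmatically before building the geometry, a small sanity check could look like the sketch below (rules 1 and 2 concern row ordering and are easier to verify by plotting the airfoil):
```python
import numpy as np
import pandas as pd

def check_airfoil_csv(path):
    df = pd.read_csv(path)
    le, te = df.iloc[0], df.iloc[-1]
    assert np.allclose(le[["xup", "zup"]].values, le[["xlow", "zlow"]].values)   # rule 3
    assert np.allclose(te[["xup", "zup"]].values, te[["xlow", "zlow"]].values)   # rule 4
    return df

check_airfoil_csv("eta0000.csv")
```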
End of explanation
wingsection2 = wgs_creator.read_airfoil("eta0126.csv", y_coordinate=0.074211)
wingsection3 = wgs_creator.read_airfoil("eta0400.csv", y_coordinate=0.235051)
wingsection4 = wgs_creator.read_airfoil("eta0700.csv", y_coordinate=0.410350)
wingsection5 = wgs_creator.read_airfoil("eta1000.csv", y_coordinate=0.585650)
Explanation: The first variable specifies the name of the csv file.
The y_coordinate variable defines the y-coordinate of the points included in the Line.
Line objects for the remaining four wing sections can be created in the same way.
End of explanation
wingnet1 = wingsection1.linspace(wingsection2, num=4)
wingnet2 = wingsection2.linspace(wingsection3, num=8)
wingnet3 = wingsection3.linspace(wingsection4, num=9)
wingnet4 = wingsection4.linspace(wingsection5, num=9)
Explanation: Next, we create four networks by linearly interpolating these wing sections.
End of explanation
wing = wingnet1.concat_row((wingnet2, wingnet3, wingnet4))
Explanation: Then, we concatenate the networks using the concat_row method.
End of explanation
wing.plot_wireframe()
Explanation: The concatenated network is displayed below.
End of explanation
wingtip_up, wingtip_low = wingsection5.split_half()
wingtip_low = wingtip_low.flip()
wingtip = wingtip_up.linspace(wingtip_low, num=5)
wake_length = 50 * 0.1412
wingwake = wing.make_wake(edge_number=3, wake_length=wake_length)
Explanation: After creating the Network for the wing, we create networks for the wingtip and wake.
End of explanation
wgs.append_network("wing", wing, 1)
wgs.append_network("wingtip", wingtip, 1)
wgs.append_network("wingwake", wingwake, 18)
Explanation: Next, the Networks will be registered to the wgs object.
End of explanation
wgs.create_stl()
Explanation: Then, we create a stl file to check that there are no errors in the model.
End of explanation
wgs.create_aux(alpha=(-2, 0, 2), mach=0.6, cbar=0.1412, span=1.1714, sref=0.1454, xref=0.5049, zref=0.)
wgs.create_wgs()
Explanation: Last, we create input files for panin
End of explanation
from pyPanair.postprocess import write_vtk
write_vtk(n_wake=1)
from pyPanair.postprocess import calc_section_force
calc_section_force(aoa=2, mac=0.1412, rot_center=(0.5049,0,0), casenum=3, networknum=1)
section_force = pd.read_csv("section_force.csv")
section_force
plt.plot(section_force.pos / 0.5857, section_force.cl * section_force.chord, "s", mfc="None", mec="b")
plt.xlabel("spanwise position [normalized]")
plt.ylabel("cl * chord")
plt.grid()
plt.show()
Explanation: 2. Analysis
The analysis can be done in the same way as tutorial 1.
Place panair, panin, tapered_wing.aux, and tapered_wing.wgs in the same directory,
and run panin and panair.
bash
$ ./panin
Prepare input for PanAir
Version 1.0 (4Jan2000)
Ralph L. Carmichael, Public Domain Aeronautical Software
Enter the name of the auxiliary file:
tapered_wing.aux
10 records copied from auxiliary file.
9 records in the internal data file.
Geometry data to be read from tapered_wing.wgs
Reading WGS file...
Reading network wing
Reading network wingtip
Reading network wingwake
Reading input file instructions...
Command 1 MACH 0.6
Command 11 ALPHA -2 0 2
Command 6 cbar 0.1412
Command 7 span 1.1714
Command 2 sref 0.1454
Command 3 xref 0.5049
Command 5 zref 0.0
Command 35 BOUN 1 1 18
Writing PanAir input file...
Files a502.in added to your directory.
Also, file panin.dbg
Normal termination of panin, version 1.0 (4Jan2000)
Normal termination of panin
bash
$ ./panair
Panair High Order Panel Code, Version 15.0 (10 December 2009)
Enter name of input file:
a502.in
After the analysis finishes, place panair.out, agps, and ffmf in the tutorial2 directory.
3. Visualization
Visualization of the results can be done in the same manner as tutorial 2.
End of explanation
from pyPanair.postprocess import write_ffmf, read_ffmf
read_ffmf()
write_ffmf()
Explanation: The ffmf file can be parsed using the read_ffmf and write_ffmf methods.
End of explanation |
10,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
데이터 시각화
Step1: 주요 내용
데이터 분석을 위해 가장 기본적으로 할 수 있고, 해야 하는 일이 데이터 시각화이다.
데이터를 시각화하는 것은 어렵지 않지만, 적합한 시각화를 만드는 일은 매우 어려우며,
많은 훈련과 직관이 요구된다.
여기서는 데이터를 탐색하여 얻어진 데이터를 시각화하는 기본적인 방법 네 가지를 배운다.
선그래프
막대그래프
히스토그램
산점도
오늘이 주요 예제
서울과 수도권의 1949년부터 2010년까지 인구증가율 데이터가 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width
Step2: 선그래프
아래 테이블은 1949년부터 측정된 서울시 인구수를 담은 데이터이다.
```
년도 인구수(명)
1949 1,437,670
1955 1,568,746
1960 2,445,402
1966 3,793,280
1970 5,525,262
1975 6,879,464
1980 8,350,616
1985 9,625,755
1990 10,603,250
1995 10,217,177
2000 9,853,972
2005 9,762,546
2010 9,631,482
출처
Step3: 막대그래프
동일한 데이터를 막대그래프를 이용하여 보여줄 수 있다.
그렇게 하면 년도별 미세한 차이를 보다 자세히 나타낼 수 있다.
Step4: 그런데 이렇게 하면 막대 그래프의 두께가 좀 좁아 보인다. 그리고
년도가 정확히 5년 단위로 쪼개진 것이 아니기에 막대들 사이의 간격이 불규칙해 보인다.
따라서 먼저 막대의 두께를 좀 조절해보자.
힌트
Step5: 막대들의 간격이 완전히 규칙적으로 되지는 않았지만 이전 그래프와는 좀 다른 느낌을 준다.
이와 같이 막대들의 두께 뿐만아니라, 간격, 색상 모두 조절할 수 있지만,
여기서는 그럴 수 있다는 사실만 언급하고 넘어간다.
예제
대한민국이 하계 올림픽에서 가장 많은 메일을 획득한 상위 여섯 종목과 메달 숫자는 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>종목</td>
<td>메달 수</td>
</tr>
<tr>
<td>Archery(양궁)</td>
<td>39</td>
</tr>
<tr>
<td>Badminton(배드민턴)</td>
<td>19</td>
</tr>
<tr>
<td>Boxing(복싱)</td>
<td>20</td>
</tr>
<tr>
<td>Judo(유도)</td>
<td>43</td>
</tr>
<tr>
<td>Taekwondo(태권도)</td>
<td>19</td>
</tr>
<tr>
<td>Wrestling(레슬링)</td>
<td>36</td>
</tr>
<caption align='bottom'>출처
Step6: x축에 종목 이름 대신에 숫자를 넣으면 되기는 하지만 정확한 정보를 전달하지 못한다.
Step7: 따라서 x축에 6개의 막대가 필요하고 각각의 막대에 레이블 형식으로 종목 이름을 지정해야 한다.
Step8: 여전히 그래프가 좀 어색하다. 막대들이 좀 두껍다. 이럴 때는 x축에 사용되는 점들의 간격을 좀 벌리는 게 좋다.
Step9: 이번에는 막대 두께가 좁아 보인다. 그래서 좀 넓히는 게 좋다.
Step10: 지금까지 살펴보았듯이 적합한 시각화는 경우에 따라 상당히 많은 노력을 요구하기도 한다.
여기서는 matplotlib.pyplot 라이브러리에 다양한 설정 옵션이 있다는 정도만 기억하면 좋겠다.
히스토그램
히스토 그램은 막대그래프와 비슷하다. 다만 막대 사이에 공간이 없다.
따라서 연속적인 구간으로 구분된 범위에 포함된 숫자들의 도수를 나타내는 데에 효과적이다.
아래 예제는 임의로 생성된 1000개의 실수들의 분포를 보여주는 히스토그램이다.
주의
Step11: 산점도
두 변수 간의 연관관계를 보여 주는 데에 매우 적합한 그래프이다.
예를 들어, 카카오톡에 등록된 친구 수와 하룻동안의 스마트폰 사용시간 사이의 연관성을 보여주는 데이터가 아래와 같이 주어졌다고 하자.
주의
Step12: 위 산점도를 보면 카카오톡에 등록된 친구 수가 많을 수록 스마트폰 사용시간이 증가하는 경향을 한 눈에 확인할 수 있다.
물론, 이는 주어진 (조작된) 데이터에 근거한 정보이다.
오늘의 주요 예제 해결
서울과 수도권의 1949년부터 2010년까지 인구증가율 데이터가 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width
Step13: 주의
Step14: 단계 3 | Python Code:
from __future__ import division, print_function
Explanation: Data Visualization
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Main Topics
Data visualization is the most basic thing you can, and should, do for data analysis.
Visualizing data is not hard, but producing an appropriate visualization is very difficult,
and it requires a lot of training and intuition.
Here we learn four basic ways to visualize data obtained from data exploration.
Line graphs
Bar graphs
Histograms
Scatter plots
Today's main example
The population growth rate data for Seoul and the capital region from 1949 to 2010 is given below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width:360">
</td>
</tr>
</table>
</p>
Now let's read the file above and display the population growth trends of Seoul and the capital region as a line graph, as in the figure below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop05.png" style="tyle:"width:360">
</td>
</tr>
</table>
</p>
Introduction to a data visualization tool: the matplotlib library
Among data visualization tools, it is a library that contains many tools for easily drawing
simple bar graphs, histograms, line graphs, and scatter plots.
Among the modules in this library, here we will learn how to use a few of the most basic tools in the pyplot module
through simple examples.
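As a quick preview (a sketch, not part of the original lecture), the four plot types covered below can all be produced with a few pyplot calls:
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].plot(x, x ** 2)                       # line graph
axes[0, 1].bar(x, x)                             # bar graph
axes[1, 0].hist(np.random.randn(1000), bins=20)  # histogram
axes[1, 1].scatter(x, x + np.random.randn(10))   # scatter plot
plt.show()
```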
End of explanation
# years
years = [1949, 1955, 1960, 1966, 1970, \
1975, 1980, 1985, 1990, 1995, \
2000, 2005, 2010]
# population
populaions = [1437670, 1568746, 2445402, 3793280, 5525262, \
6879464, 8350616, 9625755, 10603250, 10217177, \
9853972, 9762546, 9631482]
# prepare the canvas (figure) to draw the graph on
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# make a line graph with years on the x-axis and population on the y-axis
plt.plot(years, populaions, color='green', marker='o', linestyle='solid')
# add a title
plt.title("Seoul Population Change")
# add a label to the y-axis
plt.ylabel("10Million")
plt.show()
Explanation: Line Graphs
The table below contains Seoul population figures measured since 1949.
```
Year    Population (persons)
1949 1,437,670
1955 1,568,746
1960 2,445,402
1966 3,793,280
1970 5,525,262
1975 6,879,464
1980 8,350,616
1985 9,625,755
1990 10,603,250
1995 10,217,177
2000 9,853,972
2005 9,762,546
2010 9,631,482
출처: 국가통계포털(kosis.kr)
```
예를 들어, 연도별 서울시 인구수의 연도별 변화추이를 간단한 선그래프를 이용하여 확인할 수 있다.
End of explanation
# 그래프를 그릴 도화지 준비하기
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# 막대그래프 그리기
plt.bar(years, populaions)
# 제목 더하기
plt.title("Seoul Population Change")
# y축에 레이블 추가하기
plt.ylabel("10Million")
plt.show()
Explanation: 막대그래프
동일한 데이터를 막대그래프를 이용하여 보여줄 수 있다.
그렇게 하면 년도별 미세한 차이를 보다 자세히 나타낼 수 있다.
End of explanation
# 그래프를 그릴 도화지 준비하기
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# 막대그래프 그리기, 막대 두께 조절
plt.bar(years, populaions, 2.5)
# 제목 더하기
plt.title("Seoul Population Change")
# y축에 레이블 추가하기
plt.ylabel("10Million")
plt.show()
Explanation: 그런데 이렇게 하면 막대 그래프의 두께가 좀 좁아 보인다. 그리고
년도가 정확히 5년 단위로 쪼개진 것이 아니기에 막대들 사이의 간격이 불규칙해 보인다.
따라서 먼저 막대의 두께를 좀 조절해보자.
힌트: plt.bar() 함수의 세 번째 인자는 막대들의 두께를 지정한다.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
plt.bar(sports, medals)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: 막대들의 간격이 완전히 규칙적으로 되지는 않았지만 이전 그래프와는 좀 다른 느낌을 준다.
이와 같이 막대들의 두께 뿐만아니라, 간격, 색상 모두 조절할 수 있지만,
여기서는 그럴 수 있다는 사실만 언급하고 넘어간다.
예제
대한민국이 하계 올림픽에서 가장 많은 메일을 획득한 상위 여섯 종목과 메달 숫자는 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>종목</td>
<td>메달 수</td>
</tr>
<tr>
<td>Archery(양궁)</td>
<td>39</td>
</tr>
<tr>
<td>Badminton(배드민턴)</td>
<td>19</td>
</tr>
<tr>
<td>Boxing(복싱)</td>
<td>20</td>
</tr>
<tr>
<td>Judo(유도)</td>
<td>43</td>
</tr>
<tr>
<td>Taekwondo(태권도)</td>
<td>19</td>
</tr>
<tr>
<td>Wrestling(레슬링)</td>
<td>36</td>
</tr>
<caption align='bottom'>출처: 위키피디아</caption>
</table>
</p>
이제 위 데이터를 막대 그래프로 시각화할 수 있다.
그런데 x축에 종목 이름이 들어가면 오류가 발생한다.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
plt.bar(range(6), medals)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: x축에 종목 이름 대신에 숫자를 넣으면 되기는 하지만 정확한 정보를 전달하지 못한다.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(6)
plt.bar(xs, medals)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: 따라서 x축에 6개의 막대가 필요하고 각각의 막대에 레이블 형식으로 종목 이름을 지정해야 한다.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(0, 12, 2)
plt.bar(xs, medals)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: 여전히 그래프가 좀 어색하다. 막대들이 좀 두껍다. 이럴 때는 x축에 사용되는 점들의 간격을 좀 벌리는 게 좋다.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(0, 12, 2)
plt.bar(xs, medals, 1.2)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: 이번에는 막대 두께가 좁아 보인다. 그래서 좀 넓히는 게 좋다.
End of explanation
import numpy as np
gaussian_numbers = np.random.randn(1000)
plt.hist(gaussian_numbers, bins=10)
plt.title("Gaussian Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.show()
Explanation: 지금까지 살펴보았듯이 적합한 시각화는 경우에 따라 상당히 많은 노력을 요구하기도 한다.
여기서는 matplotlib.pyplot 라이브러리에 다양한 설정 옵션이 있다는 정도만 기억하면 좋겠다.
히스토그램
히스토 그램은 막대그래프와 비슷하다. 다만 막대 사이에 공간이 없다.
따라서 연속적인 구간으로 구분된 범위에 포함된 숫자들의 도수를 나타내는 데에 효과적이다.
아래 예제는 임의로 생성된 1000개의 실수들의 분포를 보여주는 히스토그램이다.
주의:
* numpy 모듈의 randn 함수는 표준정규분포를 따르도록 실수들을 임의로 생성한다.
* 표준정규분포: 데이터들의 평균이 0이고 표준편차가 1인 정규분포
* 여기서는 표준정규분포가 확률과 통계 분야에서 매우 중요한 역할을 수행한다는 정도만 알고 넘어간다.
End of explanation
num_friends = [41, 26, 90, 50, 18, 124, 152, 88, 72, 51]
phone_time = [4.1, 3.3, 5.7, 4.2, 3.2, 6.4, 6.0, 5.1, 6.2, 3.7]
plt.scatter(num_friends, phone_time)
plt.show()
Explanation: 산점도
두 변수 간의 연관관계를 보여 주는 데에 매우 적합한 그래프이다.
예를 들어, 카카오톡에 등록된 친구 수와 하룻동안의 스마트폰 사용시간 사이의 연관성을 보여주는 데이터가 아래와 같이 주어졌다고 하자.
주의: 아래 데이터는 강의를 위해 임의로 조작되었으며, 어떠한 근거도 갖지 않는다.
End of explanation
import csv
with open('Seoul_pop2.csv', 'rb') as f:
reader = csv.reader(f)
for row in reader:
if len(row) == 0 or row[0][0] == '#':
continue
else:
print(row)
Explanation: 위 산점도를 보면 카카오톡에 등록된 친구 수가 많을 수록 스마트폰 사용시간이 증가하는 경향을 한 눈에 확인할 수 있다.
물론, 이는 주어진 (조작된) 데이터에 근거한 정보이다.
오늘의 주요 예제 해결
서울과 수도권의 1949년부터 2010년까지 인구증가율 데이터가 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width:360">
</td>
</tr>
</table>
</p>
위 도표의 데이터는 'Seoul_pop2.csv' 파일에 아래와 같이 저장되어 있다.
```
1949년부터 2010년 사이의 서울과 수도권 인구 증가율(%)
구간,서울,수도권
1949-1955,9.12,-5.83
1955-1960,55.88,32.22
1960-1966,55.12,32.76
1966-1970,45.66,28.76
1970-1975,24.51,22.93
1975-1980,21.38,21.69
1980-1985,15.27,18.99
1985-1990,10.15,17.53
1990-1995,-3.64,8.54
1995-2000,-3.55,5.45
2000-2005,-0.93,6.41
2005-2010,-1.34,3.71
```
Now let's read this file and draw a line graph showing the population growth trends of Seoul and the capital region.
Step 1: reading the csv file
Files with the csv extension are commonly used to store data.
csv is short for Comma-Separated Values, meaning a file in which the data is separated by commas.
Reading a csv file is very easy using the reader() function of the csv module.
End of explanation
year_intervals = []
Seoul_pop = []
Capital_region_pop = []
with open('Seoul_pop2.csv', 'r') as f:
reader = csv.reader(f)
for row in reader:
if len(row) == 0 or row[0][0] == '#':
continue
else:
year_intervals.append(row[0])
Seoul_pop.append(float(row[1]))
Capital_region_pop.append(float(row[2]))
print(year_intervals)
print(Seoul_pop)
print(Capital_region_pop)
Explanation: Note: writing line 5 as shown below causes an error
if row[0][0] == '#' or len(row) == 0:
Reason: with 'A or B', the truth value of A is evaluated first; if A is true the whole expression is treated as true and evaluation stops there.
Only if A is false is the truth value of B then checked.
So if an error occurs while evaluating A, execution stops right there.
In the example above, row[0][0] raises an IndexError on the empty third row, so the whole program stops. A small demonstration follows below.
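The short-circuit behaviour is easy to reproduce on its own (a small illustration added here, not part of the original lecture):

```python
row = []  # an empty row, as produced by a blank line in the csv file

# len(row) == 0 is True, so row[0] is never evaluated
print(len(row) == 0 or row[0][0] == '#')

# reversing the order evaluates row[0][0] first and raises IndexError
try:
    row[0][0] == '#' or len(row) == 0
except IndexError as e:
    print('IndexError:', e)
```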
Note:
The form
with open('Seoul_pop2.csv', 'r') as f:
    ...
corresponds to the code below.
f = open('Seoul_pop2.csv', 'r')
...
f.close()
Step 2: preparing the data used for the line graph
End of explanation
# prepare the figure (canvas) on which to draw the graph
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# draw a line graph with the year intervals on the x axis and the population growth rate on the y axis
plt.plot(range(12), Seoul_pop, color='green', marker='o', linestyle='solid', \
label='Seoul')
plt.plot(range(12), Capital_region_pop, color='red', marker='o', linestyle='solid', \
label='Capital Region')
plt.xticks(range(12), year_intervals, rotation=45)
plt.title("Population Change")
plt.ylabel("Percentage")
plt.legend()
plt.show()
Explanation: Step 3: drawing the graph
End of explanation |
10,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Maximum-Inner-Product" data-toc-modified-id="Maximum-Inner-Product-1"><span class="toc-item-num">1 </span>Maximum Inner Product</a></span><ul class="toc-item"><li><span><a href="#Order-Preserving-Transformations" data-toc-modified-id="Order-Preserving-Transformations-1.1"><span class="toc-item-num">1.1 </span>Order Preserving Transformations</a></span></li><li><span><a href="#Matrix-Factorization" data-toc-modified-id="Matrix-Factorization-1.2"><span class="toc-item-num">1.2 </span>Matrix Factorization</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3 </span>Implementation</a></span></li><li><span><a href="#Benchmark" data-toc-modified-id="Benchmark-1.4"><span class="toc-item-num">1.4 </span>Benchmark</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Maximum Inner Product
Matrix factorization is a potent technique for solving the collaborative filtering problem. It mainly involves building up the user-item interaction matrix, then decomposing it into a user latent factor (a.k.a. embedding) and an item latent factor, each with some user-specified dimension (a hyperparameter that we get to tweak).
<img src="img/matrix_factorization.png" width="60%" height="60%">
To generate the items recommended for each user, we would perform a dot product between the two matrices and retrieve the top-k items that have the highest "scores". This process, however, can oftentimes become a large bottleneck for these types of algorithms when the number of users and items becomes fairly large, as exhaustive computation of the dot product is extremely expensive. This document's focus is to demonstrate an order preserving transformation that converts the maximum inner product into a nearest neighborhood search problem to significantly speed up the process for generating the top-k recommendations.
Order Preserving Transformations
We'll first describe the notation we'll be using. Lower case is for scalars, $x$, bold lower case for vectors, $\mathbf{x}$, and bold upper case for matrices, $\mathbf{X}$.
Given a vector, $\mathbf{x}$. The norm is denoted by $\Vert \mathbf{x} \Vert = \sqrt{\sum^d_{i=1} x_i^2}$. The inner product is represented as $\mathbf{x} \cdot \mathbf{y}$. Last but not least, $(a, \mathbf{x}^T)^T$ is for denoting a concatenation of a scalar $a$ with a vector $\mathbf{x}$.
On one hand, we have a matrix of $n$ vectors $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, ..., \mathbf{y}_n]$, such that $\mathbf{y}_i \in \mathbb{R}^d$. Where $d$ is the number of dimensions we set for the latent factor. Whereas, our query vector $\mathbf{x} \in \mathbb{R}^d$.
Our objective is to retrieve an index according to the maximum inner product.
$$
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmax}} \space \mathbf{x} \cdot \mathbf{y}_i
\end{align}
$$
The idea behind speeding up the workload for maximum inner product operations is to transform the problem into a distance minimization problem or nearest neighborhood search.
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmin}} \space {\Vert \mathbf{x} - \mathbf{y}_i \Vert}^2
\end{align}
Once we transform the problem into a euclidean distance problem, there is a plethora of algorithms/packages available for doing fast similarity search. To do so, we are going to apply a transformation function on our matrix, $\mathbf{Y}$, and our query vector, $\mathbf{x}$. Note that the idea here is only to perform a transformation on top of the existing $\mathbf{x}$ and $\mathbf{y}$, not to design a whole new algorithm in itself to learn embeddings/latent factors that directly uses distance minimization to generate the prediction, as this prevents us from using the existing matrix factorization algorithms.
The order transformation is to add an additional dimension to each of the latent factors
Step2: For the train/test split, the process is to split each user's behavior based on chronological order. e.g. If an user interacted with 10 items, and we specify a test set of size, 0.2. Then the first 8 items that the user first interacted with will fall in the training set, and the last 2 items will belong to the test set.
Step3: The model we'll be using is Bayesian Personalized Ranking from the implicit library.
Step4: The model object also provides a .recommend method that generates the recommendation for a user.
Step5: We can also generate the recommendations ourselves. We'll first confirm that the recommend function that we've implemented matches the one provided by the library, also implement a recommend_all function that generates the recommendation for all the user, this will be used to compare against the nearest neighborhood search on the order transformed matrix later.
Step6: Different model/library have different ways of extracting the item and user factors/embeddings, we assign it to index_factors and query_factors to make all downstream code agnostic of libraries' implementation.
Step7: Implementation
To implement our order preserving transformation, we first apply the transformation on our index factors. Recall that the formula is
Step8: Our next step is to use our favorite nearest neighborhood search algorithm/library to conduct the search. We'll be leveraging hnswlib in this example; explaining the details behind this nearest neighborhood search algorithm is beyond the scope of this document.
Step9: To generate the prediction, we first transform the incoming "queries". $\mathbf{x}^* = h(\mathbf{x}) = (0, \mathbf{x}^T)^T$.
Step10: Benchmark
We can time the original recommend method using maximum inner product versus the new method of using the order preserving transformed matrices with nearest neighborhood search.
Step11: Note that the timing is highly dependent on the dataset. We'll observe a much larger speedup if the number of items/labels in the output/index factor is larger. In the movielens dataset, we only had to rank the top items for each user among 1.6K items, in a much larger dataset, the number of items could easily go up to 100K or even million, that's when we'll see the real potential of this method.
Another thing worth checking is the quality of the prediction using the new method. Here we're using hnswlib library to generate the nearest neighborhood items, as hnswlib is technically an approximate nearest neighborhood algorithm. We can measure how much overlap the approximate top recommendations are to the original top recommendations to make sure we are using the right parameters for the nearest neighborhood search algorithm. Notation-wise | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import hnswlib
import numpy as np
import pandas as pd
from subprocess import call
from scipy.sparse import csr_matrix
from implicit.bpr import BayesianPersonalizedRanking
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,scipy,implicit,hnswlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Maximum-Inner-Product" data-toc-modified-id="Maximum-Inner-Product-1"><span class="toc-item-num">1 </span>Maximum Inner Product</a></span><ul class="toc-item"><li><span><a href="#Order-Preserving-Transformations" data-toc-modified-id="Order-Preserving-Transformations-1.1"><span class="toc-item-num">1.1 </span>Order Preserving Transformations</a></span></li><li><span><a href="#Matrix-Factorization" data-toc-modified-id="Matrix-Factorization-1.2"><span class="toc-item-num">1.2 </span>Matrix Factorization</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3 </span>Implementation</a></span></li><li><span><a href="#Benchmark" data-toc-modified-id="Benchmark-1.4"><span class="toc-item-num">1.4 </span>Benchmark</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
file_dir = 'ml-100k'
file_path = os.path.join(file_dir, 'u.data')
if not os.path.isdir(file_dir):
call(['curl', '-O', 'http://files.grouplens.org/datasets/movielens/' + file_dir + '.zip'])
call(['unzip', file_dir + '.zip'])
names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv(file_path, sep='\t', names=names)
print('data dimension: \n', df.shape)
df.head()
users_col = 'user_id'
items_col = 'item_id'
value_col = 'rating'
time_col = 'timestamp'
for col in (users_col, items_col):
df[col] = df[col].astype('category')
Explanation: Maximum Inner Product
Matrix factorization is a potent technique for solving the collaborative filtering problem. It mainly involves building up the user-item interaction matrix, then decomposing it into a user latent factor (a.k.a. embedding) and an item latent factor, each with some user-specified dimension (a hyperparameter that we get to tweak).
<img src="img/matrix_factorization.png" width="60%" height="60%">
To generate the items recommended for each user, we would perform a dot product between the two matrices and retrieve the top-k items that have the highest "scores". This process, however, can oftentimes become a large bottleneck for these types of algorithms when the number of users and items becomes fairly large, as exhaustive computation of the dot product is extremely expensive. This document's focus is to demonstrate an order preserving transformation that converts the maximum inner product into a nearest neighborhood search problem to significantly speed up the process for generating the top-k recommendations.
Order Preserving Transformations
We'll first describe the notation we'll be using. Lower case is for scalars, $x$, bold lower case for vectors, $\mathbf{x}$, and bold upper case for matrices, $\mathbf{X}$.
Given a vector, $\mathbf{x}$. The norm is denoted by $\Vert \mathbf{x} \Vert = \sqrt{\sum^d_{i=1} x_i^2}$. The inner product is represented as $\mathbf{x} \cdot \mathbf{y}$. Last but not least, $(a, \mathbf{x}^T)^T$ is for denoting a concatenation of a scalar $a$ with a vector $\mathbf{x}$.
On one hand, we have a matrix of $n$ vectors $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, ..., \mathbf{y}_n]$, such that $\mathbf{y}_i \in \mathbb{R}^d$. Where $d$ is the number of dimensions we set for the latent factor. Whereas, our query vector $\mathbf{x} \in \mathbb{R}^d$.
Our objective is to retrieve an index according to the maximum inner product.
$$
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmax}} \space \mathbf{x} \cdot \mathbf{y}_i
\end{align}
$$
The idea behind speeding up the workload for maximum inner product operations is to transform the problem into a distance minimization problem or nearest neighborhood search.
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmin}} \space {\Vert \mathbf{x} - \mathbf{y}_i \Vert}^2
\end{align}
Once we transform the problem into a euclidean distance problem, there is a plethora of algorithms/packages available for doing fast similarity search. To do so, we are going to apply a transformation function on our matrix, $\mathbf{Y}$, and our query vector, $\mathbf{x}$. Note that the idea here is only to perform a transformation on top of the existing $\mathbf{x}$ and $\mathbf{y}$, not to design a whole new algorithm in itself to learn embeddings/latent factors that directly uses distance minimization to generate the prediction, as this prevents us from using the existing matrix factorization algorithms.
The order transformation is to add an additional dimension to each of the latent factors:
\begin{align}
\mathbf{y}_i^* &= \big(\sqrt{\phi^2 - {\Vert \mathbf{y_i} \Vert}^2 }, \mathbf{y_i}^T\big)^T, \text{where } \phi = \underset{i}{\text{max}} \Vert \mathbf{y}_i \Vert \\
\mathbf{x}^* &= (0, \mathbf{x}^T)^T
\end{align}
As
\begin{align}
{\Vert \mathbf{x}^* \Vert}^2 &= {\Vert \mathbf{x} \Vert}^2 \\
{\Vert \mathbf{y}_i^* \Vert}^2 &= \phi^2 - {\Vert \mathbf{y}_i \Vert}^2 + {\Vert \mathbf{y}_i \Vert}^2 = \phi^2 \\
\mathbf{x}^* \cdot \mathbf{y}^*_i &= \sqrt{\phi^2 - {\Vert \mathbf{y}_i \Vert}^2 } \cdot 0 + \mathbf{x} \cdot \mathbf{y}_i = \mathbf{x} \cdot \mathbf{y}_i
\end{align}
To link the maximum inner product to the distance minimization problem, we would then have:
\begin{align}
{\Vert \mathbf{x}^* - \mathbf{y}_i^* \Vert}^2 = {\Vert \mathbf{x}^* \Vert}^2 + {\Vert \mathbf{y}_i^* \Vert}^2 - 2 \cdot \mathbf{x}^* \cdot \mathbf{y}^*_i = {\Vert \mathbf{x} \Vert}^2 + \phi^2 - 2 \cdot \mathbf{x} \cdot \mathbf{y}_i
\end{align}
Since both $\mathbf{x}$ and $\phi$ are independent of the term $i$, that concludes our order preserving transformation.
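The claim is also easy to sanity check numerically. Below is a minimal numpy sketch added for illustration (the shapes and random seed are arbitrary and not part of the original notebook):

```python
import numpy as np

rng = np.random.RandomState(0)
Y = rng.randn(1000, 32)   # stand-in item factors
x = rng.randn(32)         # stand-in query factor

phi = np.linalg.norm(Y, axis=1).max()
extra = np.sqrt(phi ** 2 - (Y ** 2).sum(axis=1, keepdims=True))
Y_aug = np.hstack([extra, Y])
x_aug = np.hstack([[0.0], x])

# the max inner product index equals the min euclidean distance index
assert np.argmax(Y @ x) == np.argmin(((Y_aug - x_aug) ** 2).sum(axis=1))
```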
Upon building the transformation, our original matrices would have 1 extra dimension. Then the next step is to pick our favorite nearest neighborhood algorithm and use it to generate the predictions. Popular options at the time of writing this includes, faiss, nmslib, or hnswlib. The ann-benchmarks also lists down the comparison between different open-source nearest neighborhood search algorithms/packages.
Let's now take a look at these concepts in practice.
Matrix Factorization
We'll be using the movielens data to illustrate the concept.
End of explanation
def train_test_user_time_split(df: pd.DataFrame, test_size: float=0.2):
train_size = 1 - test_size
df_train_user = []
df_test_user = []
df_grouped = df.sort_values(time_col).groupby(users_col)
for name, df_group in df_grouped:
n_train = int(df_group.shape[0] * train_size)
df_group_train = df_group.iloc[:n_train]
df_group_test = df_group.iloc[n_train:]
df_train_user.append(df_group_train)
df_test_user.append(df_group_test)
df_train = pd.concat(df_train_user, ignore_index=True)
df_test = pd.concat(df_test_user, ignore_index=True)
return df_train, df_test
test_size = 0.2
df_train, df_test = train_test_user_time_split(df, test_size)
print('train size: ', df_train.shape[0])
print('test size: ', df_test.shape[0])
Explanation: For the train/test split, the process is to split each user's behavior based on chronological order. e.g. If an user interacted with 10 items, and we specify a test set of size, 0.2. Then the first 8 items that the user first interacted with will fall in the training set, and the last 2 items will belong to the test set.
End of explanation
n_users = df[users_col].cat.categories.shape[0]
n_items = df[items_col].cat.categories.shape[0]
# implicit library expects items to be rows
# and users to be columns of the sparse matrix
rows = df_train[items_col].cat.codes.values
cols = df_train[users_col].cat.codes.values
values = df_train[value_col].astype(np.float32)
item_user = csr_matrix((values, (rows, cols)), shape=(n_items, n_users))
item_user
# we won't be doing any hyperparameter tuning
# as training the "best" model is not the main purpose here
bpr = BayesianPersonalizedRanking()
bpr.fit(item_user)
Explanation: The model we'll be using is Bayesian Personalized Ranking from the implicit library.
End of explanation
user_id = 0
topn = 5
user_item = item_user.T.tocsr()
recommendations = bpr.recommend(user_id, user_item, topn, filter_already_liked_items=False)
recommendations
Explanation: The model object also provides a .recommend method that generates the recommendation for a user.
End of explanation
def recommend(query_factors, index_factors, query_id, topn=5):
output = query_factors[query_id].dot(index_factors.T)
argpartition_indices = np.argpartition(output, -topn)[-topn:]
sort_indices = np.argsort(output[argpartition_indices])[::-1]
labels = argpartition_indices[sort_indices]
distances = output[labels]
return labels, distances
Explanation: We can also generate the recommendations ourselves. We'll first confirm that the recommend function that we've implemented matches the one provided by the library, also implement a recommend_all function that generates the recommendation for all the user, this will be used to compare against the nearest neighborhood search on the order transformed matrix later.
End of explanation
index_factors = bpr.item_factors
query_factors = bpr.user_factors
labels, distances = recommend(query_factors, index_factors, user_id, topn)
print(labels)
print(distances)
def recommend_all(query_factors, index_factors, topn=5):
output = query_factors.dot(index_factors.T)
argpartition_indices = np.argpartition(output, -topn)[:, -topn:]
x_indices = np.repeat(np.arange(output.shape[0]), topn)
y_indices = argpartition_indices.flatten()
top_value = output[x_indices, y_indices].reshape(output.shape[0], topn)
top_indices = np.argsort(top_value)[:, ::-1]
y_indices = top_indices.flatten()
top_indices = argpartition_indices[x_indices, y_indices]
labels = top_indices.reshape(-1, topn)
distances = output[x_indices, top_indices].reshape(-1, topn)
return labels, distances
labels, distances = recommend_all(query_factors, index_factors)
print(labels)
print(distances)
Explanation: Different model/library have different ways of extracting the item and user factors/embeddings, we assign it to index_factors and query_factors to make all downstream code agnostic of libraries' implementation.
End of explanation
def augment_inner_product(factors):
normed_factors = np.linalg.norm(factors, axis=1)
max_norm = normed_factors.max()
extra_dim = np.sqrt(max_norm ** 2 - normed_factors ** 2).reshape(-1, 1)
augmented_factors = np.append(factors, extra_dim, axis=1)
return max_norm, augmented_factors
print('pre shape: ', index_factors.shape)
max_norm, augmented_index_factors = augment_inner_product(index_factors)
augmented_index_factors.shape
Explanation: Implementation
To implement our order preserving transformation, we first apply the transformation on our index factors. Recall that the formula is: Let $\phi = \underset{i}{\text{max}} \Vert \mathbf{y}_i \Vert$. $\mathbf{y}_i^* = g(\mathbf{y}_i) = \big(\sqrt{\phi^2 - {\Vert \mathbf{y_i} \Vert}^2 }, \mathbf{y_i}^T\big)^T$.
End of explanation
def build_hnsw(factors, space, ef_construction, M):
# Declaring index
max_elements, dim = factors.shape
hnsw = hnswlib.Index(space, dim) # possible options for space are l2, cosine or ip
# Initing index - the maximum number of elements should be known beforehand
hnsw.init_index(max_elements, M, ef_construction)
# Element insertion (can be called several times)
hnsw.add_items(factors)
return hnsw
# the library directly supports inner product,
# this might not be the case for all the nearest neighborhood search library
space = 'ip'
ef_construction = 400
M = 24
start = time.time()
hnsw = build_hnsw(augmented_index_factors, space, ef_construction, M)
build_time = time.time() - start
build_time
Explanation: Our next step is to use our favorite nearest neighborhood search algorithm/library to conduct the search. We'll be leveraging hnswlib in this example; explaining the details behind this nearest neighborhood search algorithm is beyond the scope of this document.
End of explanation
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
augmented_query_factors.shape
k = 5
# Controlling the recall by setting ef, should always be > k
hnsw.set_ef(70)
# retrieve the top-n search neighbors
label, distance = hnsw.knn_query(augmented_query_factors, k=k)
print(label)
# the distance returned by hnsw is 1 - inner product, hence
# we convert it back to just inner product
print(1 - distance)
Explanation: To generate the prediction, we first transform the incoming "queries". $\mathbf{x}^* = h(\mathbf{x}) = (0, \mathbf{x}^T)^T$.
End of explanation
%%timeit
recommend_all(query_factors, index_factors, topn=k)
%%timeit
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
hnsw.knn_query(augmented_query_factors, k=k)
Explanation: Benchmark
We can time the original recommend method using maximum inner product versus the new method of using the order preserving transformed matrices with nearest neighborhood search.
End of explanation
labels, distances = recommend_all(query_factors, index_factors, topn=k)
hnsw_labels, hnsw_distances = hnsw.knn_query(augmented_query_factors, k=k)
def compute_label_precision(optimal_labels, reco_labels):
n_labels = len(optimal_labels)
label_precision = 0.0
for optimal_label, reco_label in zip(optimal_labels, reco_labels):
topn = len(reco_label)
precision = len(set(optimal_label) & set(reco_label)) / topn
label_precision += (precision / n_labels)
return round(label_precision, 3)
# as expected, the precision between itself should be 1
label_precision = compute_label_precision(labels, labels)
label_precision
# ensure the approximate neighborhood search is of good quality
label_precision = compute_label_precision(labels, hnsw_labels)
label_precision
Explanation: Note that the timing is highly dependent on the dataset. We'll observe a much larger speedup if the number of items/labels in the output/index factor is larger. In the movielens dataset, we only had to rank the top items for each user among 1.6K items, in a much larger dataset, the number of items could easily go up to 100K or even million, that's when we'll see the real potential of this method.
Another thing worth checking is the quality of the prediction using the new method. Here we're using hnswlib library to generate the nearest neighborhood items, as hnswlib is technically an approximate nearest neighborhood algorithm. We can measure how much overlap the approximate top recommendations are to the original top recommendations to make sure we are using the right parameters for the nearest neighborhood search algorithm. Notation-wise:
\begin{align}
\text{overlap@k} = \frac{|L_{rec} \cap L_{opt}|}{k}
\end{align}
Where $L_{rec}$ and $L_{opt}$ are the lists of top k approximate recommendations and top k optimal/original recommendations respectively.
End of explanation |
10,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
return (2/L) * np.sin((nx * np.pi * x)/L) * np.sin((ny * np.pi * y)/L)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
print(well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1))
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
# YOUR CODE HERE
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
plt.contourf(well2d(x, y, 3, 2, 1), 20, cmap="gnuplot")
plt.colorbar()
plt.tick_params(axis = "x", direction = "out", length = 5)
plt.tick_params(axis = "y", direction = "out", length = 5)
plt.box(False)
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to your visualization.
First make a plot using one of the contour functions:
End of explanation
# YOUR CODE HERE
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
plt.pcolormesh(well2d(x, y, 3, 2, 1), cmap="gnuplot", alpha=0.9)
plt.colorbar()
plt.tick_params(axis = "x", direction = "out", length = 5)
plt.tick_params(axis = "y", direction = "out", length = 5)
plt.box(False)
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
10,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machines
Let's create the same fake income / age clustered data that we used for our K-Means clustering example
Step1: Now we'll use linear SVC to partition our graph into clusters
Step2: By setting up a dense mesh of points in the grid and classifying all of them, we can render the regions of each cluster as distinct colors
Step3: Or just use predict for a given point | Python Code:
import numpy as np
#Create fake income/age clusters for N people in k clusters
def createClusteredData(N, k):
pointsPerCluster = float(N)/k
X = []
y = []
for i in range (k):
incomeCentroid = np.random.uniform(20000.0, 200000.0)
ageCentroid = np.random.uniform(20.0, 70.0)
for j in range(int(pointsPerCluster)):
X.append([np.random.normal(incomeCentroid, 10000.0), np.random.normal(ageCentroid, 2.0)])
y.append(i)
X = np.array(X)
y = np.array(y)
return X, y
%matplotlib inline
from pylab import *
(X, y) = createClusteredData(100, 5)
plt.figure(figsize=(8, 6))
plt.scatter(X[:,0], X[:,1], c=y.astype(float))
plt.show()
Explanation: Support Vector Machines
Let's create the same fake income / age clustered data that we used for our K-Means clustering example:
End of explanation
from sklearn import svm, datasets
C = 1.0
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
Explanation: Now we'll use linear SVC to partition our graph into clusters:
End of explanation
def plotPredictions(clf):
xx, yy = np.meshgrid(np.arange(0, 250000, 10),
np.arange(10, 70, 0.5))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
plt.figure(figsize=(8, 6))
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
    plt.scatter(X[:,0], X[:,1], c=y.astype(float))
plt.show()
plotPredictions(svc)
Explanation: By setting up a dense mesh of points in the grid and classifying all of them, we can render the regions of each cluster as distinct colors:
End of explanation
print(svc.predict([[200000, 40]]))
print(svc.predict([[50000, 65]]))
Explanation: Or just use predict for a given point:
End of explanation |
10,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BI Test
Candidate
Step1: 1. Using mtcars, get the average miles per gallon for the Mercedes brand. Assign it to a variable x.
Step2: 2. Test whether there is a correlation between the weight of the car and its fuel consumption. Is there one? Why?
Step3: <font size="4"> There is a "strong" negative or inverse linear correlation between vehicle weight and mileage
Step4: 4. Using cars, what is the braking distance if the car is traveling at 90 miles per hour?
Step5: Basic SQL Questions
Step6: What is the result of the query below?
SELECT * FROM users LEFT JOIN tasks ON users.id = tasks.id_resp;
Step7: <font size="4"> The query returns all rows from the left table (users) plus the matched records from the right table (tasks). The result is NULL (NA in R, None in Python) on the right side for the unmatched rows.</font>
2. Describe each type of JOIN
Step8: a. Write the WHERE clause that returns the quantities for period 201705 for the state of PR when the quantities are greater than 80.
Step9: b. Which id rows will be returned?
The rows whose id values are 1 and 2.
6. Given the tables below
Step10: a. Write a query containing the result of the two tables joined together, renaming the status field of the users table to funcionario_ativo.
The query would be
Step11: However, SQLite does not support RIGHT and FULL OUTER JOIN. Therefore, I had to "emulate" the FULL OUTER JOIN command using the UNION and LEFT JOIN clauses.
Source
Step12: b. Write another query that returns the events with the name of the person responsible. The result must not include the status fields of either table, but must include a new field status_do_evento built as follows | Python Code:
# Links to the R datasets:
mtcars_link = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/mtcars.csv'
quakes_link = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/datasets/quakes.csv'
cars_link = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/datasets/cars.csv'
Explanation: BI Test
Candidate: Jayme Anchante
Basic R Questions:
End of explanation
import pandas as pd
mtcars = pd.read_csv(mtcars_link)
mtcars.head()
mtcars.rename(columns = {'Unnamed: 0': 'name'}, inplace = True)
mtcars.head()
x = mtcars.mpg[mtcars.name.str[:4] == 'Merc'].mean()
x
Explanation: 1. Using mtcars, get the average miles per gallon for the Mercedes brand. Assign it to a variable x.
End of explanation
mtcars[['mpg', 'wt']].corr()
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt; plt.style.use('ggplot')
mpg_wt = mtcars[['mpg', 'wt']]
joint = sns.jointplot('wt', 'mpg', data = mpg_wt, kind = 'reg', size = 12)
plt.subplots_adjust(top=0.95)
joint.fig.suptitle('Correlação entre peso do veículo e consumo de combustível', fontsize = 28)
plt.xlabel('peso (1000 lbs) ', fontsize = 22)
plt.ylabel('consumo (Miles/(US) gallon)', fontsize = 22);
Explanation: 2. Test whether there is a correlation between the weight of the car and its fuel consumption. Is there one? Why?
End of explanation
quakes = pd.read_csv(quakes_link)
quakes.head()
quakes.rename(columns = {'Unnamed: 0': 'id'}, inplace = True)
print('A maior magnitude de um terremoto é', quakes['mag'].max(), 'na escala Richter!')
print('A magnitude média é de', round(quakes['mag'].mean(), 4), 'na escala Richter')
print('O desvio das magnitudes é de', round(quakes['mag'].std(), 4))
Explanation: <font size="4"> There is a "strong" negative or inverse linear correlation between vehicle weight and mileage: the heavier the car, the lower the mileage. The engine probably "demands" more fuel to move heavier vehicles than lighter ones. </font>
3. Using quakes, what is the largest earthquake magnitude? What is the mean magnitude? And the standard deviation of the magnitudes?
End of explanation
cars = pd.read_csv(cars_link)
cars.tail()
del cars['Unnamed: 0']
cars['speed'].max()
joint = sns.jointplot('speed', 'dist', data = cars, kind = 'reg', size = 12)
plt.subplots_adjust(top=0.95)
joint.fig.suptitle('Correlação entre a velocidade e a distância de frenagem', fontsize = 28)
plt.xlabel('velocidade (mph)', fontsize = 22)
plt.ylabel('distância (ft)', fontsize = 22);
speed = cars['speed'].reshape(50, 1)
dist = cars['dist'].reshape(50, 1)
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(X = speed, y = dist)
reg.coef_
print('A distância de frenagem é de', reg.predict(90), 'ft caso o carro esteja a 90 mph')
Explanation: 4. Using cars, what is the braking distance if the car is traveling at 90 miles per hour?
End of explanation
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE users
(id int, name text)''')
c.execute('''CREATE TABLE tasks
(id int, event text, id_resp int)''')
# Insert a row of data
c.execute('''INSERT INTO users VALUES
(1, 'Igor Sanchez'),
(2, 'Joao Junior'),
(3, 'Rodrigo Pinto'),
(4, 'Amandio Pereira'),
(5, 'Karoline Leal')''')
# Insert a row of data
c.execute('''INSERT INTO tasks VALUES
(1, 'send report', 3),
(2, 'drink coffee', 2),
(3, 'travel CWB', 3),
(4, 'call mkt', 6)''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from users"):
print(row)
for row in c.execute("SELECT * from tasks"):
print(row)
Explanation: Basic SQL Questions:
1. Given the tables below:
End of explanation
for row in c.execute('''SELECT * FROM users
LEFT JOIN tasks
ON users.id = tasks.id_resp'''):
print(row)
Explanation: What is the result of the query below?
SELECT * FROM users LEFT JOIN tasks ON users.id = tasks.id_resp;
End of explanation
# Create table
c.execute('''CREATE TABLE firmas
(id int, periodo int, estado text, origem text, qtd_users int)''')
# Insert a row of data
c.execute('''INSERT INTO firmas VALUES
(3, 201705, 'PR', 'MGservico', 80),
(1, 201705, 'PR', 'MGservico', 100),
(2, 201705, 'PR', 'MGservico', 110),
(4, 201705, 'RS', 'MGcomercio', 50),
(5, 201706, 'RS', 'MGcomercio', 200),
(6, 201706, 'SP', 'Abertura', 250),
(7, 201706, 'SP', 'Abertura', 400),
(8, 201706, 'SP', 'Abertura', 310)''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from firmas"):
print(row)
Explanation: <font size="4"> The query returns all rows from the left table (users) plus the matched records from the right table (tasks). The result is NULL (NA in R, None in Python) on the right side for the unmatched rows.</font>
2. Describe each type of JOIN:
• Left Join: the query returns all rows from the left table plus the matched records from the right table. The result is NULL (NA in R, None in Python) on the right side for unmatched rows.
• Right Join: the query returns all rows from the right table plus the matched records from the left table. The result is NULL (NA in R, None in Python) on the left side for unmatched rows.
• Inner Join: the query returns only the records whose values match in both tables.
• Full Join: the query returns all records that match in either the left or the right table, i.e. all rows from both tables.
3. What is a table's primary key?
The primary key uniquely identifies each record in a table.
A primary key must contain only unique values and cannot contain NULL values (NA in R, None in Python).
A table can have only one primary key, which may consist of one field or of multiple fields (columns); a minimal declaration sketch follows below.
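For illustration only (this snippet is an addition, not part of the candidate's original answers), a primary key can be declared through the same sqlite3 cursor used above:

```python
c.execute('''CREATE TABLE IF NOT EXISTS employees
             (id INTEGER PRIMARY KEY,  -- uniquely identifies each record, cannot be NULL
              name TEXT)''')
```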
4. Which of these functions are meant for aggregating data with GROUP BY?
LEN(), RIGHT(), SUM(), REPLACE(), COUNT(), CONCAT(), ABS()
The aggregation functions used with GROUP BY in this list are SUM() and COUNT(); a short usage example follows below.
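Again purely as an added illustration, SUM() and COUNT() are typically paired with GROUP BY; the firmas table referenced here is the one created in the code cell above:

```python
for row in c.execute('''SELECT estado, COUNT(*) AS n_rows, SUM(qtd_users) AS total_users
                        FROM firmas
                        GROUP BY estado'''):
    print(row)
```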
5. Given the table:
End of explanation
for row in c.execute('''SELECT * FROM firmas
WHERE periodo = 201705 AND estado = "PR" AND qtd_users > 80 '''):
print(row)
Explanation: a. Write the WHERE clause that returns the quantities for period 201705 for the state of PR when the quantities are greater than 80.
End of explanation
c = conn.cursor()
c.execute("DROP TABLE users")
c.execute("DROP TABLE tasks")
# Create table
c.execute('''CREATE TABLE users
(id int, name text, status text)''')
c.execute('''CREATE TABLE tasks
(id int, event text, id_resp int, status text)''')
# Insert a row of data
c.execute('''INSERT INTO users VALUES
(1, 'Igor Sanchez', 'ativo'),
(2, 'Joao Junior', 'ativo'),
(3, 'Rodrigo Pinto', 'inativo'),
(4, 'Amandio Pereira', 'inativo'),
(5, 'Karoline Leal', 'ativo')''')
# Insert a row of data
c.execute('''INSERT INTO tasks VALUES
(1, 'send report', 3, 'null'),
(2, 'drink coffee', 2, 'undone'),
(3, 'travel CWB', 3, 'null'),
(4, 'call mkt', 6, 'done'),
(5, 'feed the badger', 2, 'undone'),
(4, 'buy a badger', 6, 'done')''')
# Save (commit) the changes
conn.commit()
for row in c.execute("SELECT * from users"):
print(row)
for row in c.execute("SELECT * FROM tasks"):
print(row)
Explanation: b. Which id rows will be returned?
The rows whose id values are 1 and 2.
6. Given the tables below:
End of explanation
for row in c.execute('''SELECT *, users.status AS funcionario_ativo FROM users
FULL OUTER JOIN tasks ON users.id = tasks.id_resp'''):
print(row)
Explanation: a. Write a query containing the result of the two tables joined together, renaming the status field of the users table to funcionario_ativo.
The query would be:
End of explanation
for row in c.execute('''SELECT u.id, u.name, u.status AS funcionário_ativo, t.id, t.event, t.id_resp, t.status
FROM users u
LEFT JOIN tasks t ON u.id = t.id_resp
UNION ALL
SELECT u.id, u.name, u.status, t.id, t.event, t.id_resp, t.status
FROM tasks t
LEFT JOIN users u ON u.id = t.id_resp
WHERE u.status IS NULL'''):
print(row)
Explanation: However, SQLite does not support RIGHT and FULL OUTER JOIN, so I had to "emulate" the FULL OUTER JOIN command using the UNION and LEFT JOIN clauses.
Source: http://www.sqlitetutorial.net/sqlite-full-outer-join/
End of explanation
for row in c.execute('''SELECT users.name, tasks.event, CASE
WHEN users.status = "ativo" AND tasks.status = "done" THEN "sucesso"
WHEN users.status = "ativo" AND tasks.status = "undone" THEN "falha"
WHEN users.status = "inativo" AND tasks.status = "null" then "reatribuir"
END AS status_do_evento FROM tasks
LEFT JOIN users
ON users.id = tasks.id_resp'''):
print(row)
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
conn.close()
Explanation: b. Write another query that returns the events with the name of the person responsible. The result must not include the status fields of either table, but must include a new field status_do_evento built as follows:
• if the employee's status is ativo and the event status is done, mark it as sucesso (success)
• if the employee's status is ativo and the event status is undone, mark it as falha (failure)
• if the employee's status is inativo and the event status is null, mark it as reatribuir (reassign)
End of explanation |
10,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widget Events
In this lecture we will discuss widget events, such as button clicks!
Special events
Step1: The Button is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register function to be called when the button is clicked. The doc string of the on_click can be seen below.
Step2: Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the on_click method, a button that prints a message when it has been clicked is shown below.
Step3: on_submit
The Text widget also has a special on_submit event. The on_submit event fires when the user hits return.
Step4: Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the on_trait_change method of the widget can be used to register a callback. The doc string for on_trait_change can be seen below.
Step5: Signatures
Mentioned in the doc string, the callback registered can have 4 possible signatures
Step6: Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
Linking traitlets attributes from the server side
The first method is to use the link and dlink functions from the traitlets module.
Step7: Function traitlets.link and traitlets.dlink return a Link or DLink object. The link can be broken by calling the unlink method.
Step8: Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
Step9: Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method. | Python Code:
from __future__ import print_function
Explanation: Widget Events
In this lecture we will discuss widget events, such as button clicks!
Special events
End of explanation
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
Explanation: The Button is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register function to be called when the button is clicked. The doc string of the on_click can be seen below.
End of explanation
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
Explanation: Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the on_click method, a button that prints a message when it has been clicked is shown below.
End of explanation
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
Explanation: on_submit
The Text widget also has a special on_submit event. The on_submit event fires when the user hits return.
End of explanation
print(widgets.Widget.on_trait_change.__doc__)
Explanation: Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the on_trait_change method of the widget can be used to register a callback. The doc string for on_trait_change can be seen below.
End of explanation
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(name, value):
print(value)
int_range.on_trait_change(on_value_change, 'value')
Explanation: Signatures
Mentioned in the doc string, the callback registered can have 4 possible signatures:
callback()
callback(trait_name)
callback(trait_name, new_value)
callback(trait_name, old_value, new_value)
Using this method, an example of how to output an IntSlider's value as it is changed can be seen below.
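Note: in more recent releases of ipywidgets, on_trait_change is deprecated in favour of observe, which passes a single change dictionary to the callback. Whether this applies depends on the installed version, so treat the following as a version-dependent aside:

```python
def on_value_change(change):
    # 'change' carries keys such as 'name', 'old' and 'new'
    print(change['new'])

int_range.observe(on_value_change, names='value')
```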
End of explanation
import traitlets
# Create Caption
caption = widgets.Latex(value = 'The values of slider1 and slider2 are synchronized')
# Create IntSlider
slider1 = widgets.IntSlider(description='Slider 1')
slider2 = widgets.IntSlider(description='Slider 2')
# Use traitlets to link
l = traitlets.link((slider1, 'value'), (slider2, 'value'))
# Display!
display(caption, slider1, slider2)
# Create Caption
caption = widgets.Latex(value = 'Changes in source values are reflected in target1')
# Create Sliders
source = widgets.IntSlider(description='Source')
target1 = widgets.IntSlider(description='Target 1')
# Use dlink
dl = traitlets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
Explanation: Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
Linking traitlets attributes from the server side
The first method is to use the link and dlink functions from the traitlets module.
End of explanation
# May get an error depending on order of cells being run!
l.unlink()
dl.unlink()
Explanation: Function traitlets.link and traitlets.dlink return a Link or DLink object. The link can be broken by calling the unlink method.
End of explanation
# NO LAG VERSION
caption = widgets.Latex(value = 'The values of range1 and range2 are synchronized')
range1 = widgets.IntSlider(description='Range 1')
range2 = widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
# NO LAG VERSION
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1')
source_range = widgets.IntSlider(description='Source range')
target_range1 = widgets.IntSlider(description='Target range ')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
Explanation: Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
End of explanation
l.unlink()
dl.unlink()
Explanation: Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
End of explanation |
10,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate Jensen-Shannon Divergence Between Luke and John
The KL Divergence between two discrete distributions $P$ and $Q$ with pdfs $p$ and $q$, defined over the same sample space $X={x_0,x_1, \dots, x_N}$, is given by
\begin{equation}
KL(P||Q) = \sum_{i=0}^N p(x_i) \ln \Big( \frac{p(x_i)}{q(x_i)} \Big)
\end{equation}
This divergence is not a metric because it is not symmetric, i.e. it is often the case that $KL(P||Q) \ne KL(Q||P)$. To address this, we will use the Jensen-Shannon Divergence which is a true metric and is defined as
\begin{equation}
JSD(P||Q) = \frac{1}{2}KL(P||R) + \frac{1}{2}KL(Q||R)
\end{equation}
where $R$ is defined as the average of the two distributions $R=\frac{P+Q}{2}$
Step1: If we assume that our histograms are exact, then we can trivially calculate the Jensen-Shannon Divergence between them by normalizing the histograms and summing up the terms on the RHS of the JSD equation...
Step2: It is symmetric...
Step3: and returns zero when operating on the same image... | Python Code:
from PIL import Image
import numpy as np
luke = Image.open("/home/nathan/Downloads/Luke_Van_Poppering.jpeg")
luke.thumbnail((300,300)) # Thanks for this, John....
john = Image.open("/home/nathan/Downloads/John_Abascal.jpg")
john.thumbnail((300,300))
luke
john
Explanation: Calculate Jensen-Shannon Divergence Between Luke and John
The KL Divergence between two discrete distributions $P$ and $Q$ with pdfs $p$ and $q$, defined over the same sample space $X={x_0,x_1, \dots, x_N}$, is given by
\begin{equation}
KL(P||Q) = \sum_{i=0}^N p(x_i) \ln \Big( \frac{p(x_i)}{q(x_i)} \Big)
\end{equation}
This divergence is not a metric because it is not symmetric, i.e. it is often the case that $KL(P||Q) \ne KL(Q||P)$. To address this, we will use the Jensen-Shannon Divergence which is a true metric and is defined as
\begin{equation}
JSD(P||Q) = \frac{1}{2}KL(P||R) + \frac{1}{2}KL(Q||R)
\end{equation}
where $R$ is defined as the average of the two distributions $R=\frac{P+Q}{2}$
End of explanation
def KL(p: np.array, q: np.array):
val = 0
for i,pi in enumerate(p):
if pi == 0:
continue
val += pi*np.log2(pi/q[i])
return val
def JSD(p: np.array, q: np.array):
r = (p+q)/2
p = p[r != 0] # If r_i is zero, then it is in neither p nor q and can be ignored
q = q[r != 0]
r = r[r != 0]
val = 0.5*(KL(p,r)+KL(q,r))
return val
def hist_loss(im1, im2):
im1_channels = im1.split()
im2_channels = im2.split()
loss = []
for im1_c, im2_c in zip(im1_channels,im2_channels):
hist1 = np.array(im1_c.histogram())
hist2 = np.array(im2_c.histogram())
loss.append(JSD(hist1/hist1.sum(), hist2/hist2.sum()))
return np.mean(loss)
Explanation: If we assume that our histograms are exact, then we can trivially calculate the Jensen-Shannon Divergence between them by normalizing the histograms and summing up the terms on the RHS of the JSD equation...
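As an optional cross-check (assuming scipy is available; scipy.spatial.distance.jensenshannon returns the Jensen-Shannon distance, i.e. the square root of the divergence), the hand-rolled JSD above can be compared against scipy on a small example:

```python
from scipy.spatial.distance import jensenshannon

p = np.array([0.1, 0.4, 0.5])
q = np.array([0.3, 0.3, 0.4])
# base=2 matches the log2 used in KL(); squaring turns the distance into the divergence
print(JSD(p, q), jensenshannon(p, q, base=2) ** 2)
```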
End of explanation
hist_loss(luke, john) == hist_loss(john, luke)
Explanation: It is symmetric...
End of explanation
hist_loss(john, john)
hist_loss(luke, luke)
hist_loss(np.zeros(255),np.ones(255))
JSD(np.zeros(255),np.ones(255)/255)
Explanation: and returns zero when operating on the same image...
End of explanation |
10,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python: Exercises for Weeks 1 and 2
Exercises
Answer the questions described in bold below. Use any suitable approach if needed; the goal is to master the skills and build the program you want, so don't worry too much about the exact implementation.
What is 7 to the 4th power?
Split the following string
s = "Hi there Sam!"
into a list
Given the following two variables
planet = "Earth"
diameter = 12742
Use the format() function to print the following string
The diameter of Earth is 12742 kilometers.
Step1: Given the following nested list, use indexing to retrieve the word 'hello'
Step2: Given the following nested dictionary, grab the word "hello" from it
Step3: What is the difference between a dictionary and a list?
Step4: Write a function that can extract the domain part of an email address like the one below
[email protected]
So, for this example, passing in "[email protected]" would return | Python Code:
planet = "Earth"
diameter = 12742
Explanation: Introduction to Python: Exercises for Weeks 1 and 2
Exercises
Answer the questions described in bold below. Use any suitable approach if needed; the goal is to master the skills and build the program you want, so don't worry too much about the exact implementation.
What is 7 to the 4th power?
Split the following string
s = "Hi there Sam!"
into a list
Given the following two variables
planet = "Earth"
diameter = 12742
Use the format() function to print the following string
The diameter of Earth is 12742 kilometers.
End of explanation
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
Explanation: Given the following nested list, use indexing to retrieve the word 'hello'
End of explanation
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
Explanation: Given the following nested dictionary, grab the word "hello" from it
End of explanation
# Just answer with text, no code necessary
Explanation: What is the difference between a dictionary and a list?
End of explanation
def fib_dyn(n):
a,b = 1,1
for i in range(n-1):
a,b = b,a+b
return a
fib_dyn(10)
Explanation: Write a function that can extract the domain part of an email address like the one below
[email protected]
So, for this example, passing in "[email protected]" would return: domain.com
Create a function that counts how many times 'dog' appears in the input string (ignore corner cases).
Create a function that checks whether 'dog' is contained in the input string (again, ignore corner cases).
If you drive too fast, a traffic officer will pull you over. Write a function that returns one of three possible results: "No ticket", "Small ticket", or "Big Ticket".
If the speed is 60 or less, the result is "No Ticket". If the speed is between 61 and 80, the result is "Small Ticket". If the speed is greater than 81, the result is "Big Ticket". Unless it is your birthday (passed in as a boolean value), in which case you are allowed an extra 5 (km/h) over the limit that day. (Again, ignore corner cases.)
Compute the Fibonacci sequence, implemented with a generator (a sketch follows below).
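One way the generator version of the Fibonacci exercise could look (a sketch of mine, not the original solution; fib_gen is a name introduced here, and the fib_dyn function shown earlier is iterative rather than a generator):

```python
def fib_gen(n):
    a, b = 1, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

list(fib_gen(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```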
End of explanation |
10,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
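# Optional extra check: fraction of all word occurrences covered by the
# 10000-word vocabulary (a high fraction suggests 10000 words is enough)
coverage = sum(total_counts[word] for word in vocab) / sum(total_counts.values())
print('Vocabulary coverage: {:.1%}'.format(coverage))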
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
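# Quick check (optional): the most frequent word should map to index 0
print(vocab[0], '->', word2idx[vocab[0]])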
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
    # Start with a zero vector the length of the vocabulary
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        # Look up the word's index; words outside the vocabulary are skipped
        idx = word2idx.get(word, None)
        if idx is not None:
            word_vector[idx] += 1
    return word_vector
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
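# Optional shape check: one row per review, one column per vocabulary word
print(word_vectors.shape)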
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)
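# Optional shape check: roughly a 90/10 train/test split of the 25000 reviews
print(trainX.shape, trainY.shape, testX.shape, testY.shape)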
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
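# Optional: save the trained weights so they can be reloaded later without
# retraining (the filename here is arbitrary)
model.save('sentiment_model.tfl')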
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
Explanation: Try out your own text!
End of explanation |
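Below are a couple of example calls to get you started; the sentences are just illustrative, so substitute any text you like.
# Example usage (illustrative sentences only)
test_sentence("This movie was absolutely wonderful, I loved every minute of it.")
test_sentence("It was a boring, predictable mess and a waste of two hours.")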