Unnamed: 0 | text_prompt | code_prompt
---|---|---|
13,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1: Following the complete code for computing $\sum_{i=1}^{m}i + \sum_{i=1}^{n}i + \sum_{i=1}^{k}i$, write a program that computes m!+n!+k!
Step1: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7... In the main program, set n=1000 and n=100000 and print 4 times the value returned by the function.
Step2: Exercise 3: Rewrite Exercises 1 and 4 from task3 as functions and call them.
TASK3 Exercise 1: Write a program that reads the user's name (e.g. Mr. right) from the keyboard, asks for the birth month and day, and determines the user's zodiac sign. If, for example, the user is a Taurus, it outputs: Mr. right, you are a Taurus with a lot of personality!
Step3: TASK3 Exercise 4: Convert an English word from singular to plural: given an English word (singular form) as input, output its plural form or give a suggestion for the singular-to-plural conversion (hint: some_string.endswith(some_letter) checks how a string ends; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
Step4: Challenge exercise: Write a program that computes the cumulative sum from integer m to integer n with step k. The summation must be implemented as a function; the main program reads m, n and k from the user and calls the function to verify correctness. | Python Code:
def total (m):
i=0
result=1
while i<m:
i+=1
result*=i
return result
m=int(input('please enter an integer. '))
n=int(input('please enter an integer. '))
k=int(input('please enter an integer. '))
print('The result of ',m,'!+',n,'!+',k,'! is :',total(m)+total(n)+total(k))
Explanation: Exercise 1: Following the complete code for computing $\sum_{i=1}^{m}i + \sum_{i=1}^{n}i + \sum_{i=1}^{k}i$, write a program that computes m!+n!+k!
End of explanation
def sum (n):
i=0
total=0
while i<n:
i+=1
if i%2==0:
total-=1/(2*i-1)
else:
total+=1/(2*i-1)
return total
n=int(input('please enter an integer. '))
m=int(input('please enter an integer. '))
print('result is :',4*sum(n))
print('result is :',4*sum(m))
Explanation: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7... In the main program, set n=1000 and n=100000 and print 4 times the value returned by the function.
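A quick sanity check for this exercise: the series above is the Leibniz series for $\pi/4$, i.e. $4\sum_{i=0}^{\infty}\frac{(-1)^i}{2i+1}=\pi$, so printing 4 times the partial sums for n=1000 and n=100000 should give values converging towards 3.14159...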
End of explanation
def find_star ( n,name):
if 321<=n<=419 :
print('Mr',name,'你是白羊座')
elif 420<=n<=520 :
print('Mr',name,'你是非常有个性的金牛座!')
elif 521<=n<=621 :
print('Mr',name,'你是双子座')
elif 622<=n<=722 :
print('Mr',name,'你是巨蟹座')
elif 723<=n<=822 :
print('Mr',name,'你是狮子座')
elif 823<=n<=922 :
print('Mr',name,'你是处女座')
elif 923<=n<=1023 :
print('Mr',name,'你是天秤座')
elif 1024<=n<=1122 :
print('Mr',name,'你是天蝎座')
elif 1123<=n<=1221 :
print('Mr',name,'你是射手座')
elif 1222<=n<=1231 or 101<=n<=119 :
print('Mr',name,'你是摩羯座')
elif 120<=n<=218 :
print('Mr',name,'你是水瓶座')
else :
print('Mr',name,'你是双鱼座')
print('what is your name ?')
name=input()
print('when is your birthday ?for example 3 月 4日 以如下格式输入:304')
dat=int(input())
find_star(dat,name)
Explanation: Exercise 3: Rewrite Exercises 1 and 4 from task3 as functions and call them.
TASK3 Exercise 1: Write a program that reads the user's name (e.g. Mr. right) from the keyboard, asks for the birth month and day, and determines the user's zodiac sign. If, for example, the user is a Taurus, it outputs: Mr. right, you are a Taurus with a lot of personality!
End of explanation
def change(word):
if word.endswith('sh') or word.endswith('ch') or word.endswith('x') or word.endswith('s'):
print(word,'es',sep='')
else:
print(word,'s',sep='')
word=input('please enter a word :')
change(word)
Explanation: TASK3 Exercise 4: Convert an English word from singular to plural: given an English word (singular form) as input, output its plural form or give a suggestion for the singular-to-plural conversion (hint: some_string.endswith(some_letter) checks how a string ends; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
End of explanation
def sum (m,n,k):
i=m
total=m
while i<n and i+k<=n :
i+=k
total+=i
return total
m=int(input('please enter the from integer :'))
n=int(input('please enter the to integer : '))
k=int(input('please enter the gap :'))
print('The result :',sum(m,n,k))
Explanation: Challenge exercise: Write a program that computes the cumulative sum from integer m to integer n with step k. The summation must be implemented as a function; the main program reads m, n and k from the user and calls the function to verify correctness.
End of explanation |
13,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tICA vs. PCA
This example uses OpenMM to generate example data to compare two methods for dimensionality reduction
Step1: Run Dynamics
Okay, let's run the dynamics. The first plot below shows the x, y and z coordinate vs. time for the trajectory, and
the second plot shows each of the 1D and 2D marginal distributions.
Step2: Note that the variance of x is much lower than the variance in y or z, despite its bi-modal distribution.
Fit tICA and PCA models
Step3: See what they find
Step4: Note that the first tIC "finds" a projection that just resolves the x coordinate, whereas PCA doesn't. | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
xx, yy = np.meshgrid(np.linspace(-2,2), np.linspace(-3,3))
zz = 0 # We can only visualize so many dimensions
ww = 5 * (xx-1)**2 * (xx+1)**2 + yy**2 + zz**2
c = plt.contourf(xx, yy, ww, np.linspace(-1, 15, 20), cmap='viridis_r')
plt.contour(xx, yy, ww, np.linspace(-1, 15, 20), cmap='Greys')
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.colorbar(c, label='$E(x, y, z=0)$')
plt.tight_layout()
import simtk.openmm as mm
def propagate(n_steps=10000):
system = mm.System()
system.addParticle(1)
force = mm.CustomExternalForce('5*(x-1)^2*(x+1)^2 + y^2 + z^2')
force.addParticle(0, [])
system.addForce(force)
integrator = mm.LangevinIntegrator(500, 1, 0.02)
context = mm.Context(system, integrator)
context.setPositions([[0, 0, 0]])
context.setVelocitiesToTemperature(500)
x = np.zeros((n_steps, 3))
for i in range(n_steps):
x[i] = (context.getState(getPositions=True)
.getPositions(asNumpy=True)
._value)
integrator.step(1)
return x
Explanation: tICA vs. PCA
This example uses OpenMM to generate example data to compare two methods for dimensionality reduction:
tICA and PCA.
Define dynamics
First, let's use OpenMM to run some dynamics on the 3D potential energy function
$$E(x,y,z) = 5 \cdot (x-1)^2 \cdot (x+1)^2 + y^2 + z^2$$
From looking at this equation, we can see that along the x dimension,
the potential is a double-well, whereas along the y and z dimensions,
we've just got a harmonic potential. So, we should expect that x is the slow
degree of freedom, whereas the system should equilibrate rapidly along y and z.
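A quick check of the numbers behind this: along $x$ (with $y=z=0$) the two minima sit at $x=\pm 1$ where $E=0$, while the barrier between them at $x=0$ is $E = 5\cdot(0-1)^2\cdot(0+1)^2 = 5$; along $y$ and $z$ the potential is purely harmonic with no barrier, which is why $x$ relaxes much more slowly.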
End of explanation
trajectory = propagate(10000)
ylabels = ['x', 'y', 'z']
for i in range(3):
plt.subplot(3, 1, i+1)
plt.plot(trajectory[:, i])
plt.ylabel(ylabels[i])
plt.xlabel('Simulation time')
plt.tight_layout()
Explanation: Run Dynamics
Okay, let's run the dynamics. The first plot below shows the x, y and z coordinate vs. time for the trajectory, and
the second plot shows each of the 1D and 2D marginal distributions.
End of explanation
from msmbuilder.decomposition import tICA, PCA
tica = tICA(n_components=1, lag_time=100)
pca = PCA(n_components=1)
tica.fit([trajectory])
pca.fit([trajectory])
Explanation: Note that the variance of x is much lower than the variance in y or z, despite its bi-modal distribution.
Fit tICA and PCA models
End of explanation
plt.subplot(1,2,1)
plt.title('1st tIC')
plt.bar([1,2,3], tica.components_[0], color='b')
plt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])
plt.subplot(1,2,2)
plt.title('1st PC')
plt.bar([1,2,3], pca.components_[0], color='r')
plt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])
plt.show()
print('1st tIC', tica.components_ / np.linalg.norm(tica.components_))
print('1st PC ', pca.components_ / np.linalg.norm(pca.components_))
Explanation: See what they find
End of explanation
c = plt.contourf(xx, yy, ww, np.linspace(-1, 15, 20), cmap='viridis_r')
plt.contour(xx, yy, ww, np.linspace(-1, 15, 20), cmap='Greys')
plt.plot([0, tica.components_[0, 0]],
[0, tica.components_[0, 1]],
lw=5, color='b', label='tICA')
plt.plot([0, pca.components_[0, 0]],
[0, pca.components_[0, 1]],
lw=5, color='r', label='PCA')
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.legend(loc='best')
plt.tight_layout()
Explanation: Note that the first tIC "finds" a projection that just resolves the x coordinate, whereas PCA doesn't.
End of explanation |
13,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linux Interactive System Analysis DEMO
Get LISA and start the Notebook Server
Official repository on GitHub - ARM Software
Step1: Advanced usage
Step2: Commands execution on remote target
Step3: Example of frameworks configuration on remote target
Configure CPUFreq governor to be "sched-freq"
Step4: Create a big/LITTLE partition using CGroups
Step5: Advanced usage
Step6: Advanced usage
Step7: Example of energy collected data
Step8: Example of platform description
Step9: Advanced Workload Execution
Step10: Using the TRAPpy Trace Plotter
Step11: Example of Trace Analysis
Generate DataFrames from Trace Events
Step12: Advanced DataFrame usage
Step13: Example of Behavioral Analysis
Step14: Get tasks behaviors
Step15: Check for expected behaviors
Step16: Examples of Data analysis
Which task is the most active switcher?
Step17: What are the relative residencies on different OPPs?
Step18: Example of Custom Plotting | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Execute this cell to enable verbose SSH commands
logging.getLogger('ssh').setLevel(logging.DEBUG)
# Other python modules required by this notebook
import json
import os
Explanation: Linux Interactive System Analysis DEMO
Get LISA and start the Notebook Server
Official repository on GitHub - ARM Software:<br>
https://github.com/ARM-software/lisa
Installation dependencies are listed in the main page of the repository:<br>
https://github.com/ARM-software/lisa#required-dependencies
Once cloned, source init_env to initialize the LISA Shell, which provides a convenient set of shell commands for easy access to many LISA related functions.
$ source init_env
To start the IPython Notebook Server required to use this Notebook, on a LISAShell run:
```shell
[LISAShell lisa] > lisa-ipython start
Starting IPython Notebooks...
Starting IPython Notebook server...
IP Address : http://127.0.0.1:8888/
Folder : /home/derkling/Code/lisa/ipynb
Logfile : /home/derkling/Code/lisa/ipynb/server.log
PYTHONPATH :
/home/derkling/Code/lisa/libs/bart
/home/derkling/Code/lisa/libs/trappy
/home/derkling/Code/lisa/libs/devlib
/home/derkling/Code/lisa/libs/wlgen
/home/derkling/Code/lisa/libs/utils
Notebook server task: [1] 24745
```
The main folder served by the server is:<br>
http://127.0.0.1:8888/
While the tutorial notebooks are accessible starting from this link:<br>
http://127.0.0.1:8888/notebooks/tutorial/00_LisaInANutshell.ipynb
Note that the lisa-ipython command allows to specify also interface and port in case you have several network interfaces on your host:
lisa-ipython start [interface [port]]
What is an IPython Notebook?
Let's do some examples!
Logging configuration and support modules import
End of explanation
# Setup a target configuration
conf = {
# Target is localhost
"platform" : 'linux',
# Board descriptions are described through json files in lisa/libs/utils/platforms/
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# Comment the following line to force rt-app calibration on your target
# "rtapp-calib" : {
# "0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354
# },
# FTrace events end buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240
},
# Where results are collected
"results_dir" : "LisaInANutshell",
# Devlib module required (or not required)
'modules' : [ "cpufreq", "cgroups", "cpufreq" ],
#"exclude_modules" : [ "hwmon" ],
}
# Support to access the remote target
from env import TestEnv
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(conf)
target = te.target
print "DONE"
Explanation: Advanced usage: get more confident with IPython notebooks and discover some hidden features<br>
notebooks/tutorial/01_IPythonNotebooksUsage.ipynb
Remote target connection and control
End of explanation
# Enable Energy-Aware scheduler
target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features");
# Check which sched_feature are enabled
sched_features = target.read_value("/sys/kernel/debug/sched_features");
print "sched_features:"
print sched_features
# It's possible also to run custom script
# my_script = target.get_installed()
# target.execute(my_script)
Explanation: Commands execution on remote target
End of explanation
target.cpufreq.set_all_governors('sched');
# Check which governor is enabled on each CPU
enabled_governors = target.cpufreq.get_all_governors()
print enabled_governors
Explanation: Example of frameworks configuration on remote target
Configure CPUFreq governor to be "sched-freq"
End of explanation
cpuset = target.cgroups.controller('cpuset')
# Configure a big partition
cpuset_bigs = cpuset.cgroup('/big')
cpuset_bigs.set(cpus=te.target.bl.bigs, mems=0)
# Configure a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')
cpuset_littles.set(cpus=te.target.bl.littles, mems=0)
# Dump the configuration of each controller
cgroups = cpuset.list_all()
for cgname in cgroups:
cgroup = cpuset.cgroup(cgname)
attrs = cgroup.get()
cpus = attrs['cpus']
print '{}:{:<15} cpus: {}'.format(cpuset.kind, cgroup.name, cpus)
Explanation: Create a big/LITTLE partition using CGroups::CPUSet
End of explanation
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Periodic, Ramp
# Light workload
light = Periodic(
duty_cycle_pct = 10,
duration_s = 3,
period_ms = 32,
)
# Ramp workload
ramp = Ramp(
start_pct=10,
end_pct=60,
delta_pct=20,
time_s=0.5,
period_ms=16
)
# Heavy workload
heavy = Periodic(
duty_cycle_pct=60,
duration_s=3,
period_ms=16
)
# Composed workload
lrh_task = light + ramp + heavy
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'test', calibration=te.calibration())
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind = 'profile',
# 2. define the "profile" of each task
params = {
# 3. Composed task
'task_lrh': lrh_task.get(),
},
#loadref='big',
loadref='LITTLE',
run_dir=target.working_directory
# Alternatively, it is possible to specify a json file for rt-app through:
# kind = 'custom',
# params = '/path/file.json',
);
# Inspect the JSON file used to run the application
with open('./test_00.json', 'r') as fh:
rtapp_json = json.load(fh)
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
Explanation: Advanced usage: exploring more APIs exposed by TestEnv and Devlib<br>
notebooks/tutorial/02_TestEnvUsage.ipynb
Using synthetic workloads
Generate an RTApp configuration
End of explanation
def execute(te, wload, res_dir):
logging.info('# Setup FTrace')
te.ftrace.start()
logging.info('## Start energy sampling')
te.emeter.reset()
logging.info('### Start RTApp execution')
wload.run(out_dir=res_dir)
logging.info('## Read energy consumption: %s/energy.json', res_dir)
nrg_report = te.emeter.report(out_dir=res_dir)
logging.info('# Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(res_dir, 'trace.dat')
logging.info('# Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('# Save platform description: %s/platform.json', res_dir)
plt, plt_file = te.platform_dump(res_dir)
logging.info('# Report collected data:')
logging.info(' %s', res_dir)
!tree {res_dir}
return nrg_report, plt, plt_file, trace_file
nrg_report, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir)
Explanation: Advanced usage: using WlGen to create more complex RTApp configurations or run other benchmarks (e.g. hackbench)<br>
notebooks/tutorial/03_WlGenUsage.ipynb
Execution and Energy Sampling
End of explanation
import pandas as pd
df = pd.DataFrame(list(nrg_report.channels.iteritems()),
columns=['Cluster', 'Energy'])
df = df.set_index('Cluster')
df
Explanation: Example of energy collected data
End of explanation
# Show the collected platform description
with open(os.path.join(te.res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
Explanation: Example of platform description
End of explanation
# Let's look at the trace using kernelshark...
trace_file = te.res_dir + '/trace.dat'
!kernelshark {trace_file} 2>/dev/null
Explanation: Advanced Workload Execution: using the Executor module to automate data collection for multiple tests<br>
notebooks/tutorial/04_ExecutorUsage.ipynb
Trace Visualization (the kernelshark way)
Using kernelshark
End of explanation
# Support for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(trace_file)
Explanation: Using the TRAPpy Trace Plotter
End of explanation
# Load the LISA::Trace parsing module
from trace import Trace
# Define which event we are interested into
trace = Trace(te.res_dir, [
"sched_switch",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
], te.platform)
# Let's have a look at the set of events collected from the trace
ftrace = trace.ftrace
logging.info("List of events identified in the trace:")
for event in ftrace.class_definitions.keys():
logging.info(" %s", event)
# Trace events are converted into tables, let's have a look at one
# of such tables
df = trace.data_frame.trace_event('sched_load_avg_task')
df.head()
# Simple selection of events based on conditional values
#df[df.comm == 'task_lrh'].head()
# Simple selection of specific signals
#df[df.comm == 'task_lrh'][['util_avg']].head()
# Simple statistics reporting
#df[df.comm == 'task_lrh'][['util_avg']].describe()
Explanation: Example of Trace Analysis
Generate DataFrames from Trace Events
End of explanation
# Signals can be easily plot using the ILinePlotter
trappy.ILinePlot(
# FTrace object
ftrace,
# Signals to be plotted
signals=[
'sched_load_avg_cpu:util_avg',
'sched_load_avg_task:util_avg'
],
# # Generate one plot for each value of the specified column
# pivot='cpu',
# # Generate only plots which satisfy these filters
# filters={
# 'comm': ['task_lrh'],
# 'cpu' : [0,5]
# },
# Formatting style
per_line=2,
drawstyle='steps-post',
marker = '+'
).view()
Explanation: Advanced DataFrame usage: filtering by columns/rows, merging tables, plotting data<br>
notebooks/tutorial/05_TrappyUsage.ipynb
Easily plot signals from DataFrames
End of explanation
from bart.sched.SchedMultiAssert import SchedAssert
# Create an object to get/assert scheduling behaviors
sa = SchedAssert(ftrace, te.topology, execname='task_lrh')
Explanation: Example of Behavioral Analysis
End of explanation
# Check the residency of a task on the LITTLE cluster
print "Task residency [%] on LITTLE cluster:",\
sa.getResidency(
"cluster",
te.target.bl.littles,
percent=True
)
# Check on which CPU the task start its execution
print "Task initial CPU:",\
sa.getFirstCpu()
Explanation: Get tasks behaviors
End of explanation
import operator
# Define the time window where we want focus our assertions
start_s = sa.getStartTime()
little_residency_window = (start_s, start_s + 10)
# Defined the expected task residency
EXPECTED_RESIDENCY_PCT=99
result = sa.assertResidency(
"cluster",
te.target.bl.littles,
EXPECTED_RESIDENCY_PCT,
operator.ge,
window=little_residency_window,
percent=True
)
print "Task running {} [%] of its time on LITTLE? {}"\
.format(EXPECTED_RESIDENCY_PCT, result)
result = sa.assertFirstCpu(te.target.bl.bigs)
print "Task starting on a big CPU? {}".format(result)
Explanation: Check for expected behaviors
End of explanation
# Focus on sched_switch events
df = ftrace.sched_switch.data_frame
# # Select only interesting columns
# df = df.ix[:,'next_comm':'prev_state']
# # Group sched_switch event by task switching into the CPU
# df = df.groupby('next_pid').describe(include=['object'])
# # Sort sched_switch events by number of time a task switch into the CPU
# df = df['next_comm'].sort_values(by=['count'], ascending=False)
df.head()
# # Get topmost task name and PID
# most_switching_pid = df.index[1]
# most_switching_task = df.values[1][2]
# task_name = "{}:{}".format(most_switching_pid, most_switching_task)
# # Print result
# logging.info("The most swithing task is: [%s]", task_name)
Explanation: Examples of Data analysis
Which task is the most active switcher?
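A minimal, uncommented sketch of the steps outlined in the commented code above (it assumes the ftrace.sched_switch dataframe and its next_pid/next_comm columns shown earlier):
```python
# Count how many times each task switches into a CPU, most frequent first
df = ftrace.sched_switch.data_frame
counts = (df.groupby('next_pid')['next_comm']
            .agg(['count', 'first'])
            .sort_values(by='count', ascending=False))
most_switching_pid = counts.index[0]
most_switching_task = counts['first'].iloc[0]
logging.info("The most switching task is: [%s:%s]", most_switching_pid, most_switching_task)
```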
End of explanation
# Focus on cpu_frequency events for CPU0
df = ftrace.cpu_frequency.data_frame
df = df[df.cpu == 0]
# # Compute the residency on each OPP before switching to the next one
# df.loc[:,'start'] = df.index
# df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1)
# # Group by frequency and sum-up the deltas
# freq_residencies = df.groupby('frequency')['delta'].sum()
# logging.info("Residency time per OPP:")
# df = pd.DataFrame(freq_residencies)
df.head()
# # Compute the relative residency time
# tot = sum(freq_residencies)
# #df = df.apply(lambda delta : 100*delta/tot)
# for f in freq_residencies.index:
# logging.info("Freq %10dHz : %5.1f%%", f, 100*freq_residencies[f]/tot)
# Plot residency time
import matplotlib.pyplot as plt
# Enable generation of Notebook embedded plots
%matplotlib inline
fig, axes = plt.subplots(1, 1, figsize=(16, 5));
df.plot(kind='bar', ax=axes);
Explanation: What are the relative residencies on different OPPs?
End of explanation
from perf_analysis import PerfAnalysis
# Full analysis function
def analysis(t_min=None, t_max=None):
test_dir = te.res_dir
platform_json = '{}/platform.json'.format(test_dir)
trace_file = '{}/trace.dat'.format(test_dir)
# Load platform description data
with open(platform_json, 'r') as fh:
platform = json.load(fh)
# Load RTApp Performance data
pa = PerfAnalysis(test_dir)
logging.info("Loaded performance data for tasks: %s", pa.tasks())
# Load Trace data
#events = my_tests_conf['ftrace']['events']
events = [
"sched_switch",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"cpu_frequency",
"cpu_capacity",
]
trace = Trace(test_dir, events, platform)
# Define time ranges for all the temporal plots
trace.setXTimeRange(t_min, t_max)
# Tasks performances plots
for task in pa.tasks():
pa.plotPerf(task)
# Tasks plots
trace.analysis.tasks.plotTasks(pa.tasks())
# Cluster and CPUs plots
trace.analysis.frequency.plotClusterFrequencies()
analysis()
Explanation: Example of Custom Plotting
End of explanation |
13,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APS terrain analysis
Imports
Step1: |Id |Name|
|---|---|
|3003 |Nordenskiöld Land|
|3007 |Vest-Finnmark|
|3009 |Nord-Troms|
|3010 |Lyngen|
|3011 |Tromsø|
|3012 |Sør-Troms|
|3013 |Indre Troms|
|3014 |Lofoten og Vesterålen|
|3015 |Ofoten|
|3016 |Salten|
|3017 |Svartisen|
|3022 |Trollheimen|
|3023 |Romsdal|
|3024 |Sunnmøre|
|3027 |Indre Fjordane|
|3028 |Jotunheimen|
|3029 |Indre Sogn|
|3031 |Voss|
|3032 |Hallingdal|
|3034 |Hardanger|
|3035 |Vest-Telemark|
Step2: We can calculate area above tree-line by combining elevations with a treeline mask.
Extract from region | Python Code:
# -*- coding: utf-8 -*-
%matplotlib inline
from __future__ import print_function
import pylab as plt
import datetime
import netCDF4
import numpy as np
import numpy.ma as ma
from linecache import getline
plt.rcParams['figure.figsize'] = (14, 6)
Explanation: APS terrain analysis
Imports
End of explanation
### From thredds.met.no with 2.5 km resolution
nc_thredds = netCDF4.Dataset("http://thredds.met.no/thredds/dodsC/meps25epsarchive/2017/11/12/meps_mbr0_pp_2_5km_20171112T00Z.nc")
thredds_altitude = nc_thredds.variables["altitude"]
### From hdata\grid with 1 km resolution
#nc_dem = netCDF4.Dataset(r"Y:\metdata\prognosis\meps\det\archive\2017\mepsDet00_PTW_1km_20171111.nc", "r")
nc_alpdtm = netCDF4.Dataset(r"../data/terrain_parameters/AlpDtm.nc", "r")
alpdtm = nc_alpdtm.variables["AlpDtm"]
nc_meandtm = netCDF4.Dataset(r"../data/terrain_parameters/MEANHeight.nc", "r")
meandtm = nc_meandtm.variables["MEANHeight"]
f, (ax1, ax2, ax3) = plt.subplots(1, 3)#, sharex=True, sharey=True)
#plt_thredds = ax1.imshow(np.flipud(thredds_altitude[:]), cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt_meandem = ax1.imshow(meandtm[:], cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt.colorbar(ax=ax1, mappable=plt_meandem)
plt_alpdem = ax2.imshow(alpdtm[:], cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt.colorbar(ax=ax2, mappable=plt_alpdem)
plt_diffdem = ax3.imshow(alpdtm[:]-meandtm[:], cmap=plt.cm.seismic)
plt.colorbar(ax=ax3, mappable=plt_diffdem)
#cbar_dir.set_ticks([-180, -135, -90, -45, 0, 45, 90, 135, 180])
#cbar_dir.set_ticklabels(['S', 'SV', 'V', 'NV', 'N', 'NØ', 'Ø', 'SØ', 'S'])
#plt.title(ts)
plt.show()
# Load region mask
vr = netCDF4.Dataset(r"../data/terrain_parameters/VarslingsOmr_2017.nc", "r")
regions = vr.variables["VarslingsOmr_2017"][:]
ID = 3014 # Lofoten & Vesterålen
#ID = 3029 # Indre Sogn
region_mask = np.where(regions==ID)
# get the lower left and upper right corner of a rectangle around the region
y_min, y_max, x_min, x_max = min(region_mask[0].flatten()), max(region_mask[0].flatten()), min(region_mask[1].flatten()), max(region_mask[1].flatten())
z = alpdtm[y_min:y_max, x_min:x_max]  # assumed: crop the 1 km DEM to the region rectangle (z was undefined here in the original)
plt.imshow(z)
plt.colorbar(label="M.A.S.L.")
hist, bin_edges = np.histogram(z, bins=[0, 300, 600, 900, 1200, 3000])
hist_perc = hist / (z.shape[1]*z.shape[0] )*100.0
plt.bar(bin_edges[:-1], hist_perc, width=300, color='lightgrey')
Explanation: |Id |Name|
|---|---|
|3003 |Nordenskiöld Land|
|3007 |Vest-Finnmark|
|3009 |Nord-Troms|
|3010 |Lyngen|
|3011 |Tromsø|
|3012 |Sør-Troms|
|3013 |Indre Troms|
|3014 |Lofoten og Vesterålen|
|3015 |Ofoten|
|3016 |Salten|
|3017 |Svartisen|
|3022 |Trollheimen|
|3023 |Romsdal|
|3024 |Sunnmøre|
|3027 |Indre Fjordane|
|3028 |Jotunheimen|
|3029 |Indre Sogn|
|3031 |Voss|
|3032 |Hallingdal|
|3034 |Hardanger|
|3035 |Vest-Telemark|
End of explanation
from netCDF4 import Dataset
fr = Dataset(r"../data/terrain_parameters/VarslingsOmr_2017.nc", "r")
print(fr)
regions = fr.variables["VarslingsOmr_2017"][:]
plt.imshow(regions)
plt.colorbar(label="Region ID")
stroms_mask = np.where(regions==3012)
print(stroms_mask)
# get the lower left and upper right corner of a rectangle around the region
y_min, y_max, x_min, x_max = min(stroms_mask[0].flatten()), max(stroms_mask[0].flatten()), min(stroms_mask[1].flatten()), max(stroms_mask[1].flatten())
region_3012 = regions.copy()
region_3012[y_min:y_max, x_min:x_max] = 32000 # rectangle around the region
region_3012[stroms_mask] = 48000 # exact pixels within the region
plt.imshow(region_3012)
plt.colorbar(label="Region ID")
rr_data = Dataset("../data/met_obs_grid/rr_2016_12_12.nc")
rr = rr_data.variables["precipitation_amount"][:, y_min:y_max, x_min:x_max].squeeze()
#rr = rr_data.variables["precipitation_amount"][:, stroms_mask[0], stroms_mask[1]].squeeze() #crashes the script
print(np.shape(rr))
plt.imshow(rr)
plt.colorbar(label="Precipitation - Sør Troms")
Explanation: We can calculate area above tree-line by combining elevations with a treeline mask.
Extract from region
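A minimal sketch of that calculation, assuming a hypothetical grid of tree-line elevations on the same 1 km grid as the DEM (the file and variable names below are placeholders, not actual project data):
```python
# Hypothetical tree-line elevation grid (placeholder file/variable names)
nc_treeline = Dataset(r"../data/terrain_parameters/TreeLine.nc", "r")
treeline = nc_treeline.variables["TreeLine"][:]
# Cells above the local tree line inside the Sør-Troms region (ID 3012)
above = (meandtm[:] > treeline) & (regions == 3012)
area_above_km2 = np.sum(above)  # one cell is roughly 1 km^2 on this grid
print("Area above the tree line: {} km2".format(area_above_km2))
```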
End of explanation |
13,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Geekbench benchmark on Android
Geekbench4 is an app offering several benchmarks to run on android smartphones. The one used in this notebook is the 'CPU' benchmark, which runs several workloads that follow the lines of what is commonly run by smartphones (AES, JPEG codec, FFT, and so on). The benchmark runs all the tests in 'Single-Core' mode as well as in 'Multi-Core' in order to compare the single-thread and multi-thread performances of the device.
Do note that the benchmark will attempt to upload its results, which includes some hardware information
Step1: Support Functions
This function helps us run our experiments
Step2: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step3: Workloads execution
This is done using the experiment helper function defined above which is configured to run a Geekbench - CPU experiment.
Step10: Results analysis
Geekbench4 uses a baseline score of 4000, which is the benchmark score of an Intel Core i7-6600U. Higher scores are better, with double the score indicating double the performance. You can have a look at the results for several android phones here: https://browser.primatelabs.com/android-benchmarks
Step11: Analysing several runs
It can be interesting to compare Geekbench results with different parameters (kernel, drivers) and even different devices to gauge the impact of these parameters. As Geekbench results can vary a bit from one run to another, having a set of repeated runs is preferable.
The following section will grab the results of all the Geekbench_example_* results found in the LISA results directory | Python Code:
from conf import LisaLogging
LisaLogging.setup()
%pylab inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Import support for Android devices
from android import Screen, Workload
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
import pandas as pd
Explanation: Geekbench benchmark on Android
Geekbench4 is an app offering several benchmarks to run on android smartphones. The one used in this notebook is the 'CPU' benchmark, which runs several workloads that follow the lines of what is commonly run by smartphones (AES, JPEG codec, FFT, and so on). The benchmark runs all the tests in 'Single-Core' mode as well as in 'Multi-Core' in order to compare the single-thread and multi-thread performances of the device.
Do note that the benchmark will attempt to upload its results, which includes some hardware information
End of explanation
def experiment():
# Configure governor
target.cpufreq.set_all_governors('sched')
# Get workload
wload = Workload.getInstance(te, 'Geekbench')
# Run Geekbench workload
wload.run(te.res_dir, test_name='CPU', collect='ftrace')
# Dump platform descriptor
te.platform_dump(te.res_dir)
Explanation: Support Functions
This function helps us run our experiments:
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
"board" : 'pixel',
# Device
"device" : "0123456789ABCDEF",
# Android home
"ANDROID_HOME" : "/home/vagrant/lisa/tools/android-sdk-linux/",
# Folder where all the results will be collected
"results_dir" : datetime.datetime.now()\
.strftime("Geekbench_example_" + '%Y%m%d_%H%M%S'),
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"cpu_capacity",
"cpu_frequency",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'taskset'],
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False)
target = te.target
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
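For example, one way to define it from the notebook itself before creating the TestEnv (the path below is just the placeholder used in my_conf above; point it at your own SDK):
```python
import os
# Placeholder path - adjust to your local Android SDK installation
os.environ['ANDROID_HOME'] = '/home/vagrant/lisa/tools/android-sdk-linux/'
```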
End of explanation
# Initialize Workloads for this test environment
results = experiment()
Explanation: Workloads execution
This is done using the experiment helper function defined above which is configured to run a Geekbench - CPU experiment.
End of explanation
class Geekbench(object):
"""Geekbench json results parsing class"""
def __init__(self, filepath):
with open(filepath) as fd:
self.__json = json.loads(fd.read())
self.benchmarks = {}
for section in self.__json["sections"]:
self.benchmarks[section["name"]] = section
for workload in section["workloads"]:
self.benchmarks[section["name"]][workload["name"]] = workload
def name(self):
"""Get a human-readable name for the geekbench run"""
gov = ""
build = ""
for metric in self.__json["metrics"]:
if metric["name"] == "Governor":
gov = metric["value"]
elif metric["name"] == "Build":
build = metric["value"]
return "[build]=\"{}\" [governor]=\"{}\"".format(build, gov)
def benchmarks_names(self):
"""Get a list of benchmarks (e.g. Single-Core, Multi-Core) found in the run results"""
return [section["name"] for section in self.__json["sections"]]
def workloads_names(self):
"""Get a list of unique workloads (e.g. EAS, Dijkstra) found in the run results"""
return [workload["name"] for workload in self.benchmarks.values()[0]["workloads"]]
def global_scores(self):
"""Get the overall scores of each benchmark"""
data = {}
for benchmark in self.benchmarks_names():
data[benchmark] = self.benchmarks[benchmark]["score"]
return data
def detailed_scores(self):
"""Get the detailed workload scores of each benchmark"""
benchmark_fields = ["score", "runtime_mean", "rate_string"]
benches = {}
benchmarks = self.benchmarks_names()
workloads = self.workloads_names()
for benchmark in benchmarks:
data = {}
for workload in workloads:
data[workload] = {}
for field in benchmark_fields:
data[workload][field] = self.benchmarks[benchmark][workload][field]
benches[benchmark] = data
return benches
def display_bench_results(geekbench, detailed=False):
print "===== Global results ====="
scores = geekbench.global_scores()
# Build dataframe for display
row = []
for bench_type, score in scores.iteritems():
row.append(score)
df = pd.DataFrame(data=row, index=scores.keys(), columns=["Global score"])
display(df)
if not detailed:
return
print "===== Detailed results ====="
scores = geekbench.detailed_scores()
for benchmark, results in geekbench.detailed_scores().iteritems():
print "----- {} benchmark -----".format(benchmark)
# Build dataframe for display
data = []
idx = []
columns = results.values()[0].keys()
for workload, fields in results.iteritems():
data.append(tuple(fields.values()))
idx.append(workload)
display (pd.DataFrame(data=data, index=idx, columns=columns))
for f in os.listdir(te.res_dir):
if f.endswith(".gb4"):
geekbench = Geekbench(te.res_dir + "/" + f)
print "Analysing geekbench {}".format(geekbench.name())
display_bench_results(geekbench, True)
Explanation: Results analysis
Geekbench4 uses a baseline score of 4000, which is the benchmark score of an Intel Core i7-6600U. Higher scores are better, with double the score indicating double the performance. You can have a look at the results for several android phones here https://browser.primatelabs.com/android-benchmarks
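As a quick illustration of what the numbers mean, the ratio of a parsed score to the 4000 baseline gives the speed-up relative to the reference i7-6600U (this reuses the Geekbench helper class defined above):
```python
BASELINE = 4000.0  # Geekbench4 reference score (Intel Core i7-6600U)
for bench_type, score in geekbench.global_scores().iteritems():
    print "{}: {} ({:.2f}x the baseline)".format(bench_type, score, score / BASELINE)
```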
End of explanation
import glob
import sys  # needed for sys.maxint used in compare_runs() below
def fetch_results():
results_path = os.path.join(te.LISA_HOME, "results")
results_dirs = [results_path + "/" + d for d in os.listdir(results_path) if d.startswith("Geekbench_example_")]
res = []
for d in results_dirs:
bench_file = glob.glob("{}/*.gb4".format(d))[0]
res.append(Geekbench(bench_file))
return res
def compare_runs():
geekbenches = fetch_results()
# Pick one run to build a baseline template
benchmarks = geekbenches[0].benchmarks_names()
workloads = geekbenches[0].workloads_names()
stats = ["avg", "min", "max"]
count = len(geekbenches)
print "Parsing {} runs".format(count)
# Initialize stats
results = {benchmark :
{"min" : sys.maxint, "max" : 0, "avg" : 0}
for benchmark in benchmarks}
# Get all the data
for benchmark in results.iterkeys():
for bench in geekbenches:
score = bench.global_scores()[benchmark]
if score > results[benchmark]["max"]:
results[benchmark]["max"] = score
if score < results[benchmark]["min"]:
results[benchmark]["min"] = score
results[benchmark]["avg"] += score
results[benchmark]["avg"] /= count
# Convert data to Dataframe
data = []
for benchmark in results.iterkeys():
row = []
for stat in stats:
row.append(results[benchmark][stat])
data.append(tuple(row))
df = pd.DataFrame(data, index=results.iterkeys(), columns=stats)
return df
display(compare_runs())
Explanation: Analysing several runs
It can be interesting to compare Geekbench results with different parameters (kernel, drivers) and even different devices to gauge the impact of these parameters. As Geekbench results can vary a bit from one run to another, having a set of repeated runs is preferable.
The following section will grab the results of all the Geekbench_example_* results found in the LISA results directory
End of explanation |
13,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading network data
CSV -> List of Dictionaries -> igraph
sand's underlying graph implementation is igraph. igraph offers several ways to load data, but sand provides a few convenience functions that simplify the workflow
Step1: Read network data from csv with csv_to_dicts
csv_to_dicts reads a CSV into a list of Python dictionaries. Each column in the CSV becomes a corresponding key in each dictionary.
Let's load a CSV with function dependencies in a Clojure library from lein-topology into a list of Dictionaries
Step2: Use from_edges with an adjacency list consisting of two vertex names and an edge weight
Step3: ... or use from_vertices_and_edges with two lists of dictionaries
A richer network model includes attributes on the vertex and edge collections, including unique identifiers for each vertex.
We can use Jupyter's cell magic to generate some sample data. Here we'll represent a network of students reviewing one another's work. Students (vertices) will be in people.csv and reviews (edges) will be in reviews.csv
Step4: We again load this data into Lists of Dictionaries with csv_to_dicts
Step5: Several vertex attributes are automatically computed when the graph is loaded
Step6: Groups
Groups represent modules or communities in the network. Groups are based on the labels by default.
Step7: The vertices in the lein topology data set contain fully-qualified namespaces for functions. Grouping by name isn't particularly useful here
Step8: Because sand was built specifically for analyzing software and system networks, a fqn_to_groups grouping function is built in | Python Code:
import sand
Explanation: Loading network data
CSV -> List of Dictionaries -> igraph
sand's underlying graph implementation is igraph. igraph offers several ways to load data, but sand provides a few convenience functions that simplify the workflow:
End of explanation
edgelist_file = './data/lein-topology-57af741.csv'
edgelist_data = sand.csv_to_dicts(edgelist_file,header=['source', 'target', 'weight'])
edgelist_data[:5]
Explanation: Read network data from csv with csv_to_dicts
csv_to_dicts reads a CSV into a list of Python dictionaries. Each column in the CSV becomes a corresponding key in each dictionary.
Let's load a CSV with function dependencies in a Clojure library from lein-topology into a list of Dictionaries:
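The resulting list looks roughly like this (illustrative values and types only, not actual output from that file):
```python
# Each CSV row becomes one dict, keyed by the header names passed above
[{'source': 'topology.core/dependencies', 'target': 'clojure.core/map',    'weight': 1},
 {'source': 'topology.core/dependencies', 'target': 'clojure.core/filter', 'weight': 1}]
```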
End of explanation
functions = sand.from_edges(edgelist_data)
functions.summary()
Explanation: Use from_edges with an adjacency list consisting of two vertex names and an edge weight
End of explanation
people_file = './data/people.csv'
%%writefile $people_file
uuid,name,cohort
6aacd73c-0be5-412d-95a3-ca54149c9952,Mark Taylor,Day 1 - Period 6
5205741f-3ea9-4c30-9c50-4bab229a51ce,Aidin Aslani,Day 1 - Period 6
14a36491-5a3d-42c9-b012-6a53654d9bac,Charlie Brown,Day 1 - Period 2
9dc7633a-e493-4ec0-a252-8616f2148705,Armin Norton,Day 1 - Period 2
review_file = './data/reviews.csv'
%%writefile $review_file
reviewer_uuid,student_uuid,feedback,date,weight
6aacd73c-0be5-412d-95a3-ca54149c9952,14a36491-5a3d-42c9-b012-6a53654d9bac,Awesome work!,2015-02-12,1
5205741f-3ea9-4c30-9c50-4bab229a51ce,9dc7633a-e493-4ec0-a252-8616f2148705,WOW!,2014-02-12,1
Explanation: ... or use from_vertices_and_edges with two lists of dictionaries
A richer network model includes attributes on the vertex and edge collections, including unique identifiers for each vertex.
We can use Jupyter's cell magic to generate some sample data. Here we'll represent a network of students reviewing one another's work. Students (vertices) will be in people.csv and reviews (edges) will be in reviews.csv:
End of explanation
people_data = sand.csv_to_dicts(people_file)
people_data
review_data = sand.csv_to_dicts(review_file)
review_data
reviews = sand.from_vertices_and_edges(
vertices=people_data,
edges=review_data,
vertex_name_key='name',
vertex_id_key='uuid',
edge_foreign_keys=('reviewer_uuid', 'student_uuid'))
reviews.summary()
Explanation: We again load this data into Lists of Dictionaries with csv_to_dicts:
End of explanation
reviews.vs['indegree']
reviews.vs['outdegree']
reviews.vs['label']
reviews.vs['name']
Explanation: Several vertex attributes are automatically computed when the graph is loaded:
End of explanation
reviews.vs['group']
Explanation: Groups
Groups represent modules or communities in the network. Groups are based on the labels by default.
End of explanation
len(set(functions.vs['group']))
len(functions.vs)
Explanation: The vertices in the lein topology data set contain fully-qualified namespaces for functions. Grouping by name isn't particularly useful here:
End of explanation
functions.vs['group'] = sand.fqn_to_groups(functions.vs['label'])
len(set(functions.vs['group']))
Explanation: Because sand was built specifically for analyzing software and system networks, a fqn_to_groups grouping function is built in:
End of explanation |
13,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames 008
Step1: Table 1 - VOTable with all source properties
Step3: Cross match with SIMBAD
Step4: Save the data table locally. | Python Code:
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
Explanation: ApJdataFrames 008: Luhman2012
Title: THE DISK POPULATION OF THE UPPER SCORPIUS ASSOCIATION
Authors: K. L. Luhman and E. E. Mamajek
Data is from this paper:
http://iopscience.iop.org/0004-637X/758/1/31/article#apj443828t1
End of explanation
tbl1 = ascii.read("http://iopscience.iop.org/0004-637X/758/1/31/suppdata/apj443828t1_mrt.txt")
tbl1.columns
tbl1[0:5]
len(tbl1)
Explanation: Table 1 - VOTable with all source properties
End of explanation
from astroquery.simbad import Simbad
import astropy.coordinates as coord
import astropy.units as u
customSimbad = Simbad()
customSimbad.add_votable_fields('otype', 'sptype')
query_list = tbl1["Name"].data.data
result = customSimbad.query_objects(query_list, verbose=True)
result[0:3]
print "There were {} sources queried, and {} sources found.".format(len(query_list), len(result))
if len(query_list) == len(result):
print "Hooray! Everything matched"
else:
print "Which ones were not found?"
def add_input_column_to_simbad_result(self, input_list, verbose=False):
"""Adds 'INPUT' column to the result of a Simbad query
Parameters
----------
input_list : sequence of strs
names of objects from most recent query
verbose : boolean, optional
When `True`, verbose output is printed
Returns
-------
table : `~astropy.table.Table`
Query results table
"""
error_string = self.last_parsed_result.error_raw
fails = []
for error in error_string.split("\n"):
start_loc = error.rfind(":")+2
fail = error[start_loc:]
fails.append(fail)
successes = [s for s in input_list if s not in fails]
if verbose:
out_message = "There were {} successful Simbad matches and {} failures."
print out_message.format(len(successes), len(fails))
self.last_parsed_result.table["INPUT"] = successes
return self.last_parsed_result.table
result_fix = add_input_column_to_simbad_result(customSimbad, query_list, verbose=True)
tbl1_pd = tbl1.to_pandas()
result_pd = result_fix.to_pandas()
tbl1_plusSimbad = pd.merge(tbl1_pd, result_pd, how="left", left_on="Name", right_on="INPUT")
Explanation: Cross match with SIMBAD
End of explanation
tbl1_plusSimbad.head()
! mkdir ../data/Luhman2012/
tbl1_plusSimbad.to_csv("../data/Luhman2012/tbl1_plusSimbad.csv", index=False)
Explanation: Save the data table locally.
End of explanation |
13,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
quickcat calibration
This notebook is the quickcat calibration script.
- Its input is a redshift catalog merged with a target list and a truth table from simulations.
- Its output is a set of coefficients saved in a yaml file
to be copied to desisim/py/desisim/data/quickcat.yaml .
This notebook does the following, sequentially:
Step1: ELG redshift efficiency
We assume the ELG redshift efficiency is a function of
- the S/N in the emission lines, approximately proportional to OII flux.
- the S/N in the continuum, approximately proportional to the r-band flux.
- the redshift
We know that for a given ELG, the S/N in the lines varies with redshift according to the flux limit defined in the FDR.
So, we will scale the OII flux with this flux limit to account for some of the redshift dependency.
We ignore the evolution of the continuum S/N with redshift for fixed r-band magnitude.
We model the efficiency with an error function,
$ Eff(SNR) = \frac{1}{2} \left( 1+Erf \left( \frac{SNR-3}{b \sqrt{2}} \right) \right) $
with
$SNR = \sqrt{ \left( 7 \, \frac{\mathrm{OII\ flux}}{\mathrm{flux\ limit}} \right)^2 + \left( a \times \mathrm{rflux} \right)^2 }$
$a$ is the continuum $SNR$ normalization, which is proportional to the r-band flux.
$b$ is a fudge factor. One would have $b = 1$ if $SNR$ was the variable that determines the redshift efficiency.
However $SNR$ is only a proxy that is not 100% correlated with the efficiency, so we expect $b>1$.
Step2: Measured ELG efficiency as a function of rmag and oii flux
Step3: Model
Step4: ELG redshift uncertainty
Power law of [OII] flux (proxy for all lines)
Step5: ELG catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
Step6: LRG redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp\left( (rmag - a)/b \right)}$
Step7: LRG redshift uncertainty
Power law of broad band flux
Step8: LRG catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
Step9: QSO tracers (z<~2) redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp\left( (rmag - a)/b \right)}$
Step10: QSO (z<2) redshift uncertainty
Power law of broad band flux
Step11: Tracer QSO (z<~2) catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
Step12: Lya QSO (z>~2) redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp\left( (rmag - a)/b \right)}$
Step13: Lya QSO (z>2) redshift uncertainty
Power law of broad band flux
Step14: Lya QSO (z>~2) catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$ | Python Code:
# input merged catalog (from simulations for now)
simulation_catalog_filename="/home/guy/Projets/DESI/analysis/quickcat/20180926/zcatalog-redwood-target-truth.fits"
# output quickcat parameter file that this code will write
quickcat_param_filename="/home/guy/Projets/DESI/analysis/quickcat/20180926/quickcat.yaml"
# output quickcat catalog (same input target and truth)
quickcat_catalog_filename="/home/guy/Projets/DESI/analysis/quickcat/20180926/zcatalog-redwood-target-truth-quickcat.fits"
import os.path
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as pyfits
import scipy.optimize
import scipy.special   # provides erf, used in the efficiency model below
from pkg_resources import resource_filename
import yaml
from desisim.quickcat import eff_model,get_observed_redshifts
from desisim.simexp import reference_conditions
def eff_err(k,n) :
# given k and n
# the most probable efficiency is k/n but the uncertainty is complicated
# I choose to define error bar as FWHM/2.35 , converging to sigma for large k,n-k,and n
# this is the Bayesian probability
# P(eff|k,n) = gamma(n+2)/(gamma(k+1)*gamma(n-k+1)) * eff**k*(1-eff)**(n-k)
if k>10 and n-k>10 and n>10 :
return np.sqrt(k*(1-k/n))/n
ns=300
e=np.arange(ns)/ns
p=e**k*(1-e)**(n-k)
xc=float(k)/n
i=int(ns*xc+0.5)
if i>ns-1 : i=ns-1
p/=p[i]
if k==0 :
xl=0
else :
xl = np.interp(0.5*p[i],p[:i],e[:i])
if k==n :
xh=1
else :
xh = np.interp(0.5*p[i],p[i:][::-1],e[i:][::-1])
sigma = (xh-xl)/2.35
return sigma
def efficiency(x,selection,bins=40) :
h0,bins=np.histogram(x,bins=bins)
hx,bins=np.histogram(x,bins=bins,weights=x)
h1,bins=np.histogram(x[selection],bins=bins)
ii=(h0>1)
n=h0[ii]
k=h1[ii]
meanx=hx[ii]/n
eff=k/n
err=np.zeros(eff.shape)
for i in range(err.size) :
err[i] = eff_err(k[i],n[i])
return meanx,eff,err
def prof(x,y,bins=40) :
h0,bins=np.histogram(x,bins=bins)
hx,bins=np.histogram(x,bins=bins,weights=x)
hy,bins=np.histogram(x,bins=bins,weights=y)
hy2,bins=np.histogram(x,bins=bins,weights=y**2)
ii=(h0>1)
n=h0[ii]
x=hx[ii]/n
y=hy[ii]/n
y2=hy2[ii]/n
var=y2-y**2
err=np.zeros(x.size)
err[var>0]=np.sqrt(var[var>0])
return x,y,err,n
def efficiency2d(x,y,selection,bins=20) :
h0,xx,yy=np.histogram2d(x,y,bins=bins)
h1,xx,yy=np.histogram2d(x[selection],y[selection],bins=(xx,yy))
shape=h0.shape
n=h0.ravel()
k=h1.ravel()
eff=np.zeros(n.size)
err=np.zeros(n.size)
for i in range(n.size) :
if n[i]==0 :
err[i]=1000.
else :
eff[i]=k[i]/n[i]
err[i]=eff_err(k[i],n[i])
return xx,yy,eff.reshape(shape),err.reshape(shape),n.reshape(shape)
def prof2d(x,y,z,bins=20) :
h0,xx,yy=np.histogram2d(x,y,bins=bins)
hz,xx,yy=np.histogram2d(x,y,bins=(xx,yy),weights=z)
hz2,xx,yy=np.histogram2d(x,y,bins=(xx,yy),weights=z**2)
n=(h0+(h0==0)).astype(float)
z=hz/n
z2=hz2/n
var=z2-z**2
err=np.sqrt(var*(var>0))
x=xx[:-1]+(xx[1]-xx[0])/2.
y=yy[:-1]+(yy[1]-yy[0])/2.
return x,y,z,err
## open input file
hdulist = pyfits.open(simulation_catalog_filename)
table = hdulist["ZCATALOG"].data
print(table.dtype.names)
# quickcat parameters
quickcat_params=dict()
# quickcat output table (for display purpose only)
qtable=None
if True :
# run the quickcat simulation in this cell (don't necessarily have to do this to
# follow the rest of the notebook)
# use default parameters or the ones in the file specified above
# (and probably obtained with a previous run of this script) if exist
input_quickcat_param_filename = None
if os.path.isfile(quickcat_param_filename) :
input_quickcat_param_filename = quickcat_param_filename
# dummy tiles
targets_in_tile=dict()
targets_in_tile[0]=table["TARGETID"]
# dummy obs. conditions
tmp = reference_conditions['DARK']
tmp['TILEID']=0
obsconditions=dict()
for k in tmp :
obsconditions[k]=np.array([tmp[k],])
#qtable = table.copy()
hdulist = pyfits.open(simulation_catalog_filename)
qtable = hdulist["ZCATALOG"].data
# run quickcat
# ignore_obsconditions because it only add extra noise
z, zerr, zwarn = get_observed_redshifts(qtable,qtable,targets_in_tile,obsconditions,
parameter_filename=quickcat_param_filename,
ignore_obscondition=True)
# replace z,zwarn and write quickcat
qtable["Z"]=z
qtable["ZWARN"]=zwarn
hdulist["ZCATALOG"].data = qtable
hdulist.writeto(quickcat_catalog_filename,overwrite=True)
print("done")
# open quickcat catalog
if qtable is None and os.path.isfile(quickcat_catalog_filename) :
qcat_hdulist = pyfits.open(quickcat_catalog_filename)
qtable = qcat_hdulist["ZCATALOG"].data
Explanation: quickcat calibration
This notebook is the quickcat calibration script.
- Its input is a redshift catalog merged with a target list and a truth table from simulations.
- Its output is a set of coefficients saved in a yaml file
to be copied to desisim/py/desisim/data/quickcat.yaml .
This notebook does the following, sequentially:
- open the merged redshift catalog
- run quickcat on it
- for each target class
- fit a model for redshift efficiency, and display input, quickcat, best fit model
- fit a model for redshift uncertainty, and display input, (quickcat,) best fit model
- save the best fit parameters in a yaml file (to be copied in desisim/data/quickcat.yaml)
Please first edit the following path:
End of explanation
# OII flux limit (FDR); the as-built version should be recomputed but is probably not very different
filename = resource_filename('desisim', 'data/elg_oii_flux_threshold_fdr.txt')
fdr_z, fdr_flux_limit = np.loadtxt(filename, unpack=True)
plt.figure()
plt.plot(fdr_z, fdr_flux_limit)
plt.ylim([0,1.5e-16])
plt.xlabel("Redshift")
plt.ylabel("OII flux limit (ergs/s/cm2)")
plt.grid()
Explanation: ELG redshift efficiency
We assume the ELG redshift efficiency is a function of
- the S/N in the emission lines, approximately proportional to OII flux.
- the S/N in the continuum, approximately proportional to the r-band flux.
- the redshift
We know that for a given ELG, the S/N in the lines varies with redshift according to the flux limit defined in the FDR.
So, we will scale the OII flux with this flux limit to account for some of the redshift dependency.
We ignore the evolution of the continuum S/N with redshift for fixed r-band magnitude.
We model the efficiency with an error function,
$ Eff(SNR) = \frac{1}{2} \left( 1+Erf \left( \frac{SNR-3}{b \sqrt{2}} \right) \right) $
with
$SNR = \sqrt{ \left( 7 \, \frac{\mathrm{OII\ flux}}{\mathrm{flux\ limit}} \right)^2 + \left( a \times \mathrm{rflux} \right)^2 }$
$a$ is the continuum $SNR$ normalization, which is proportional to the r-band flux.
$b$ is a fudge factor. One would have $b = 1$ if $SNR$ was the variable that determines the redshift efficiency.
However $SNR$ is only a proxy that is not 100% correlated with the efficiency, so we expect $b>1$.
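A minimal standalone sketch of this model (free parameters a and b as defined above; scipy's erf is assumed):
```python
import numpy as np
import scipy.special

def elg_efficiency(oiiflux, fluxlimit, rflux, a, b):
    # SNR proxy combining emission-line and continuum terms, as in the formula above
    snr = np.sqrt((7. * oiiflux / fluxlimit)**2 + (a * rflux)**2)
    # Error-function efficiency, centered on SNR = 3 with fudge factor b
    return 0.5 * (1. + scipy.special.erf((snr - 3.) / (b * np.sqrt(2.))))
```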
End of explanation
######################
elgs=(table["TEMPLATETYPE"]=="ELG")&(table["TRUEZ"]>0.6)&(table["TRUEZ"]<1.6)
z=table["Z"][elgs]
tz=table["TRUEZ"][elgs]
dz=z-tz
good=(table["ZWARN"][elgs]==0)
rflux=table["FLUX_R"][elgs]
print("Number of ELGs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
oiiflux=table["OIIFLUX"][elgs]
oiiflux=oiiflux*(oiiflux>0)+1e-20*(oiiflux<=0)
qgood=None
if qtable is not None : #quickcat output
qgood=(qtable["ZWARN"][elgs]==0)
######################
#good=oiiflux>8e-17 #debug to verify indexation
bins2d=20
rmag=-2.5*np.log10(rflux)+22.5
xx,yy,eff2d,err2d,nn2d = efficiency2d(np.log10(oiiflux),rmag,good,bins=bins2d)
plt.figure()
plt.imshow(eff2d.T,origin=0,extent=(xx[0],xx[-1],yy[0],yy[-1]),vmin=0.2,aspect="auto")
plt.xlabel("log10([OII] flux)")
plt.ylabel("rmag")
plt.colorbar()
Explanation: Measured ELG efficiency as a function of rmag and oii flux
End of explanation
# model ELG efficiency vs rflux and oiiflux
oiiflux = table["OIIFLUX"][elgs]
oiiflux = oiiflux*(oiiflux>=0)+0.00001*(oiiflux<=0)
fluxlimit=np.interp(z,fdr_z,fdr_flux_limit)
fluxlimit[fluxlimit<=0]=1e-20
snr_lines=7*oiiflux/fluxlimit
def elg_efficiency_model_2d(params,log10_snr_lines,rmag) :
p = params
snr_tot = np.sqrt( (p[0]*10**log10_snr_lines)**2 + (p[1]*10**(-0.4*(rmag-22.5))) **2 )
return 0.5*(1.+scipy.special.erf((snr_tot-3)/(np.sqrt(2.)*p[2])))  # np.erf does not exist; scipy.special.erf is used instead
def elg_efficiency_2d_residuals(params,log10_snr_lines,mean_rmag,eff2d,err2d) :
model = elg_efficiency_model_2d(params,log10_snr_lines,mean_rmag)
#res = (eff2d-model)
res = (eff2d-model)/err2d #np.sqrt(err2d**2+(0.1*(eff2d>0.9))**2)
res = res[(err2d<2)&(mean_rmag>22)]
#chi2 = np.sum(res**2)
#print("params={} chi2/ndata={}/{}={}".format(params,chi2,res.size,chi2/res.size))
return res
# 2d fit
#good=snr_lines>4. # debug
#good=rmag<22. # debug
xx,yy,eff2d_bis,err2d_bis,nn = efficiency2d(np.log10(snr_lines),rmag,good)
x1d = xx[:-1]+(xx[1]-xx[0])
y1d = yy[:-1]+(yy[1]-yy[0])
x2d=np.tile(x1d,(y1d.size,1)).T
y2d=np.tile(y1d,(x1d.size,1))
#elg_efficiency_params=[1,3,2]
elg_efficiency_params=[1,2.,1,0,0]#,1,2,1,0,0]
if 0 :
meff2d=elg_efficiency_model_2d(elg_efficiency_params,x2d,y2d)
i=(y2d.ravel()>22.)&(y2d.ravel()<22.4)&(err2d.ravel()<1)
plt.plot(x2d.ravel()[i],eff2d.ravel()[i],"o")
plt.plot(x2d.ravel()[i],meff2d.ravel()[i],"o")
result=scipy.optimize.least_squares(elg_efficiency_2d_residuals,elg_efficiency_params,args=(x2d,y2d,eff2d_bis,err2d_bis))
elg_efficiency_params=result.x
quickcat_params["ELG"]=dict()
quickcat_params["ELG"]["EFFICIENCY"]=dict()
quickcat_params["ELG"]["EFFICIENCY"]["SNR_LINES_SCALE"]=float(elg_efficiency_params[0])
quickcat_params["ELG"]["EFFICIENCY"]["SNR_CONTINUUM_SCALE"]=float(elg_efficiency_params[1])
quickcat_params["ELG"]["EFFICIENCY"]["SIGMA_FUDGE"]=float(elg_efficiency_params[2])
print("Best fit parameters for ELG efficiency model:")
print(elg_efficiency_params)
print("SNR_lines = {:4.3f} * 7 * OIIFLUX/limit".format(elg_efficiency_params[0]))
print("SNR_cont = {:4.3f} * R_FLUX".format(elg_efficiency_params[1]))
print("sigma fudge = {:4.3f}".format(elg_efficiency_params[2]))
#params[0]=0.001 # no dependence on rmag
meff=elg_efficiency_model_2d(elg_efficiency_params,np.log10(snr_lines),rmag)
xx,yy,meff2d,merr=prof2d(np.log10(oiiflux),rmag,meff,bins=bins2d)
#plt.imshow(meff2d.T,aspect="auto")
plt.imshow(meff2d.T,origin=0,extent=(xx[0],xx[-1],yy[0],yy[-1]),aspect="auto")
plt.colorbar()
if 1 :
plt.figure()
print("meff2d.shape=",meff2d.shape)
ii=np.arange(meff2d.shape[0])
y1=eff2d[ii,-ii]
e1=err2d[ii,-ii]
y2=meff2d[ii,-ii]
ok=(e1<1)
plt.errorbar(ii[ok],y1[ok],e1[ok],fmt="o",label="input")
plt.plot(ii[ok],y2[ok],"-",label="model")
plt.legend(loc="lower right")
plt.xlabel("linear combination of log10([OII] flux) and rmag")
plt.ylabel("efficiency")
plt.figure()
bins1d=20
x,eff1d,err1d = efficiency(rmag,good,bins=bins1d)
x,meff1d,merr,nn = prof(rmag,meff,bins=bins1d)
plt.errorbar(x,eff1d,err1d,fmt="o",label="input")
plt.plot(x,meff1d,"-",label="model")
if qgood is not None : #quickcat output
x,eff1d,err1d = efficiency(rmag,qgood,bins=bins1d)
plt.errorbar(x,eff1d,err1d,fmt="x",label="qcat run")
plt.legend(loc="lower left")
plt.xlabel("rmag")
plt.ylabel("efficiency")
plt.figure()
bins1d=20
x,eff1d,err1d = efficiency(np.log10(oiiflux),good,bins=bins1d)
x,meff1d,merr,nn = prof(np.log10(oiiflux),meff,bins=bins1d)
plt.errorbar(x,eff1d,err1d,fmt="o",label="input")
plt.plot(x,meff1d,"-",label="model")
if qgood is not None : #quickcat output
x,eff1d,err1d = efficiency(np.log10(oiiflux),qgood,bins=bins1d)
plt.errorbar(x,eff1d,err1d,fmt="x",label="qcat run")
plt.legend(loc="lower right")
plt.xlabel("log10(oiiflux)")
plt.ylabel("efficiency")
plt.figure()
fcut=8e-17
mcut=22.5
s=(oiiflux<fcut)&(rmag>mcut) # select faint ones to increase contrast in z
bins=100
x,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)
x,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)
plt.errorbar(x,eff1d,err1d,fmt="o",label="input")
plt.plot(x,meff1d,"-",label="model")
if qgood is not None : #quickcat output
x,eff1d,err1d = efficiency(tz[s],qgood[s],bins=bins1d)
plt.errorbar(x,eff1d,err1d,fmt="x",label="qcat run")
plt.legend(loc="upper left",title="Faint ELGs with [OII] flux<{} and rmag>{}".format(fcut,mcut))
plt.xlabel("redshift")
plt.ylabel("efficiency")
plt.ylim([0.,1.4])
Explanation: Model
End of explanation
#ELG redshift uncertainty
######################
elgs=(table["TEMPLATETYPE"]=="ELG")&(table["TRUEZ"]>0.6)&(table["TRUEZ"]<1.6)
z=table["Z"][elgs]
dz=z-table["TRUEZ"][elgs]
good=(table["ZWARN"][elgs]==0)&(np.abs(dz/(1+z))<0.003)
rflux=table["FLUX_R"][elgs]
print("Number of ELGs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
oiiflux=table["OIIFLUX"][elgs]
oiiflux=oiiflux*(oiiflux>0)+1e-20*(oiiflux<=0)
lflux=np.log10(oiiflux)
qz=None
qdz=None
if qtable is not None : # quickcat output
qz=qtable["Z"][elgs]
qdz=qz-qtable["TRUEZ"][elgs]
qgood=(qtable["ZWARN"][elgs]==0)&(np.abs(qdz/(1+qz))<0.003)
######################
bins=20
binlflux,var,err,nn=prof(lflux[good],((dz/(1+z))**2)[good],bins=bins)
binflux=10**(binlflux)
var_err = np.sqrt(2/nn)*var
rms=np.sqrt(var)
rmserr=0.5*var_err/rms
def redshift_error(params,flux) :
return params[0]/(1e-9+flux)**params[1]
def redshift_error_residuals(params,flux,rms,rmserror) :
model = redshift_error(params,flux)
res = (rms-model)/np.sqrt(rmserror**2+1e-6**2)
return res
#plt.plot(binlflux,rms,"o",label="meas")
plt.errorbar(binlflux,rms,rmserr,fmt="o",label="sim")
params=[0.0006,1.]
binoiiflux=np.array(10**binlflux)
result=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binoiiflux*1e17,rms,rmserr))
params=result.x
elg_uncertainty_params = params
print("ELG redshift uncertainty parameters = ",params)
quickcat_params["ELG"]["UNCERTAINTY"]=dict()
quickcat_params["ELG"]["UNCERTAINTY"]["SIGMA_17"]=float(elg_uncertainty_params[0])
quickcat_params["ELG"]["UNCERTAINTY"]["POWER_LAW_INDEX"]=float(elg_uncertainty_params[1])
m=redshift_error(params,10**binlflux*1e17)
plt.plot(binlflux,m,"-",label="model")
if qz is not None :
qbinlflux,qvar,qerr,nn=prof(lflux[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)
qbinflux=10**(qbinlflux)
qvar_err = np.sqrt(2/nn)*qvar
qrms=np.sqrt(qvar)
qrmserr=0.5*qvar_err/qrms
plt.errorbar(qbinlflux,qrms,qrmserr,fmt="x",label="quickcat")
plt.legend(loc="upper right",title="ELG")
plt.xlabel("log10([oII] flux)")
plt.ylabel("rms dz/(1+z)")
Explanation: ELG redshift uncertainty
Power law of [OII] flux (proxy for all lines)
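As a sketch of how the fitted parameters could be applied downstream (this usage is an assumption for illustration, not necessarily quickcat's exact implementation): draw a Gaussian redshift error whose width follows the fitted power law of the [OII] flux expressed in 1e-17 units.

```python
import numpy as np

def draw_elg_redshift(z_true, oiiflux, params=elg_uncertainty_params):
    # sigma of dz/(1+z) as a power law of the [OII] flux in 1e-17 units
    sigma = params[0] / (oiiflux * 1e17) ** params[1]
    return z_true + np.random.normal(0.0, sigma * (1.0 + z_true))

print(draw_elg_redshift(1.1, 8e-17))
```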
End of explanation
nbad = np.sum((table["ZWARN"][elgs]==0)&(np.abs(dz/(1+z))>0.003))
ntot = np.sum(table["ZWARN"][elgs]==0)
frac = float(nbad/float(ntot))
print("ELG catastrophic failure rate={}/{}={:4.3f}".format(nbad,ntot,frac))
quickcat_params["ELG"]["FAILURE_RATE"]=frac
qnbad = np.sum((qtable["ZWARN"][elgs]==0)&(np.abs(qdz/(1+qz))>0.003))
qntot = np.sum(qtable["ZWARN"][elgs]==0)
qfrac = float(qnbad/float(qntot))
print("quickcat run ELG catastrophic failure rate={}/{}={:4.3f}".format(qnbad,qntot,qfrac))
Explanation: ELG catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
End of explanation
# simply use RFLUX for snr
######################
lrgs=(table["TEMPLATETYPE"]=="LRG")
z=table["Z"][lrgs]
tz=table["TRUEZ"][lrgs]
dz=z-tz
good=(table["ZWARN"][lrgs]==0)
rflux=table["FLUX_R"][lrgs]
print("Number of LRGs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qgood=None
if qtable is not None : #quickcat output
qgood=(qtable["ZWARN"][lrgs]==0)
######################
bins=15
bin_rmag,eff,err=efficiency(rmag,good,bins=bins)
print("eff=",eff)
print("err=",err)
plt.errorbar(bin_rmag,eff,err,fmt="o",label="sim")
def sigmoid(params,x) :
return 1/(1+np.exp((x-params[0])/params[1]))
def sigmoid_residuals(params,x,y,err) :
m = sigmoid(params,x)
res = (m-y)/err
return res
lrg_efficiency_params=[26.,1.]
result=scipy.optimize.least_squares(sigmoid_residuals,lrg_efficiency_params,args=(bin_rmag,eff,err))
lrg_efficiency_params=result.x
plt.plot(bin_rmag,sigmoid(lrg_efficiency_params,bin_rmag),"-",label="model")
if qgood is not None:
bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)
plt.errorbar(bin_rmag,eff,err,fmt="x",label="quickcat run")
plt.xlabel("rmag")
plt.ylabel("efficiency")
plt.legend(loc="lower left")
print("LRG redshift efficiency parameters = ",lrg_efficiency_params)
quickcat_params["LRG"]=dict()
quickcat_params["LRG"]["EFFICIENCY"]=dict()
quickcat_params["LRG"]["EFFICIENCY"]["SIGMOID_CUTOFF"]=float(lrg_efficiency_params[0])
quickcat_params["LRG"]["EFFICIENCY"]["SIGMOID_FUDGE"]=float(lrg_efficiency_params[1])
meff=sigmoid(lrg_efficiency_params,rmag)
plt.figure()
mcut=22.
s=(rmag>mcut) # select faint ones to increase contrast in z
bins=50
x,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)
x,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)
plt.errorbar(x,eff1d,err1d,fmt="o",label="sim")
plt.plot(x,meff1d,"-",label="model")
plt.legend(loc="upper left",title="Faint LRGs with rmag>{}".format(mcut))
plt.xlabel("redshift")
plt.ylabel("efficiency")
Explanation: LRG redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp((rmag - a)/b)}$
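A quick numerical check of this sigmoid (the values of a and b below are placeholders for illustration; the fitted values are stored in lrg_efficiency_params):

```python
import numpy as np

a, b = 26.0, 1.0  # placeholder cutoff and fudge factor
for rmag in (a - 2.0, a, a + 2.0):
    print(rmag, round(1.0 / (1.0 + np.exp((rmag - a) / b)), 3))  # ~0.881, 0.5, ~0.119
```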
End of explanation
# LRGs redshift uncertainties
######################
lrgs=(table["TEMPLATETYPE"]=="LRG")
z=table["Z"][lrgs]
tz=table["TRUEZ"][lrgs]
dz=z-tz
good=(table["ZWARN"][lrgs]==0)&(np.abs(dz/(1+z))<0.003)
rflux=table["FLUX_R"][lrgs]
print("Number of LRGs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qz=None
qdz=None
if qtable is not None : # quickcat output
qz=qtable["Z"][lrgs]
qdz=qz-qtable["TRUEZ"][lrgs]
qgood=(qtable["ZWARN"][lrgs]==0)&(np.abs(qdz/(1+qz))<0.003)
######################
bins=20
binmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)
binflux=10**(-0.4*(binmag-22.5))
var_err = np.sqrt(2/nn)*var
rms=np.sqrt(var)
rmserr=0.5*var_err/rms
params=[1.,1.2]
result=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))
params=result.x
print("LRG redshift error parameters = ",params)
quickcat_params["LRG"]["UNCERTAINTY"]=dict()
quickcat_params["LRG"]["UNCERTAINTY"]["SIGMA_17"]=float(params[0])
quickcat_params["LRG"]["UNCERTAINTY"]["POWER_LAW_INDEX"]=float(params[1])
model = redshift_error(params,binflux)
plt.errorbar(binmag,rms,rmserr,fmt="o",label="sim")
plt.plot(binmag,model,"-",label="model")
if qz is not None :
qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)
qvar_err = np.sqrt(2/nn)*qvar
qrms=np.sqrt(qvar)
qrmserr=0.5*qvar_err/qrms
plt.errorbar(qbinmag,qrms,qrmserr,fmt="x",label="quickcat")
plt.legend(loc="upper left",title="LRG")
plt.xlabel("rmag")
plt.ylabel("rms dz/(1+z)")
Explanation: LRG redshift uncertainty
Power law of broad band flux
End of explanation
nbad = np.sum((table["ZWARN"][lrgs]==0)&(np.abs(dz/(1+z))>0.003))
ntot = np.sum(table["ZWARN"][lrgs]==0)
frac = float(nbad/float(ntot))
print("LRG catastrophic failure rate={}/{}={:4.3f}".format(nbad,ntot,frac))
quickcat_params["LRG"]["FAILURE_RATE"]=frac
qnbad = np.sum((qtable["ZWARN"][lrgs]==0)&(np.abs(qdz/(1+qz))>0.003))
qntot = np.sum(qtable["ZWARN"][lrgs]==0)
qfrac = float(qnbad/float(qntot))
print("quickcat run LRG catastrophic failure rate={}/{}={:4.3f}".format(qnbad,qntot,qfrac))
# choice of redshift for splitting between "lower z / tracer" QSOs and Lya QSOs
zsplit = 2.0
Explanation: LRG catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
End of explanation
# simply use RFLUX for snr
######################
qsos=(table["TEMPLATETYPE"]=="QSO")&(table["TRUEZ"]<zsplit)
z=table["Z"][qsos]
tz=table["TRUEZ"][qsos]
dz=z-tz
good=(table["ZWARN"][qsos]==0)
rflux=table["FLUX_R"][qsos]
print("Number of QSOs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qgood=None
if qtable is not None : # quickcat output
qgood=(qtable["ZWARN"][qsos]==0)
######################
bins=30
bin_rmag,eff,err=efficiency(rmag,good,bins=bins)
plt.errorbar(bin_rmag,eff,err,fmt="o",label="sim")
qso_efficiency_params=[23.,0.3]
result=scipy.optimize.least_squares(sigmoid_residuals,qso_efficiency_params,args=(bin_rmag,eff,err))
qso_efficiency_params=result.x
plt.plot(bin_rmag,sigmoid(qso_efficiency_params,bin_rmag),"-",label="model")
if qgood is not None :
bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)
plt.errorbar(bin_rmag,eff,err,fmt="x",label="quickcat run")
plt.xlabel("rmag")
plt.ylabel("efficiency")
plt.legend(loc="lower left")
print("QSO redshift efficiency parameters = ",qso_efficiency_params)
quickcat_params["QSO_ZSPLIT"]=zsplit
quickcat_params["LOWZ_QSO"]=dict()
quickcat_params["LOWZ_QSO"]["EFFICIENCY"]=dict()
quickcat_params["LOWZ_QSO"]["EFFICIENCY"]["SIGMOID_CUTOFF"]=float(qso_efficiency_params[0])
quickcat_params["LOWZ_QSO"]["EFFICIENCY"]["SIGMOID_FUDGE"]=float(qso_efficiency_params[1])
meff=sigmoid(qso_efficiency_params,rmag)
plt.figure()
mcut=22.
s=(rmag>mcut) # select faint ones to increase contrast in z
bins=50
x,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)
x,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)
plt.errorbar(x,eff1d,err1d,fmt="o",label="input")
plt.plot(x,meff1d,"-",label="model")
if qgood is not None :
x,qeff1d,qerr1d = efficiency(tz[s],qgood[s],bins=bins)
plt.errorbar(x,qeff1d,qerr1d,fmt="x",label="quickcat")
plt.legend(loc="upper left",title="Faint tracer QSOs with rmag>{}".format(mcut))
plt.xlabel("redshift")
plt.ylabel("efficiency")
plt.ylim([0.5,1.2])
Explanation: QSO tracers (z<~2) redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp((rmag - a)/b)}$
End of explanation
# QSO redshift uncertainties
qsos=(table["TEMPLATETYPE"]=="QSO")&(table["TRUEZ"]<zsplit)
z=table["Z"][qsos]
dz=z-table["TRUEZ"][qsos]
good=(table["ZWARN"][qsos]==0)&(np.abs(dz/(1+z))<0.01)
rflux=table["FLUX_R"][qsos]
print("Number of QSOs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qgood=None
qz=None
qdz=None
if qtable is not None : # quickcat output
qz=qtable["Z"][qsos]
qdz=qz-qtable["TRUEZ"][qsos]
qgood=(qtable["ZWARN"][qsos]==0)&(np.abs(qdz/(1+qz))<0.01)
bins=20
binmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)
binflux=10**(-0.4*(binmag-22.5))
var_err = np.sqrt(2/nn)*var
rms=np.sqrt(var)
rmserr=0.5*var_err/rms
params=[1.,1.2]
result=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))
params=result.x
print("QSO redshift error parameters = ",params)
quickcat_params["LOWZ_QSO"]["UNCERTAINTY"]=dict()
quickcat_params["LOWZ_QSO"]["UNCERTAINTY"]["SIGMA_17"]=float(params[0])
quickcat_params["LOWZ_QSO"]["UNCERTAINTY"]["POWER_LAW_INDEX"]=float(params[1])
model = redshift_error(params,binflux)
plt.errorbar(binmag,rms,rmserr,fmt="o",label="sim")
plt.plot(binmag,model,"-",label="model")
if qz is not None :
qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)
qvar_err = np.sqrt(2/nn)*qvar
qrms=np.sqrt(qvar)
qrmserr=0.5*qvar_err/qrms
plt.errorbar(qbinmag,qrms,qrmserr,fmt="x",label="quickcat")
plt.legend(loc="upper left",title="Tracer QSO")
plt.xlabel("rmag")
plt.ylabel("rms dz/(1+z)")
Explanation: QSO (z<2) redshift uncertainty
Power law of broad band flux
End of explanation
nbad = np.sum((table["ZWARN"][qsos]==0)&(np.abs(dz/(1+z))>0.003))
ntot = np.sum(table["ZWARN"][qsos]==0)
frac = float(nbad/float(ntot))
print("Tracer QSO catastrophic failure rate={}/{}={:4.3f}".format(nbad,ntot,frac))
quickcat_params["LOWZ_QSO"]["FAILURE_RATE"]=frac
qnbad = np.sum((qtable["ZWARN"][qsos]==0)&(np.abs(qdz/(1+qz))>0.003))
qntot = np.sum(qtable["ZWARN"][qsos]==0)
qfrac = float(qnbad/float(qntot))
print("quickcat run tracer QSO catastrophic failure rate={}/{}={:4.3f}".format(qnbad,qntot,qfrac))
Explanation: Tracer QSO (z<~2) catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
End of explanation
# simply use RFLUX for snr
######################
qsos=(table["TEMPLATETYPE"]=="QSO")&(table["TRUEZ"]>zsplit)
z=table["Z"][qsos]
tz=table["TRUEZ"][qsos]
dz=z-tz
good=(table["ZWARN"][qsos]==0)
rflux=table["FLUX_R"][qsos]
print("Number of QSOs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qgood=None
if qtable is not None : # quickcat output
qgood=(qtable["ZWARN"][qsos]==0)
######################
bins=30
bin_rmag,eff,err=efficiency(rmag,good,bins=bins)
plt.errorbar(bin_rmag,eff,err,fmt="o",label="sim")
qso_efficiency_params=[23.,0.3]
result=scipy.optimize.least_squares(sigmoid_residuals,qso_efficiency_params,args=(bin_rmag,eff,err))
qso_efficiency_params=result.x
plt.plot(bin_rmag,sigmoid(qso_efficiency_params,bin_rmag),"-",label="model")
if qgood is not None :
bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)
plt.errorbar(bin_rmag,eff,err,fmt="x",label="quickcat run")
plt.xlabel("rmag")
plt.ylabel("efficiency")
plt.legend(loc="lower left")
print("QSO redshift efficiency parameters = ",qso_efficiency_params)
quickcat_params["LYA_QSO"]=dict()
quickcat_params["LYA_QSO"]["EFFICIENCY"]=dict()
quickcat_params["LYA_QSO"]["EFFICIENCY"]["SIGMOID_CUTOFF"]=float(qso_efficiency_params[0])
quickcat_params["LYA_QSO"]["EFFICIENCY"]["SIGMOID_FUDGE"]=float(qso_efficiency_params[1])
meff=sigmoid(qso_efficiency_params,rmag)
plt.figure()
mcut=22.5
s=(rmag>mcut) # select faint ones to increase contrast in z
bins=50
x,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)
x,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)
plt.errorbar(x,eff1d,err1d,fmt="o",label="sim")
plt.plot(x,meff1d,"-",label="model")
plt.legend(loc="upper left",title="Faint Lya QSOs with rmag>{}".format(mcut))
plt.xlabel("redshift")
plt.ylabel("efficiency")
plt.ylim([0.,1.4])
Explanation: Lya QSO (z>~2) redshift efficiency
Sigmoid function of the r-band magnitude
$Eff = \frac{1}{1+\exp((rmag - a)/b)}$
End of explanation
# QSO redshift uncertainties
qsos=(table["TEMPLATETYPE"]=="QSO")&(table["TRUEZ"]>zsplit)
z=table["Z"][qsos]
dz=z-table["TRUEZ"][qsos]
good=(table["ZWARN"][qsos]==0)&(np.abs(dz/(1+z))<0.01)
rflux=table["FLUX_R"][qsos]
print("Number of QSOs={}".format(rflux.size))
rflux=rflux*(rflux>0)+0.00001*(rflux<=0)
rmag=-2.5*np.log10(rflux)+22.5
qgood=None
qz=None
qdz=None
if qtable is not None : # quickcat output
qz=qtable["Z"][qsos]
qdz=qz-qtable["TRUEZ"][qsos]
qgood=(qtable["ZWARN"][qsos]==0)&(np.abs(qdz/(1+qz))<0.01)
bins=20
binmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)
binflux=10**(-0.4*(binmag-22.5))
var_err = np.sqrt(2/nn)*var
rms=np.sqrt(var)
rmserr=0.5*var_err/rms
params=[1.,1.2]
result=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))
params=result.x
print("LYA_QSO redshift error parameters = ",params)
quickcat_params["LYA_QSO"]["UNCERTAINTY"]=dict()
quickcat_params["LYA_QSO"]["UNCERTAINTY"]["SIGMA_17"]=float(params[0])
quickcat_params["LYA_QSO"]["UNCERTAINTY"]["POWER_LAW_INDEX"]=float(params[1])
model = redshift_error(params,binflux)
plt.errorbar(binmag,rms,rmserr,fmt="o",label="sim")
plt.plot(binmag,model,"-",label="model")
if qz is not None :
qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)
qvar_err = np.sqrt(2/nn)*qvar
qrms=np.sqrt(qvar)
qrmserr=0.5*qvar_err/qrms
plt.errorbar(qbinmag,qrms,qrmserr,fmt="x",label="quickcat")
plt.legend(loc="upper left",title="Lya QSO")
plt.xlabel("rmag")
plt.ylabel("rms dz/(1+z)")
Explanation: Lya QSO (z>2) redshift uncertainty
Power law of broad band flux
End of explanation
nbad = np.sum((table["ZWARN"][qsos]==0)&(np.abs(dz/(1+z))>0.003))
ntot = np.sum(table["ZWARN"][qsos]==0)
frac = float(nbad/float(ntot))
print("Lya QSO catastrophic failure rate={}".format(frac))
quickcat_params["LYA_QSO"]["FAILURE_RATE"]=frac
# write results to a yaml file
with open(quickcat_param_filename, 'w') as outfile:
yaml.dump(quickcat_params, outfile, default_flow_style=False)
Explanation: Lya QSO (z>~2) catastrophic failure rate
Fraction of targets with ZWARN=0 and $|\Delta z/(1+z)|>0.003$
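As a final sanity check, the exported file can be read back (a sketch; the top-level keys are the ones written by this notebook, while any further use of them is up to quickcat itself):

```python
import yaml

with open(quickcat_param_filename) as f:
    params = yaml.safe_load(f)
print(sorted(params.keys()))  # expect ELG, LOWZ_QSO, LRG, LYA_QSO, QSO_ZSPLIT
```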
End of explanation |
13,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 3 - More Pandas
All questions are weighted the same in this assignment. This assignment requires more individual learning than the last one did - you are encouraged to check out the pandas documentation to find functions or methods you might not have used yet, or ask questions on Stack Overflow and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff.
Question 1 (20%)
Load the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of energy.
Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are
Step1: Question 2 (6.6%)
The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?
This function should return a single number.
Step2: Question 3 (6.6%)
What are the top 15 countries for average GDP over the last 10 years?
This function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.
Step3: Question 4 (6.6%)
By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?
This function should return a single number.
Step4: Question 5 (6.6%)
What is the mean energy supply per capita?
This function should return a single number.
Step5: Question 6 (6.6%)
What country has the maximum % Renewable and what is the percentage?
This function should return a tuple with the name of the country and the percentage.
Step6: Question 7 (6.6%)
Create a new column that is the ratio of Self-Citations to Total Citations.
What is the maximum value for this new column, and what country has the highest ratio?
This function should return a tuple with the name of the country and the ratio.
Step7: Question 8 (6.6%)
Create a column that estimates the population using Energy Supply and Energy Supply per capita.
What is the third most populous country according to this estimate?
This function should return a single string value.
Step8: Question 9
Create a column that estimates the number of citable documents per person.
What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation).
This function should return a single number.
(Optional
Step9: Question 10 (6.6%)
Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.
This function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.
Step10: Question 11 (6.6%)
Use the following dictionary to group the Countries by Continent, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.
python
ContinentDict = {'China'
Step11: Question 12 (6.6%)
Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?
This function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.
Step12: Question 13 (6.6%)
Convert the Population Estimate series to a string with thousands separator (using commas). Use all significant digits (do not round the results).
e.g. 12345678.90 -> 12,345,678.90
This function should return a Series PopEst whose index is the country name and whose values are the population estimate string.
Step13: Optional
Use the built in function plot_optional() to see an example visualization. | Python Code:
import pandas as pd
def answer_one():
energy = pd.read_excel('Energy Indicators.xls', skiprows=18, skip_footer=38, header=None, na_values=['...'])
energy.drop([0,1], axis=1, inplace=True)
energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
energy['Energy Supply'] = energy['Energy Supply'] * 1000000
energy = energy.replace({'Country':{
"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"
}
})
return energy
answer_one()
Explanation: You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 3 - More Pandas
All questions are weighted the same in this assignment. This assignment requires more individual learning than the last one did - you are encouraged to check out the pandas documentation to find functions or methods you might not have used yet, or ask questions on Stack Overflow and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff.
Question 1 (20%)
Load the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of energy.
Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:
['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
Convert Energy Supply to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as np.NaN values.
Rename the following list of countries (for use in later questions):
"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"
There are also several countries with parentheses in their name. Be sure to remove these, e.g. 'Bolivia (Plurinational State of)' should be 'Bolivia'.
<br>
Next, load the GDP data from the file world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from World Bank. Call this DataFrame GDP.
Make sure to skip the header, and rename the following list of countries:
"Korea, Rep.": "South Korea",
"Iran, Islamic Rep.": "Iran",
"Hong Kong SAR, China": "Hong Kong"
<br>
Finally, load the Sciamgo Journal and Country Rank data for Energy Engineering and Power Technology from the file scimagojr-3.xlsx, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame ScimEn.
Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15).
The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',
'Citations per document', 'H index', 'Energy Supply',
'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008',
'2009', '2010', '2011', '2012', '2013', '2014', '2015'].
This function should return a DataFrame with 20 columns and 15 entries.
End of explanation
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
def answer_two():
return "ANSWER"
Explanation: Question 2 (6.6%)
The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?
This function should return a single number.
End of explanation
def answer_three():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 3 (6.6%)
What are the top 15 countries for average GDP over the last 10 years?
This function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.
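A possible sketch (hypothetical helper name; it assumes answer_one() returns the merged Top15 frame described in Question 1 with year columns '2006' to '2015', whereas the partial answer_one above returns only the cleaned energy table):

```python
def answer_three_sketch():
    Top15 = answer_one()
    years = [str(y) for y in range(2006, 2016)]
    avgGDP = Top15[years].mean(axis=1).sort_values(ascending=False)
    avgGDP.name = 'avgGDP'
    return avgGDP
```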
End of explanation
def answer_four():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 4 (6.6%)
By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?
This function should return a single number.
End of explanation
def answer_five():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 5 (6.6%)
What is the mean energy supply per capita?
This function should return a single number.
End of explanation
def answer_six():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 6 (6.6%)
What country has the maximum % Renewable and what is the percentage?
This function should return a tuple with the name of the country and the percentage.
End of explanation
def answer_seven():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 7 (6.6%)
Create a new column that is the ratio of Self-Citations to Total Citations.
What is the maximum value for this new column, and what country has the highest ratio?
This function should return a tuple with the name of the country and the ratio.
End of explanation
def answer_eight():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 8 (6.6%)
Create a column that estimates the population using Energy Supply and Energy Supply per capita.
What is the third most populous country according to this estimate?
This function should return a single string value.
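A possible sketch (hypothetical helper name; assumes the merged Top15 frame from Question 1):

```python
def answer_eight_sketch():
    Top15 = answer_one()
    pop_est = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
    return pop_est.sort_values(ascending=False).index[2]
```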
End of explanation
def answer_nine():
Top15 = answer_one()
return "ANSWER"
def plot9():
import matplotlib as plt
%matplotlib inline
Top15 = answer_one()
Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']
Top15.plot(x='Citable docs per Capita', y='Energy Supply per Capita', kind='scatter', xlim=[0, 0.0006])
#plot9() # Be sure to comment out plot9() before submitting the assignment!
Explanation: Question 9
Create a column that estimates the number of citable documents per person.
What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation).
This function should return a single number.
(Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)
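A possible sketch (hypothetical helper name; assumes the merged Top15 frame from Question 1):

```python
def answer_nine_sketch():
    Top15 = answer_one()
    Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
    Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']
    return Top15['Citable docs per Capita'].astype(float).corr(
        Top15['Energy Supply per Capita'].astype(float))
```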
End of explanation
def answer_ten():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 10 (6.6%)
Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.
This function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.
End of explanation
def answer_eleven():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 11 (6.6%)
Use the following dictionary to group the Countries by Continent, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.
python
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
This function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']
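A possible sketch (hypothetical helper name; assumes the merged Top15 frame from Question 1 and that the ContinentDict listed above is defined in scope):

```python
import numpy as np

def answer_eleven_sketch():
    Top15 = answer_one()
    Top15['Continent'] = pd.Series(ContinentDict)   # index must be the country names
    Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
    grouped = Top15.groupby('Continent')['PopEst']
    return grouped.agg([np.size, np.sum, np.mean, np.std])
```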
End of explanation
def answer_twelve():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 12 (6.6%)
Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?
This function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.
End of explanation
def answer_thirteen():
Top15 = answer_one()
return "ANSWER"
Explanation: Question 13 (6.6%)
Convert the Population Estimate series to a string with thousands separator (using commas). Use all significant digits (do not round the results).
e.g. 12345678.90 -> 12,345,678.90
This function should return a Series PopEst whose index is the country name and whose values are the population estimate string.
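A possible sketch (hypothetical helper name; assumes the merged Top15 frame from Question 1):

```python
def answer_thirteen_sketch():
    Top15 = answer_one()
    PopEst = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
    return PopEst.apply(lambda x: '{:,}'.format(x))
```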
End of explanation
def plot_optional():
import matplotlib as plt
%matplotlib inline
Top15 = answer_one()
ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter',
c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',
'#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'],
xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);
for i, txt in enumerate(Top15.index):
ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')
print("This is an example of a visualization that can be created to help understand the data. \
This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \
2014 GDP, and the color corresponds to the continent.")
#plot_optional() # Be sure to comment out plot_optional() before submitting the assignment!
Explanation: Optional
Use the built in function plot_optional() to see an example visualization.
End of explanation |
13,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Note especially the use of colons (
Step2: Notice the simplicity of the for loop
Step3: Note that the range starts at zero by default, and that by convention the top of the range is not included in the output.
Range objects can also have more complicated values
Step4: You might notice that the meaning of range arguments is very similar to the slicing syntax that we covered in Lists.
Note that the behavior of range() is one of the differences between Python 2 and Python 3
Step5: The argument of the while loop is evaluated as a boolean statement, and the loop is executed until the statement evaluates to False.
break and continue
Step6: Here is an example of a break statement used for a less trivial task.
This loop will fill a list with all Fibonacci numbers up to a certain value
Step7: Notice that we use a while True loop, which will loop forever unless we have a break statement!
Loops with an else Block
One rarely used pattern available in Python is the else statement as part of a for or while loop.
We discussed the else block earlier | Python Code:
x = -15
if x == 0:
print(x, "is zero")
elif x > 0:
print(x, "is positive")
elif x < 0:
print(x, "is negative")
else:
print(x, "is unlike anything I've ever seen...")
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub.
The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook.
<!--NAVIGATION-->
< Built-In Data Structures | Contents | Defining and Using Functions >
Control Flow
Control flow is where the rubber really meets the road in programming.
Without it, a program is simply a list of statements that are sequentially executed.
With control flow, you can execute certain code blocks conditionally and/or repeatedly: these basic building blocks can be combined to create surprisingly sophisticated programs!
Here we'll cover conditional statements (including "if", "elif", and "else"), loop statements (including "for" and "while" and the accompanying "break", "continue", and "pass").
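The pass statement simply does nothing and is typically used as a placeholder; since it is not demonstrated below, here is a minimal example:

```python
for i in range(3):
    pass  # placeholder: no action is taken on this iteration
```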
Conditional Statements: if-elif-else:
Conditional statements, often referred to as if-then statements, allow the programmer to execute certain pieces of code depending on some Boolean condition.
A basic example of a Python conditional statement is this:
End of explanation
for N in [2, 3, 5, 7]:
print(N, end=' ') # print all on same line
Explanation: Note especially the use of colons (:) and whitespace to denote separate blocks of code.
Python adopts the if and else often used in other languages; its more unique keyword is elif, a contraction of "else if".
In these conditional clauses, elif and else blocks are optional; additionally, you can optionally include as few or as many elif statements as you would like.
for loops
Loops in Python are a way to repeatedly execute some code statement.
So, for example, if we'd like to print each of the items in a list, we can use a for loop:
End of explanation
for i in range(10):
print(i, end=' ')
Explanation: Notice the simplicity of the for loop: we specify the variable we want to use, the sequence we want to loop over, and use the "in" operator to link them together in an intuitive and readable way.
More precisely, the object to the right of the "in" can be any Python iterator.
An iterator can be thought of as a generalized sequence, and we'll discuss them in Iterators.
For example, one of the most commonly-used iterators in Python is the range object, which generates a sequence of numbers:
End of explanation
# range from 5 to 10
list(range(5, 10))
# range from 0 to 10 by 2
list(range(0, 10, 2))
Explanation: Note that the range starts at zero by default, and that by convention the top of the range is not included in the output.
Range objects can also have more complicated values:
End of explanation
i = 0
while i < 10:
print(i, end=' ')
i += 1
Explanation: You might notice that the meaning of range arguments is very similar to the slicing syntax that we covered in Lists.
Note that the behavior of range() is one of the differences between Python 2 and Python 3: in Python 2, range() produces a list, while in Python 3, range() produces an iterable object.
while loops
The other type of loop in Python is a while loop, which iterates until some condition is met:
End of explanation
for n in range(20):
# if the remainder of n / 2 is 0, skip the rest of the loop
if n % 2 == 0:
continue
print(n, end=' ')
Explanation: The argument of the while loop is evaluated as a boolean statement, and the loop is executed until the statement evaluates to False.
break and continue: Fine-Tuning Your Loops
There are two useful statements that can be used within loops to fine-tune how they are executed:
The break statement breaks-out of the loop entirely
The continue statement skips the remainder of the current loop, and goes to the next iteration
These can be used in both for and while loops.
Here is an example of using continue to print a string of odd numbers.
In this case, the result could be accomplished just as well with an if-else statement, but sometimes the continue statement can be a more convenient way to express the idea you have in mind:
End of explanation
a, b = 0, 1
amax = 100
L = []
while True:
(a, b) = (b, a + b)
if a > amax:
break
L.append(a)
print(L)
Explanation: Here is an example of a break statement used for a less trivial task.
This loop will fill a list with all Fibonacci numbers up to a certain value:
End of explanation
L = []
nmax = 30
for n in range(2, nmax):
for factor in L:
if n % factor == 0:
break
else: # no break
L.append(n)
print(L)
Explanation: Notice that we use a while True loop, which will loop forever unless we have a break statement!
Loops with an else Block
One rarely used pattern available in Python is the else statement as part of a for or while loop.
We discussed the else block earlier: it executes if all the if and elif statements evaluate to False.
The loop-else is perhaps one of the more confusingly-named statements in Python; I prefer to think of it as a nobreak statement: that is, the else block is executed only if the loop ends naturally, without encountering a break statement.
As an example of where this might be useful, consider the following (non-optimized) implementation of the Sieve of Eratosthenes, a well-known algorithm for finding prime numbers:
End of explanation |
13,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cable Property Comparison - 6/28/2016
This program reads a set of neuron hoc files from a given directory and spits out a tabular comparison of each neuron's cable properties; these properties include the number of branch points, total cable length, mean path length, and overall tortuosity.
First, here are the required imports
Step1: Next, load up a set of neuron hoc files within a directory as a list of geo objects
Step2: Now that we have our list of geo objects ready to go, let's make a parallel list of names
Step3: Next up, let's make a parallel list containing the number of branch points present in each neuron
Step4: Now, let's make another parallel list containing the total cable length of each neuron
Step5: Next, let's make yet another parallel list containing the mean path length of each neuron
Step6: Now, let's make a parallel list containing the standard deviation in path length for each neuron
Step7: Next up, let's make another parallel list containing the mean tortuosity of each neuron
Step8: Now, let's make yet another parallel list containing the standard deviation in tortuosity for each neuron
Step9: Making the Table - 6/29/16
Hooray! We now have all of the numbers we set out to calculate! All that's left is to organize them in a table | Python Code:
# Required for system access (utilized below)
import sys
# Required for os access (utilized below)
import os
sys.path.append(os.path.join(os.path.dirname(os.getcwd()),
'dependencies'))
# Required to interpret hoc files
from neuron_readExportedGeometry import *
# Required for efficient calculation of mean and standard deviation
import numpy as np
# Required to display and save data in tabular/graphical form
import matplotlib.pyplot as plt
Explanation: Cable Property Comparison - 6/28/2016
This program reads a set of neuron hoc files from a given directory and spits out a tabular comparison of each neuron's cable properties; these properties include the number of branch points, total cable length, mean path length, and overall tortuosity.
First, here are the required imports:
End of explanation
# Specify a directory full of hoc files
directory = '/home/cosmo/marderlab/test/'
# Convert the given directory into a list of hoc files
hocs = [(directory + h) for h in os.listdir(directory)]
# Convert the list of hoc files into a list of geo objects
geos = [demoReadsilent(h) for h in hocs]
Explanation: Next, load up a set of neuron hoc files within a directory as a list of geo objects:
End of explanation
# Loop through the filenames in directory and split off irrelevant
# information
names = [h.split('_s')[0].split('_f')[0].split('.h')[0].split('_r')[0] for h in os.listdir(directory)]
names
Explanation: Now that we have our list of geo objects ready to go, let's make a parallel list of names:
End of explanation
# Count the number of branches, which is equivalent to the number of
# branch points
branchpoints = [len(g.branches) for g in geos]
branchpoints
Explanation: Next up, let's make a parallel list containing the number of branch points present in each neuron:
End of explanation
# More detailed documentation for this code can be found in cable-length-calculator.ipynb
cablelengths = [] # Initialize a list of cablelengths
for geo in geos:
tips, ends = geo.getTips() # Store all the tip segments in a list, "tips"
# Also store the associated ends in "ends"
find = PathDistanceFinder(geo, geo.soma) # Set up a PDF object for the
# given geo object, anchored at
# the soma
paths = [find.pathTo(seg) for seg in tips] # List of all paths
counted = [] # Initialize a list for keeping track of which segments have
# already been measured
cablelength = 0 # Initialize a running total of cable length
for path in paths: # Sort through each path
pruned = [seg for seg in path if seg not in counted] # Limit the paths
# we work with to
# those which have
# not already been
# measured
forfind = PathDistanceFinder(geo, pruned[0]) # Initialize a PDF
# anchored at the earliest
# unmeasured segment
cablelength += forfind.distanceTo(pruned[-1]) # Add the distance
# between the anchor and
# the tip segment to the
# running total
for seg in pruned: # Add all of the measured segments to "counted"
counted.append(seg)
cablelengths.append(cablelength)
cablelengths
Explanation: Now, let's make another parallel list containing the total cable length of each neuron:
End of explanation
# Loops through geos and calculates the mean path length for each neuron
plmeans = [np.mean(g.getProperties()[0]['Path Length']) for g in geos]
plmeans
Explanation: Next, let's make yet another parallel list containing the mean path length of each neuron:
End of explanation
# Loops through geos and calculates the standard deviation in path
# length for each neuron
plstds = [np.std(g.getProperties()[0]['Path Length']) for g in geos]
plstds
Explanation: Now, let's make a parallel list containing the standard deviation in path length for each neuron:
End of explanation
# Loops through geos and calculates the mean tortuosity for each neuron
tortmeans = [np.mean(g.getProperties()[0]['Tortuosity']) for g in geos]
tortmeans
Explanation: Next up, let's make another parallel list containing the mean tortuosity of each neuron:
End of explanation
# Loops through geos and calculates the standard deviation in tortuosity
# for each neuron
tortstds = [np.std(g.getProperties()[0]['Tortuosity']) for g in geos]
tortstds
Explanation: Now, let's make yet another parallel list containing the standard deviation in tortuosity for each neuron:
End of explanation
filename = 'Test.txt'
f = open(filename, 'w')
props = ['Neuron ID', 'Branches', 'Total Cable Length (um)',
'Mean Path Length (um)', 'Path Length Standard Deviation',
'Mean Tortuosity', 'Tortuosity Standard Deviation']
tablebar = \
'+---------------------------------------------------------------------------------+'
f.write('\n' + tablebar + '\n')
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(props[0], props[1],
props[2]) + '\n')
f.write(tablebar + '\n')
for n in range(len(geos)):
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(names[n],branchpoints[n],
cablelengths[n]) + '\n')
f.write(tablebar + '\n')
f.write('\n' + tablebar + '\n')
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(props[0], props[3],
props[4]) + '\n')
f.write(tablebar + '\n')
for n in range(len(geos)):
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(names[n], plmeans[n],
plstds[n]) + '\n')
f.write(tablebar + '\n')
f.write('\n' + tablebar + '\n')
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(props[0], props[5],
props[6]) + '\n')
f.write(tablebar + '\n')
for n in range(len(geos)):
f.write('|{0:<26}|{1:<22}|{2:<31}|'.format(names[n], tortmeans[n],
tortstds[n]) + '\n')
f.write(tablebar + '\n')
f.close()
with(open(os.path.join(os.getcwd(), filename))) as fr:
print(fr.read())
Explanation: Making the Table - 6/29/16
Hooray! We now have all of the numbers we set out to calculate! All that's left is to organize them in a table:
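As an aside, the same comparison could be produced with much less formatting code via a pandas DataFrame (a sketch; pandas is not otherwise used in this notebook):

```python
import pandas as pd

df = pd.DataFrame({'Branches': branchpoints,
                   'Total Cable Length (um)': cablelengths,
                   'Mean Path Length (um)': plmeans,
                   'Path Length SD': plstds,
                   'Mean Tortuosity': tortmeans,
                   'Tortuosity SD': tortstds},
                  index=names)
print(df.to_string())
```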
End of explanation |
13,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning a sensorimotor model with a sensorimotor context
In this notebook, we will see how to use the Explauto library to allow the learning and control of local actions that depend on a sensory and motor context. We suppose that the reader is familiar with the main components of the Explauto library explained in another notebook (full tutorial)
Step1: Now we use the class 'ContextEnvironment' to convert an Explauto environment that takes as input a motor position and outputs a sensory position to an environment that takes a motor command $\Delta m$ or $(m, \Delta m)$ and outputs a sensory position $(s, \Delta s)$.
To instantiate such an environment, one must provide the class and configuration of the underlying environment, and define a simple config for local actions called 'context_mode'.
The 'choose_m' parameter defines if the robot is allowed to choose $m$ and $\Delta m$ at each iteration instead of only $\Delta m$.
A rest position also has to be specified, and the bounds for the delta motor actions and delta sensory goals.
Step2: Here we sample and execute a few $\Delta m$ actions.
Step3: II. Sensorimotor model
In this Section we show how to store the motor and sensory signals into the database and how to predict the result of an action. The inference of a motor action given a sensory goal is explained later in Sections IV and V.
The adapted sensorimotor models are 'NN', 'LWLR-BFGS', and 'LWLR-CMAES', although CMAES' exploration sigma and bounds might need to be adapted.
The database contains tuples of $(M, \Delta M, S, \Delta S)$ so we create the sensorimotor model with the dimensions and bounds of the environment.
Step4: In the following we randomly draw delta actions and update the sensorimotor model and the environment.
Step5: Now we can query the sensorimotor model given $(m, \Delta m)$ and the context on given dimensions.
Let's say we want the hand position (both its x and y dimensions) to be considered as the context (but in more complex setups it could be the position of some objects in the environment).
In the plot, the black dot and red x are the predicted $s$ and $s'$ given the motor position $m$ and delta $\Delta m$, in the context c.
The corresponding reached arm positions are also represented.
Step6: III. Goal babbling using interest models
In this section, we create an interest model that can sample given a context $s$ and output an interesting delta goal $\Delta s$ only on the dimensions that are not in the context.
This feature is implemented with the Random and Discretized interest models.
Step7: Sampling with context
Step8: IV. Learning choosing m
In this section, we consider that the agent can choose the motor position $m$ at each iteration (parameter 'choose_m'=True).
Here we run the whole procedure without resetting the arm to its rest position during the experiment, first using motor babbling and then goal babbling.
We also describe how to automatically create the environment, sensorimotor model, interest model and the learning procedure.
Step9: Motor Babbling
Step10: Goal Babbling
Step11: Here we test the learned sensorimotor model on a given goal ds_goal in sensory context s_goal. The agent chooses also the starting $m$ position.
In the plot, the black dot and red x are the goal $s$ and $s + \Delta s$.
The corresponding reached arm positions are represented.
Step12: Using 'Experiment'
Step13: V. Learning without choosing m
In this section, we consider that the agent can't choose the motor position $m$ at each iteration (parameter 'choose_m'=False). In that case, the environment can be reset to the rest position every N iterations if the parameter 'reset_iterations' is provided in 'context_mode'.
Here we run the whole procedure first using motor babbling and then goal babbling.
We also describe how to automatically create the environment, sensorimotor model, interest model and the learning procedure.
Step14: Motor Babbling
Step15: Goal Babbling
Step16: Here we test the learned sensorimotor model on a given goal ds_goal in sensory context s_current.
In the plot, the black dot and red x are the current $s$ and goal $s + \Delta s$.
The corresponding reached arm positions are represented.
Step17: Using 'Experiment' | Python Code:
from explauto.environment.simple_arm import SimpleArmEnvironment
from explauto.environment import environments
env_cls = SimpleArmEnvironment
env_conf = environments['simple_arm'][1]['low_dimensional']
Explanation: Learning a sensorimotor model with a sensorimotor context
In this notebook, we will see how to use the Explauto library to allow the learning and control of local actions that depend on a sensory and motor context. We suppose that the reader is familiar with the main components of the Explauto library explained in another notebook (full tutorial): the environment, the sensorimotor model and the interest model.
Another tutorial describes how to define actions (not local) that only depend on a context provided by the environment.
Let's suppose we are in a motor state $m$ and a sensory state $s$.
If the goal is to reach a sensory state $s'$ from $s$,
a local action $\Delta m$ has to be found, and $m + \Delta m$ will be evaluated in the environment.
The result of the command in the environment is defined as $(s, \Delta s)$, with $\Delta s = s' - s$.
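As a toy numerical illustration of this convention (values chosen arbitrarily, not tied to the arm used below):

```python
import numpy as np

m = np.array([0.1, -0.2, 0.3])        # current motor state
dm = np.array([0.05, 0.0, -0.05])     # local motor action
s = np.array([0.6, 0.4])              # current sensory state
s_prime = np.array([0.65, 0.35])      # desired sensory state
ds = s_prime - s                      # the goal expressed as a local sensory delta
m_new = m + dm                        # command actually evaluated in the environment
```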
Section I will show how to simply define an environment suited to the control of local actions from a usual Explauto environment.
In Section II we explain how the sensorimotor model is thus adapted to store and learn with tuples of $(m, \Delta m, s, \Delta s)$ instances.
We will explain different possibilities to query the sensorimotor model.
To predict the result of an action in the environment, we can use a forward prediction of $\Delta s$ given $(m, \Delta m, s)$.
To infer the right motor command that should best reach a sensory state $s'$, we can query $\Delta m$ from the sensorimotor model given $(m, s, \Delta s = s' - s)$. This use case is explained in Section V.
Another use case is to query $(m, \Delta m)$ from $(s, \Delta s)$.
This can be used when the robot is allowed to choose the starting and end position of a movement $m \rightarrow m + \Delta m$ at each iteration of the learning algorithm. This use case is explained in Section IV.
An interest model is also used to estimate how a given action is useful for learning, and to sample the best ones.
In our case, if we are in a state $(m, s)$, the local action to be sampled is defined as $\Delta s$, and depends on $s$. In Section III we show how the interest models are adapted to this purpose.
In Sections IV and V, we explain how to automatically create the environment, sensorimotor model, interest model and the learning procedure adapted to local actions with the class 'Experiment'.
I. Environment
In this section we define an environment suited to the control of local actions from a usual Explauto environment.
We will use the available SimpleArm environment (this one is by default perturbed by a small random noise).
End of explanation
from explauto.environment.context_environment import ContextEnvironment
context_mode = dict(mode='mdmsds',
choose_m=False,
rest_position=[0]*3,
dm_bounds=[[-0.2, -0.2, -0.2],
[0.2, 0.2, 0.2]],
ds_bounds=[[-0.2, -0.2],
[0.2, 0.2]])
environment = ContextEnvironment(env_cls, env_conf, context_mode)
Explanation: Now we use the class 'ContextEnvironment' to convert an Explauto environment that takes as input a motor position and outputs a sensory position to an environment that takes a motor command $\Delta m$ or $(m, \Delta m)$ and outputs a sensory position $(s, \Delta s)$.
To instantiate such an environment, one must provide the class and configuration of the underlying environment, and define a simple config for local actions called 'context_mode'.
The 'choose_m' parameter defines if the robot is allowed to choose $m$ and $\Delta m$ at each iteration instead of only $\Delta m$.
A rest position also has to be specified, and the bounds for the delta motor actions and delta sensory goals.
End of explanation
# Create the axes for plotting:
%pylab inline
ax = axes()
for dm in environment.random_dm(n=10):
m = environment.current_motor_position
mdm = np.hstack((m, dm))
environment.update(mdm, reset=False)
environment.plot(ax)
Explanation: Here we sample and execute a few $\Delta m$ actions.
End of explanation
from explauto import SensorimotorModel
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
Explanation: II. Sensorimotor model
In this Section we show how to store the motor and sensory signals into the database and how to predict the result of an action. The inference of a motor action given a sensory goal is explained later in Sections IV and V.
The adapted sensorimotor models are 'NN', 'LWLR-BFGS', and 'LWLR-CMAES', although CMAES' exploration sigma and bounds might need to be adapted.
The database contains tuples of $(M, \Delta M, S, \Delta S)$ so we create the sensorimotor model with the dimensions and bounds of the environment.
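For instance, one of the other adapted models listed above can be instantiated the same way (a sketch; 'LWLR-BFGS' is assumed to be the corresponding configuration key):

```python
sm_model_lwlr = SensorimotorModel.from_configuration(environment.conf, 'LWLR-BFGS', 'default')
```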
End of explanation
# Create the axes for plotting:
%pylab inline
ax = axes()
for dm in environment.random_dm(n=1000):
m = environment.current_motor_position
mdm = np.hstack((m, dm))
sds = environment.update(mdm, reset=False)
sm_model.update(mdm, sds)
environment.plot(ax, alpha=0.3)
print "Size of database:", sm_model.size()
Explanation: In the following we randomly draw delta actions and update the sensorimotor model and the environment.
End of explanation
# Predict with sensori context
m = environment.current_motor_position
s = environment.current_sensori_position
dm = [0.1]*3
context = s # context
c_dims = [0, 1] # hand dimensions
sds = sm_model.predict_given_context(np.hstack((m, dm)), context, c_dims)
s = sds[0:2]
ds = sds[2:4]
print "Predicted s=", s, "predicted ds=", ds
ax = axes()
environment.plot(ax)
environment.update(np.hstack((m, dm)), reset=False)
environment.plot(ax, color='red')
ax.plot(*s, marker='o', color='k')
ax.plot(*list(np.array(s)+np.array(ds)), marker='x', color='red')
Explanation: Now we can query the sensorimotor model given $(m, \Delta m)$ and the context on given dimensions.
Let's say we want the hand position to be considered as the context (but in more complex setups it could be the position of some objects in the environment).
In the plot, the black dot and red x are the predicted $s$ and $s + \Delta s$ given the motor position $m$, the delta $\Delta m$ and the context $c$.
The corresponding reached arm positions are also represented.
End of explanation
# Random interest model
from explauto.interest_model.random import RandomInterest
im_model = RandomInterest(environment.conf, environment.conf.s_dims)
# Discretized interest model
from explauto.interest_model.discrete_progress import DiscretizedProgress, competence_dist
im_model = DiscretizedProgress(environment.conf, environment.conf.s_dims, **{'x_card': 1000,
'win_size': 10,
'measure': competence_dist})
Explanation: III. Goal babbling using interest models
In this section, we create an interest model that, given a context $s$, samples an interesting delta goal $\Delta s$ on the dimensions that are not part of the context.
This feature is implemented with the Random and Discretized interest models.
End of explanation
c = [0.7, 0.6] # context
c_dims = [0, 1] # hand position's dimensions
ds = im_model.sample_given_context(c, c_dims)
#print im_model.discrete_progress.progress()
print "Sampling interesting goal with hand position=", c, ": ds=", ds
Explanation: Sampling with context:
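To get a feel for the goals the interest model proposes, one can draw many deltas for the same context and look at their spread (a small sketch that only reuses the sample_given_context call shown in this section; array/scatter come from the pylab namespace already loaded):
```
# Sketch: sample 200 delta goals for a fixed hand-position context and plot their spread
c = [0.7, 0.6]
c_dims = [0, 1]
samples = array([im_model.sample_given_context(c, c_dims) for _ in range(200)])
scatter(samples[:, 0], samples[:, 1], alpha=0.3)
xlabel('ds[0]'); ylabel('ds[1]')
```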
End of explanation
context_mode = dict(mode='mdmsds',
choose_m=True,
rest_position=[0]*3,
dm_bounds=[[-0.2, -0.2, -0.2],
[0.2, 0.2, 0.2]],
ds_bounds=[[-0.2, -0.2],
[0.2, 0.2]])
environment = ContextEnvironment(env_cls, env_conf, context_mode)
Explanation: IV. Learning choosing m
In this section, we consider that the agent can choose the motor position $m$ at each iteration (parameter 'choose_m'=True).
Here we run the whole procedure without resetting the arm to its rest position during the experiment, first using motor babbling and then using goal babbling.
We also describe how to automatically create the environment, sensorimotor model, interest model and the learning procedure.
End of explanation
# Random Motor Babbling
ax = axes()
environment.reset()
motor_configurations = environment.random_dm(n=500)
# Plot the 500 sampled motor configurations:
for dm in motor_configurations:
m = environment.current_motor_position
environment.update(np.hstack((m, dm)), reset=False)
environment.plot(ax)
Explanation: Motor Babbling
End of explanation
# Random Goal Babbling
im_model = RandomInterest(environment.conf, environment.conf.s_dims)
# Reset environment
environment.reset()
# Reset sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
c_dims = [0, 1] # hand position's dimensions
# Add one point to bootstrap the sensorimotor model
sm_model.update([0.]*6, np.hstack((environment.current_sensori_position, [0., 0.])))
ax = axes()
for _ in range(500):
# Get current context
s = environment.current_sensori_position
# sample a random sensory goal using the interest model:
ds_g = im_model.sample_given_context(s, c_dims)
# infer a motor command to reach that goal using the sensorimotor model:
mdm = sm_model.inverse_prediction(np.hstack((s, ds_g)))
# execute this command and observe the corresponding sensory effect:
sds = environment.update(mdm, reset=False)
# update the sensorimotor model:
sm_model.update(mdm, sds)
# update interest model
im_model.update(hstack((mdm, s, ds_g)), hstack((mdm, sds)))
# plot arm
environment.plot(ax, alpha=0.3)
Explanation: Goal Babbling
End of explanation
# Inverse without context: (M, dM) <- i(S, dS)
sm_model.mode = "exploit" # no exploration noise
print sm_model.size()
s_goal = [0.8, 0.5]
ds_goal = [-0.1, 0.1]
mdm = sm_model.inverse_prediction(s_goal + ds_goal)
m = mdm[0:3]
dm = mdm[3:6]
print "Inverse without context: m =", m, "dm =", dm
ax = axes()
environment.update(np.hstack((m, [0]*3)))
environment.plot(ax)
environment.update(np.hstack((m, dm)), reset=False)
environment.plot(ax, color='red')
ax.plot(*s_goal, marker='o', color='k')
ax.plot(*list(np.array(s_goal)+np.array(ds_goal)), marker='x', color='red')
Explanation: Here we test the learned sensorimotor model on a given goal ds_goal in sensory context s_goal. The agent also chooses the starting $m$ position.
In the plot, the black dot and red x are the goal $s$ and $s + \Delta s$.
The corresponding reached arm positions are represented.
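To quantify the result (a small sketch reusing only objects defined above), the reached hand position can be compared with the goal:
```
# Sketch: distance between the reached hand position and the goal s_goal + ds_goal
reached = array(environment.current_sensori_position)
target = array(s_goal) + array(ds_goal)
print "Reaching error:", sqrt(sum((reached - target) ** 2))
```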
End of explanation
import numpy as np
from explauto import Agent
from explauto import Experiment
from explauto.utils import rand_bounds
from explauto.experiment import make_settings
%pylab inline
context_mode = dict(mode='mdmsds',
choose_m=True,
rest_position=[0]*3,
dm_bounds=[[-0.2, -0.2, -0.2],
[0.2, 0.2, 0.2]],
ds_bounds=[[-0.2, -0.2],
[0.2, 0.2]])
goal_babbling = make_settings(environment='simple_arm', environment_config = 'low_dimensional',
babbling_mode='goal',
interest_model='discretized_progress',
sensorimotor_model='nearest_neighbor',
context_mode=context_mode)
expe = Experiment.from_settings(goal_babbling)
expe.evaluate_at([50, 100, 150, 200, 500],
rand_bounds(np.vstack(([0.8, -0.1, -0.1, -0.2], [1., 0.1, 0.1, 0.2])), n=50))
expe.run()
ax = axes()
expe.log.plot_learning_curve(ax)
Explanation: Using 'Experiment'
End of explanation
from explauto.environment.context_environment import ContextEnvironment
from explauto.environment.simple_arm import SimpleArmEnvironment
from explauto.environment import environments
env_cls = SimpleArmEnvironment
env_conf = environments['simple_arm'][1]['low_dimensional']
context_mode = dict(mode='mdmsds',
choose_m=False,
rest_position=[0]*3,
reset_iterations=20,
dm_bounds=[[-0.2, -0.2, -0.2],
[0.2, 0.2, 0.2]],
ds_bounds=[[-0.2, -0.2],
[0.2, 0.2]])
environment = ContextEnvironment(env_cls, env_conf, context_mode)
Explanation: V. Learning without choosing m
In this section, we consider that the agent can't choose the motor position $m$ at each iteration (parameter 'choose_m'=False). In that case, the environment can be reset to the rest position every N iterations if the parameter 'reset_iterations' is provided in 'context_mode'.
Here we run the whole procedure, first using motor babbling and then using goal babbling.
We also describe how to automatically create the environment, sensorimotor model, interest model and the learning procedure.
End of explanation
# Random Motor Babbling
ax = axes()
environment.reset()
motor_configurations = environment.random_dm(n=500)
for dm in motor_configurations:
m = list(environment.current_motor_position)
environment.update(np.hstack((m, dm)), reset=False)
environment.plot(ax, alpha=0.3)
Explanation: Motor Babbling
End of explanation
ax = axes()
# Random Goal Babbling
im_model = RandomInterest(environment.conf, environment.conf.s_dims)
# Reset environment
environment.reset()
# Reset sensorimotor model
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
# Add points to bootstrap the sensorimotor model
for i in range(10):
sm_model.update([0.]*6, np.hstack((environment.current_sensori_position, [0., 0.])))
in_dims = range(3) + range(6,10)
out_dims = range(3, 6)
for i in range(500):
if np.mod(i, context_mode['reset_iterations']) == 0:
environment.reset()
m = list(environment.current_motor_position)
s = list(environment.current_sensori_position)
ds_g = list(im_model.sample_given_context(s, range(environment.conf.s_ndims/2)))
#print "ds_g", ds_g
dm = sm_model.infer(in_dims,
out_dims,
m + s + ds_g)
mdm = np.hstack((m, dm))
#print "mdm", mdm
sds = environment.update(mdm, reset=False)
# update the sensorimotor model:
sm_model.update(mdm, sds)
# update interest model
im_model.update(np.hstack((mdm, s, ds_g)), np.hstack((mdm, sds)))
# plot arm
environment.plot(ax, alpha=0.3)
#print "m", m, "s", s, "ds_g", ds_g, "dm", dm
Explanation: Goal Babbling
End of explanation
# Inverse with sensorimotor context: dM <- i(M, S, dS)
ax = axes()
dm = [0.1]*3
environment.update(np.hstack(([0]*3, dm)))
environment.plot(ax)
in_dims = range(3) + range(6,10)
out_dims = range(3, 6)
ds_goal = [-0.05, 0.1]
m_current = list(environment.current_motor_position)
s_current = list(environment.current_sensori_position)
print "current m = ", m_current
print "current s = ", s_current
dm = sm_model.infer(in_dims,
out_dims,
m_current + s_current + ds_goal)
print "Inverse with context: dm =", dm
sds = environment.update(np.hstack((m_current, dm)), reset=False)
environment.plot(ax, color='red')
ax.plot(*s_current, marker='o', color='k')
ax.plot(*list(np.array(s_current) + np.array(ds_goal)), marker='x', color='red')
print "Goal ds=", ds_goal, "Reached ds=", environment.current_sensori_position - s_current
Explanation: Here we test the learned sensorimotor model on a given goal ds_goal in sensory context s_current.
In the plot, the black dot and red x are the current $s$ and goal $s + \Delta s$.
The corresponding reached arm positions are represented.
End of explanation
import numpy as np
from explauto import Agent
from explauto import Experiment
from explauto.utils import rand_bounds
from explauto.experiment import make_settings
%pylab inline
n_dims = 3
context_mode = dict(mode='mdmsds',
choose_m=False,
reset_iterations=20,
rest_position=[0]*n_dims,
dm_bounds=[[-0.2]*n_dims,
[0.2]*n_dims],
ds_bounds=[[-0.2, -0.2],
[0.2, 0.2]])
goal_babbling = make_settings(environment='simple_arm', environment_config = 'low_dimensional',
babbling_mode='goal',
interest_model='random',
sensorimotor_model='nearest_neighbor',
context_mode=context_mode)
expe = Experiment.from_settings(goal_babbling)
expe.evaluate_at([10, 100, 200, 300, 400, 500],
rand_bounds(np.vstack(([1., 0., -0.1, -0.1], [1., 0., 0., 0.1])), n=200))
expe.run()
ax = axes()
expe.log.plot_learning_curve(ax)
Explanation: Using 'Experiment'
End of explanation |
13,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Processing
We need to make a column for average stats for each 'mon
We need to label each 'mon by its generation
(We should figure out a way to ignore non-stat-changed formes, e.g. Arceus, as they may be upsetting the Gen IV data)
Step1: Some Stats
Step2: Machine Learning and Clustering
Step3: PCA
Step4: K-Means Clustering | Python Code:
mons["AVERAGE_STAT"] = mons["STAT_TOTAL"]/6
gens = pd.Series([0 for i in range(len(mons.index))], index=mons.index)
for ID, mon in mons.iterrows():
if 0<mon.DEXID<=151:
gens[ID] = 1
elif 151<mon.DEXID<=251:
gens[ID] = 2
elif 251<mon.DEXID<=386:
gens[ID] = 3
elif 386<mon.DEXID<=493:
gens[ID] = 4
elif 493<mon.DEXID<=649:
gens[ID] = 5
elif 649<mon.DEXID<=721:
gens[ID] = 6
elif 721<mon.DEXID<=805:
gens[ID] = 7
else:
gens[ID] = 0
mons["GEN"] = gens
mons.to_csv("./data/pokemon_preUSUM_data.csv")
gen = {}
for i in range(1,8):
gen[i] = mons[mons.GEN == i]
plt.figure(100)
colors = sns.color_palette("colorblind", 7)
for i in range(1,8):
sns.distplot( mons[mons["GEN"] == i]["STAT_TOTAL"], hist=False,kde=True, color=colors[i-1], label=f"Gen {i}")
plt.legend()
plt.show()
Explanation: Data Processing
We need to make a column for average stats for each 'mon
We need to label each 'mon by its generation
(We should figure out a way to ignore non-stat-changed formes, e.g. Arceus, as they may be upsetting the Gen IV data)
End of explanation
stat_averages_by_gen = {i:gen[i].AVERAGE_STAT for i in range(1,8)}
testable_data = list(stat_averages_by_gen.values())
data = [list(gen) for gen in testable_data]
data = np.array(data)
averages = {i: stat_averages_by_gen[i].mean() for i in range(1,8)}
averages
stats.kruskal(*data)
recarray = mons.to_records()
test = comp.pairwise_tukeyhsd(recarray["AVERAGE_STAT"], recarray["GEN"])
test.summary()
Explanation: Some Stats
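For a quick per-generation overview (a sketch that only reuses the mons DataFrame built above), pandas can summarize the same average stat directly:
```
# Sketch: per-generation summary of AVERAGE_STAT
mons.groupby("GEN")["AVERAGE_STAT"].describe()
```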
End of explanation
np.random.seed(525_600)
stats_gens = mons[['HP', 'ATTACK', 'DEFENSE',
'SPECIAL_ATTACK', 'SPECIAL_DEFENSE', 'SPEED', 'GEN']]
X = np.c_[stats_gens]
Explanation: Machine Learning and Clustering
End of explanation
pca = decomposition.PCA()
pca.fit(X)
pca.explained_variance_
pca.n_components = 3
X_reduced = pca.fit_transform(X)
X_reduced.shape
pca.get_params()
Explanation: PCA
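To justify keeping three components (a small sketch using the fitted pca object above), it helps to look at the fraction of variance each component explains:
```
# Sketch: variance explained per principal component and cumulatively
print(pca.explained_variance_ratio_)
print("Cumulative:", np.cumsum(pca.explained_variance_ratio_))
```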
End of explanation
from sklearn import cluster
k_means = cluster.KMeans(n_clusters = 6)
k_means.fit(X)
mons["KMEANS_LABEL"] = pd.Series(k_means.labels_)
plotData = mons[["GEN", "STAT_TOTAL", "KMEANS_LABEL"]]
colors = sns.color_palette("colorblind", 7)
for i in range(1,8):
sns.distplot( plotData[plotData["GEN"] == i]["STAT_TOTAL"], color=colors[i-1])
plt.figure(925)
sns.boxplot(x="KMEANS_LABEL", y="STAT_TOTAL", data=plotData)
plt.show()
plt.figure(9050624)
sns.pairplot(plotData, kind="scatter", hue="GEN", palette=colors)
plt.show()
plotData.to_csv("./data/kmeans.csv")
Explanation: K-Means Clustering
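The choice of six clusters is somewhat arbitrary; a rough elbow check (a sketch reusing X and the cluster import above) is one way to sanity-check it:
```
# Sketch: inertia vs. number of clusters ("elbow" check)
inertias = []
for k in range(2, 11):
    inertias.append(cluster.KMeans(n_clusters=k).fit(X).inertia_)
plt.plot(range(2, 11), inertias, marker="o")
plt.xlabel("n_clusters")
plt.ylabel("inertia")
plt.show()
```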
End of explanation |
13,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build an RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
import time
import pylab as pl
from IPython import display
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
text_sorted = sorted(list(set(text)))
return (dict((word, i) for i, word in enumerate(text_sorted)), \
dict((i, word) for i, word in enumerate(text_sorted)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
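A quick round-trip check (a sketch, separate from the provided unit test) confirms the two dictionaries are consistent:
```
# Sketch: ids and words should map back to each other
demo_vocab_to_int, demo_int_to_vocab = create_lookup_tables(['the', 'cat', 'sat', 'the'])
assert all(demo_int_to_vocab[demo_vocab_to_int[word]] == word for word in ['the', 'cat', 'sat'])
```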
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
"." : "||Period||",
"," : "||Comma||",
"\"" : "||QuotationMark||",
";" : "||Semicolon||",
"!" : "||ExclamationMark||",
"?" : "||QuestionMark||",
"(" : "||LeftParentheses||",
")" : "||RightParentheses||",
"--" : "||Dash||",
"\n" : "||Return||"
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build an RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input, targets, learning_rate = tf.placeholder(tf.int32, [None, None], name='input'),\
tf.placeholder(tf.int32, [None, None], name='targets'),\
tf.placeholder(tf.float32, None, name='learning_rate')
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=0.75)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell])
initial_state = tf.identity(rnn_cell.zero_state(batch_size, tf.float32), name='initial_state')
return (rnn_cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
return tf.nn.embedding_lookup(embedding, input_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
embedding_size = 300
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
embedding = get_embed(input_data, vocab_size, embedding_size)
outputs, final_state = build_rnn(cell, embedding)
fully_connected = tf.contrib.layers.fully_connected(outputs, vocab_size,\
activation_fn=None,\
weights_initializer = tf.truncated_normal_initializer(mean=0, stddev = 0.01),\
biases_initializer = tf.truncated_normal_initializer(mean=0, stddev = 0.01))
return (fully_connected, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = int(len(int_text) / (batch_size * seq_length))
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
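A quick way to sanity-check an implementation (a sketch, in addition to the provided unit test) is to verify the overall shape on that same example:
```
print(get_batches(list(range(1, 16)), 2, 3).shape)   # expected: (2, 2, 2, 3)
```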
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 200
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
def plot_loss(epoch, loss):
pl.ylim(min(loss), 1.0)
pl.xlim(min(epoch), max(epoch))
pl.plot(epoch, loss, label = 'Training Loss', color = 'blue')
pl.legend(loc='upper right')
pl.xlabel('Time')
pl.ylabel('Loss')
display.clear_output(wait=True)
display.display(pl.gcf())
pl.gcf().clear()
time.sleep(0.1)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
times, loss = [],[]
counter = 0
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
loss.append(train_loss)
counter += 1
times = range(counter)
plot_loss(times, loss)
print('Epoch: {:>3}\tBatch: {:>4}/{}\tTraining Loss: {:.3f}'.format(
epoch_i+1,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
return loaded_graph.get_tensor_by_name("input:0"), \
loaded_graph.get_tensor_by_name("initial_state:0"), \
loaded_graph.get_tensor_by_name("final_state:0"), \
loaded_graph.get_tensor_by_name("probs:0")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return int_to_vocab[np.random.choice(range(len(probabilities)), 1, p=probabilities)[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
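Sampling from the distribution (rather than always taking the most likely word) keeps the generated script from repeating itself; for comparison, a purely greedy choice (a sketch, not used in this project) would be:
```
pred_word = int_to_vocab[np.argmax(probabilities)]
```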
End of explanation
gen_length = 400
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The search for nearest-neighbors between (two) mock catalogs
As a first step in working on the cross-matching of two astronomical catalogs, below I experiment with a nearest-neighbor (NN) method using two sets of artificial sources.
In the first part of this notebook I generate the (mock) sources and then search for the (positional) matching pairs.
TOC
Step4: Simulation of source images
Objects (or "sources") in astronomical catalogs come from the processing of astronomical images through a detection algorithm. It is beyond the scope of this notebook to discuss such detection algorithms, so we simply generate the sources across a region mocking some patch of the sky. The images help us to see what we are going to process and, in any case, add to the quality/completeness of the workflow.
Step5: Result of the simulation for the first image
Step6: Result of the simulation for the second image
Step8: Merge images
Step10: Finally cross-match the catalogs | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import cm
import numpy
plt.rcParams['figure.figsize'] = (10.0, 10.0)
Explanation: The search for nearest-neighbors between (two) mock catalogs
As a first step in working on the cross-matching of two astronomical catalogs, below I experiment with a nearest-neighbor (NN) method using two sets of artificial sources.
In the first part of this notebook I generate the (mock) sources and then search for the (positional) matching pairs.
TOC:
* Simulation of source images
* Resultant simulation for the first image/catalog
* Resultant simulation for the second image/catalog
* Resultant merged images
* Cross-match the tables
End of explanation
# Define parameters for the images and sources therein.
# size of the images
sx = 500
sy = 500
# number of sources on each image
nsrc1 = int( 0.05 * (sx*sy)/(sx+sy) )
nsrc2 = int( 0.5 * nsrc1 )
# typical error radius (in pixels)
rerr1 = 20
rerr2 = rerr1
def generate_positions(npts,img_shape):
Generate 'npts' points uniformly across 'image_shape'.
Args:
npts : number of points to generate
img_shape : (y,x) shape where to generate points
Returns:
Pair_Coordinates_List : list of (y,x) tuples
import numpy as np
_sy,_sx = img_shape
assert _sy>=5 and _sx>=5 # because I want
indy = np.random.randint(0,_sy-1,npts)
indx = np.random.randint(0,_sx-1,npts)
_inds = zip(indy,indx)
return _inds
# "sources 1"
coords1 = generate_positions(nsrc1,(sy,sx))
assert isinstance(coords1,list) and len(coords1) is nsrc1
# Below are utility functions just to handle and properly format
# the position table to be used -- first, as a dictionary -- by the image generation function
# and then -- as a pandas.DataFrame -- through the rest of the work
def create_positions_table(coords,err_radius):
tab = {}
for i,oo in enumerate(coords):
i = i+1
tab[i] = [i,oo[1],oo[0],err_radius]
return tab
# table for "sources 1"
tab1 = create_positions_table(coords1,rerr1)
def tab2df(tab):
nt = {'ID':[],'x':[],'y':[],'r':[]}
for k,v in tab.iteritems():
nt['ID'].append(v[0])
nt['x'].append(v[1])
nt['y'].append(v[2])
nt['r'].append(v[3])
import pandas
df = pandas.DataFrame(nt)
return df
df1 = tab2df(tab1)
# create and draw each source on black(null) images
def draw_image_sources(tab_positions,img_shape,colormap='colorful'):
Returns a ~PIL.Image with the objects draw in it
Input:
- tab_positions : dict()
dictionary with keys as row numbers (index)
and as values a tuple (index,x_position,y_position,radius)
- img_shape : tuple
tuple with (y,x) sizes, as a ~numpy.array.shape output
- colomap : str
name of the colormap to use: {colorful, blue, red, green}
Output:
- tuple with: ~PIL.Image with the sources(circles) draw
, a dictionary with identifiers for each source (internal use only)
def color_filling(mode='colorful'):
def _colorful(x,y,size):
_R = int(255 - ( int(x/256) + int(y/256)*(1 + ceil(size[0]/256)) )) #TODO: restrict total size of image to avoid _R<=0
_G = x%256
_B = y%256
return (_R,_G,_B)
def _blue(x,y,size):
_R = 0
_G = 0
_B = 255
return (_R,_G,_B)
def _green(x,y,size):
_R = 0
_G = 255
_B = 0
return (_R,_G,_B)
def _red(x,y,size):
_R = 255
_G = 0
_B = 0
return (_R,_G,_B)
foos = {'blue' : _blue,
'red' : _red,
'green' : _green,
'colorful': _colorful}
try:
foo = foos[mode]
except:
foo = _colorful
return foo
from math import ceil
from PIL import Image,ImageDraw
assert(isinstance(img_shape,tuple) and len(img_shape) is 2)
size = img_shape[::-1]
# Modification to accomplish color codes ---
#mode = 'L'
mode = 'RGB'
# ---
color = "black"
img = Image.new(mode,size,color)
assert(len(tab_positions)>=1)
#
dictColorId = {}
filling_foo = color_filling(colormap)
#
for i,src in tab_positions.items():
assert isinstance(src,list) and src is tab_positions[i]
assert len(src)>=4, "length of table row %d is %d" % (i,len(src))
assert i==src[0]
draw = ImageDraw.Draw(img)
x = src[1]
assert 0<=x and x<size[0], "coordinate x is %d" % x
y = src[2]
assert 0<=y and y<size[1], "coordinate y is %d" % y
r = src[3]
assert r<size[0]/2 and r<size[1]/2
box = (x-r,y-r,x+r,y+r)
# Modification to accomplish color codes ---
#fill=255
fill = filling_foo(x,y,size)
# ---
dictColorId[str(fill)] = i
draw.ellipse(box,fill=fill)
del draw,box,x,y,r
return (img,dictColorId)
img1,cor2id1 = draw_image_sources(tab1,(sy,sx),colormap='blue')
#img1.show()
## Utility functions to handle conversion between PIL -> numpy, to show it with Matplotlib
#
# cmap reference:
#
# cm api: http://matplotlib.org/api/cm_api.html
# cmaps : http://matplotlib.org/users/colormaps.html
# imshow: http://matplotlib.org/users/image_tutorial.html
#cmap = cm.get_cmap('Blues')
def pilImage_2_numpyArray(img,shape):
sx,sy = shape
img_array = numpy.array(list(img.getdata())).reshape(sx,sy,3)
return img_array
def rgbArray_2_mono(img_arr,chanel='R'):
chanels = {'R':0,
'G':1,
'B':2}
_i = chanels[chanel]
return img_arr[:,:,_i]
Explanation: Simulation of source images
Objects (or "sources") in astronomical catalogs come from the processing of astronomical images through a detection algorithm. It is beyond the scope of this notebook to discuss such detection algorithms, so we simply generate the sources across a region mocking some patch of the sky. The images help us to see what we are going to process and, in any case, add to the quality/completeness of the workflow.
End of explanation
img1_array = pilImage_2_numpyArray(img1,[sx,sy])
img1_mono = rgbArray_2_mono(img1_array,'B')
plt.imshow(img1_mono,cmap='Blues')
print "Catalog A:"
print "----------"
print df1
# do the same steps for "sources 2"
coords2 = generate_positions(nsrc2,(sy,sx))
assert isinstance(coords2,list) and len(coords2) is nsrc2
tab2 = create_positions_table(coords2,rerr2)
img2,cor2id2 = draw_image_sources(tab2,(sy,sx),colormap='red')
#img2.show()
df2 = tab2df(tab2)
Explanation: Result of the simulation for the first image
End of explanation
img2_array = pilImage_2_numpyArray(img2,[sx,sy])
img2_mono = rgbArray_2_mono(img2_array,'R')
print "Catalog B:"
print "----------"
print df2
plt.imshow(img2_mono,cmap='Reds')
Explanation: Result of the simulation for the second image
End of explanation
def add_arrays_2_image(img1,img2):
def array_2_image(arr):
from PIL import Image
imgout = Image.fromarray(numpy.uint8(arr))
return imgout
return array_2_image(img1+img2)
img_sum = add_arrays_2_image(img1_array,img2_array)
plt.imshow(img_sum)
Explanation: Merge images
End of explanation
def nn_search(catA,catB):
import pandas
assert isinstance(catA,pandas.DataFrame)
assert isinstance(catB,pandas.DataFrame)
A = catA.copy()
B = catB.copy()
from astropy.coordinates import SkyCoord
from astropy import units
norm_fact = 500.0
Ax_norm = A.x / norm_fact
Ay_norm = A.y / norm_fact
A_coord = SkyCoord(ra=Ax_norm, dec=Ay_norm, unit=units.deg)
Bx_norm = B.x / norm_fact
By_norm = B.y / norm_fact
B_coord = SkyCoord(ra=Bx_norm, dec=By_norm, unit=units.deg)
from astropy.coordinates import match_coordinates_sky
match_A_nn_idx, match_A_nn_sep, _d3d = match_coordinates_sky(A_coord, B_coord)
match_B_nn_idx, match_B_nn_sep, _d3d = match_coordinates_sky(B_coord, A_coord)
A['NN_in_B'] = B.ID[match_A_nn_idx].values
B['NN_in_A'] = A.ID[match_B_nn_idx].values
import numpy
A_matched_pairs = zip(numpy.arange(len(match_A_nn_idx)),
match_A_nn_idx )
B_matched_pairs = set(zip(match_B_nn_idx,
numpy.arange(len(match_B_nn_idx))))
duplicate_pairs = []
duplicate_dists = []
for i,p in enumerate(A_matched_pairs):
if p in B_matched_pairs:
duplicate_pairs.append(p)
duplicate_dists.append(match_A_nn_sep[i].value)
A_matched_idx,B_matched_idx = zip(*duplicate_pairs)
df_matched = pandas.DataFrame({ 'A_idx':A_matched_idx,
'B_idx':B_matched_idx,
'separation':duplicate_dists})
df_matched = df_matched.set_index('A_idx')
A.columns = [ 'A_'+c for c in A.columns ]
B.columns = [ 'B_'+c for c in B.columns ]
B_matched = B.iloc[df_matched.B_idx]
B_matched['A_idx'] = df_matched.index
B_matched = B_matched.set_index('A_idx')
B_matched['dist'] = numpy.asarray(df_matched.separation * norm_fact, dtype=int)
df = pandas.concat([A,B_matched],axis=1)
return df
from astropy.table import Table
table_match = Table.from_pandas( nn_search(df1,df2) )
table_match.show_in_notebook()
Explanation: Finally cross-match the catalogs
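As a lightweight cross-check of the astropy-based matching above (a sketch that only reuses df1/df2 plus standard SciPy), the nearest neighbour of each source of catalog A can also be found directly in pixel space:
```
# Sketch: KD-tree nearest neighbours in pixel coordinates
from scipy.spatial import cKDTree
xyA = numpy.c_[df1.x, df1.y]
xyB = numpy.c_[df2.x, df2.y]
dist, idx = cKDTree(xyB).query(xyA, k=1)
print "NN in catalog B for each source of A:", df2.ID.values[idx]
print "Distances (pixels):", dist.astype(int)
```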
End of explanation |
13,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Composing Learning Algorithms
Step3: NOTE
Step5: There are a few important points about the code above. First, it keeps track of the number of examples seen, as this will constitute the weight of the client update (when computing an average across clients).
Second, it uses tff.learning.templates.ClientResult to package the output. This return type is used to standardize client work building blocks in tff.learning.
Creating a ClientWorkProcess
While the TF logic above will do local training with clipping, it still needs to be wrapped in TFF code in order to create the necessary building block.
Specifically, the 4 building blocks are represented as a tff.templates.MeasuredProcess. This means that all 4 blocks have both an initialize and next function used to instantiate and run the computation.
This allows each building block to keep track of its own state (stored at the server) as needed to perform its operations. While it will not be used in this tutorial, it can be used for things like tracking how many iterations have occurred, or keeping track of optimizer states.
Client work TF logic should generally be wrapped as a tff.learning.templates.ClientWorkProcess, which codifies the expected types going into and out of the client's local training. It can be parameterized by a model and optimizer, as below.
Step6: Composing a Learning Algorithm
Let's put the client work above into a full-fledged algorithm. First, let's set up our data and model.
Preparing the input data
Load and preprocess the EMNIST dataset included in TFF. For more details, see the image classification tutorial.
Step8: In order to feed the dataset into our model, the data is flattened and converted into tuples of the form (flattened_image_vector, label).
Let's select a small number of clients, and apply the preprocessing above to their datasets.
Step9: Preparing the model
This uses the same model as in the image classification tutorial. This model (implemented via tf.keras) has a single hidden layer, followed by a softmax layer. In order to use this model in TFF, Keras model is wrapped as a tff.learning.Model. This allows us to perform the model's forward pass within TFF, and extract model outputs. For more details, also see the image classification tutorial.
Step10: Preparing the optimizers
Just as in tff.learning.build_federated_averaging_process, there are two optimizers here
Step11: Defining the building blocks
Now that the client work building block, data, model, and optimizers are set up, it remains to create building blocks for the distributor, the aggregator, and the finalizer. This can be done just by borrowing some defaults available in TFF that are also used by FedAvg.
Step12: Composing the building blocks
Finally, you can use a built-in composer in TFF for putting the building blocks together. This one is a relatively simple composer, which takes the 4 building blocks above and wires their types together.
Step13: Running the algorithm
Now that the algorithm is done, let's run it. First, initialize the algorithm. The state of this algorithm has a component for each building block, along with one for the global model weights.
Step14: As expected, the client work has an empty state (remember the client work code above!). However, other building blocks may have non-empty state. For example, the finalizer keeps track of how many iterations have occurred. Since next has not been run yet, it has a state of 0.
Step15: Now run a training round.
Step16: The output of this (tff.learning.templates.LearningProcessOutput) has both a .state and .metrics output. Let's look at both.
Step17: Clearly, the finalizer state has incremented by one, as one round of .next has been run. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
from typing import Callable
import tensorflow as tf
import tensorflow_federated as tff
Explanation: Composing Learning Algorithms
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/composing_learning_algorithms"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/composing_learning_algorithms.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/composing_learning_algorithms.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/composing_learning_algorithms.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Before you start
Before you start, please run the following to make sure that your environment is
correctly set up. If you don't see a greeting, please refer to the
Installation guide for instructions.
End of explanation
@tf.function
def client_update(model: tff.learning.Model,
dataset: tf.data.Dataset,
server_weights: tff.learning.ModelWeights,
client_optimizer: tf.keras.optimizers.Optimizer):
Performs training (using the server model weights) on the client's dataset.
# Initialize the client model with the current server weights.
client_weights = tff.learning.ModelWeights.from_model(model)
tf.nest.map_structure(lambda x, y: x.assign(y),
client_weights, server_weights)
# Use the client_optimizer to update the local model.
# Keep track of the number of examples as well.
num_examples = 0.0
for batch in dataset:
with tf.GradientTape() as tape:
# Compute a forward pass on the batch of data
outputs = model.forward_pass(batch)
num_examples += tf.cast(outputs.num_examples, tf.float32)
# Compute the corresponding gradient
grads = tape.gradient(outputs.loss, client_weights.trainable)
# Compute the gradient norm and clip
gradient_norm = tf.linalg.global_norm(grads)
if gradient_norm > 1:
grads = tf.nest.map_structure(lambda x: x/gradient_norm, grads)
grads_and_vars = zip(grads, client_weights.trainable)
# Apply the gradient using a client optimizer.
client_optimizer.apply_gradients(grads_and_vars)
# Compute the difference between the server weights and the client weights
client_update = tf.nest.map_structure(tf.subtract,
client_weights.trainable,
server_weights.trainable)
return tff.learning.templates.ClientResult(
update=client_update, update_weight=num_examples)
Explanation: NOTE: This colab has been verified to work with the latest released version of the tensorflow_federated pip package, but the Tensorflow Federated project is still in pre-release development and may not work on main.
Composing Learning Algorithms
The Building Your Own Federated Learning Algorithm Tutorial used TFF's federated core to directly implement a version of the Federated Averaging (FedAvg) algorithm.
In this tutorial, you will use federated learning components in TFF's API to build federated learning algorithms in a modular manner, without having to re-implement everything from scratch.
For the purposes of this tutorial, you will implement a variant of FedAvg that employs gradient clipping through local training.
Learning Algorithm Building Blocks
At a high level, many learning algorithms can be separated into 4 separate components, referred to as building blocks. These are as follows:
Distributor (ie. server-to-client communication)
Client work (ie. local client computation)
Aggregator (ie. client-to-server communication)
Finalizer (ie. server computation using aggregated client outputs)
While the Building Your Own Federated Learning Algorithm Tutorial implemented all of these building blocks from scratch, this is often unnecessary. Instead, you can re-use building blocks from similar algorithms.
In this case, to implement FedAvg with gradient clipping, you only need to modify the client work building block. The remaining blocks can be identical to what is used in "vanilla" FedAvg.
Implementing the Client Work
First, let's write TF logic that does local model training with gradient clipping. For simplicity, gradients will be clipped to have norm at most 1.
TF Logic
End of explanation
def build_gradient_clipping_client_work(
model_fn: Callable[[], tff.learning.Model],
optimizer_fn: Callable[[], tf.keras.optimizers.Optimizer],
) -> tff.learning.templates.ClientWorkProcess:
Creates a client work process that uses gradient clipping.
with tf.Graph().as_default():
# Wrap model construction in a graph to avoid polluting the global context
# with variables created for this model.
model = model_fn()
data_type = tff.SequenceType(model.input_spec)
model_weights_type = tff.learning.framework.weights_type_from_model(model)
@tff.federated_computation
def initialize_fn():
return tff.federated_value((), tff.SERVER)
@tff.tf_computation(model_weights_type, data_type)
def client_update_computation(model_weights, dataset):
model = model_fn()
optimizer = optimizer_fn()
return client_update(model, dataset, model_weights, optimizer)
@tff.federated_computation(
initialize_fn.type_signature.result,
tff.type_at_clients(model_weights_type),
tff.type_at_clients(data_type)
)
def next_fn(state, model_weights, client_dataset):
client_result = tff.federated_map(
client_update_computation, (model_weights, client_dataset))
# Return empty measurements, though a more complete algorithm might
# measure something here.
measurements = tff.federated_value((), tff.SERVER)
return tff.templates.MeasuredProcessOutput(state, client_result,
measurements)
return tff.learning.templates.ClientWorkProcess(
initialize_fn, next_fn)
Explanation: There are a few important points about the code above. First, it keeps track of the number of examples seen, as this will constitute the weight of the client update (when computing an average across clients).
Second, it uses tff.learning.templates.ClientResult to package the output. This return type is used to standardize client work building blocks in tff.learning.
Creating a ClientWorkProcess
While the TF logic above will do local training with clipping, it still needs to be wrapped in TFF code in order to create the necessary building block.
Specifically, the 4 building blocks are represented as a tff.templates.MeasuredProcess. This means that all 4 blocks have both an initialize and next function used to instantiate and run the computation.
This allows each building block to keep track of its own state (stored at the server) as needed to perform its operations. While it will not be used in this tutorial, it can be used for things like tracking how many iterations have occurred, or keeping track of optimizer states.
Client work TF logic should generally be wrapped as a tff.learning.templates.ClientWorkProcess, which codifies the expected types going into and out of the client's local training. It can be parameterized by a model and optimizer, as below.
End of explanation
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
Explanation: Composing a Learning Algorithm
Let's put the client work above into a full-fledged algorithm. First, let's set up our data and model.
Preparing the input data
Load and preprocess the EMNIST dataset included in TFF. For more details, see the image classification tutorial.
End of explanation
NUM_CLIENTS = 10
BATCH_SIZE = 20
def preprocess(dataset):
def batch_format_fn(element):
    """Flatten a batch of EMNIST data and return a (features, label) tuple."""
return (tf.reshape(element['pixels'], [-1, 784]),
tf.reshape(element['label'], [-1, 1]))
return dataset.batch(BATCH_SIZE).map(batch_format_fn)
client_ids = sorted(emnist_train.client_ids)[:NUM_CLIENTS]
federated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x))
for x in client_ids
]
Explanation: In order to feed the dataset into our model, the data is flattened and converted into tuples of the form (flattened_image_vector, label).
Let's select a small number of clients, and apply the preprocessing above to their datasets.
End of explanation
def create_keras_model():
initializer = tf.keras.initializers.GlorotNormal(seed=0)
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer=initializer),
tf.keras.layers.Softmax(),
])
def model_fn():
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=federated_train_data[0].element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
Explanation: Preparing the model
This uses the same model as in the image classification tutorial. This model (implemented via tf.keras) has a single hidden layer, followed by a softmax layer. In order to use this model in TFF, the Keras model is wrapped as a tff.learning.Model. This allows us to perform the model's forward pass within TFF and extract model outputs. For more details, also see the image classification tutorial.
End of explanation
client_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=0.01)
server_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=1.0)
Explanation: Preparing the optimizers
Just as in tff.learning.build_federated_averaging_process, there are two optimizers here: A client optimizer, and a server optimizer. For simplicity, the optimizers will be SGD with different learning rates.
End of explanation
@tff.tf_computation()
def initial_model_weights_fn():
return tff.learning.ModelWeights.from_model(model_fn())
model_weights_type = initial_model_weights_fn.type_signature.result
distributor = tff.learning.templates.build_broadcast_process(model_weights_type)
client_work = build_gradient_clipping_client_work(model_fn, client_optimizer_fn)
# TFF aggregators use a factory pattern, which creates an aggregator
# based on the output type of the client work. This also uses a float (the number
# of examples) to govern the weight in the average being computed.
aggregator_factory = tff.aggregators.MeanFactory()
aggregator = aggregator_factory.create(model_weights_type.trainable,
tff.TensorType(tf.float32))
finalizer = tff.learning.templates.build_apply_optimizer_finalizer(
server_optimizer_fn, model_weights_type)
Explanation: Defining the building blocks
Now that the client work building block, data, model, and optimizers are set up, it remains to create building blocks for the distributor, the aggregator, and the finalizer. This can be done simply by borrowing defaults available in TFF that are also used by FedAvg.
End of explanation
fed_avg_with_clipping = tff.learning.templates.compose_learning_process(
initial_model_weights_fn,
distributor,
client_work,
aggregator,
finalizer
)
Explanation: Composing the building blocks
Finally, you can use a built-in composer in TFF for putting the building blocks together. This one is a relatively simple composer, which takes the 4 building blocks above and wires their types together.
End of explanation
state = fed_avg_with_clipping.initialize()
state.client_work
Explanation: Running the algorithm
Now that the algorithm is done, let's run it. First, initialize the algorithm. The state of this algorithm has a component for each building block, along with one for the global model weights.
End of explanation
state.finalizer
Explanation: As expected, the client work has an empty state (remember the client work code above!). However, other building blocks may have non-empty state. For example, the finalizer keeps track of how many iterations have occurred. Since next has not been run yet, it has a state of 0.
End of explanation
learning_process_output = fed_avg_with_clipping.next(state, federated_train_data)
Explanation: Now run a training round.
End of explanation
learning_process_output.state.finalizer
Explanation: The output of this (tff.learning.templates.LearningProcessOutput) has both a .state and .metrics output. Let's look at both.
End of explanation
learning_process_output.metrics
Explanation: Clearly, the finalizer state has incremented by one, as one round of .next has been run.
End of explanation |
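As a usage sketch that is not part of the original notebook, further training rounds would simply thread the returned state back into next:
# Hypothetical continuation: run a few more rounds, reusing the returned state.
state = learning_process_output.state
for round_num in range(2, 5):
  output = fed_avg_with_clipping.next(state, federated_train_data)
  state = output.state
  print('Round {}: finalizer state = {}'.format(round_num, state.finalizer))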
13,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: encode fun
Step2: encode
Step3: we have an array of 25,000 rows and 10,000 columns. columns are words, rows are documents | Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=10000)
word_index = tf.keras.datasets.imdb.get_word_index()
word_index['fawn']
# why in the world it's indexed by word?
reverse_word_index = dict([(value,key) for (key,value) in word_index.items()])
reverse_word_index[4]
reverse_word_index[1]
min([max(sequence) for sequence in x_train])
pd.DataFrame(x_train)
np.sum(y_train)/y_train.shape[0]
Explanation: <a href="https://colab.research.google.com/github/matthewpecsok/development/blob/master/imbd.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
def vector_seq(seq, dim=10000):
    # Multi-hot encode a list of integer word-index sequences into a 2-D array.
    res = np.zeros((len(seq), dim))
    for i, indices in enumerate(seq):
        # Set the columns corresponding to the word indices present in review i.
        res[i, indices] = 1.
    return res
Explanation: encode fun
End of explanation
x_train_enc = vector_seq(x_train)
x_test_enc = vector_seq(x_test)
x_train_enc
x_train_enc.shape
x_train_enc.dtype
Explanation: encode
End of explanation
y_train.dtype
type(y_train)
y_train = np.asarray(y_train).astype('float32')
y_test = np.asarray(y_test).astype('float32')
y_train.dtype
type(y_train)
np.dot((1,2),(2,3))
tf.keras.activations.relu(np.dot((1,2,3,4),(2,3,4,5)))
tf.keras.activations.relu(np.dot((1,2,3,4),(2,3,4,5)))
Explanation: we have an array of 25,000 rows and 10,000 columns. columns are words, rows are documents
End of explanation |
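The notebook breaks off here, but the multi-hot matrix above is typically fed into a small dense classifier. The following is only an illustrative sketch; the layer sizes, optimizer, and training settings are assumptions, not taken from the original.
# Hypothetical continuation: a small binary classifier over the 10,000-dim multi-hot vectors.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10000,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # probability that a review is positive
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train_enc, y_train, epochs=4, batch_size=512, validation_split=0.2)
print(model.evaluate(x_test_enc, y_test))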
13,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Functions in Python are straightforward
Step1: But what if we wanted to reuse this code to congratulate someone else, e.g. named Thomas? There are basically two (fundamentally different) approaches here
Step2: Or – and here's where the high art of programming actually begins – write a function that takes a certain input variable (i.e. the name), performs some processing steps (i.e. put the single lines together), and returns the result (i.e. the complete song).
Step3: Using our newly gained knowledge about <a href="https
Step4: Please note that in the latter version, we also set a default value for name in order to save some typing work on Chris's birthday. In the end, it is totally up to you which of the two versions you prefer, i.e. four print statements in a row or rather the nested for/if-else solution.
<hr>
The return statement
So far, you have only encountered self-made functions that print some text to the console. However, the majority of functions you'll be writing in the future need to perform actual calculations based on a set of input variables and then return an output object.
Therefore, in addition to the things we have just learned, bear in mind to insert a return statement whenever you are actually trying to return an object. For example,
Step5: returns an empty tuple, whereas
Step6: returns an int object. Accordingly, a custom sum function that takes a sequence of numbers as input could roughly look as follows | Python Code:
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear Chris.")
print("Happy birthday to you.")
Explanation: Functions
Functions in Python are straightforward: they (optionally) require a set of inputs, perform some internal operations, and return a result. Now, we could either go on talking about the basic syntax of function declaration, how the whole story works, etc... OR we could simply take a look at the following image that puts it all in a graphical, and hence, easy-to-understand context (all the credit is due to <a href="http://learnict.it/computerscienceposters/">LearnICT.it</a>):
<br>
<center>
<figure>
<img src="http://hcc-cs.weebly.com/uploads/2/4/5/3/24535251/1088210.jpg?1390056443" alt="functions" width="690">
<figcaption>Source: http://hcc-cs.weebly.com/functions.html</figcaption>
</figure>
</center>
<br>
Okay, so just for the record, here's the syntax of function definitions in Python:
def function_name () : <br>
   indentedFunctionBody
So, what's the purpose of functions anyway? Well... suppose we wanted to write some Python code that reproduces the lines of the famous "Happy Birthday" song (taken from <a href="http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html">here</a>). In order to achieve this and assuming our birthday child's name is Chris, we could simply use some print calls:
End of explanation
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear Thomas.")
print("Happy birthday to you.")
Explanation: But what if we wanted to reuse this code to congratulate someone else, e.g. named Thomas? There are basically two (fundamentally different) approaches here: we could either copy and paste the above code, replacing the name of the birthday child:
End of explanation
def birthdaySong(name):
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear ", name, ".", sep = "")
print("Happy birthday to you.")
birthdaySong(name = "Thomas")
Explanation: Or – and here's where the high art of programming actually begins – write a function that takes a certain input variable (i.e. the name), performs some processing steps (i.e. put the single lines together), and returns the result (i.e. the complete song).
End of explanation
def birthdaySong(name = "Chris"):
for i in range(4):
if (i != 2):
print("Happy birthday to you.")
else:
print("Happy birthday, dear ", name, ".", sep = "")
birthdaySong()
Explanation: Using our newly gained knowledge about <a href="https://oer.uni-marburg.de/goto.php?target=pg_5101_720&client_id=mriliasmooc">E02-1: Loops</a> and <a href="https://oer.uni-marburg.de/goto.php?target=pg_5102_720&client_id=mriliasmooc">E02-2: Conditionals</a> from before, we could take this even a step further and restructure the function's body using a for loop together with embedded if-else statements:
End of explanation
def f1():
return()
f1()
Explanation: Please note that in the latter version, we also set a default value for name in order to save some typing work on Chris's birthday. In the end, it is totally up to you which of the two versions you prefer, i.e. four print statements in a row or rather the nested for/if-else solution.
<hr>
The return statement
So far, you have only encountered self-made functions that print some text to the console. However, the majority of functions you'll be writing in the future need to perform actual calculations based on a set of input variables and then return an output object.
Therefore, in addition to the things we have just learned, bear in mind to insert a return statement whenever you are actually trying to return an object. For example,
End of explanation
def f2():
return(5)
f2()
Explanation: returns an empty tuple, whereas
End of explanation
def sumPy(x):
out = 0.0
for i in x:
out += i
return(out)
sumPy(range(0, 6))
Explanation: returns an int object. Accordingly, a custom sum function that takes a sequence of numbers as input could roughly look as follows:
End of explanation |
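One quick usage note that is not in the original: because sumPy returns its result instead of printing it, the returned value can be stored in a variable and reused in further calculations.
# Illustrative usage of the return value (the names here are just examples).
total = sumPy([1, 2, 3, 4])
print(total)        # 10.0
print(total * 2)    # the returned value can be used like any other number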
13,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load a thredds dataset
In the following example we will load a thredds dataset from the norwegian met.no thredds server.
Step1: The first step is to load the dataset. This will be performed with pymepps.open_model_dataset. The NetCDF4 backend also supports OPeNDAP paths, so we can specify nc as the data type.
Step2: The resulting dataset is a SpatialDataset. The dataset has several methods to load an xr.DataArray from the path. It is also possible to print the content of the dataset. The content contains the dataset type, the number of file handlers within the dataset and all available data variables.
Step3: The next step is to select/extract a variable from the Dataset. We will select the air temperature at 2 metre height and print the content of the resulting data.
Step4: We can see that the resulting data is a normal xarray.DataArray, so all of the DataArray methods can be used. The coordinates of the DataArray are normalized, and the DataArray is extended with an accessor, which we can reach via metno_t2m.pp. The main methods of the accessor provide grid handling, so our next step is to explore the grid of the DataArray.
Step5: We can see that the grid has a defined projection. In our next step we will slice out an area around Hamburg. We will see that a new DataArray with a new grid is created.
Step6: We sliced a longitude and latitude box around the given grid. So we sliced the data in a longitude and latitude projection. Our original grid was in another projection with unstructured lat lon coordinates. So it is not possible to create a structured grid based on this slice. So the grid becomes an unstructured grid. In the next step we will show the remapping capabilities of the pymepps grid structure.
When we sliced the data we saw that the structured grid could not be maintained. So in the next step we will create a structured LonLatGrid from scratch. After building the grid we will remap the raw DataArray based on the new grid.
The first step is to calculate the model resolution in degree.
Step7: Our next step is to build the grid. The grid implementation is inspired by the climate data operators. So to build the grid we will use the same format.
Step8: Now we use our grid dict together with the GridBuilder to build our grid.
Step9: Now that we have created the grid, the next step is a remapping of the raw DataArray to the new grid. We will use the nearest neighbour approach to remap the data.
Step10: To plot the data on a map, we have to slice the data. We will select the first validtime as the plotting parameter.
Step11: In the map around Hamburg we can see the North Sea and the Baltic Sea at the top edges. But with the nearest neighbour approach we retain some of the sharp edges on the map. Our last step is a second remap plot, this time with a bilinear approach. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pymepps
Explanation: Load a thredds dataset
In the following example we will load a thredds dataset from the norwegian met.no thredds server.
End of explanation
metno_path = 'http://thredds.met.no/thredds/dodsC/meps25files/' \
'meps_det_pp_2_5km_latest.nc'
metno_ds = pymepps.open_model_dataset(metno_path, 'nc')
Explanation: The first step is to load the dataset. This will be performed with pymepps.open_model_dataset. The NetCDF4 backend also supports OPeNDAP paths, so we can specify nc as the data type.
End of explanation
print(metno_ds)
Explanation: The resulting dataset is a SpatialDataset. The dataset has several methods to load an xr.DataArray from the path. It is also possible to print the content of the dataset. The content contains the dataset type, the number of file handlers within the dataset and all available data variables.
End of explanation
metno_t2m = metno_ds.select('air_temperature_2m')
print(metno_t2m)
metno_t2m.isel(validtime=0).plot()
plt.show()
Explanation: The next step is to select/extract a variable from the Dataset. We will select the air temperature at 2 metre height and print the content of the resulting data.
End of explanation
print(metno_t2m.pp.grid)
Explanation: We can see that the resulting data is a normal xarray.DataArray, so all of the DataArray methods can be used. The coordinates of the DataArray are normalized, and the DataArray is extended with an accessor, which we can reach via metno_t2m.pp. The main methods of the accessor provide grid handling, so our next step is to explore the grid of the DataArray.
End of explanation
hh_bounds = [9, 54, 11, 53]
t2m_hh = metno_t2m.pp.sellonlatbox(hh_bounds)
print(t2m_hh.pp.grid)
print(t2m_hh)
Explanation: We can see that the grid has a defined projection. In our next step we will slice out an area around Hamburg. We will see that a new DataArray with a new grid is created.
End of explanation
res = 2500 # model resolution in metre
earth_radius = 6371000 # Earth radius in metre
res_deg = np.round(res*360/(earth_radius*2*np.pi), 4)
# rounded model resolution equivalent in degree if it were on the equator
print(res_deg)
Explanation: We sliced a longitude and latitude box around the given grid. So we sliced the data in a longitude and latitude projection. Our original grid was in another projection with unstructured lat lon coordinates. So it is not possible to create a structured grid based on this slice. So the grid becomes an unstructured grid. In the next step we will show the remapping capabilities of the pymepps grid structure.
When we sliced the data we saw that the structured grid could not be maintained. So in the next step we will create a structured LonLatGrid from scratch. After building the grid we will remap the raw DataArray based on the new grid.
The first step is to calculate the model resolution in degree.
End of explanation
grid_dict = dict(
gridtype='lonlat',
xsize=int((hh_bounds[2]-hh_bounds[0])/res_deg),
ysize=int((hh_bounds[1]-hh_bounds[3])/res_deg),
xfirst=hh_bounds[0],
xinc=res_deg,
yfirst=hh_bounds[3],
yinc=res_deg,
)
Explanation: Our next step is to build the grid. The grid implementation is inspired by the climate data operators. So to build the grid we will use the same format.
End of explanation
builder = pymepps.GridBuilder(grid_dict)
hh_grid = builder.build_grid()
print(hh_grid)
Explanation: Now we use our grid dict together with the GridBuilder to build our grid.
End of explanation
t2m_hh_remapped = metno_t2m.pp.remapnn(hh_grid)
print(t2m_hh_remapped)
Explanation: Now that we have created the grid, the next step is a remapping of the raw DataArray to the new grid. We will use the nearest neighbour approach to remap the data.
End of explanation
t2m_hh_remapped.isel(validtime=0).plot()
plt.show()
Explanation: To plot the data on a map, we have to slice the data. We will select the first validtime as the plotting parameter.
End of explanation
# sphinx_gallery_thumbnail_number = 3
metno_t2m.pp.remapbil(hh_grid).isel(validtime=0).plot()
plt.show()
Explanation: In the map around Hamburg we can see the North Sea and the Baltic Sea at the top edges. But with the nearest neighbour approach we retain some of the sharp edges on the map. Our last step is a second remap plot, this time with a bilinear approach.
End of explanation |
13,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The computation graph
TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.
For example, it is common to create a graph to represent and train a neural network in the construction phase, and then repeatedly execute a set of training ops in the graph in the execution phase.
TensorFlow can be used from C, C++, and Python programs. It is presently much easier to use the Python library to assemble graphs, as it provides a large set of helper functions not available in the C and C++ libraries.
The session libraries have equivalent functionalities for the three languages.
Building the graph
To build a graph start with ops that do not need any input (source ops), such as Constant, and pass their output to other ops that do computation.
The ops constructors in the Python library return objects that stand for the output of the constructed ops. You can pass these to other ops constructors to use as inputs.
The TensorFlow Python library has a default graph to which ops constructors add nodes. The default graph is sufficient for many applications. See the Graph class documentation for how to explicitly manage multiple graphs.
Step1: Launching the graph in a session
Step2: Sessions should be closed to release resources. You can also enter a Session with a "with" block. The Session closes automatically at the end of the with block.
Step3: If you have more than one GPU available on your machine, to use a GPU beyond the first you must assign ops to it explicitly. Use with...Device statements to specify which CPU or GPU to use for operations
Step4: Devices are specified with strings. The currently supported devices are
Step5: Interactive Usage
Step6: Tensors
TensorFlow programs use a tensor data structure to represent all data -- only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn more about how TensorFlow handles these concepts, see the Rank, Shape, and Type reference.
Variables
Variables maintain state across executions of the graph. The following example shows a variable serving as a simple counter. See Variables for more details.
Step7: Fetches
To fetch the outputs of operations, execute the graph with a run() call on the Session object and pass in the tensors to retrieve. In the previous example we fetched the single node state, but you can also fetch multiple tensors
Step8: Feeds
The examples above introduce tensors into the computation graph by storing them in Constants and Variables. TensorFlow also provides a feed mechanism for patching a tensor directly into any operation in the graph.
A feed temporarily replaces the output of an operation with a tensor value. You supply feed data as an argument to a run() call. The feed is only used for the run call to which it is passed. The most common use case involves designating specific operations to be "feed" operations by using tf.placeholder() to create them | Python Code:
import tensorflow as tf
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
print(product)
Explanation: The computation graph
TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.
For example, it is common to create a graph to represent and train a neural network in the construction phase, and then repeatedly execute a set of training ops in the graph in the execution phase.
TensorFlow can be used from C, C++, and Python programs. It is presently much easier to use the Python library to assemble graphs, as it provides a large set of helper functions not available in the C and C++ libraries.
The session libraries have equivalent functionalities for the three languages.
Building the graph
To build a graph start with ops that do not need any input (source ops), such as Constant, and pass their output to other ops that do computation.
The ops constructors in the Python library return objects that stand for the output of the constructed ops. You can pass these to other ops constructors to use as inputs.
The TensorFlow Python library has a default graph to which ops constructors add nodes. The default graph is sufficient for many applications. See the Graph class documentation for how to explicitly manage multiple graphs.
End of explanation
# Launch the default graph.
sess = tf.Session()
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]
# Close the Session when we're done.
sess.close()
Explanation: Launching the graph in a session
End of explanation
with tf.Session() as sess:
result = sess.run([product])
print(result)
sess.close()
Explanation: Sessions should be closed to release resources. You can also enter a Session with a "with" block. The Session closes automatically at the end of the with block.
End of explanation
with tf.Session() as sess:
with tf.device("/gpu:1"):
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
print(result)
sess.close()
Explanation: If you have more than one GPU available on your machine, to use a GPU beyond the first you must assign ops to it explicitly. Use with...Device statements to specify which CPU or GPU to use for operations
End of explanation
with tf.Session("grpc://example:2222") as sess:
# Calls to sess.run(...) will be executed on the cluster.
with tf.device("/gpu:1"):
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
#result = sess.run([product])
print(result)
sess.close()
Explanation: Devices are specified with strings. The currently supported devices are:
"/cpu:0": The CPU of your machine.
"/gpu:0": The GPU of your machine, if you have one.
"/gpu:1": The second GPU of your machine, etc.
Launching the graph in a distributed session
To create a TensorFlow cluster, launch a TensorFlow server on each of the machines in the cluster. When you instantiate a Session in your client, you pass it the network location of one of the machines in the cluster:
End of explanation
# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.sub(x, a)
print(sub.eval())
# ==> [-2. -1.]
# Close the Session
sess.close()
Explanation: Interactive Usage
End of explanation
# Reset the computation graph
tf.reset_default_graph()
# Create a Variable, that will be initialized to the scalar value 0.
state = tf.Variable(0, name="counter")
# Create an Op to add one to `state`.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
# Launch the graph and run the ops.
with tf.Session() as sess:
tf.global_variables_initializer().run()
print(sess.run(state))
for _ in range(3):
sess.run(update)
print(sess.run(state))
Explanation: Tensors
TensorFlow programs use a tensor data structure to represent all data -- only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn more about how TensorFlow handles these concepts, see the Rank, Shape, and Type reference.
Variables
Variables maintain state across executions of the graph. The following example shows a variable serving as a simple counter. See Variables for more details.
End of explanation
# Reset the computation graph
tf.reset_default_graph()
#
input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)
with tf.Session() as sess:
result = sess.run([mul, intermed])
print(result)
Explanation: Fetches
To fetch the outputs of operations, execute the graph with a run() call on the Session object and pass in the tensors to retrieve. In the previous example we fetched the single node state, but you can also fetch multiple tensors:
End of explanation
# Reset the computation graph
tf.reset_default_graph()
#
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)
with tf.Session() as sess:
print(sess.run([output], feed_dict={input1:[7.], input2:[2.]}))
Explanation: Feeds
The examples above introduce tensors into the computation graph by storing them in Constants and Variables. TensorFlow also provides a feed mechanism for patching a tensor directly into any operation in the graph.
A feed temporarily replaces the output of an operation with a tensor value. You supply feed data as an argument to a run() call. The feed is only used for the run call to which it is passed. The most common use case involves designating specific operations to be "feed" operations by using tf.placeholder() to create them:
End of explanation |
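As a small addendum that is not in the original notebook: if a placeholder is not fed, the op fails. A quick illustration of that behaviour, reusing the placeholders defined above:
# Hypothetical extra check: running the op without a feed raises an error.
with tf.Session() as sess:
    try:
        sess.run(output)  # no feed_dict for input1/input2
    except tf.errors.InvalidArgumentError:
        print('Running without a feed failed, as expected.')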
13,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Chapter 1 - Getting Started with Variables and Values
This notebook uses code snippets and explanations from this course.
Welcome to the course! In this course we will learn how to load, process and save data using a versatile programming language
Step2: What happened here? Well, Python has a large set of built-in functions, and print() is one of them. When you use this function, print() outputs its argument to the screen. 'Argument' is a fancy word for "object you put in a function". In this case, the argument is the string "Hello, world!". And 'string' just means "a sequence of characters".
Did you also notice the first line starting with a hash (#) character? This is called a comment. We use comments to document our code and explain what's happening. These lines are not executed in Python. We will use them a lot in this course to make our code easy to understand!
Can you edit the block below in such a way that it will print out your own name?
Step3: 1.2 Calculations
Apart from printing some text to your screen, you can also use Python to do calculations. The code is actually really simple - you probably know most of it from your calculator!
Step4: 2. Variables and values
Instead of providing the string directly as an argument to the print function, we can also create a variable that refers to the string value "Hello, world!".
When you pass this variable to the print() function, you get the same result as before
Step5: Such a piece of text ("Hello, world!") is called a string in Python (a string of characters). Strings in Python must always be enclosed with 'quotes' (e.g. single or double quotes). Without those quotes, Python will think it's dealing with the name of some variable that has been defined earlier, because variable names never take quotes.
Programming languages can be seen as formalized ways of telling your computer what to do. They rely on very strict regularities (called syntax) so they can distinguish different kinds of things. We will see that python knows a couple of different data types, each of which can be used to do different things. Python needs to be able to tell them apart.
The following distinction is confusing, but extremely important
Step6: We can also assign numerical values to variables
Step7: 2.1 Variable assignment
If you vaguely remember your math classes in school, this should look familiar. It is basically the same notation with the name of the variable on the left, the value on the right, and the '=' sign in the middle. This is what is called assignment. We stored a value and named it using the '=' symbol, so that we can easily use it later on without having to type the particular value.
We can use the box metaphor to further explain this concept. The variable x above behaves pretty much like a box on which we write an x with a thick, black marker to find it back later. In this box we can put whatever we want, such as a piece of text or a numerical value. In Python, the term variable refers to such a box, whereas the term value refers to what is inside this box.
Note that we can re-use variable names for other values, but that any assignment will overwrite the original value! In other words
Step8: When we have stored values inside variables, we can do interesting things with these variables. Run the following code block to see what happens.
Step9: 2.2 Variable names
Note that the variable names text and x used above are not part of Python. In principle, you could use any name you like. Even if you change the variable text to something silly like pikachu or sniffles, the example would still work
Step10: However, variable names are only valid if they
Step11: 2.3 Copying/referencing variables
We can also 'copy' the contents of a variable into another variable, which is what happens in the code below. In fact, what is happening is that the variable second_number now refers to the same data object as first_number. You should of course watch out in such cases
Step12: Have a look at this code at Python Tutor to see what's happening!
2.4 User input
Up until now we have defined the values stored in the variables ourselves. However, we can also ask for input from a user. We'll make use of another built-in function
Step13: Exercises
Exercise 1
Step14: Exercise 2
Step15: Exercise 3
Step16: Exercise 4
Step17: Exercise 5
Step18: Exercise 6 | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_01_Getting_Started_with_Variables_and_Values.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
# this will print some text
print("Hello, world!")
Explanation: Chapter 1 - Getting Started with Variables and Values
This notebook uses code snippets and explanations from this course.
Welcome to the course! In this course we will learn how to load, process and save data using a versatile programming language: Python. We are going to practise Python using Notebooks. These Notebooks contain instructions and so called 'code blocks'. The instructions are paragraphs of text that explain the concepts we are going to use. The 'code blocks' contain Python code.
Note: Jupyter has integrated different languages now! This means that your menu is probably shown in your language rather than in English. Please take this into account when reading the instructions.
Notebooks are pretty straightforward. Some tips:
Cells in a notebook contain code or text. If you run a cell, it will either run the code or render the text.
There are five ways to run a cell:
Click the 'play' button next to the 'stop' and 'refresh' button in the toolbar.
Alt + Enter runs the current cell and creates a new cell.
Ctrl + Enter runs the current cell without creating a new cell. (Cmd + Enter on a Mac.)
Shift + Enter runs the current cell and moves to the next one.
Use the menu and select Run/Run selected cells or Run/Run all cells to run the entire notebook (not recommended at this stage).
You can create a new cell by hitting the '+' button on top.
The instructions are written in Markdown. You can select whether a cell should contain markdown or code by clicking on the drop-down menu on top. Here is a nice Markdown cheatsheet if you want to write some more.
Explore the menus for more options! You can even create a presentation using Notebooks.
Hint: when you're writing Python code, press Tab to auto-complete your variable names! We will introduce you to variable names in this notebook.
At the end of this chapter, you will be able to:
print information to your screen using the built-in function print()
assign values to variables (using valid and clear variable names)
do calculations in Python
have an understanding of some basic aspects of programming in python
If you want to learn more about these topics, you might find the following links useful:
Documentation: The Python 3 documentation
Glossary: Glossary
Free e-book: How to think like a computer scientist
Free e-book: A Byte of Python
Community: Learnpython -- Reddit community for learners of Python
Video: Python names and values -- Note: this might be a bit too technical at this stage
PEP8 Style Guide for Python
Python Tutor -- Shows you line-by-line what your code does
Important information for this course:
We work with the latest python version (3.8). Please make sure this is the version you're using.
We highly recommend using the Anaconda distribution of python. It comes with a couple of highly useful python installations as well as the jupyter notebook environment. Everything you need will be installed in one go.
This course does not assume any programming background. We strongly believe that anyone can learn how to code. We encourage a playful attitude and do our best to include fear-taking exercises.
A note on getting stuck:
Please note that learning how to code is a process that usually heavily relies on 'trial and error'. This means that throughout the course, you will repeatedly have to try out different solutions until you find out what works (and hopefully why). Having to try several times is completely normal and part of the process - please do not get discouraged.
It is highly recommended to start learning how to 'debug' your own code early on. If you do not know what to do or get error messages, try and take a step back and think about what you were doing and why. Using pen and paper to break down problems into smaller steps is highly recommeded. We will highlight different debugging strategies throughout the course.
Please follow the steps outlined in the readme [link] if you get stuck. Contact the teacher if you have applied all strategies but could not solve your problem.
If you have questions about this chapter, please contact us at [email protected].
Now let's get started!
1. Getting started together
1.1 Hello, world!
The best way to learn Python is by jumping right in. Let's start with something really simple. Every programming language is traditionally introduced with a "Hello world" example. Please run the following cell:
End of explanation
print("Hello, world!")
Explanation: What happened here? Well, Python has a large set of built-in functions, and print() is one of them. When you use this function, print() outputs its argument to the screen. 'Argument' is a fancy word for "object you put in a function". In this case, the argument is the string "Hello, world!". And 'string' just means "a sequence of characters".
Did you also notice the first line starting with a hash (#) character? This is called a comment. We use comments to document our code and explain what's happening. These lines are not executed in Python. We will use them a lot in this course to make our code easy to understand!
Can you edit the block below in such a way that it will print out your own name?
End of explanation
# summing
print(3+2)
# subtracting
print(7-1)
# multiplication
print(3*3)
# division
print(10/3)
# power
print(5**2)
# combining stuff
print(5*2-3+4/2)
# using brackets to tell Python what to calculate first:
print((5+5)*(8+2))
# compared to:
print(5+5*8+2)
#Can you tell what is happening in these examples?
Explanation: 1.2 Calculations
Apart from printing some text to your screen, you can also use Python to do calculations. The code is actually really simple - you probably know most of it from your calculator!
End of explanation
text = "Hello, world!"
print(text)
Explanation: 2. Variables and values
Instead of providing the string directly as an argument to the print function, we can also create a variable that refers to the string value "Hello, world!".
When you pass this variable to the print() function, you get the same result as before:
End of explanation
name = "Patrick Bateman"
print("name") # this is a string value
print(name) # this is a variable name containing a string value
Explanation: Such a piece of text ("Hello, world!") is called a string in Python (a string of characters). Strings in Python must always be enclosed with 'quotes' (e.g. single or double quotes). Without those quotes, Python will think it's dealing with the name of some variable that has been defined earlier, because variable names never take quotes.
Programming languages can be seen as formalized ways of telling your computer what to do. They rely on very strict regularities (called syntax) so they can distinguish different kinds of things. We will see that python knows a couple of different data types, each of which can be used to do different things. Python needs to be able to tell them apart.
The following distinction is confusing, but extremely important: variable names (without quotes) and string values (with quotes) look similar, but they serve a completely different purpose. Compare:
End of explanation
x = 22
print(x)
Explanation: We can also assign numerical values to variables:
End of explanation
text = "I like apples"
print(text)
text = "I like oranges"
print(text)
Explanation: 2.1 Variable assignment
If you vaguely remember your math classes in school, this should look familiar. It is basically the same notation with the name of the variable on the left, the value on the right, and the '=' sign in the middle. This is what is called assignment. We stored a value and named it using the '=' symbol, so that we can easily use it later on without having to type the particular value.
We can use the box metaphor to further explain this concept. The variable x above behaves pretty much like a box on which we write an x with a thick, black marker to find it back later. In this box we can put whatever we want, such as a piece of text or a numerical value. In Python, the term variable refers to such a box, whereas the term value refers to what is inside this box.
Note that we can re-use variable names for other values, but that any assignment will overwrite the original value! In other words: when you re-asign a variable, you remove the content of the box and put something new in it. Each variable will always contain the value that you last assigned to it.
End of explanation
x = 3
print(x)
print(x * x)
print(x + x)
print(x - 6)
Explanation: When we have stored values inside variables, we can do interesting things with these variables. Run the following code block to see what happens.
End of explanation
sniffles = "Hello, world!"
print(sniffles)
Explanation: 2.2 Variable names
Note that the variable names text and x used above are not part of Python. In principle, you could use any name you like. Even if you change the variable text to something silly like pikachu or sniffles, the example would still work:
End of explanation
seconds_in_seven_years = 220752000
print(seconds_in_seven_years)
Explanation: However, variable names are only valid if they:
start with a letter or underscore (_)
only contain letters, numbers and underscores
Even though you could use any variable name as long as they are valid, there are some naming conventions that are explained in the PEP8 Style Guide for Python Code. For now, it's enough to remember the following for naming your variables:
use clear, meaningful, descriptive variable names so that your code will remain understandable
use the lowercase_with_underscores style, with lowercase characters and underscores for separating words
do not use built-in names, such as print or sum (these will turn green in Jupyter Notebooks)
For example, the following variable name is valid, much more descriptive than x would be and follows the naming conventions:
End of explanation
first_number = 5
print(first_number)
second_number = first_number
first_number = 3
print(first_number)
print(second_number)
Explanation: 2.3 Copying/referencing variables
We can also 'copy' the contents of a variable into another variable, which is what happens in the code below. In fact, what is happening is that the variable second_number now refers to the same data object as first_number. You should of course watch out in such cases: make sure that you keep track of the value of each individual variable in your code (later in the course, we will see that this is especially tricky with data types that are mutable (i.e. things you can modify after you created them), such as lists).
End of explanation
text = input("Please enter some text: ")
print(text)
Explanation: Have a look at this code at Python Tutor to see what's happening!
2.4 User input
Up until now we have defined the values stored in the variables ourselves. However, we can also ask for input from a user. We'll make use of another built-in function: input(). This takes user input and returns it as a string. Try it below:
End of explanation
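One extra example that is not part of the original notebook: because input() always returns a string, you need an explicit conversion if you want to treat the answer as a number. The variable names below are just illustrative.
# input() returns a string; convert it explicitly if you need a number.
age = input("Please enter your age: ")
age_as_number = int(age)   # raises a ValueError if the text is not a whole number
print(age_as_number + 1)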
# your code here
Explanation: Exercises
Exercise 1:
Use Python as a calculator to calculate the number of seconds in seven years (in one line of code). Assign the output to a variable with a clear variable name. Print the result.
End of explanation
days_in_year = 365
# assign each of the values to meaningful variable names
seconds_in_seven_years = days_in_year * # finish this line
print(seconds_in_seven_years)
Explanation: Exercise 2:
Rewrite the code you wrote for exercise 1 by assigning each of the numerical values to a variable with a clear name. Then use these variables to calculate the number of seconds in seven years. We've made a start for you below:
End of explanation
eggs = 3
_eggs = 6
5eggs = 5
eggs$ = 1
eggs123 = 9
ten_eggs = 10
TwelveEggs = 8
twelve.eggs = 12
Explanation: Exercise 3:
Run the following code block and see what happens. Can you fix the invalid variable names? You should get no error in the end.
End of explanation
first_number = 3
second_number = 5
Explanation: Exercise 4:
Can you write some code that swaps the values of these two variables? Hint: the easiest way is to create an extra variable. If you like a more challenging exercise, try to swap the values without using an extra variable.
End of explanation
# adapt the code
name1 = "Paul"
print("Hello,", name1)
Explanation: Exercise 5:
Write a program that asks two people for their names using input(). Store the names in variables called name1 and name2. Say hello to both of them.
End of explanation
my_text = 'The word 'python' has many meanings in natural language. '
Explanation: Exercise 6:
The following piece of code will not work. You can see this, because it will print an error message (more on this in the following chapters). Can you figure out how to fix it?
Hint 1: Look at the quote (') characters.
Hint 2: You can use single quotes (') and double quotes (") to define strings.
End of explanation |
13,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Downloading files to your local file system
files.download will invoke a browser download of the file to the user's local computer.
Step2: Google Drive
You can access files in Drive in a number of ways, including
Step3: PyDrive
The example below shows 1) authentication, 2) file upload, and 3) file download. More examples are available in the PyDrive documentation
Step4: Drive REST API
The first step is to authenticate.
Step5: Now we can construct a Drive API client.
Step6: With the client created, we can use any of the functions in the Google Drive API reference. Examples follow.
Creating a new Drive file with data from Python
Step7: After executing the cell above, a new file named 'Sample file' will appear in your drive.google.com file list. Your file ID will differ since you will have created a new, distinct file from the example above.
Downloading data from a Drive file into Python
Step8: Google Sheets
Our examples below will use the existing open-source gspread library for interacting with Sheets.
First, we'll install the package using pip.
Step9: Next, we'll import the library, authenticate, and create the interface to sheets.
Step10: Below is a small set of gspread examples. Additional examples are shown on the gspread Github page.
Creating a new sheet with data from Python
Step11: After executing the cell above, a new spreadsheet will be shown in your sheets list on sheets.google.com.
Step12: After executing the cell above, the sheet will be populated with random numbers in the assigned range.
Downloading data from a sheet into Python as a Pandas DataFrame
We'll read back the data that we inserted above and convert the result into a Pandas DataFrame.
(The data you observe will differ since the contents of each cell is a random number.)
Step13: Google Cloud Storage (GCS)
We'll start by authenticating to GCS and creating the service client.
Step14: Upload a file from Python to a GCS bucket
We'll start by creating the sample file to be uploaded.
Step15: Next, we'll upload the file using the gsutil command, which is included by default on Colab backends.
Step16: Using Python
This section demonstrates how to upload files using the native Python API rather than gsutil.
This snippet is based on a larger example with additional uses of the API.
Step17: The cell below uploads the file to our newly created bucket.
Step18: Once the upload has finished, the data will appear in the cloud console storage browser for your project
Step19: Using Python
We repeat the download example above using the native Python API. | Python Code:
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
Explanation: <a href="https://colab.research.google.com/github/termanli/CLIOL/blob/master/External_data_Drive,_Sheets,_and_Cloud_Storage.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook provides recipes for loading and saving data from external sources.
Local file system
Uploading files from your local file system
files.upload returns a dictionary of the files which were uploaded.
The dictionary is keyed by the file name, the value is the data which was uploaded.
End of explanation
from google.colab import files
with open('example.txt', 'w') as f:
f.write('some content')
files.download('example.txt')
Explanation: Downloading files to your local file system
files.download will invoke a browser download of the file to the user's local computer.
End of explanation
from google.colab import drive
drive.mount('/content/gdrive')
with open('/content/gdrive/My Drive/foo.txt', 'w') as f:
f.write('Hello Google Drive!')
!cat /content/gdrive/My\ Drive/foo.txt
Explanation: Google Drive
You can access files in Drive in a number of ways, including:
1. Using the native REST API;
1. Using a wrapper around the API such as PyDrive; or
1. Mounting your Google Drive in the runtime's virtual machine.
Example of each are below.
Mounting Google Drive locally
The example below shows how to mount your Google Drive in your virtual machine using an authorization code, and shows a couple of ways to write & read files there. Once executed, observe the new file (foo.txt) is visible in https://drive.google.com/
Note this only supports reading and writing files; to programmatically change sharing settings etc use one of the other options below.
End of explanation
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# PyDrive reference:
# https://gsuitedevs.github.io/PyDrive/docs/build/html/index.html
# 2. Create & upload a file text file.
uploaded = drive.CreateFile({'title': 'Sample upload.txt'})
uploaded.SetContentString('Sample upload file content')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
# 3. Load a file by ID and print its contents.
downloaded = drive.CreateFile({'id': uploaded.get('id')})
print('Downloaded content "{}"'.format(downloaded.GetContentString()))
Explanation: PyDrive
The example below shows 1) authentication, 2) file upload, and 3) file download. More examples are available in the PyDrive documentation
End of explanation
from google.colab import auth
auth.authenticate_user()
Explanation: Drive REST API
The first step is to authenticate.
End of explanation
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
Explanation: Now we can construct a Drive API client.
End of explanation
# Create a local file to upload.
with open('/tmp/to_upload.txt', 'w') as f:
f.write('my sample file')
print('/tmp/to_upload.txt contains:')
!cat /tmp/to_upload.txt
# Upload the file to Drive. See:
#
# https://developers.google.com/drive/v3/reference/files/create
# https://developers.google.com/drive/v3/web/manage-uploads
from googleapiclient.http import MediaFileUpload
file_metadata = {
'name': 'Sample file',
'mimeType': 'text/plain'
}
media = MediaFileUpload('/tmp/to_upload.txt',
mimetype='text/plain',
resumable=True)
created = drive_service.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
print('File ID: {}'.format(created.get('id')))
Explanation: With the client created, we can use any of the functions in the Google Drive API reference. Examples follow.
Creating a new Drive file with data from Python
End of explanation
# Download the file we just uploaded.
#
# Replace the assignment below with your file ID
# to download a different file.
#
# A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'target_file_id'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
Explanation: After executing the cell above, a new file named 'Sample file' will appear in your drive.google.com file list. Your file ID will differ since you will have created a new, distinct file from the example above.
Downloading data from a Drive file into Python
End of explanation
!pip install --upgrade -q gspread
Explanation: Google Sheets
Our examples below will use the existing open-source gspread library for interacting with Sheets.
First, we'll install the package using pip.
End of explanation
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
Explanation: Next, we'll import the library, authenticate, and create the interface to sheets.
End of explanation
sh = gc.create('A new spreadsheet')
Explanation: Below is a small set of gspread examples. Additional examples are shown on the gspread Github page.
Creating a new sheet with data from Python
End of explanation
# Open our new sheet and add some data.
worksheet = gc.open('A new spreadsheet').sheet1
cell_list = worksheet.range('A1:C2')
import random
for cell in cell_list:
cell.value = random.randint(1, 10)
worksheet.update_cells(cell_list)
Explanation: After executing the cell above, a new spreadsheet will be shown in your sheets list on sheets.google.com.
End of explanation
# Open our new sheet and read some data.
worksheet = gc.open('A new spreadsheet').sheet1
# get_all_values gives a list of rows.
rows = worksheet.get_all_values()
print(rows)
# Convert to a DataFrame and render.
import pandas as pd
pd.DataFrame.from_records(rows)
Explanation: After executing the cell above, the sheet will be populated with random numbers in the assigned range.
Downloading data from a sheet into Python as a Pandas DataFrame
We'll read back the data that we inserted above and convert the result into a Pandas DataFrame.
(The data you observe will differ since the contents of each cell is a random number.)
End of explanation
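As an aside, if your own sheet has a header row (ours holds only random numbers), a common variant is to use that first row as the DataFrame column names; shown commented since it does not apply to the sheet created above.
# df = pd.DataFrame(rows[1:], columns=rows[0])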
from google.colab import auth
auth.authenticate_user()
Explanation: Google Cloud Storage (GCS)
We'll start by authenticating to GCS and creating the service client.
End of explanation
# Create a local file to upload.
with open('/tmp/to_upload.txt', 'w') as f:
f.write('my sample file')
print('/tmp/to_upload.txt contains:')
!cat /tmp/to_upload.txt
Explanation: Upload a file from Python to a GCS bucket
We'll start by creating the sample file to be uploaded.
End of explanation
# First, we need to set our project. Replace the assignment below
# with your project ID.
project_id = 'Your_project_ID_here'
!gcloud config set project {project_id}
import uuid
# Make a unique bucket to which we'll upload the file.
# (GCS buckets are part of a single global namespace.)
bucket_name = 'colab-sample-bucket-' + str(uuid.uuid1())
# Full reference: https://cloud.google.com/storage/docs/gsutil/commands/mb
!gsutil mb gs://{bucket_name}
# Copy the file to our new bucket.
# Full reference: https://cloud.google.com/storage/docs/gsutil/commands/cp
!gsutil cp /tmp/to_upload.txt gs://{bucket_name}/
# Finally, dump the contents of our newly copied file to make sure everything worked.
!gsutil cat gs://{bucket_name}/to_upload.txt
Explanation: Next, we'll upload the file using the gsutil command, which is included by default on Colab backends.
End of explanation
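To double-check what actually landed in the bucket, a plain listing also works; gsutil ls is a standard command and reuses the same bucket_name variable as above.
# List the bucket contents to verify the upload.
!gsutil ls gs://{bucket_name}/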
# The first step is to create a bucket in your cloud project.
#
# Replace the assignment below with your cloud project ID.
#
# For details on cloud projects, see:
# https://cloud.google.com/resource-manager/docs/creating-managing-projects
project_id = 'Your_project_ID_here'
# Authenticate to GCS.
from google.colab import auth
auth.authenticate_user()
# Create the service client.
from googleapiclient.discovery import build
gcs_service = build('storage', 'v1')
# Generate a random bucket name to which we'll upload the file.
import uuid
bucket_name = 'colab-sample-bucket' + str(uuid.uuid1())
body = {
'name': bucket_name,
# For a full list of locations, see:
# https://cloud.google.com/storage/docs/bucket-locations
'location': 'us',
}
gcs_service.buckets().insert(project=project_id, body=body).execute()
print('Done')
Explanation: Using Python
This section demonstrates how to upload files using the native Python API rather than gsutil.
This snippet is based on a larger example with additional uses of the API.
End of explanation
from googleapiclient.http import MediaFileUpload
media = MediaFileUpload('/tmp/to_upload.txt',
mimetype='text/plain',
resumable=True)
request = gcs_service.objects().insert(bucket=bucket_name,
name='to_upload.txt',
media_body=media)
response = None
while response is None:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, response = request.next_chunk()
print('Upload complete')
Explanation: The cell below uploads the file to our newly created bucket.
End of explanation
# Download the file.
!gsutil cp gs://{bucket_name}/to_upload.txt /tmp/gsutil_download.txt
# Print the result to make sure the transfer worked.
!cat /tmp/gsutil_download.txt
Explanation: Once the upload has finished, the data will appear in the cloud console storage browser for your project:
https://console.cloud.google.com/storage/browser?project=YOUR_PROJECT_ID_HERE
Downloading a file from GCS to Python
Next, we'll download the file we just uploaded in the example above. It's as simple as reversing the order in the gsutil cp command.
End of explanation
# Authenticate to GCS.
from google.colab import auth
auth.authenticate_user()
# Create the service client.
from googleapiclient.discovery import build
gcs_service = build('storage', 'v1')
from googleapiclient.http import MediaIoBaseDownload
with open('/tmp/downloaded_from_gcs.txt', 'wb') as f:
request = gcs_service.objects().get_media(bucket=bucket_name,
object='to_upload.txt')
media = MediaIoBaseDownload(f, request)
done = False
while not done:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = media.next_chunk()
print('Download complete')
# Inspect the file we downloaded to /tmp
!cat /tmp/downloaded_from_gcs.txt
Explanation: Using Python
We repeat the download example above using the native Python API.
End of explanation |
13,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Isosurface in volumetric data
Linear and nonlinear slices in volumetric data, as graphs of functions of two variables, were defined in this Jupyter Notebook: http://nbviewer.jupyter.org/github/empet/Plotly-plots/blob/master/Plotly-Slice-in-volumetric-data.ipynb
Step1: We define an isosurface of equation $F(x,y,z)=x^4 + y^4 + z^4 - (x^2+y^2+z^2)^2 + 3(x^2+y^2+z^2) - 3=1.2$ in the volume $[-2, 2]\times[-2, 2]\times [-2, 2]$, on which a scalar field (like density or another physical property) is defined by $\psi(x,y,z)= -x\,e^{-(x^2+y^2+z^2)}$.
An isosurface $F(x,y,z)=c$ is plotted as a trisurf surface, having the vertices and faces (triangles) of a triangulation returned by the function skimage.measure.marching_cubes_lewiner(F,c).
The isosurface is colored with a colorscale based on the values of the scalar field, $\psi(x,y,z)$, at its points.
The following function returns the intensities at the vertices of the triangulation of the isosurface
Step2: Define a meshgrid on our volume and the function that defines the (iso)surface equation
Step3: Although our 3D data is defined in $[-2,2]^3$, the function measure.marching_cubes_lewiner returns verts translated such that they belong
to the parallelepiped $[0,4]^3$, provided that the spacing key in this function is the same as the spacing
of voxels in the initial parallelepiped,
i.e. spacing=(X[1,0, 0]-X[0,0,0], Y[0,1, 0]-Y[0,0,0], Z[0,0, 1]-Z[0,0,0])
Step4: Now we translate the verts back in the original parallelipiped | Python Code:
import plotly.graph_objs as go
import numpy as np
from skimage import measure
Explanation: Isosurface in volumetric data
Linear and nonlinear slices in volumetric data, as graphs of functions of two variables, were defined in this Jupyter Notebook http://nbviewer.jupyter.org/github/empet/Plotly-plots/blob/master/Plotly-Slice-in-volumetric-data.ipynb. Here we illustrate how to plot an isosurface in volumetric data.
End of explanation
def intensity_func(x,y,z):
return -x * np.exp(-(x**2 + y**2 + z**2))
def plotly_triangular_mesh(vertices, faces, intensities=intensity_func, colorscale="Viridis",
showscale=False, reversescale=False, plot_edges=False):
# vertices: a numpy array of shape (n_vertices, 3)
# faces: a numpy array of shape (n_faces, 3)
# intensities can be either a function of (x,y,z) or a list of values
x, y, z = vertices.T
I, J, K = faces.T
if hasattr(intensities, '__call__'):
intensity = intensities(x,y,z) # the intensities are computed here via the passed function,
# that returns a list of vertices intensities
elif isinstance(intensities, (list, np.ndarray)):
intensity = intensities #intensities are given in a list
else:
raise ValueError("intensities can be either a function or a list, np.array")
mesh = go.Mesh3d(x=x,
y=y,
z=z,
colorscale=colorscale,
reversescale=reversescale,
intensity= intensity,
i=I,
j=J,
k=K,
name='',
showscale=showscale
)
if showscale is True:
mesh.update(colorbar=dict(thickness=20, ticklen=4, len=0.75))
if plot_edges is False: # the triangle sides are not plotted
return [mesh]
else: #plot edges
#define the lists Xe, Ye, Ze, of x, y, resp z coordinates of edge end points for each triangle
#None separates data corresponding to two consecutive triangles
tri_vertices= vertices[faces]
Xe=[]
Ye=[]
Ze=[]
for T in tri_vertices:
Xe += [T[k%3][0] for k in range(4)]+[ None]
Ye += [T[k%3][1] for k in range(4)]+[ None]
Ze += [T[k%3][2] for k in range(4)]+[ None]
#define the lines to be plotted
lines = go.Scatter3d(
x=Xe,
y=Ye,
z=Ze,
mode='lines',
name='',
line=dict(color= 'rgb(70,70,70)', width=1)
)
return [mesh, lines]
Explanation: We define an isosurface of equation $F(x,y,z)=x^4 + y^4 + z^4 - (x^2+y^2+z^2)^2 + 3(x^2+y^2+z^2) - 3=1.2$ in the volume $[-2, 2]\times[-2, 2]\times [-2, 2]$, on which a scalar field (like density or another physical property) is defined by $\psi(x,y,z)= -x\,e^{-(x^2+y^2+z^2)}$.
An isosurface $F(x,y,z)=c$ is plotted as a trisurf surface, having the vertices and faces (triangles) of a triangulation returned by the function skimage.measure.marching_cubes_lewiner(F,c).
The isosurface is colored with a colorscale based on the values of the scalar field, $\psi(x,y,z)$, at its points.
The following function returns the intensities at the vertices of the triangulation of the isosurface:
End of explanation
X, Y, Z = np.mgrid[-2:2:50j, -2:2:50j, -2:2:50j]
surf_eq = X**4 + Y**4 + Z**4 - (X**2+Y**2+Z**2)**2 + 3*(X**2+Y**2+Z**2) - 3
Explanation: Define a meshgrid on our volume and the function that defines the (iso)surface equation:
End of explanation
verts, faces = measure.marching_cubes_lewiner(surf_eq, 1.2,
spacing=(X[1,0, 0]-X[0,0,0], Y[0,1, 0]-Y[0,0,0],
Z[0,0, 1]-Z[0,0,0]))[:2]
title = 'Isosurface in volumetric data'
Explanation: Although our 3D data is defined in $[-2,2]^3$, the function measure.marching_cubes_lewiner returns verts translated such that they belong
to the parallelepiped $[0,4]^3$, provided that the spacing key in this function is the same as the spacing
of voxels in the initial parallelepiped,
i.e. spacing=(X[1,0, 0]-X[0,0,0], Y[0,1, 0]-Y[0,0,0], Z[0,0, 1]-Z[0,0,0]):
End of explanation
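Equivalently to the hard-coded shift in the next cell, the vertices could be translated back using the grid origin; this is only an illustrative alternative (left commented so the shift is not applied twice), assuming X, Y, Z are the meshgrid arrays defined above.
# verts = verts + np.array([X.min(), Y.min(), Z.min()])  # same effect as verts - 2 for this grid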
verts = verts-2
pl_BrBG=[[0.0, 'rgb(84, 48, 5)'],
[0.1, 'rgb(138, 80, 9)'],
[0.2, 'rgb(191, 129, 45)'],
[0.3, 'rgb(222, 192, 123)'],
[0.4, 'rgb(246, 232, 195)'],
[0.5, 'rgb(244, 244, 244)'],
[0.6, 'rgb(199, 234, 229)'],
[0.7, 'rgb(126, 203, 192)'],
[0.8, 'rgb(53, 151, 143)'],
[0.9, 'rgb(0, 101, 93)'],
[1.0, 'rgb(0, 60, 48)']]
data = plotly_triangular_mesh(verts, faces, colorscale=pl_BrBG,
showscale=True)
axis = dict(showbackground=True,
backgroundcolor="rgb(230, 230,230)",
gridcolor="rgb(255, 255, 255)",
zerolinecolor="rgb(255, 255, 255)")
noaxis = dict(visible=False)
layout = go.Layout(
title=title,
font=dict(family='Balto'),
showlegend=False,
width=800,
height=800,
scene=dict(xaxis=axis,
yaxis=axis,
zaxis=axis,
aspectratio=dict(x=1,
y=1,
z=1)
)
)
fig = go.Figure(data=data, layout=layout)
import plotly.plotly as py
py.sign_in('username', 'api_key')
py.iplot(fig, filename='isosurface-volume')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Now we translate the verts back into the original parallelepiped:
End of explanation |
13,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plug in dummy classifier
Step1: Use on classification model with exact method
Step2: Forest regressor model with estimated, exact, and recursive, using data
Works, though estimated is pretty bad
Step3: Forest regressor model with estimated, exact, and recursive, using grid and data
Step4: Multiclass classification | Python Code:
def f(array):
return (np.sum(X, axis=1) > 0.1).astype(float)
from sklearn.dummy import DummyRegressor
c = DummyRegressor()
c.fit(X, y)
c.predict = f
pdp, axes = partial_dependence.partial_dependence(c, [0, 1], X = X, method='exact')
Explanation: Plug in dummy classifier:
Works with a hack
End of explanation
pdp, axes = partial_dependence.partial_dependence(logit_model, [0, 1], X = X, method='exact')
pdp, axes = partial_dependence.partial_dependence(logit_model, [0, 1], X = X, method='estimated')
Explanation: Use on classification model with exact method:
Fails due to bad error handling
End of explanation
pdp_exact, axes_exact = partial_dependence.partial_dependence(rf_r_model, [0], X = X, method='exact')
pdp_est, axes_est = partial_dependence.partial_dependence(rf_r_model, [0], X = X, method='estimated')
pdp_rec, axes_rec = partial_dependence.partial_dependence(rf_r_model, [0], X = X, method='recursion')
ax = pd.DataFrame(pdp_exact[0], index = axes_exact[0], columns=['exact']).plot()
pd.DataFrame(pdp_est[0], index = axes_est[0], columns=['estimated']).plot(ax=ax)
pd.DataFrame(pdp_rec[0], index = axes_rec[0], columns=['recursion']).plot(ax=ax)
Explanation: Forest regressor model with estimated, exact, and recursive, using data
Works, though estimated is pretty bad
End of explanation
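Purely cosmetic, but axis labels make the comparison easier to read; ax is the matplotlib Axes returned by the pandas plot call above, so nothing beyond what is already defined is assumed.
ax.set_xlabel('feature 0')
ax.set_ylabel('partial dependence')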
grid = [i / 10. for i in range(-70, 90)]
pdp_exact, axes_exact = partial_dependence.partial_dependence(rf_r_model, [0], grid = grid, X=X, method='exact')
pdp_est, axes_est = partial_dependence.partial_dependence(rf_r_model, [0], grid = grid, X=X, method='estimated')
pdp_rec, axes_rec = partial_dependence.partial_dependence(rf_r_model, [0], grid = grid, X=X, method='recursion')
ax = pd.DataFrame(pdp_exact[0], index = axes_exact[0], columns=['exact']).plot()
pd.DataFrame(pdp_est[0], index = axes_est[0], columns=['estimated']).plot(ax=ax)
pd.DataFrame(pdp_rec[0], index = axes_rec[0], columns=['recursion']).plot(ax=ax)
Explanation: Forest regressor model with estimated, exact, and recursive, using grid and data:
Fails due to the requirement of either X or grid, but not both
End of explanation
def get_multiclass(x, Y):
if x > np.percentile(Y, 66):  # np.percentile expects percentiles on a 0-100 scale
return 1
elif x < np.percentile(Y, 33):  # lower third of the target values
return -1
else:
return 0
y_multiclass = np.array([get_multiclass(x, y) for x in y])  # list comprehension so this also works on Python 3, where map() is lazy
rf_multi_model = RandomForestClassifier()
rf_multi_model.fit(X, y_multiclass)
pdp_exact, axes_exact = partial_dependence.partial_dependence(rf_multi_model, [0], X=X, method='exact')
Explanation: Multiclass classification:
Fails due to bad error handling
End of explanation |
13,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").
Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector".
First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
Step3: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things
Step4: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output
Step5: Predicting output given regression weights
Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0. This is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this
Step6: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights
Step7: If you want to test your code run the following cell
Step8: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows
Step9: To test your feature derivative run the following
Step10: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria
Step11: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.
Running the Gradient Descent as Simple Regression
First let's split the data into training and test data.
Step12: Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model
Step13: Next run your gradient descent with the above parameters.
How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Quiz Question
Step14: Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first
Step15: Now compute your predictions using test_simple_feature_matrix and your weights from above.
Step16: Quiz Question
Step17: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
Step18: Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters
Step19: Use the above parameters to estimate the model weights. Record these values for your quiz.
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Step20: Quiz Question
Step21: What is the actual price for the 1st house in the test data set?
Step22: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 2: Multiple Regression (gradient descent)
In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.
In this notebook we will cover estimating multiple regression weights via gradient descent. You will:
* Add a constant column of 1's to a graphlab SFrame to account for the intercept
* Convert an SFrame into a Numpy array
* Write a predict_output() function using Numpy
* Write a numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.
* Use the gradient descent function to estimate regression weights for multiple features
Fire up graphlab create
Make sure you have the latest version of graphlab (>= 1.7)
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").
Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector".
First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
End of explanation
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature e.g. ('price') and will return two things:
* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')
* A numpy array containing the values of the output
With this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)
Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
Explanation: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
End of explanation
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
Explanation: Predicting output given regression weights
Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0. This is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:
End of explanation
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = []
for col in range(feature_matrix.shape[0]):
predictions.append(np.dot(feature_matrix[col,], weights))
return(predictions)
Explanation: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
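The row-by-row loop above works, but since the whole point is a matrix-vector product, an equivalent (and much faster) version is a single np.dot; shown commented as an aside rather than as the required solution.
# predictions = np.dot(feature_matrix, weights)  # vectorized equivalent, returns a numpy array instead of a list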
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
Explanation: If you want to test your code run the following cell:
End of explanation
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 * np.dot(errors, feature)
return(derivative)
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:
(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i]*[feature_i] + ... + w[k]*[feature_k] - output)^2
Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:
2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i]*[feature_i] + ... + w[k]*[feature_k] - output)*[feature_i]
The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:
2*error*[feature_i]
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors.
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
End of explanation
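A tiny numeric check (not part of the assignment) makes the "twice the dot product" claim concrete: 2*(1*4 + 2*5 + 3*6) = 64, so feature_derivative should return 64 on these toy arrays.
toy_errors = np.array([1., 2., 3.])
toy_feature = np.array([4., 5., 6.])
print feature_derivative(toy_errors, toy_feature) # expected: 64.0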
(example_features, example_output) = get_numpy_data(sales, ['sqft_living', 'sqft_living15'], 'price')
my_weights = np.array([0., 0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
Explanation: To test your feature derivative run the following:
End of explanation
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = feature_derivative(errors, feature_matrix[:,i])
# add the squared value of the derivative to the gradient magnitude (for assessing convergence)
gradient_sum_squares += derivative ** 2
# subtract the step size times the derivative from the current weight
weights[i] -= step_size * derivative
# compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
print gradient_magnitude
if gradient_magnitude < tolerance:
converged = True
return(weights)
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria
End of explanation
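One practical safeguard, not required by the assignment, is to cap the number of iterations so that a badly chosen step size cannot loop forever; a sketch of the idea (as comments only):
# iterations = 0
# while not converged and iterations < max_iterations:
#     ... same loop body as above ...
#     iterations += 1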
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.
Running the Gradient Descent as Simple Regression
First let's split the data into training and test data.
End of explanation
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
sgd_weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
Explanation: Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:
End of explanation
sgd_weights
Explanation: Next run your gradient descent with the above parameters.
How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?
End of explanation
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
Explanation: Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):
End of explanation
predictions = predict_output(test_simple_feature_matrix, sgd_weights)
Explanation: Now compute your predictions using test_simple_feature_matrix and your weights from above.
End of explanation
predictions[:1]
Explanation: Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?
End of explanation
residuals = [(predictions[i] - test_data[i]['price']) ** 2 for i in range(len(predictions))]
print sum(residuals)
Explanation: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
End of explanation
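Since test_output was already produced by get_numpy_data above, the same RSS can be written directly with numpy; this is just an equivalent formulation of the list comprehension.
print np.sum((np.array(predictions) - test_output) ** 2) # should match the RSS printed above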
train_data[:1]
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
sgd_weights_2 = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
sgd_weights_2
Explanation: Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
End of explanation
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
predictions_2 = predict_output(test_feature_matrix, sgd_weights_2)
Explanation: Use the above parameters to estimate the model weights. Record these values for your quiz.
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
End of explanation
predictions_2[:1]
Explanation: Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?
End of explanation
test_data[0]['price']
Explanation: What is the actual price for the 1st house in the test data set?
End of explanation
residuals_2 = [(predictions_2[i] - test_data[i]['price']) ** 2 for i in range(len(predictions_2))]
print sum(residuals_2)
Explanation: Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?
Now use your predictions and the output to compute the RSS for model 2 on TEST data.
End of explanation |
13,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chempy
We will now introduce the Chempy function, which will calculate the chemical evolution of a one-zone open box model
Step1: Loading all the input
solar abundances
SFR
infall
initial abundances and inflowing abundances
Step2: Elemental abundances at start
We need to define the abundances of
Step3: Initialising the element evolution matrix
We now feed everything into the abundance matrix and check its entries
Step4: Time integration
With the advance_one_step method we can evolve the matrix in time, given that we provide the feedback from each step's previous SSP.
Step5: Making abundances from element fractions
The cube stores everything in elemental fractions, we use a tool to convert these to abundances scaled to solar
Step6: Likelihood calculation
There are a few build-in functions (actually representing the observational constraints from the Chempy paper) which return a likelihood. One of those is called sol_norm and compares the proto-solar abundances with the Chempy ISM abundances 4.5 Gyr ago.
Step7: Net vs. total yield
Now we will change a little detail in the time-integration. Instead of letting unprocessed material that is expelled from the stars ('unprocessed_mass_in_winds' in the yield tables) being composed of the stellar birth material, which would be consistent (and is what I call 'net' yield), we now use solar-scaled material which only has the same metallicity as the stellar birth material (This is what happens if yield tables are giving the total yield including the unprocessed material, which means that the author usually uses solar-scaled material which is then expelled by the star, but might not even be produced by it). Therefore we see a difference in the likelihood which is better for the total yields case (-180.05 vs -198.30). We see the difference especially well in K and Ti.
Step8: Making chemical evolution modelling fast and flexible
Now we have all ingredients at hand. We use a wrapper function where we only need to pass the ModelParameters.
Step9: IMF effect
now we can easily check the effect of the IMF on the chemical evolution
Step10: SFR effect
We can do the same for the peak of the SFR etc...
Step11: Time resolution
The time steps are equidistant and the resolution is flexible. Even with coarse 0.5Gyr resolution the results are quite good, saving a lot of computational time. Here we test different time resolution of 0.5, 0.1 and 0.025 Gyr.
All results converge after metallicity increases above -1. The shorter time sampling allows more massive stars to explode first which generally have alpha enhanced yields, therefore the [O/Fe] is higher in the beginning.
Step12: A note on chemical evolution tracks and 'by eye' fit
Sometimes Astronomers like to show that their chemical evolution track runs through some stellar abundance data points. But if we want the computer to steer our result fit we need to know the selection function of the stars that we try to match and we need to take our star formation history into account (Maybe there are almost no stars formed after 8Gyr).
- We assume that we have an unbiased sample of red clump stars
- We reproduce its selection function by multiplying their age-distribution for a flat SFR with the SFR.
(for the age distribution I have included a cut from a mock catalogue according to Just&Rybizki 2016 but you could also use the analytic formula from Bovy+2014)
- Then we sample some synthetic stars (with observational errors) along the chemical evolutionary track
Step13: This PDF can then be compared to real data to get a realistic likelihood.
The nucleosynthetic feedback per element
With the plot_processes routine we can plot the total feedback of each element and the fractional contribution from each nucleosynthetic feedback for a specific Chempy run. | Python Code:
%pylab inline
# loading the default parameters
from Chempy.parameter import ModelParameters
a = ModelParameters()
Explanation: Chempy
We will now introduce the Chempy function, which will calculate the chemical evolution of a one-zone open box model
End of explanation
# Initialising sfr, infall, elements to trace, solar abundances
from Chempy.wrapper import initialise_stuff
basic_solar, basic_sfr, basic_infall = initialise_stuff(a)
elements_to_trace = a.elements_to_trace
Explanation: Loading all the input
solar abundances
SFR
infall
initial abundances and inflowing abundances
End of explanation
# Setting the abundance fractions at the beginning to primordial
from Chempy.infall import INFALL, PRIMORDIAL_INFALL
basic_primordial = PRIMORDIAL_INFALL(list(elements_to_trace),np.copy(basic_solar.table))
basic_primordial.primordial()
basic_primordial.fractions
Explanation: Elemental abundances at start
We need to define the abundances of:
- The ISM at beginning
- The corona gas at beginning
- The cosmic inflow into the corona for all times.
For all we chose primordial here.
End of explanation
# Initialising the ISM instance
from Chempy.time_integration import ABUNDANCE_MATRIX
cube = ABUNDANCE_MATRIX(np.copy(basic_sfr.t),np.copy(basic_sfr.sfr),np.copy(basic_infall.infall),list(elements_to_trace),list(basic_primordial.symbols),list(basic_primordial.fractions),float(a.gas_at_start),list(basic_primordial.symbols),list(basic_primordial.fractions),float(a.gas_reservoir_mass_factor),float(a.outflow_feedback_fraction),bool(a.check_processes),float(a.starformation_efficiency),float(a.gas_power), float(a.sfr_factor_for_cosmic_accretion), list(basic_primordial.symbols), list(basic_primordial.fractions))
# All the entries of the ISM instance
print(list(cube.cube.dtype.names))
# Helium at start
print('Primordial ratio of H to He: ',cube.cube['H'][0]/cube.cube['He'][0])
print('Helium over time: ',cube.cube['He'])
Explanation: Initialising the element evolution matrix
We now feed everything into the abundance matrix and check its entries
End of explanation
# Now we run the time integration
from Chempy.wrapper import SSP_wrap
basic_ssp = SSP_wrap(a)
for i in range(len(basic_sfr.t)-1):
j = len(basic_sfr.t)-i
ssp_mass = float(basic_sfr.sfr[i])
# The metallicity needs to be passed for the yields to be calculated as well as the initial elemental abundances
element_fractions = []
for item in elements_to_trace:
element_fractions.append(float(np.copy(cube.cube[item][max(i-1,0)]/cube.cube['gas'][max(i-1,0)])))## gas element fractions from one time step before
metallicity = float(cube.cube['Z'][i])
time_steps = np.copy(basic_sfr.t[:j])
basic_ssp.calculate_feedback(float(metallicity), list(elements_to_trace), list(element_fractions), np.copy(time_steps), ssp_mass)
cube.advance_one_step(i+1,np.copy(basic_ssp.table),np.copy(basic_ssp.sn2_table),np.copy(basic_ssp.agb_table),np.copy(basic_ssp.sn1a_table),np.copy(basic_ssp.bh_table))
print(cube.cube['He'])
Explanation: Time integration
With the advance_one_step method we can evolve the matrix in time, given that we provide the feedback from each step's previous SSP.
End of explanation
# Turning the fractions into dex values (normalised to solar [X/H])
from Chempy.making_abundances import mass_fraction_to_abundances
abundances,elements,numbers = mass_fraction_to_abundances(np.copy(cube.cube),np.copy(basic_solar.table))
print(abundances['He'])
## Alpha enhancement over time
plot(cube.cube['time'][1:],abundances['O'][1:]-abundances['Fe'][1:])
plt.xlabel('time in Gyr')
plt.ylabel('[O/Fe]')
# [X/Fe] vs. [Fe/H]
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], label = 'O')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], label = 'Mn')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], label = 'N')
plt.xlabel('[Fe/H]')
plt.ylabel('[X/Fe]')
plt.legend()
Explanation: Making abundances from element fractions
The cube stores everything in elemental fractions, so we use a tool to convert these to abundances scaled to solar:
End of explanation
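Any other ratio of the traced elements can be plotted the same way; as one more small example using only quantities defined above (N and O are both used elsewhere in this notebook):
plot(cube.cube['time'][1:], abundances['N'][1:] - abundances['O'][1:])
plt.xlabel('time in Gyr')
plt.ylabel('[N/O]')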
# Here we load a likelihood test for the solar abundances
# This is how it looks for the prior parameters with the default yield set
from Chempy.data_to_test import sol_norm
probabilities, abundance_list, element_names = sol_norm(True,a.name_string,np.copy(abundances),np.copy(cube.cube),elements_to_trace,a.element_names,np.copy(basic_solar.table),a.number_of_models_overplotted,a.produce_mock_data,a.use_mock_data,a.error_inflation)
Explanation: Likelihood calculation
There are a few built-in functions (actually representing the observational constraints from the Chempy paper) which return a likelihood. One of those is called sol_norm and compares the proto-solar abundances with the Chempy ISM abundances 4.5 Gyr ago.
End of explanation
cube = ABUNDANCE_MATRIX(np.copy(basic_sfr.t),np.copy(basic_sfr.sfr),np.copy(basic_infall.infall),list(elements_to_trace),list(basic_primordial.symbols),list(basic_primordial.fractions),float(a.gas_at_start),list(basic_primordial.symbols),list(basic_primordial.fractions),float(a.gas_reservoir_mass_factor),float(a.outflow_feedback_fraction),bool(a.check_processes),float(a.starformation_efficiency),float(a.gas_power), float(a.sfr_factor_for_cosmic_accretion), list(basic_primordial.symbols), list(basic_primordial.fractions))
basic_ssp = SSP_wrap(a)
for i in range(len(basic_sfr.t)-1):
j = len(basic_sfr.t)-i
ssp_mass = float(basic_sfr.sfr[i])
metallicity = float(cube.cube['Z'][i])
# Instead of using the ISM composition we use solar scaled material
solar_scaled_material = PRIMORDIAL_INFALL(list(elements_to_trace),np.copy(basic_solar.table))
solar_scaled_material.solar(np.log10(metallicity/basic_solar.z))
time_steps = np.copy(basic_sfr.t[:j])
basic_ssp.calculate_feedback(float(metallicity), list(elements_to_trace), list(solar_scaled_material.fractions), np.copy(time_steps), ssp_mass)
cube.advance_one_step(i+1,np.copy(basic_ssp.table),np.copy(basic_ssp.sn2_table),np.copy(basic_ssp.agb_table),np.copy(basic_ssp.sn1a_table),np.copy(basic_ssp.bh_table))
abundances,elements,numbers = mass_fraction_to_abundances(np.copy(cube.cube),np.copy(basic_solar.table))
# We do the solar abundance test again and see that the likelihood improves
probabilities, abundance_list, element_names = sol_norm(True,a.name_string,np.copy(abundances),np.copy(cube.cube),elements_to_trace,a.element_names,np.copy(basic_solar.table),a.number_of_models_overplotted,a.produce_mock_data,a.use_mock_data,a.error_inflation)
Explanation: Net vs. total yield
Now we will change a little detail in the time-integration. Instead of letting unprocessed material that is expelled from the stars ('unprocessed_mass_in_winds' in the yield tables) be composed of the stellar birth material, which would be consistent (and is what I call 'net' yield), we now use solar-scaled material which only has the same metallicity as the stellar birth material (this is what happens if yield tables give the total yield including the unprocessed material, which means that the author usually uses solar-scaled material which is then expelled by the star, but might not even be produced by it). Therefore we see a difference in the likelihood, which is better for the total yields case (-180.05 vs -198.30). We see the difference especially well in K and Ti.
End of explanation
# This is a convenience function
from Chempy.wrapper import Chempy
a = ModelParameters()
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], label = 'O')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], label = 'Mn')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], label = 'N')
plt.xlabel('[Fe/H]')
plt.ylabel('[X/Fe]')
plt.legend()
Explanation: Making chemical evolution modelling fast and flexible
Now we have all ingredients at hand. We use a wrapper function where we only need to pass the ModelParameters.
End of explanation
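To get a feel for how fast a single model evaluation is, one can simply time the wrapper call; nothing Chempy-specific is assumed here beyond the call already used above.
import time
start = time.time()
cube, abundances = Chempy(a)
print('one Chempy run took {:.2f} s'.format(time.time() - start))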
# prior IMF
a = ModelParameters()
a.imf_parameter= (0.69, 0.079,-2.29)
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], label = 'O', color = 'b')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], label = 'Mn', color = 'orange')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], label = 'N', color = 'g')
# top-heavy IMF
a = ModelParameters()
a.imf_parameter = (0.69, 0.079,-2.09)
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = ':')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = ':')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'g', linestyle = ':')
# bottom-heavy IMF
a = ModelParameters()
a.imf_parameter = (0.69, 0.079,-2.49)
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = '--')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = '--')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'g', linestyle = '--')
plt.xlabel('[Fe/H]')
plt.ylabel('[X/Fe]')
plt.title('IMF effect: top-heavy as dotted line, bottom-heavy as dashed line')
plt.legend()
Explanation: IMF effect
Now we can easily check the effect of the IMF on the chemical evolution
End of explanation
# Prior SFR
a = ModelParameters()
a.sfr_scale = 3.5
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], label = 'O', color = 'b')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], label = 'Mn', color = 'orange')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], label = 'N', color = 'g')
# Early peak in the SFR
a = ModelParameters()
a.sfr_scale = 1.5
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = ':')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = ':')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'green', linestyle = ':')
# late peak in the SFR
a = ModelParameters()
a.sfr_scale = 6.5
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = '--')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = '--')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'green', linestyle = '--')
plt.xlabel('[Fe/H]')
plt.ylabel('[X/Fe]')
plt.title('SFR effect: early peak as dotted line, late peak as dashed line')
plt.legend()
Explanation: SFR effect
We can do the same for the peak of the SFR etc...
End of explanation
## 0.5 Gyr resolution
a = ModelParameters()
a.time_steps = 28 # default
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], label = 'O', color = 'b')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], label = 'Mn', color = 'orange')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], label = 'N', color = 'g')
# 0.1 Gyr resolution
a = ModelParameters()
a.time_steps = 136
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = ':')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = ':')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'green', linestyle = ':')
# 25 Myr resolution
a = ModelParameters()
a.time_steps = 541
cube, abundances = Chempy(a)
plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], color = 'b', linestyle = '--')
plot(abundances['Fe'][1:],abundances['Mn'][1:]-abundances['Fe'][1:], color = 'orange', linestyle = '--')
plot(abundances['Fe'][1:],abundances['N'][1:]-abundances['Fe'][1:], color = 'green', linestyle = '--')
plt.xlabel('[Fe/H]')
plt.ylabel('[X/Fe]')
plt.title('Time resolution effect: 0.5 solid, 0.1 dotted, 0.025Gyr dashed line')
plt.legend()
Explanation: Time resolution
The time steps are equidistant and the resolution is flexible. Even with coarse 0.5 Gyr resolution the results are quite good, saving a lot of computational time. Here we test different time resolutions of 0.5, 0.1 and 0.025 Gyr.
All results converge after metallicity increases above -1. The shorter time sampling allows more massive stars to explode first which generally have alpha enhanced yields, therefore the [O/Fe] is higher in the beginning.
End of explanation
# Default model parameters
from Chempy import localpath
a = ModelParameters()
a.check_processes = True
cube, abundances = Chempy(a)
# Red clump age distribution
selection = np.load(localpath + "input/selection/red_clump_new.npy")
time_selection = np.load(localpath + "input/selection/time_red_clump_new.npy")
plt.plot(time_selection,selection)
plt.xlabel('Age in Gyr')
plt.title('Age distribution of Red clump stars')
plt.show()
# We need to put the age distribution on the same time-steps as our model
selection = np.interp(cube.cube['time'], time_selection[::-1], selection)
plt.plot(cube.cube['time'],selection)
plt.xlabel('time in Gyr')
plt.title('Normalisation for a population of Red clump stars')
plt.show()
# Comparing to the SFR
plt.plot(cube.cube['time'],cube.cube['sfr'])
plt.xlabel('time in Gyr')
plt.title('SFR')
plt.show()
# Convolution of SFR and Red clump age distribution
weight = cube.cube['sfr']*selection
plt.plot(cube.cube['time'],weight)
plt.xlabel('time in Gyr')
plt.title('Weight to reproduce red clump stellar sample')
plt.show()
# Here we sample 1000 stars with this age-distribution
from Chempy.data_to_test import sample_stars
sample_size = 1000
x,y = sample_stars(cube.cube['sfr'][1:],selection[1:],abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:],float(basic_solar.table['error'][np.where(basic_solar.table['Symbol']=='Fe')]),float(basic_solar.table['error'][np.where(basic_solar.table['Symbol']=='O')]),int(sample_size))
plt.plot(x,y,"g.", alpha = 0.2, label = '(%d) synthesized red clum stars' %(int(sample_size)))
plt.plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:], 'r', label = 'evolutionary track')
plt.xlabel('[Fe/H]')
plt.ylabel('[O/Fe]')
plt.title("Sampling from SFH and red clump age distribution")
plt.legend(bbox_to_anchor = [1,1.5])
plt.show()
# And we plot the 2d histogram, which shows our model prediction for a red clump population
plt.hist2d(x,y,20)
plt.plot(abundances['Fe'][1:],abundances['O'][1:]-abundances['Fe'][1:],'r')
plt.xlabel('[Fe/H]')
plt.ylabel('[O/Fe]')
plt.title("Sampling from SFH and red clump age distribution")
plt.show()
Explanation: A note on chemical evolution tracks and 'by eye' fit
Sometimes astronomers like to show that their chemical evolution track runs through some stellar abundance data points. But if we want the computer to steer our fit, we need to know the selection function of the stars that we try to match, and we need to take our star formation history into account (maybe there are almost no stars formed after 8 Gyr).
- We assume that we have an unbiased sample of red clump stars
- We reproduce its selection function by multiplying their age-distribution for a flat SFR with the SFR.
(for the age distribution I have included a cut from a mock catalogue according to Just&Rybizki 2016 but you could also use the analytic formula from Bovy+2014)
- Then we sample some synthetic stars (with observational errors) along the chemical evolutionary track
End of explanation
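As a quick sanity check on the synthetic sample (an aside, not from the original notebook), the marginal [Fe/H] distribution of the sampled stars can be inspected directly:
plt.hist(x, bins=30)
plt.xlabel('[Fe/H]')
plt.title('[Fe/H] distribution of the synthetic red clump sample')
plt.show()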
# Loading the routine and plotting the process contribution into the current folder
# Total enrichment mass in gray to the right, single process fractional contribution to the left
from Chempy.data_to_test import plot_processes
plot_processes(True,a.name_string,cube.sn2_cube,cube.sn1a_cube,cube.agb_cube,a.element_names,np.copy(cube),a.number_of_models_overplotted)
Explanation: This PDF can then be compared to real data to get a realistic likelihood.
The nucleosynthetic feedback per element
With the plot_processes routine we can plot the total feedback of each element and the fractional contribution from each nucleosynthetic feedback for a specific Chempy run.
End of explanation |
13,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: Experiment parameters
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
Step2: Specify Nodes
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
Step3: Specify GLM contrasts
To do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the fingerfootlips task in this dataset
Step4: Specify GLM Model
The next step is now to get information such as stimuli onset, duration and other regressors into the GLM model. For this we need to create a helper function, in our case called subjectinfo.
To recap, let's see what we have in the TSV file for each run
Step5: We can also create a data frame using pandas library.
Step6: And finally we need to separate the onsets of the three conditions, i.e. group by trial_type. This can be done as follows
Step7: Now, let us incorporate all this in the helper function subjectinfo.
Step8: Specify input & output stream
Specify where the input data can be found & where and how to save the output data.
Step9: Specify Workflow
Create a workflow and connect the interface nodes and the I/O stream to each other.
Step10: Visualize the workflow
It always helps to visualize your workflow.
Step11: Run the Workflow
Now that everything is ready, we can run the 1st-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.
Step12: Inspect output
Let's check the structure of the output folder, to see if we have everything we wanted to save. You should have nine contrast images (con_*.nii for T-contrasts and ess_*.nii for F-contrasts) and nine statistic images (spmT_*.nii and spmF_*.nii) for every subject and smoothing kernel.
Step13: Visualize results
Let's look at the contrasts of one subject that we've just computed. First, let's see what the difference of smoothing is for the contrast average
Step14: Now, let's look at the three contrasts Finger, Foot, Lips.
Step15: We can also check three additional contrasts Finger > others, Foot > others and Lips > others.
Step16: Special case
There is something special with the Finger contrast in all subjects. So let's take a look at all of them. | Python Code:
from nilearn import plotting
%matplotlib inline
from os.path import join as opj
import json
from nipype.interfaces.spm import Level1Design, EstimateModel, EstimateContrast
from nipype.algorithms.modelgen import SpecifySPMModel
from nipype.interfaces.utility import Function, IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype import Workflow, Node
Explanation: Example 2: 1st-level Analysis
In this example, we will take the preprocessed output from the first example and run for each subject a 1st-level analysis. For this we need to do the following steps:
Extract onset times of stimuli from TSV file
Specify the model (TR, high pass filter, onset times, etc.)
Specify contrasts to compute
Estimate contrasts
In the previous example, we used two different smoothing kernels of fwhm=4 and fwhm=8. Therefore, let us also run the 1st-level analysis for those two versions.
So, let's begin!
Imports
First, we need to import all the modules we later want to use.
End of explanation
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers
subject_list = ['01', '02', '03', '04', '05', '06', '07', '08', '09', '10']
# TR of functional images
with open('/data/ds000114/task-fingerfootlips_bold.json', 'rt') as fp:
task_info = json.load(fp)
TR = task_info['RepetitionTime']
# Smoothing widths used during preprocessing
fwhm = [4, 8]
Explanation: Experiment parameters
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
End of explanation
# SpecifyModel - Generates SPM-specific Model
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=TR,
high_pass_filter_cutoff=128),
name="modelspec")
# Level1Design - Generates an SPM design matrix
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=TR,
model_serial_correlations='FAST'),
name="level1design")
# EstimateModel - estimate the parameters of the model
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
# EstimateContrast - estimates contrasts
level1conest = Node(EstimateContrast(), name="level1conest")
Explanation: Specify Nodes
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
End of explanation
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger > others','T', condition_names, [1, -0.5, -0.5]]
cont06 = ['Foot > others', 'T', condition_names, [-0.5, 1, -0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
Explanation: Specify GLM contrasts
To do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the fingerfootlips task in this dataset:
finger
foot
lips
Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
End of explanation
!cat /data/ds000114/task-fingerfootlips_events.tsv
Explanation: Specify GLM Model
The next step is now to get information such as stimuli onset, duration and other regressors into the GLM model. For this we need to create a helper function, in our case called subjectinfo.
To recap, let's see what we have in the TSV file for each run:
End of explanation
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
Explanation: We can also create a data frame using pandas library.
End of explanation
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
Explanation: And finally we need to separate the onsets of the three conditions, i.e. group by trial_type. This can be done as follows:
End of explanation
def subjectinfo(subject_id):
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo.head()
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset - 10)) # subtracting 10s due to removing of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
#amplitudes=None,
#tmod=None,
#pmod=None,
#regressor_names=None,
#regressors=None
)]
return subject_info # this output will later be returned to infosource
# Get Subject Info - get subject specific condition information
getsubjectinfo = Node(Function(input_names=['subject_id'],
output_names=['subject_info'],
function=subjectinfo),
name='getsubjectinfo')
Explanation: Now, let us incorporate all this in the helper function subjectinfo.
End of explanation
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id',
'fwhm_id',
'contrasts'],
contrasts=contrast_list),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'func': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'fwhm-{fwhm_id}_ssub-{subject_id}_ses-test_task-{task_id}_bold.nii'),
'mc_param': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'sub-{subject_id}_ses-test_task-{task_id}_bold.par'),
'outliers': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'art.sub-{subject_id}_ses-test_task-{task_id}_bold_outliers.txt')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
selectfiles.inputs.task_id = 'fingerfootlips'
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s/fwhm-%s' % (sub, f))
for f in fwhm
for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
Explanation: Specify input & output stream
Specify where the input data can be found & where and how to save the output data.
End of explanation
# Initiation of the 1st-level analysis workflow
l1analysis = Workflow(name='l1analysis')
l1analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 1st-level analysis components
l1analysis.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(infosource, getsubjectinfo, [('subject_id',
'subject_id')]),
(getsubjectinfo, modelspec, [('subject_info',
'subject_info')]),
(infosource, level1conest, [('contrasts', 'contrasts')]),
(selectfiles, modelspec, [('func', 'functional_runs')]),
(selectfiles, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files')]),
(modelspec, level1design, [('session_info',
'session_info')]),
(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')]),
(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('con_images', '1stLevel.@con'),
('spmF_images', '1stLevel.@F'),
('ess_images', '1stLevel.@ess'),
]),
])
Explanation: Specify Workflow
Create a workflow and connect the interface nodes and the I/O stream to each other.
End of explanation
# Create 1st-level analysis output graph
l1analysis.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(l1analysis.base_dir, 'l1analysis', 'graph.png'))
Explanation: Visualize the workflow
It always helps to visualize your workflow.
End of explanation
l1analysis.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the Workflow
Now that everything is ready, we can run the 1st-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.
End of explanation
!tree /output/datasink/1stLevel
Explanation: Inspect output
Let's check the structure of the output folder, to see if we have everything we wanted to save. You should have nine contrast images (con_*.nii for T-contrasts and ess_*.nii for F-contrasts) and nine statistic images (spmT_*.nii and spmF_*.nii) for every subject and smoothing kernel.
End of explanation
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz'
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0001.nii', title='average - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-8/spmT_0001.nii', title='average - fwhm=8',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
Explanation: Visualize results
Let's look at the contrasts of one subject that we've just computed. First, let's see what the difference of smoothing is for the contrast average
End of explanation
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0003.nii', title='foot - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0004.nii', title='lips - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
Explanation: Now, let's look at the three contrasts Finger, Foot, Lips.
End of explanation
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0005.nii', title='finger - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0006.nii', title='foot - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0007.nii', title='lips - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
Explanation: We can also check three additional contrasts Finger > others, Foot > others and Lips > others.
End of explanation
plot_stat_map(
'/output/datasink/1stLevel/sub-01/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-01',
bg_img='/data/ds000114/derivatives/fmriprep/sub-01/anat/sub-01_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-02',
bg_img='/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-03/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-03',
bg_img='/data/ds000114/derivatives/fmriprep/sub-03/anat/sub-03_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-04/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-04',
bg_img='/data/ds000114/derivatives/fmriprep/sub-04/anat/sub-04_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-05/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-05',
bg_img='/data/ds000114/derivatives/fmriprep/sub-05/anat/sub-05_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-06/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-06',
bg_img='/data/ds000114/derivatives/fmriprep/sub-06/anat/sub-06_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-07/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-07',
bg_img='/data/ds000114/derivatives/fmriprep/sub-07/anat/sub-07_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-08/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-08',
bg_img='/data/ds000114/derivatives/fmriprep/sub-08/anat/sub-08_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-09/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-09',
bg_img='/data/ds000114/derivatives/fmriprep/sub-09/anat/sub-09_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-10/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-10',
bg_img='/data/ds000114/derivatives/fmriprep/sub-10/anat/sub-10_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
Explanation: Special case
There is something special with the Finger contrast in all subjects. So let's take a look at all of them.
End of explanation |
13,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Who Is J?
Analysing JOTB diversity network
One of the main goals of the ‘Yes We Tech’ community is contributing to create an inclusive space where we can celebrate diversity, provide visibility to women-in-tech, and ensure that everybody has an equal chance to learn, share and enjoy technology-related disciplines.
As co-organisers of the event, we have concentrated our efforts in getting more women speakers on board under the assumption that a more diverse panel would enrich the conversation also around technology.
Certainly, we have doubled the number of women giving talks this year, but, is this diversity enough? How can we know that we have succeeded in our goal? and more importantly, what can we learn to create a more diverse event in future editions?
The work that we are sharing here talks about two things
Step1: Small data analysis
Small data says that last year, our 'J' engaged up to 48 speakers and 299 attendees into this big data thing.
I'm not considering here any member of the organisation.
Step2: This year we have 40 speakers, a few less than last year, while participation has reached 368 people (compare the increase in attendees: 368 vs 299).
Step3: It is also noticeable that big data is bigger than ever and this year we have included workshops and a hackathon.
The more the better, right? Let's continue, because there are more numbers behind these, numbers that will give us some signs of diversity.
Diversity
When it comes to speakers, this year we have 27.5% of women speaking to J, compared with a rough 10.4% last year.
Step4: However, and this is the worrying thing, the participation of women as attendees has slightly dropped from a not too ambitious 13% to a disappointing 9.8%. So we have x% more attendees but zero impact on a wider variety of people.
Step5: Why this happened?
We don’t really know. But we continued looking at the numbers and realised that 30 of the 45 companies that enrolled two or more people didn't include any women on their lists. Meaning a 31% of the mass of attendees. Correlate team size with women percentage to validate if
Step6: For us this is not a good sign. Despite the fact that our ability to summon has increased at our monthly meetups (the ones that attempt to create this culture for equality in Málaga), the engagement at other events doesn't have a big impact.
Again I'm not blaming companies here, because if we try to identify the participation rate of women who are not part of a team, the representation also decreased by almost 50%.
Step7: Before blaming anyone or falling too quickly into self-indulgence, there are still more data to play with.
Note aside
Step8: From the small 50% of J's friends that could be identified with a gender, the distribution woman/men is a 20/80. Friends are the ones who follow and are followed by J.
Step9: J follows to...
Step10: J is followed by...
Step11: Gender distribution
Step12: Language distribution
Step13: Location distribution
Step14: Tweets analysis | Python Code:
import pandas as pd
import numpy as np
import scipy as sp
import pygal
import operator
from iplotter import GCPlotter
plotter = GCPlotter()
Explanation: Who Is J?
Analysing JOTB diversity network
One of the main goals of the ‘Yes We Tech’ community is contributing to create an inclusive space where we can celebrate diversity, provide visibility to women-in-tech, and ensure that everybody has an equal chance to learn, share and enjoy technology-related disciplines.
As co-organisers of the event, we have concentrated our efforts in getting more women speakers on board under the assumption that a more diverse panel would enrich the conversation also around technology.
Certainly, we have doubled the number of women giving talks this year, but, is this diversity enough? How can we know that we have succeeded in our goal? and more importantly, what can we learn to create a more diverse event in future editions?
The work that we are sharing here talks about two things: data and people. Both data and people should help us to find out some answers and understand the reasons why.
Let's start with a story about data. Data is pretty simple compared with people. Just take a look at the numbers, the small ones, the ones that better describe what happened in 2016 and 2017 J On The Beach editions.
End of explanation
data2016 = pd.read_csv('../input/small_data_2016.csv')
data2016['Women Rate'] = pd.Series(data2016['Women']*100/data2016['Total'])
data2016['Men Rate'] = pd.Series(data2016['Men']*100/data2016['Total'])
data2016
Explanation: Small data analysis
Small data says that last year, our 'J' engaged up to 48 speakers and 299 attendees into this big data thing.
I'm not considering here any member of the organisation.
End of explanation
data2017 = pd.read_csv('../input/small_data_2017.csv')
data2017['Women Rate'] = pd.Series(data2017['Women']*100/data2017['Total'])
data2017['Men Rate'] = pd.Series(data2017['Men']*100/data2017['Total'])
data2017
increase = 100 - 299*100.00/368
increase
Explanation: This year we have 40 speakers, a few less than last year, while participation has reached 368 people (compare the increase in attendees: 368 vs 299).
End of explanation
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
['2016', data2016['Women Rate'][0], data2016['Men Rate'][0],''],
['2017', data2017['Women Rate'][0], data2017['Men Rate'][0],''],
]
options = {
"title": 'Speakers at JOTB',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '50%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
Explanation: It is also noticeable that big data is bigger than ever and this year we have included workshops and a hackathon.
The more the better, right? Let's continue, because there are more numbers behind these, numbers that will give us some signs of diversity.
Diversity
When it comes to speakers, this year we have 27.5% of women speaking to J, compared with a rough 10.4% last year.
End of explanation
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
['2016', data2016['Women Rate'][1], data2016['Men Rate'][1],''],
['2017', data2017['Women Rate'][1], data2017['Men Rate'][1],''],
]
options = {
"title": 'Attendees at JOTB',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
Explanation: However, and this is the worrying thing, the participation of women as attendees has slightly dropped from a not too ambitious 13% to a disappointing 9.8%. So we have x% more attendees but zero impact on a wider variety of people.
End of explanation
companies_team = data2017['Total'][3] + data2017['Total'][4]
mass_represented = pd.Series(data2017['Total'][4]*100/companies_team)
women_represented = pd.Series(100 - mass_represented)
mass_represented
Explanation: Why this happened?
We don’t really know. But we continued looking at the numbers and realised that 30 of the 45 companies that enrolled two or more people didn't include any women on their lists. Meaning a 31% of the mass of attendees. Correlate team size with women percentage to validate if: the smaller the teams are, the less chances to include a women on their lists
End of explanation
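A hypothetical pandas sketch of that check; the toy registrations table and its company and gender columns are assumptions for illustration, not the conference's real data.
import pandas as pd
# Toy attendee table; the real registration data is not public
registrations = pd.DataFrame({
    'company': ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C'],
    'gender':  ['male', 'male', 'female', 'male', 'male', 'female', 'male', 'female', 'male'],
})
teams = registrations.groupby('company')['gender']
per_company = pd.DataFrame({
    'team_size': teams.size(),
    'women_share': teams.apply(lambda g: (g == 'female').mean()),
})
print(per_company.corr())  # correlation between team size and share of women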
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
[data2016['Tribe'][2], data2016['Women Rate'][2], data2016['Men Rate'][2],''],
[data2016['Tribe'][3], data2016['Women Rate'][3], data2016['Men Rate'][3],''],
[data2016['Tribe'][5], data2016['Women Rate'][5], data2016['Men Rate'][5],''],
]
options = {
"title": '2016 JOTB Edition',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
[data2017['Tribe'][2], data2017['Women Rate'][2], data2017['Men Rate'][2],''],
[data2017['Tribe'][3], data2017['Women Rate'][3], data2017['Men Rate'][3],''],
[data2017['Tribe'][5], data2017['Women Rate'][5], data2017['Men Rate'][5],''],
]
options = {
"title": '2017 JOTB Edition',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
Explanation: For us this is not a good sign. Despite the fact that our ability to summon has increased at our monthly meetups (the ones that attempt to create this culture for equality in Málaga), the engagement at other events doesn't have a big impact.
Again I'm not blaming companies here, because if we try to identify the participation rate of women who are not part of a team, the representation also decreased by almost 50%.
End of explanation
run index.py jotb2018
Explanation: Before blaming anyone or falling too quickly into self-indulgence, there are still more data to play with.
Note aside: the next thing is nothing but an experiment; nothing is categorical or has been made with the intention of offending anybody. Like our t-shirt label says: no programmers have been injured in the creation of the following data game.
Social network analysis
The next story talks about people. The people around J, the ones who follow, are followed by, interact with, and create the chances of a more diverse and interesting conference.
It is also a story about the people who organise this conference. Because when we started to plan a conference like this, we did nothing but think about what could be interesting for the people who come. In order to get that, we used the previous knowledge that we have about cool people who do amazing things with data and JVM technologies. And this means looking into our own networks and following suggestions of the people we trust.
So if we assume that we are biased by the people around us, we thought it was a good idea to first understand what the network of people around J looks like, to see the chances that we have to bring in someone different and unusual who can add value to the conference.
For the moment, since this is an experiment that wants to trigger your reaction we will look at J's Twitter account.
Indeed, a real-world network would have a larger amount of numbers and people to look at, but yet a digital social network is about human interactions, conversations and knowledge sharing.
For this experiment we've used the SexMachine Python library https://pypi.python.org/pypi/SexMachine/ and the 'Twitter Gender Distribution' project published on GitHub https://github.com/ajdavis/twitter-gender-distribution to find out the gender of a specific Twitter account.
End of explanation
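For reference, a minimal sketch of the name-to-gender lookup that SexMachine exposes (assuming its documented API; real Twitter display names still need cleaning before this step).
import sexmachine.detector as gender
detector = gender.Detector(case_sensitive=False)
print(detector.get_gender('alice'))  # typically 'female'
print(detector.get_gender('bob'))    # typically 'male'
print(detector.get_gender('jotb'))   # names it cannot classify come back as 'andy' or 'unknown'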
# Read the file and take some important information
whoisj = pd.read_json('../out/jotb2018.json', orient = 'columns')
people = pd.read_json(whoisj['jotb2018'].to_json())
following_total = whoisj['jotb2018']['friends_count']
followers_total = whoisj['jotb2018']['followers_count']
followers = pd.read_json(people['followers_list'].to_json(), orient = 'index')
following = pd.read_json(people['friends_list'].to_json(), orient = 'index')
whoisj
Explanation: From the small 50% of J's friends that could be identified with a gender, the distribution woman/men is a 20/80. Friends are the ones who follow and are followed by J.
End of explanation
# J follows to...
following_total
Explanation: J follows to...
End of explanation
# J is followed by...
followers_total
Explanation: J is followed by...
End of explanation
followers['gender'].value_counts()
following['gender'].value_counts()
followers_dist = followers['gender'].value_counts()
genders = followers['gender'].value_counts().keys()
followers_map = pygal.Pie(height=400)
followers_map.title = 'Followers Gender Map'
for i in genders:
followers_map.add(i,followers_dist[i]*100.00/followers_total)
followers_map.render_in_browser()
following_dist = following['gender'].value_counts()
genders = following['gender'].value_counts().keys()
following_map = pygal.Pie(height=400)
following_map.title = 'Following Gender Map'
for i in genders:
following_map.add(i,following_dist[i]*100.00/following_total)
following_map.render_in_browser()
Explanation: Gender distribution
End of explanation
lang_counts = followers['lang'].value_counts()
languages = followers['lang'].value_counts().keys()
followers_dist = followers['gender'].value_counts()
lang_followers_map = pygal.Treemap(height=400)
lang_followers_map.title = 'Followers Language Map'
for i in languages:
lang_followers_map.add(i,lang_counts[i]*100.00/followers_total)
lang_followers_map.render_in_browser()
lang_counts = following['lang'].value_counts()
languages = following['lang'].value_counts().keys()
following_dist = following['gender'].value_counts()
lang_following_map = pygal.Treemap(height=400)
lang_following_map.title = 'Following Language Map'
for i in languages:
lang_following_map.add(i,lang_counts[i]*100.00/following_total)
lang_following_map.render_in_browser()
Explanation: Language distribution
End of explanation
followers['location'].value_counts()
following['location'].value_counts()
Explanation: Location distribution
End of explanation
run tweets.py jotb2018 1000
j_network = pd.read_json('../out/jotb2018_tweets.json', orient = 'index')
interactions = j_network['gender'].value_counts()
genders = j_network['gender'].value_counts().keys()
j_network_map = pygal.Pie(height=400)
j_network_map.title = 'Interactions Gender Map'
for i in genders:
j_network_map.add(i,interactions[i])
j_network_map.render_in_browser()
a = j_network['hashtags']
b = j_network['gender']
say_something = [x for x in a if x != []]
tags = []
for y in say_something:
for x in pd.DataFrame(y)[0]:
tags.append(x.lower())
tags_used = pd.DataFrame(tags)[0].value_counts()
tags_keys = pd.DataFrame(tags)[0].value_counts().keys()
tags_map = pygal.Treemap(height=400)
tags_map.title = 'Hashtags Map'
for i in tags_keys:
tags_map.add(i,tags_used[i])
tags_map.render_in_browser()
pairs = []
for i in j_network['gender'].keys() :
if (j_network['hashtags'][i] != []) :
pairs.append([j_network['hashtags'][i], j_network['gender'][i]])
key_pairs = []
for i,j in pairs:
for x in i:
key_pairs.append((x,j))
key_pairs
key_pair_dist = {x: key_pairs.count(x) for x in key_pairs}
sorted_x = sorted(key_pair_dist.items(), key = operator.itemgetter(1), reverse = True)
sorted_x
Explanation: Tweets analysis
End of explanation |
13,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Determining whether a Javascript sample is malicious is not computable (https
Step1: Features | Python Code:
import glob
import string
import re
import numpy as np
# Loading the data
data = []
for js_file in glob.glob('Javascript/*/*'):
new = {}
new['name'] = js_file.split('/')[-1]
new['code'] = open(js_file,'r').read()
if new['name'][-2:] == 'js':
if new['name'][-6:] == 'min.js':
new['nature'] = 'Minified'
new['color'] = 'b'
else:
new['nature'] = 'Normal'
new['color'] = 'g'
elif new['name'][-3:] == 'out':
new['nature'] = 'Malicious'
new['color'] = 'r'
data.append(new)
Explanation: Determining whether a Javascript sample is malicious is not computable (https://en.wikipedia.org/wiki/Computable_function) : we are looking for an algorithm that takes a program (which can be seen as an arbitrary Turing Machine) as an input and whose output is a property of the execution of that program.
If you are unfamiliar with the theory of computability and want to get an intuitive sense of this, imagine writing a JS sample that non-trivially never terminates. A simple while(1){} would not do the trick because it can be trivially proven (without executing it) that it never terminates.
A program terminating depending on the answer to some complex mathematical problem (e.g. finding whether a big number is prime) can not be proven to terminate short of actually solving the problem, the best method for doing so being to actually execute the program.
Therefore, the best way to know if this program will terminate is to execute it, which may never end. That is why deciding a property about the execution of that program is not computable in the general case.
This does not deter us from trying though, because in practice a program that does not terminate in a few seconds will be interrupted by the browser, and is therefore neither malicious nor benign: it is non-functional. The goal here is to devise some indicator of malignity of a JS sample without even executing it (who wants to execute malicious code?).
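A tiny Python sketch (not from the original notebook) of that idea: the loop below halts only if it finds a counterexample to Goldbach's conjecture, so deciding whether it terminates is as hard as settling the conjecture itself.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample():
    n = 4
    # Halts only when some even n cannot be written as a sum of two primes
    while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
        n += 2
    return n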
Related works
\cite{likarish2009obfuscated}. Good introduction and discussion, but what they built is more an obfuscation detector than anything else. We still reuse their features.
We restrict ourselves to features that can be computed without even parsing the JS (if only because we are not immune to an attack on the parser).
Code
End of explanation
def length(code):
return len(code)
def nb_lines(code):
return len(code.split('\n'))
def avg_char_per_line(code):
return length(code)/nb_lines(code)
def nb_strings(code):
'''Ugly approximation, no simple way out of this short of actually parsing the JS.'''
return len(code.split("'"))+len(code.split('"'))
def nb_non_printable(code):
'''\cite{likarish2009obfuscated} use unicode symbol, but we are more general'''
return len([x for x in code if not x in string.printable])
hex_octal_re = re.compile('([^A-F0-9]0[0-7]+|0x[A-F0-9]+)')
def hex_or_octal(code):
    '''Ugly as hell, but we don't want to parse'''
return len(list(hex_octal_re.finditer(code)))
def max_nesting_level(code):
l = 0
max_l = 0
for c in code:
if c in '({[':
l+=1
max_l = l if l > max_l else max_l
elif c in ')}]':
l-=1
return max_l
features = [length, nb_lines, avg_char_per_line, nb_strings, nb_non_printable, hex_or_octal, max_nesting_level]
X = np.array([[f(x['code']) for f in features] for x in data])
X[:30]
#http://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html#example-manifold-plot-compare-methods-py
from sklearn import manifold
%matplotlib inline
import matplotlib.pylab as plt
n_neighbors = 10
n_components = 2
#Y = manifold.Isomap(n_neighbors, n_components).fit_transform(X)
#Y = manifold.LocallyLinearEmbedding(n_neighbors, n_components,
# eigen_solver='auto').fit_transform(X)
Y = manifold.MDS(n_components, max_iter=100, n_init=1).fit_transform(X)
#Y = manifold.SpectralEmbedding(n_components=n_components,
# n_neighbors=n_neighbors).fit_transform(X)
#Y = manifold.TSNE(n_components=n_components, init='pca', random_state=0).fit_transform(X)
plt.scatter(Y[:, 0], Y[:, 1], c=[x['color'] for x in data], alpha=0.2)
for label, x, y in zip([x['name'] for x in data], Y[:, 0], Y[:, 1]):
    if '.js' in label and not ('min.' in label):
plt.annotate(label ,xy=[x,y])
plt.savefig('toto.pdf')
[[x['name'],hex_or_octal(x['code'])] for x in data[:3]]
[x for x in data[-3]['code'] if not x in string.printable]
Explanation: Features
End of explanation |
13,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
from collections import Counter
import numpy as np
import tensorflow as tf
from math import floor
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
vocab = set(words)
vocab_to_int = dict()
for i, word in enumerate(vocab):
vocab_to_int[word] = i + 1
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = list()
for review in reviews:
review_int = list()
for word in review.split():
review_int.append(vocab_to_int[word])
reviews_ints.append(review_int)
reviews[0]
print(vocab_to_int['bromwell'])
print(vocab_to_int['high'])
reviews_ints[0][:2]
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels[:100]
labels = labels.split('\n')
len(labels)
labels[:100]
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = np.array([1 if label == 'positive' else 0 for label in labels])
len(labels)
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
# reviews_ints = [review_int for review_int in reviews_ints if len(review_int) > 0]
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
def truncate_pad_tensor(arr, seq_len=200):
arr = np.array(arr)
# Get the first seq_len items
arr = arr[:seq_len]
padded = np.pad(arr, (seq_len - len(arr),0), 'constant')
return padded
features = np.array([truncate_pad_tensor(review_int) for review_int in reviews_ints])
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
features[:10,:100]
print(len(features))
print(len(labels))
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
tf.reset_default_graph()
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
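One optional convenience, sketched here, is to create that directory from Python so the saver call below cannot fail on a missing folder:
import os
os.makedirs('checkpoints', exist_ok=True)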
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
13,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object recognition with CNN
Keras is a Python library for deep learning that wraps the powerful numerical libraries Theano and TensorFlow.
A difficult problem where traditional neural networks fall down is called object recognition. It is where a model is able to identify the objects in images.
In this post, we will discover how to develop and evaluate deep learning models for object recognition in Keras. After completing this tutorial we will know
Step1: Simple Convolutional Neural Network for CIFAR-10
The CIFAR-10 problem is best solved using a Convolutional Neural Network (CNN).
We can quickly start off by defining all of the classes and functions we will need in this example.
Step2: As is good practice, we next initialize the random number seed with a constant to ensure the results are reproducible.
Step3: Next we can load the CIFAR-10 dataset.
Step4: The pixel values are in the range of 0 to 255 for each of the red, green and blue channels.
It is good practice to work with normalized data. Because the input values are well understood, we can easily normalize to the range 0 to 1 by dividing each value by the maximum observation which is 255.
Note, the data is loaded as integers, so we must cast it to floating point values in order to perform the division.
Step5: The output variables are defined as a vector of integers from 0 to 1 for each class.
We can use a one hot encoding to transform them into a binary matrix in order to best model the classification problem. We know there are 10 classes for this problem, so we can expect the binary matrix to have a width of 10.
Step6: Let’s start off by defining a simple CNN structure as a baseline and evaluate how well it performs on the problem.
We will use a structure with two convolutional layers followed by max pooling and a flattening out of the network to fully connected layers to make predictions.
Our baseline network structure can be summarized as follows
Step7: We can fit this model with 10 epochs and a batch size of 32.
A small number of epochs was chosen to help keep this tutorial moving. Normally the number of epochs would be one or two orders of magnitude larger for this problem.
Once the model is fit, we evaluate it on the test dataset and print out the classification accuracy.
Step8: We can improve the accuracy significantly by creating a much deeper network.
Larger Convolutional Neural Network for CIFAR-10
We have seen that a simple CNN performs poorly on this complex problem. In this section we look at scaling up the size and complexity of our model.
Let’s design a deep version of the simple CNN above. We can introduce an additional round of convolutions with many more feature maps. We will use the same pattern of Convolutional, Dropout, Convolutional and Max Pooling layers.
This pattern will be repeated 3 times with 32, 64, and 128 feature maps. The effect will be an increasing number of feature maps with a smaller and smaller size given the max pooling layers. Finally an additional and larger Dense layer will be used at the output end of the network in an attempt to better translate the large number of feature maps to class values.
We can summarize a new network architecture as follows
Step9: We can fit and evaluate this model using the same procedure as above and the same number of epochs, but a larger batch size of 64, found through some minor experimentation.
# Plot ad hoc CIFAR10 instances
from keras.datasets import cifar10
from matplotlib import pyplot
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Explanation: Object recognition with CNN
Keras is a Python library for deep learning that wraps the powerful numerical libraries Theano and TensorFlow.
A difficult problem where traditional neural networks fall down is called object recognition. It is where a model is able to identify the objects in images.
In this post, we will discover how to develop and evaluate deep learning models for object recognition in Keras. After completing this tutorial we will know:
About the CIFAR-10 object recognition dataset and how to load and use it in Keras.
How to create a simple Convolutional Neural Network for object recognition.
How to lift performance by creating deeper Convolutional Neural Networks.
The CIFAR-10 Problem Description
The problem of automatically identifying objects in photographs is difficult because of the near infinite number of permutations of objects, positions, lighting and so on. It’s a really hard problem.
This is a well-studied problem in computer vision and more recently an important demonstration of the capability of deep learning. A standard computer vision and deep learning dataset for this problem was developed by the Canadian Institute for Advanced Research (CIFAR).
The CIFAR-10 dataset consists of 60,000 photos divided into 10 classes (hence the name CIFAR-10). Classes include common objects such as airplanes, automobiles, birds, cats and so on. The dataset is split in a standard way, where 50,000 images are used for training a model and the remaining 10,000 for evaluating its performance.
The photos are in color with red, green and blue components, but are small measuring 32 by 32 pixel squares.
Loading The CIFAR-10 Dataset in Keras
The CIFAR-10 dataset can easily be loaded in Keras.
Keras has the facility to automatically download standard datasets like CIFAR-10 and store them in the ~/.keras/datasets directory using the cifar10.load_data() function. This dataset is large at 163 megabytes, so it may take a few minutes to download.
Once downloaded, subsequent calls to the function will load the dataset ready for use.
The dataset is stored as pickled training and test sets, ready for use in Keras. Each image is represented as a three dimensional matrix, with dimensions for red, green, blue, width and height. We can plot images directly using matplotlib.
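For example, a quick sketch of plotting the first nine training images in a 3x3 grid (the transpose guard is only needed if your Keras backend returns channels-first arrays):
# Plot the first nine training images in a 3x3 grid
for i in range(9):
    pyplot.subplot(330 + 1 + i)
    img = X_train[i] if X_train[i].shape[-1] == 3 else X_train[i].transpose(1, 2, 0)
    pyplot.imshow(img)
pyplot.show()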
End of explanation
# Simple CNN model for CIFAR-10
import numpy
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
Explanation: Simple Convolutional Neural Network for CIFAR-10
The CIFAR-10 problem is best solved using a Convolutional Neural Network (CNN).
We can quickly start off by defining all of the classes and functions we will need in this example.
End of explanation
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
Explanation: As is good practice, we next initialize the random number seed with a constant to ensure the results are reproducible.
End of explanation
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Explanation: Next we can load the CIFAR-10 dataset.
End of explanation
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train / 255.0
X_test = X_test / 255.0
Explanation: The pixel values are in the range of 0 to 255 for each of the red, green and blue channels.
It is good practice to work with normalized data. Because the input values are well understood, we can easily normalize to the range 0 to 1 by dividing each value by the maximum observation which is 255.
Note, the data is loaded as integers, so we must cast it to floating point values in order to perform the division.
End of explanation
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
Explanation: The output variables are defined as a vector of integers from 0 to 1 for each class.
We can use a one hot encoding to transform them into a binary matrix in order to best model the classification problem. We know there are 10 classes for this problem, so we can expect the binary matrix to have a width of 10.
End of explanation
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
Explanation: Let’s start off by defining a simple CNN structure as a baseline and evaluate how well it performs on the problem.
We will use a structure with two convolutional layers followed by max pooling and a flattening out of the network to fully connected layers to make predictions.
Our baseline network structure can be summarized as follows:
Convolutional input layer, 32 feature maps with a size of 3×3, a rectifier activation function and a weight constraint of max norm set to 3.
Dropout set to 20%.
Convolutional layer, 32 feature maps with a size of 3×3, a rectifier activation function and a weight constraint of max norm set to 3.
Max Pool layer with size 2×2.
Flatten layer.
Fully connected layer with 512 units and a rectifier activation function.
Dropout set to 50%.
Fully connected output layer with 10 units and a softmax activation function.
A logarithmic loss function is used with the stochastic gradient descent optimization algorithm configured with a large momentum and weight decay, starting with a learning rate of 0.01.
End of explanation
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Explanation: We can fit this model with 10 epochs and a batch size of 32.
A small number of epochs was chosen to help keep this tutorial moving. Normally the number of epochs would be one or two orders of magnitude larger for this problem.
Once the model is fit, we evaluate it on the test dataset and print out the classification accuracy.
End of explanation
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
Explanation: We can improve the accuracy significantly by creating a much deeper network.
Larger Convolutional Neural Network for CIFAR-10
We have seen that a simple CNN performs poorly on this complex problem. In this section we look at scaling up the size and complexity of our model.
Let’s design a deep version of the simple CNN above. We can introduce an additional round of convolutions with many more feature maps. We will use the same pattern of Convolutional, Dropout, Convolutional and Max Pooling layers.
This pattern will be repeated 3 times with 32, 64, and 128 feature maps. The effect will be an increasing number of feature maps with a smaller and smaller size given the max pooling layers. Finally an additional and larger Dense layer will be used at the output end of the network in an attempt to better translate the large number of feature maps to class values.
We can summarize a new network architecture as follows:
Convolutional input layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
Dropout layer at 20%.
Convolutional layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
Max Pool layer with size 2×2.
Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
Dropout layer at 20%.
Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
Max Pool layer with size 2×2.
Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.
Dropout layer at 20%.
Convolutional layer,128 feature maps with a size of 3×3 and a rectifier activation function.
Max Pool layer with size 2×2.
Flatten layer.
Dropout layer at 20%.
Fully connected layer with 1024 units and a rectifier activation function.
Dropout layer at 20%.
Fully connected layer with 512 units and a rectifier activation function.
Dropout layer at 20%.
Fully connected output layer with 10 units and a softmax activation function.
We can very easily define this network topology in Keras, as follows:
End of explanation
numpy.random.seed(seed)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
Explanation: We can fit and evaluate this model using the same procedure as above and the same number of epochs, but a larger batch size of 64, found through some minor experimentation.
End of explanation |
13,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I swore to myself up and down that I wouldn't write one of these. But then I went and hacked up Pynads. And then I wrote a post on Pynads. And then I posted explanations about Monads on reddit. So what the hell. I already fulfilled my "Write about decorators when I understand them" obligation and ditto for descriptors. So Monads, why not...
It's simple, a monad is like a...
No. Stooooooop.
Step1: I have that input. But I need output that looks like this
Step2: And it works. Or I could just chain all those operations together
Step3: Each string method returns a new string that can carry the chain forward. We can add in as many string methods that return a string. But if we place something like split or find then our chain can't be continued as there's a list or a integer now. That's not to say we can't continue the chain, but we likely need to do in a separate expression (which is okay).
Worshipping at the altar of bind
So Haskell style monads are pretty much defined by the presence of >>= and return. return just lifts a value into a monad. And >>= is the sequencing operator. Neither of these are magic, we need to define them ourselves. I like using Maybe as an example because it's simple enough to explain but addresses a real world problem
Step4: We can use this to process information from STDIN (for example)
Step5: We just have to make sure we include the if x is None check everywhere. That's easy. Right. ...right? guise? On top of it being something to remember, it's line noise. Completely in the way of what we're attempting to accomplish. Instead, let's look at Maybe in terms of Haskell and Python
Step6: And we can use this to reimplement our int_from_stdin and sqrt functions above
Step7: And chain them together like this
Step8: What >>= does isn't just sequence actions together. That's easy to do, we could have accomplished them the same thing before with sqrt(int_from_stdin()). However, the real magic sauce of >>= is abstracting how they're sequenced. In this case, sequencing a Just results in feeding the contained value of Just to a function and getting back a Maybe. And sequencing a Nothing results in Nothing.
The great thing about Maybe is we're allowed to decide at an arbitrary point if we even want to continue with the computation or bail out completely. Let's say we have something against even numbers. Perhaps it's that only one of them is Prime. But we like odds. So if we get an even number from STDIN, we'll just bail out.
Step9: Other ways to sequence
Obviously bind/>>= isn't the only way to interact with monads if they're just about sequencing functions together. For example, Scala has a suped-up version of Maybe called Option. It's the same basic structure | Python Code:
x = y = ' Fred\n Thompson '
Explanation: I swore to myself up and down that I wouldn't write one of these. But then I went and hacked up Pynads. And then I wrote a post on Pynads. And then I posted explanations about Monads on reddit. So what the hell. I already fulfilled my "Write about decorators when I understand them" obligation and ditto for descriptors. So Monads, why not...
It's simple, a monad is like a...
No. Stooooooop. :( Burritos. Bucket brigades. Semicolons. All these analogies just confused me for a long time. And then I "got them" and by "got them" I mean "Even more hopelessly confused but I didn't know that." Like what does "programmable semicolon" even mean? Every language I've used (which isn't many) a semicolon means "This bit of code ends here, kthxbai". The burrito analogy was meant as a critique of this phenomenon -- and I'll likely fall victim of the "Monad Tutorial Curse". And the bucket brigade was a valiant effort by a SO user to explain them.
It's simple, a monad is like a Unix Pipe
Instead of reaching for some non-programming analogy like burritos or bucket brigades, I think Unix Pipes are a pretty good analogy to Haskell-style monads. Let's say I'm in a directory that has a bunch of different types of files -- maybe it's the bottomless bin that is ~/Downloads ): And I want to find all the MP4 files in the top level directory and print them out:
ls -lh ~/Downloads | grep -i "\.mp4" | less
Super simple. We take the first command ls feed it some options and a directory to list out. Then | goes "Oh, you have output and I have this thing that needs input, here grep!" And then grep does its business and | steps back in and goes "Oh, you have output and I have this thing that needs input, here less!"
Of course it isn't a perfect analogy. But all analogies break down under scrutiny. But this is essentially what Haskell's >>= does. "Oh, you have output, let me feed it to this function that wants input!" That's it. Monads are about chaining together a series of actions of functions (depending on how you want to look at it) in a way that each action/function returns something that can carry the chain forward somehow.
But the short of monads is that they have nothing to do with I/O, impure values, side effects or anything else. Those are implementation specific to certain monads. Monads in general only deal with how to combine expressions.
But Python doesn't have monads
Eh. It all depends on how you want to look at it. Sure, it doesn't have Haskell style monads. But it doesn't need to. Let's look at something:
End of explanation
x = x.replace('Fred', 'Jack')
x = x.replace('\n', '')
x = x.strip()
x = x.upper()
print(x)
Explanation: I have that input. But I need output that looks like this: "JACK THOMPSON". The obvious way is doing it imperatively:
End of explanation
print(y.replace('Fred', 'Jack').replace('\n', '').strip().upper())
Explanation: And it works. Or I could just chain all those operations together:
End of explanation
def sqrt(x):
if x is None:
return None
return x**.5
print(sqrt(4))
print(sqrt(None))
Explanation: Each string method returns a new string that can carry the chain forward. We can add in as many string methods that return a string. But if we place something like split or find then our chain can't be continued, as there's a list or an integer now. That's not to say we can't continue the chain, but we likely need to do it in a separate expression (which is okay).
Worshipping at the altar of bind
So Haskell style monads are pretty much defined by the presence of >>= and return. return just lifts a value into a monad. And >>= is the sequencing operator. Neither of these are magic, we need to define them ourselves. I like using Maybe as an example because it's simple enough to explain but addresses a real world problem: Null Pointer Exceptions. (:
We usually avoid this sort of thing with this pattern in Python:
End of explanation
def int_from_stdin():
x = input()
return int(x) if x.isdigit() else None
maybe_int = int_from_stdin()
print(sqrt(maybe_int))
maybe_int = int_from_stdin()
print(sqrt(maybe_int))
Explanation: We can use this to process information from STDIN (for example):
End of explanation
class Maybe:
@staticmethod
def unit(v):
return Just(v)
def bind(self, bindee):
raise NotImplementedError
class Just(Maybe):
def __init__(self, v):
self.v = v
def __repr__(self):
return 'Just {!r}'.format(self.v)
def bind(self, bindee):
return bindee(self.v)
class Nothing(Maybe):
def bind(self, bindee):
return self
def __repr__(self):
return 'Nothing'
Explanation: We just have to make sure we include the if x is None check everywhere. That's easy. Right. ...right? guise? On top of it being something to remember, it's line noise. Completely in the way of what we're attempting to accomplish. Instead, let's look at Maybe in terms of Haskell and Python:
data Maybe a = Nothing | Just a
instance Monad Maybe where
return = Just
(Just x) >>= f = f x
Nothing >>= f = Nothing
We have the type constructor Maybe which has two data constructors Just and Nothing. In Python terms, we have an abstract class Maybe and two implementations Just and Nothing. When we have a Just and >>= is used, we get the result of the function with the input of whatever is in Just. If we have Nothing and >>= is used, we get Nothing (Nothing from nothing leaves nothing. You gotta have something, if you wanna be with me). Notice that the onus to return a Maybe is on whatever function we bind to. This puts the power in our hands to decide if we have a failure at any given point in the operation.
In Python, a simplified version looks a lot like this:
End of explanation
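As an aside, the unit staticmethod above plays the role of Haskell's return: it just lifts a plain value into the monad. A tiny, hypothetical check using the classes defined above:
assert isinstance(Maybe.unit(5), Just)                             # unit/return wraps a plain value
assert Maybe.unit(5).bind(lambda v: Just(v + 1)).v == 6            # bind keeps the chain going
assert isinstance(Nothing().bind(lambda v: Just(v + 1)), Nothing)  # Nothing short-circuits the chain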
def int_from_stdin():
x = input()
return Just(int(x)) if x.isdigit() else Nothing()
def sqrt(x):
return Just(x**.5)
Explanation: And we can use this to reimplement our int_from_stdin and sqrt functions above:
End of explanation
int_from_stdin().bind(sqrt)
int_from_stdin().bind(sqrt)
Explanation: And chain them together like this:
End of explanation
def only_odds(x):
return Just(x) if x&1 else Nothing()
int_from_stdin().bind(only_odds).bind(sqrt)
int_from_stdin().bind(only_odds).bind(sqrt)
Explanation: What >>= does isn't just sequencing actions together. That's easy to do; we could have accomplished the same thing before with sqrt(int_from_stdin()). However, the real magic sauce of >>= is abstracting how they're sequenced. In this case, sequencing a Just results in feeding the contained value of Just to a function and getting back a Maybe. And sequencing a Nothing results in Nothing.
The great thing about Maybe is we're allowed to decide at an arbitrary point if we even want to continue with the computation or bail out completely. Let's say we have something against even numbers. Perhaps it's that only one of them is Prime. But we like odds. So if we get an even number from STDIN, we'll just bail out.
End of explanation
from itertools import islice
import builtins as __builtin__
def take(n, it):
return islice(it, n)
class UFCS(object):
def __init__(self, value):
self.state = value
def __getattr__(self, item):
try:
func = getattr(__builtin__, item)
except AttributeError:
func = globals()[item]
def curried(*args):
if not args:
self.state = func(self.state)
else:
args = list(args)
args.append(self.state)
self.state = func(*args)
return self
return curried
def get(self):
return self.state
x = ['#3.462289264065068',
'4.283990003510465',
'#1.7285949138067824',
'#2.6009019446392987',
'5.089491698891653',
'3.854140130424576',
'4.118846086899804',
'5.110436429053362',
'9.044631493138326',
'5.503343391187907',
'1.4415742971795897',
'2.7162342709197618',
'9.438995804377226',
'1.8698624486908322',
'4.008599242523804',
'8.914062382096017',
'4.120213633898632',
'6.9189185117106975',
# more were included, but removed here
]
UFCS(x).filter(lambda s: s and s[0] != "#").map(float).sorted().take(10).list().print()
Explanation: Other ways to sequence
Obviously bind/>>= isn't the only way to interact with monads if they're just about sequencing functions together. For example, Scala has a suped-up version of Maybe called Option. It's the same basic structure: Some (our successful computation) and None (a failed computation). It also has ways of recovering from a possibly failed computation with its getOrX methods. For example, if we have Some("abc") we can do this to recover when check if d is present:
Some("abc") filter (i => match i indexOf "d" {
case -1 => None
case _ => Some(i)
}
}) getOr "d"
Which should return "d" but Scala isn't my mother tongue, so there's probably an error somewhere.
You could argue that SQLAlchemy is monadic as well based on how you build queries in it:
q = session.query(Person).filter(Person.name.startswith('A')).first()
SQLAlchemy queries return query objects that can carry the chain further, allowing us to craft complicated queries in a relatively simple manner.
I found a more clever example in a thread on /r/learnpython about what features would you implement in Python given that chance. Below the "Everything not nailed down in Haskell" comment, there was one about universal function call syntax from D. /u/AMorpork proposed simply creating a monad where __getattr__ is the sequencing operation (reproduced here):
End of explanation |
13,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classify a Raster Using Threshold Values
In this tutorial, we will work with the NEON AOP L3 LiDAR ecosystem structure (Canopy Height Model) data product. Refer to the links below for more information about NEON data products and the CHM product DP3.30015.001
Step1: Open a Geotif with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function
Step2: Read information from Geotif Tags
The Geotif file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands
Step3: GetProjection
We can use the gdal GetProjection method to display information about the coordinate system and EPSG code.
Step4: GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to
Step5: In this case, the geotransform values correspond to
Step6: GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statitiscs as follows
Step7: ReadAsArray
Finally we can convert the raster to an array using the gdal ReadAsArray method. Cast the array to a floating point value using astype(np.float). Once we generate the array, we want to set No Data Values to nan, and apply the scale factor (for CHM this is just 1.0, so will not matter, but it's a good habit to get into)
Step8: Let's look at the dimensions of the array we read in
Step9: We can calculate the % of pixels that are undefined (nan) and non-zero using np.count_nonzero. Typically tiles in the center of a site will have close to 0% NaN, but tiles on the edges of sites may have a large percent of nan values.
Step10: Plot Canopy Height Data
To get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials
Step11: Plot Histogram of Data
As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.
Step12: On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values
Step13: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the se the numpy.where function to create a new raster based off boolean classifications. Let's classify the canopy height into five groups
Step14: We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes | Python Code:
import numpy as np
import gdal, copy
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Classify a Raster Using Threshold Values
In this tutorial, we will work with the NEON AOP L3 LiDAR ecosystem structure (Canopy Height Model) data product. Refer to the links below for more information about NEON data products and the CHM product DP3.30015.001:
http://data.neonscience.org/data-products/explore
http://data.neonscience.org/data-products/DP3.30015.001
Objectives
By the end of this tutorial, you should be able to
Use gdal to read NEON LiDAR Raster Geotifs (eg. CHM, Slope Aspect) into a Python numpy array.
Create a classified array using thresholds.
A useful resource for using gdal in Python is the Python GDAL/OGR cookbook.
https://pcjericks.github.io/py-gdalogr-cookbook/
First, let's import the required packages and set our plot display to be in-line:
End of explanation
chm_filename = r'C:\Users\bhass\Documents\GitHub\NEON_RSDI\RSDI_2018\Day2_LiDAR\data\NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
chm_dataset
Explanation: Open a Geotif with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:
End of explanation
#Display the dataset dimensions, number of bands, driver, and geotransform
cols = chm_dataset.RasterXSize; print('# of columns:',cols)
rows = chm_dataset.RasterYSize; print('# of rows:',rows)
print('# of bands:',chm_dataset.RasterCount)
print('driver:',chm_dataset.GetDriver().LongName)
Explanation: Read information from Geotif Tags
The Geotif file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands:
End of explanation
print('projection:',chm_dataset.GetProjection())
Explanation: GetProjection
We can use the gdal GetProjection method to display information about the coordinate system and EPSG code.
End of explanation
print('geotransform:',chm_dataset.GetGeoTransform())
Explanation: GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:
End of explanation
chm_mapinfo = chm_dataset.GetGeoTransform()
xMin = chm_mapinfo[0]
yMax = chm_mapinfo[3]
xMax = xMin + chm_dataset.RasterXSize*chm_mapinfo[1] #multiply the number of columns by the pixel width
yMin = yMax + chm_dataset.RasterYSize*chm_mapinfo[5] #multiply the number of rows by the pixel height (note sign +/-)
chm_ext = (xMin,xMax,yMin,yMax)
print('chm raster extent:',chm_ext)
Explanation: In this case, the geotransform values correspond to:
Left-Most X Coordinate = 367000.0
W-E Pixel Resolution = 1.0
Rotation (0 if Image is North-Up) = 0.0
Upper Y Coordinate = 4307000.0
Rotation (0 if Image is North-Up) = 0.0
N-S Pixel Resolution = -1.0
The negative value for the N-S Pixel resolution reflects that the origin of the image is the upper left corner. We can convert this geotransform information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:
End of explanation
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)
scaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)
chm_stats = chm_raster.GetStatistics(True,True)
print('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' %
(chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))
Explanation: GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statistics as follows:
End of explanation
#Read Raster Band as an Array
chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(np.float)
#Assign CHM No Data Values to NaN
chm_array[chm_array==int(noDataVal)]=np.nan
#Apply Scale Factor
chm_array=chm_array/scaleFactor
print('SERC CHM Array:\n',chm_array) #display array values
Explanation: ReadAsArray
Finally we can convert the raster to an array using the gdal ReadAsArray method. Cast the array to a floating point value using astype(np.float). Once we generate the array, we want to set No Data Values to nan, and apply the scale factor (for CHM this is just 1.0, so will not matter, but it's a good habit to get into):
End of explanation
chm_array.shape
Explanation: Let's look at the dimensions of the array we read in:
End of explanation
pct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)
print('% NaN:',round(pct_nan*100,2))
print('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))
Explanation: We can calculate the % of pixels that are undefined (nan) and non-zero using np.count_nonzero. Typically tiles in the center of a site will have close to 0% NaN, but tiles on the edges of sites may have a large percent of nan values.
End of explanation
def plot_spatial_array(band_array,spatial_extent,colorlimit,ax=plt.gca(),title='',cmap_title='',colormap=''):
plot = plt.imshow(band_array,extent=spatial_extent,clim=colorlimit);
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20);
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
Explanation: Plot Canopy Height Data
To get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials:
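For instance, a first look at the full tile might be plotted as follows (a sketch using the function and variables defined above; the color limits here are just a rough guess at the full height range):
plot_spatial_array(chm_array,
                   chm_ext,
                   (0,50),
                   title='SERC Canopy Height Model (full range)',
                   cmap_title='Canopy Height, m',
                   colormap='viridis')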
End of explanation
plt.hist(chm_array[~np.isnan(chm_array)],100);
ax = plt.gca()
ax.set_ylim([0,15000]) #adjust the y limit to zoom in on area of interest
Explanation: Plot Histogram of Data
As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.
End of explanation
plot_spatial_array(chm_array,
chm_ext,
(0,35),
title='SERC Canopy Height',
cmap_title='Canopy Height, m',
colormap='BuGn')
Explanation: On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
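A sketch of that histogram, using boolean indexing to drop both the zeros and the NaN values:
chm_nonzero = chm_array[chm_array > 0]  # comparisons with NaN are False, so NaNs are dropped too
plt.hist(chm_nonzero, 100)
plt.title('Distribution of non-zero canopy heights (m)')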
Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments (this is done to avoid another artifact of "pits"; see Khosravipour et al., 2014).
From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
End of explanation
chm_reclass = chm_array.copy()
chm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2 # 0m < CHM <= 10m - Class 2
chm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3 # 10m < CHM <= 20m - Class 3
chm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4 # 20m < CHM <= 30m - Class 4
chm_reclass[np.where(chm_array>30)] = 5 # CHM > 30m - Class 5
Explanation: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:
- Class 1: CHM = 0 m
- Class 2: 0m < CHM <= 10m
- Class 3: 10m < CHM <= 20m
- Class 4: 20m < CHM <= 30m
- Class 5: CHM > 30m
We can use np.where to find the indices where a boolean criteria is met.
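As a quick sanity check (a sketch assuming chm_reclass from the cell above), we can tally how many pixels fall into each class:
classes, counts = np.unique(chm_reclass[~np.isnan(chm_reclass)], return_counts=True)
for c, n in zip(classes, counts):
    print('Class', int(c), ':', n, 'pixels')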
End of explanation
import matplotlib.colors as colors
plt.figure();
cmapCHM = colors.ListedColormap(['lightblue','yellow','orange','green','red'])
plt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)
plt.title('SERC CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# Create custom legend to label the four canopy height classes:
import matplotlib.patches as mpatches
class1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')
class2_box = mpatches.Patch(color='yellow', label='0m < CHM <= 10m')
class3_box = mpatches.Patch(color='orange', label='10m < CHM <= 20m')
class4_box = mpatches.Patch(color='green', label='20m < CHM <= 30m')
class5_box = mpatches.Patch(color='red', label='CHM > 30m')
ax.legend(handles=[class1_box,class2_box,class3_box,class4_box,class5_box],
handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)
Explanation: We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes:
End of explanation |
13,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses)
Step1: Note
Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
Step3: Dynamic factors
A general dynamic factor model is written as
Step4: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference
Step5: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons
Step6: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income.
Step7: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
Step9: Appendix 1
Step10: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons
Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures, which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas.io.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
# sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
Explanation: Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses):
Industrial production (IPMAN)
Real aggregate income (excluding transfer payments) (W875RX1)
Manufacturing and trade sales (CMRMTSPL)
Employees on non-farm payrolls (PAYEMS)
In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered here is 1979 - 2014.
End of explanation
HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
HMRMT_growth = HMRMT.diff() / HMRMT.shift()
sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# Fill in the recent entries (1997 onwards)
sales[CMRMT.index] = CMRMT
# Backfill the previous entries (pre 1997)
idx = sales.ix[:'1997-01-01'].index
for t in range(len(idx)-1, 0, -1):
month = idx[t]
prev_month = idx[t-1]
sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.ix[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
Explanation: Note: in the most recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT.
Until this is corrected, the pre-8/12/15 dataset can be downloaded from Alfred (https://alfred.stlouisfed.org/series/downloaddata?seid=CMRMTSPL) or constructed by hand from HMRMT and CMRMT, as I do below (process taken from the notes in the Alfred xls file).
End of explanation
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
Explanation: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
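As an aside, this is easy to spot-check with an augmented Dickey-Fuller test; the sketch below checks that the log-differenced series created above look stationary (consistent with the levels being integrated). Column names are the ones defined in this notebook:
from statsmodels.tsa.stattools import adfuller
for col in ['dln_indprod', 'dln_income', 'dln_sales', 'dln_emp']:
    stat, pvalue = adfuller(dta[col].dropna())[:2]
    print(col, 'ADF statistic: %.2f, p-value: %.3f' % (stat, pvalue))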
End of explanation
# Get the endogenous data
endog = dta.ix['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params)
Explanation: Dynamic factors
A general dynamic factor model is written as:
$$
\begin{align}
y_t & = \Lambda f_t + B x_t + u_t \
f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\
u_t & = C_1 u_{t-1} + \dots + C_q u_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma)
\end{align}
$$
where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.
This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.
Model specification
The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process.
Thus the specification considered here is:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
where $i$ is one of: [indprod, income, sales, emp ].
This model can be formulated using the DynamicFactor model built-in to Statsmodels. In particular, we have the following specification:
k_factors = 1 - (there is 1 unobserved factor)
factor_order = 2 - (it follows an AR(2) process)
error_var = False - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
error_order = 2 - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
error_cov_type = 'diagonal' - (the innovations are uncorrelated; this is again the default)
Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the fit() method.
Note: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.
Aside: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in DynamicFactor class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix 1, below.
Parameter estimation
Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.
End of explanation
print(res.summary(separate_params=False))
Explanation: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference:
The estimated parameters
The estimated factor
Parameters
The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.
One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.
Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
Explanation: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:
The sign-related identification issue described above.
Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.
It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.
End of explanation
res.plot_coefficients_of_determination(figsize=(8,2));
Explanation: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in the industrial production index and a large portion of the variation in sales and employment, but it is less helpful in explaining income.
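Roughly the same quantities can be computed by hand; a minimal sketch (assuming endog and res from above, and taking the filtered factor as given):
import statsmodels.api as sm
factor = res.factors.filtered[0]
for name in endog.columns:
    rsq = sm.OLS(endog[name], sm.add_constant(factor)).fit().rsquared
    print(name, round(rsq, 3))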
End of explanation
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
    factor_mean = np.dot(W1, dta.loc['1972-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
    coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
    coincident_index *= (usphci.loc['1992-07-01'] / coincident_index.loc['1992-07-01'])
return coincident_index
Explanation: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
Explanation: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
End of explanation
from statsmodels.tsa.statespace import tools
class ExtendedDFM(sm.tsa.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
return super(ExtendedDFM, self).param_names + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
constrained = super(ExtendedDFM, self).transform_params(
unconstrained[:-3])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained, unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
unconstrained = super(ExtendedDFM, self).untransform_params(
constrained[:-3])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained, constrained[-3:]]
def update(self, params, transformed=True):
        # Perform the transformation, if required
if not transformed:
params = self.transform_params(params)
params[self._params_factor_zero] = 0
# Now perform the usual DFM update, but exclude our new parameters
super(ExtendedDFM, self).update(params[:-3], transformed=True)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
Explanation: Appendix 1: Extending the dynamic factor model
Recall that the previous specification was described by:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Written in state space form, the previous specification of the model had the following observation equation:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
and transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
the DynamicFactor model handles setting up the state space representation and, in the DynamicFactor.update method, it fills in the fitted parameter values into the appropriate locations.
The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. Now we have:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in {\text{indprod}, \text{income}, \text{sales} }\
y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Now, the corresponding observation equation should look like the following:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
This model cannot be handled out-of-the-box by the DynamicFactor class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.
First, notice that if we had set factor_order = 4, we would almost have what we wanted. In that case, the last line of the observation equation would be:
$$
\begin{bmatrix}
\vdots \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\vdots & & & & & & & & & & & \vdots \
\lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
\vdots
\end{bmatrix}
$$
and the first line of the transition equation would be:
$$
\begin{bmatrix}
f_t \
\vdots
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\vdots & & & & & & & & & & & \vdots \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
\vdots
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
Relative to what we want, we have the following differences:
In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters.
We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).
Our strategy will be to subclass DynamicFactor, and let it do most of the work (setting up the state space representation, etc.) where it assumes that factor_order = 4. The only things we will actually do in the subclass will be to fix those two issues.
First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods __init__, start_params, param_names, transform_params, untransform_params, and update form the core of all state space models in Statsmodels, not just the DynamicFactor class.
End of explanation
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(method='powell', disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, maxiter=1000)
print(extended_res.summary(separate_params=False))
Explanation: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons:
The version in the DynamicFactor class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
The version in the DynamicFactor class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.
update
The most important reason we need to specify a new update method is because we have three new parameters that we need to place into the state space formulation. In particular we let the parent DynamicFactor.update class handle placing all the parameters except the three new ones in to the state space representation, and then we put the last three in manually.
End of explanation
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1);
Explanation: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
End of explanation |
13,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel density estimation (KDE)
The following code has been adapted from Till A. Hoffmann.
See https
Step1: Unweighted, one-dimensional
Step2: Weighted, one-dimensional
Step3: Weighted, two-dimensional | Python Code:
%matplotlib inline
import os, sys
sys.path.append(os.path.abspath('../../main/python'))
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import thalesians.tsa.distrs as distrs
import thalesians.tsa.kde as kde
import importlib
importlib.reload(distrs)
importlib.reload(kde)
Explanation: Kernel density estimation (KDE)
The following code has been adapted from Till A. Hoffmann.
See https://nbviewer.jupyter.org/gist/tillahoffmann/f844bce2ec264c1c8cb5
and https://stackoverflow.com/questions/27623919/weighted-gaussian-kernel-density-estimation-in-python
End of explanation
#Define parameters
num_samples = 1000
xmax = 5
bins=21
#Generate equal-weighted samples
samples = np.random.normal(size=num_samples)
weights = np.ones(num_samples) / num_samples
empirical_distr = distrs.EmpiricalDistr(particles=samples, weights=weights)
print('empirical_distr mean', empirical_distr.mean)
#Plot a histogram
plt.hist(empirical_distr.particles, bins, (-xmax, xmax), histtype='stepfilled', alpha=.2, density=True, color='k', label='histogram')
#Construct a KDE and plot it
gaussian_kde_distr = kde.GaussianKDEDistr(empirical_distr)
print('gaussian_kde_distr mean', gaussian_kde_distr.mean)
x = np.linspace(-xmax, xmax, 200)
y = gaussian_kde_distr.pdf(x)
plt.plot(x, y, label='kde')
#Plot the samples
plt.scatter(samples, np.zeros_like(samples), marker='x', color='k', alpha=.1, label='samples')
#Plot the true pdf
y = stats.norm().pdf(x)
plt.plot(x,y, label='true PDF')
#Boiler plate
plt.xlabel('Variable')
plt.ylabel('Density')
plt.legend(loc='best', frameon=False)
plt.tight_layout()
plt.show()
Explanation: Unweighted, one-dimensional
End of explanation
bins = 21
#Define a Gaussian mixture to draw samples from
num_samples = 10000
xmin, xmax = -10, 8
#Weight attributed to each component of the mixture
gaussian_weights = np.array([2, 1], dtype=float)
gaussian_weights /= np.sum(gaussian_weights)
#Mean and std of each mixture
gaussian_means = np.array([-1, 1])
gaussian_std = np.array([2, 1])
#Observation probability of each mixture
gaussian_observation = np.array([1, .5])
#How many samples belong to each mixture?
gaussian_samples = np.random.multinomial(num_samples, gaussian_weights)
samples = []
weights = []
#Generate samples and observed samples for each mixture component
for n, m, s, o in zip(gaussian_samples, gaussian_means, gaussian_std, gaussian_observation):
_samples = np.random.normal(m, s, n)
_samples = _samples[o > np.random.uniform(size=n)]
samples.extend(_samples)
weights.extend(np.ones_like(_samples) / o)
#Renormalize the sample weights
weights = np.array(weights, float)
weights /= np.sum(weights)
samples = np.array(samples)
#Compute the true pdf
x = np.linspace(xmin, xmax, 200)
true_pdf = 0
for w, m, s in zip(gaussian_weights, gaussian_means, gaussian_std):
true_pdf = true_pdf + w * stats.norm(m, s).pdf(x)
#Plot a histogram
plt.hist(samples, bins, (xmin, xmax), histtype='stepfilled', alpha=.2, density=True, color='k', label='histogram', weights=weights)
#Construct a KDE and plot it
empirical_distr = distrs.EmpiricalDistr(particles=samples, weights=weights)
gaussian_kde_distr = kde.GaussianKDEDistr(empirical_distr)
print('empirical_distr mean', empirical_distr.mean)
print('gaussian_kde_distr mean', gaussian_kde_distr.mean)
y = gaussian_kde_distr.pdf(x)
plt.plot(x, y, label='weighted kde')
#Compare with a naive kde
pdf = stats.gaussian_kde(samples)
y = pdf(x)
plt.plot(x, y, label='unweighted kde')
#Plot the samples
plt.scatter(samples, np.zeros_like(samples), marker='x', color='k', alpha=.02, label='samples')
#Plot the true pdf
plt.plot(x,true_pdf, label='true PDF')
#Boiler plate
plt.xlabel('Variable')
plt.ylabel('Density')
plt.legend(loc='best', frameon=False)
plt.tight_layout()
plt.show()
gaussian_kde_distr
gaussian_kde_distr.empirical_distr.cov
gaussian_kde_distr.cov
Explanation: Weighted, one-dimensional
End of explanation
bins = 21
#Define a Gaussian mixture to draw samples from
num_samples = 10000
xmin, xmax = -10, 8
#Weight attributed to each component of the mixture
gaussian_weights = np.array([2, 1], dtype=float)
gaussian_weights /= np.sum(gaussian_weights)
#Mean and std of each mixture
gaussian_means = np.array([-1, 1])
gaussian_std = np.array([2, 1])
#Observation probability of each mixture
gaussian_observation = np.array([1, .5])
#How many samples belong to each mixture?
gaussian_samples = np.random.multinomial(num_samples, gaussian_weights)
samples = []
weights = []
#Generate samples and observed samples for each mixture component
for n, m, s, o in zip(gaussian_samples, gaussian_means, gaussian_std, gaussian_observation):
_samples = np.random.normal(m, s, (n, 2))
_samples = _samples[o > np.random.uniform(size=n)]
samples.extend(_samples)
weights.extend(np.ones(len(_samples)) / o)
#Renormalize the sample weights
weights = np.array(weights, float)
weights /= np.sum(weights)
samples = np.transpose(samples)
#Evaluate the true pdf on a grid
x = np.linspace(xmin, xmax, 100)
xx, yy = np.meshgrid(x, x)
true_pdf = 0
for w, m, s in zip(gaussian_weights, gaussian_means, gaussian_std):
true_pdf = true_pdf + w * stats.norm(m, s).pdf(xx) * stats.norm(m, s).pdf(yy)
#Evaluate the kde on a grid
empirical_distr = distrs.EmpiricalDistr(particles=samples.T, weights=weights)
gaussian_kde_distr = kde.GaussianKDEDistr(empirical_distr)
print('empirical_distr mean', empirical_distr.mean)
print('gaussian_kde_distr mean', gaussian_kde_distr.mean)
points = (np.ravel(xx), np.ravel(yy))
points = np.array(points).T
zz = gaussian_kde_distr.pdf(points)
zz = np.reshape(zz, xx.shape)
kwargs = dict(extent=(xmin, xmax, xmin, xmax), cmap='hot', origin='lower')
#Plot the true pdf
plt.subplot(221)
plt.imshow(true_pdf.T, **kwargs)
plt.title('true PDF')
#Plot the kde
plt.subplot(222)
plt.imshow(zz.T, **kwargs)
plt.title('kde')
plt.tight_layout()
#Plot a histogram
ax = plt.subplot(223)
plt.hist2d(samples[0], samples[1], bins, ((xmin, xmax), (xmin, xmax)), True, weights, cmap='hot')
ax.set_aspect(1)
plt.title('histogram')
plt.tight_layout()
plt.show()
gaussian_kde_distr.cov
gaussian_kde_distr.empirical_distr.cov
gaussian_kde_distr.empirical_distr.weights
result = None
for w in gaussian_kde_distr.empirical_distr.weights.flatten():
if result is None:
result = w * gaussian_kde_distr.cov
else:
result += w * gaussian_kde_distr.cov
result
np.sum(gaussian_kde_distr.empirical_distr.weights)
len(gaussian_kde_distr.empirical_distr.weights)
Explanation: Weighted, two-dimensional
End of explanation |
13,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Deviation from thermal
Step2: Software version | Python Code:
%pylab inline
from qutip import *
import time
#number of states for each mode
N0=8
N1=8
N2=8
K=1.0
#damping rates
gamma0=0.1
gamma1=0.1
gamma2=0.4
alpha=sqrt(3)#initial coherent state param for mode 0
epsilon=0.5j #sqeezing parameter
tfinal=4.0
dt=0.05
tlist=arange(0.0,tfinal+dt,dt)
taulist=K*tlist #non-dimensional times
ntraj=100#number of trajectories
#define operators
a0=tensor(destroy(N0),qeye(N1),qeye(N2))
a1=tensor(qeye(N0),destroy(N1),qeye(N2))
a2=tensor(qeye(N0),qeye(N1),destroy(N2))
#number operators for each mode
num0=a0.dag()*a0
num1=a1.dag()*a1
num2=a2.dag()*a2
#dissipative operators for zero-temp. baths
C0=sqrt(2.0*gamma0)*a0
C1=sqrt(2.0*gamma1)*a1
C2=sqrt(2.0*gamma2)*a2
#initial state: coherent mode 0 & vacuum for modes #1 & #2
vacuum=tensor(basis(N0,0),basis(N1,0),basis(N2,0))
D=(alpha*a0.dag()-conj(alpha)*a0).expm()
psi0=D*vacuum
#trilinear Hamiltonian
H=1j*K*(a0*a1.dag()*a2.dag()-a0.dag()*a1*a2)
#run Monte-Carlo
start_time=time.time()
#avg=mcsolve(H,psi0,taulist,ntraj,[C0,C1,C2],[num0,num1,num2])
output=mesolve(H,psi0,taulist,[C0,C1,C2],[num0,num1,num2])
avg=output.expect
finish_time=time.time()
print('time elapsed = ',finish_time-start_time)
#plot expectation value for photon number in each mode
plot(taulist,avg[0],taulist,avg[1],taulist,avg[2])
xlabel("Time")
ylabel("Average number of particles")
legend(('Mode 0', 'Mode 1','Mode 2'));
Explanation: QuTiP example: Trilinear Oscillator Coupling
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
from qutip import *
from pylab import *
import time
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
#number of states for each mode
N0=6
N1=6
N2=6
#define operators
a0=tensor(destroy(N0),qeye(N1),qeye(N2))
a1=tensor(qeye(N0),destroy(N1),qeye(N2))
a2=tensor(qeye(N0),qeye(N1),destroy(N2))
#number operators for each mode
num0=a0.dag()*a0
num1=a1.dag()*a1
num2=a2.dag()*a2
#initial state: coherent mode 0 & vacuum for modes #1 & #2
alpha=sqrt(2)#initial coherent state param for mode 0
initial=tensor(coherent(N0,alpha),basis(N1,0),basis(N2,0))
psi0=initial
#trilinear Hamiltonian
H=1.0j*(a0*a1.dag()*a2.dag()-a0.dag()*a1*a2)
#run Monte-Carlo
tlist=linspace(0,2.5,50)
output=mcsolve(H,psi0,tlist,[],[],1)
mode1=[ptrace(k,1) for k in output.states]
diags1=[real(k.diag()) for k in mode1]
num1=[expect(num1,k) for k in output.states]  # note: rebinds num1 from the number operator to a list of expectation values
thermal=[thermal_dm(N1,k).diag() for k in num1]
colors=['m', 'g','orange','b', 'y','pink']
x=range(N1)
params = {'axes.labelsize': 14,'font.size': 14,'legend.fontsize': 12,
'xtick.labelsize': 14,'ytick.labelsize': 14}
rcParams.update(params)
fig = plt.figure(figsize=(8,6))
ax = Axes3D(fig)
for j in range(5):
ax.bar(x, diags1[10*j], zs=tlist[10*j], zdir='y',color=colors[j],linewidth=1.0,
alpha=0.6,align='center')
ax.plot(x,thermal[10*j],zs=tlist[10*j],zdir='y',color='r',linewidth=3,alpha=1)
ax.set_zlabel(r'Probability')
ax.set_xlabel(r'Number State')
ax.set_ylabel(r'Time')
ax.set_zlim3d(0, 1);
Explanation: Deviation from thermal
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Software version:
End of explanation |
13,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partial Differential Equations
If there's anything that needs serious computing power it's the solution of PDEs. However, you can go a long way to getting intuition on complex problems with simple numerical methods.
Let's take parts of the doped semiconductor model, concentrating on the time-dependent behaviour, simplify the constants, and restrict to one spatial dimension
Step1: Note
The animations that follow will only work if you have ffmpeg installed. Even then, they may be touchy. The plots should always work.
Step2: The solution looks good - smooth, the initial profile is diffusing nicely. Try with something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$ to see the diffusive effects
Step3: All the features are smoothing out, as expected.
Step4: However, we used a really small timestep to get these results. It would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$
Step5: This doesn't look good - it's horribly unstable, with the results blowing up very fast.
The problem is that the size of the timestep really matters. Von Neumann stability calculations can be used to show that only when
\begin{equation}
\frac{\Delta t}{\Delta x^2} < \frac{1}{2}
\end{equation}
are the numerical results stable, and hence trustable. This is a real problem when you want to improve accuracy by increasing the number of points, hence decreasing $\Delta x$
Step6: Now set the initial data to be
\begin{align}
p &= n_i \left(1 + 0.1 \sin(4 \pi x) \right), \
n &= n_i \left(1 + 0.1 \sin(6 \pi x) \right).
\end{align}
The spatial domain should be $[0, 1]$. The spatial step size should be $0.05$. The timestep should be $10^{-7}$. The evolution should be for $10^5$ steps, to $t=0.01$.
The crucial point is what happens at the boundaries. To discretely represent a Neumann boundary condition where the normal derivative vanishes on the boundary, set the boundary points equal to the first data point in the interior, ie y[ | Python Code:
from __future__ import division
import numpy
from matplotlib import pyplot
%matplotlib notebook
dt = 1e-5
dx = 1e-2
x = numpy.arange(0,1+dx,dx)
y = numpy.zeros_like(x)
y = x * (1 - x)
def update_heat(y, dt, dx):
dydt = numpy.zeros_like(y)
dydt[1:-1] = dt/dx**2 * (y[2:] + y[:-2] - 2*y[1:-1])
return dydt
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Explanation: Partial Differential Equations
If there's anything that needs serious computing power it's the solution of PDEs. However, you can go a long way to getting intuition on complex problems with simple numerical methods.
Let's take parts of the doped semiconductor model, concentrating on the time-dependent behaviour, simplify the constants, and restrict to one spatial dimension:
\begin{align}
\frac{\partial p}{\partial t} + \frac{\partial}{\partial x} \left( p \frac{\partial}{\partial x} \left( \frac{1}{p} \right) \right) &= \left( n_i^2 - np \right) + G, \
\frac{\partial n}{\partial t} + \frac{\partial}{\partial x} \left( n \frac{\partial}{\partial x} \left( \frac{1}{n} \right) \right) &= \left( n_i^2 - np \right) + G.
\end{align}
This is a pair of coupled PDEs for $n, p$ in terms of physical and material constants, and the quasi-Fermi levels $E_{F_{n,p}}$ depend on $n, p$ - here we've replaced them with terms proportional to ${n, p}^{-1}$.
We'll write this in the form
\begin{equation}
\frac{\partial {\bf y}}{\partial t} + \frac{\partial}{\partial x} \left( {\bf g}({\bf y}) \frac{\partial}{\partial x} {\bf h}({\bf y}) \right) = {\bf f}({\bf y}).
\end{equation}
Finite differencing
We used finite differencing when looking at IVPs. We introduced a grid of points $x_j$ in space and replaced derivatives with finite difference expressions. For example, we saw the forward difference approximation
\begin{equation}
\left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_j}{\Delta x} + {\cal O} \left( \Delta x^1 \right),
\end{equation}
and the central difference approximation
\begin{equation}
\left. \frac{\text{d} y}{\text{d} x} \right|_{x = x_j} = \frac{y_{j+1} - y_{j-1}}{2 \Delta x} + {\cal O} \left( \Delta x^2 \right).
\end{equation}
This extends to partial derivatives, and to more than one variable. We introduce a grid in time, $t^n$, and denote $y(t^n, x_j) = y^n_j$. Then we can do, say, forward differencing in time and central differencing in space:
\begin{align}
\left. \frac{\partial y}{\partial t} \right|_{x = x_j, t = t^n} &= \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t}, \
\left. \frac{\partial y}{\partial x} \right|_{x = x_j, t = t^n} &= \frac{y^{n}_{j+1} - y^{n}_{j-1}}{2 \Delta x}.
\end{align}
Diffusion equation
To illustrate the simplest way of proceeding we'll look at the diffusion equation
\begin{equation}
\frac{\partial y}{\partial t} - \frac{\partial^2 y}{\partial x^2} = 0.
\end{equation}
When finite differences are used, with the centred second derivative approximation
\begin{equation}
\left. \frac{\text{d}^2 y}{\text{d} x^2} \right|_{x = x_j} = \frac{y_{j+1} + y_{j-1} - 2 y_{j}}{\Delta x^2} + {\cal O} \left( \Delta x^2 \right),
\end{equation}
we find the approximation
\begin{align}
&& \frac{y^{n+1}_{j} - y^{n}_{j}}{\Delta t} - \frac{y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j}}{\Delta x^2} &= 0 \
\implies && y^{n+1}_{j} &= y^{n}_{j} + \frac{\Delta t}{\Delta x^2} \left( y^{n}_{j+1} + y^{n}_{j-1} - 2 y^{n}_{j} \right).
\end{align}
Given initial data and boundary conditions, this can be solved.
Let's implement the heat equation with homogeneous Dirichlet boundary conditions ($y(t, 0) = 0 = y(t, 1)$) and simple initial data ($y(0, x) = x (1 - x)$), using a spatial step size of $\Delta x = 10^{-2}$ and a time step of $\Delta t = 10^{-5}$, solving to $t = 0.1$ ($10000$ steps).
End of explanation
from matplotlib import animation
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
y = numpy.zeros_like(x)
y = x * (1 - x)
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,0.3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(100):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
Explanation: Note
The animations that follow will only work if you have ffmpeg installed. Even then, they may be touchy. The plots should always work.
End of explanation
y = numpy.sin(4.0*numpy.pi*x)**4
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Nsteps = 10000
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
Explanation: The solution looks good - smooth, the initial profile is diffusing nicely. Try with something a bit more complex, such as $y(0, x) = \sin^4(4 \pi x)$ to see the diffusive effects:
End of explanation
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0,1))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(20):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
Explanation: All the features are smoothing out, as expected.
End of explanation
dt = 1e-4
Nsteps = 100
y = numpy.sin(4.0*numpy.pi*x)**4
for n in range(Nsteps):
update = update_heat(y, dt, dx)
y += update
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y)
pyplot.xlabel(r"$x$")
pyplot.ylabel(r"$y$")
pyplot.xlim(0, 1)
pyplot.show()
y = numpy.zeros_like(x)
y = numpy.sin(4.0*numpy.pi*x)**4
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(-1,3))
line, = ax.plot([], [])
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1):
y += update_heat(y, dt, dx)
line.set_data(x, y)
return line
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=50, interval=100, blit=True)
Explanation: However, we used a really small timestep to get these results. It would be less numerical work if we increased the timestep. Let's try only $100$ steps with $\Delta t = 10^{-4}$:
End of explanation
ni = 0.1
G = 0.1
def f(y):
p = y[0,:]
n = y[1,:]
f_vector = numpy.zeros_like(y)
f_vector[:,:] = ni**2 - n*p + G
return f_vector
def g(y):
p = y[0,:]
n = y[1,:]
g_vector = numpy.zeros_like(y)
g_vector[0,:] = p
g_vector[1,:] = n
return g_vector
def h(y):
p = y[0,:]
n = y[1,:]
h_vector = numpy.zeros_like(y)
h_vector[0,:] = 1.0/p
h_vector[1,:] = 1.0/n
return h_vector
def update_term(y, dt, dx):
dydt = numpy.zeros_like(y)
f_vector = f(y)
g_vector = g(y)
h_vector = h(y)
dydt[:,2:-2] += dt * f_vector[:,2:-2]
dydt[:,2:-2] -= dt/(4.0*dx**2)*(g_vector[:,3:-1]*h_vector[:,4:] -\
(g_vector[:,3:-1] + g_vector[:,1:-3])*h_vector[:,2:-2] + \
g_vector[:,1:-3]*h_vector[:,:-4])
return dydt
Explanation: This doesn't look good - it's horribly unstable, with the results blowing up very fast.
The problem is that the size of the timestep really matters. Von Neumann stability calculations can be used to show that only when
\begin{equation}
\frac{\Delta t}{\Delta x^2} < \frac{1}{2}
\end{equation}
are the numerical results stable, and hence trustable. This is a real problem when you want to improve accuracy by increasing the number of points, hence decreasing $\Delta x$: with $\Delta x = 10^{-3}$ we need $\Delta t < 5 \times 10^{-7}$ already!
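A quick check of the two step sizes used above against this bound:
dx_check = 1e-2
for dt_check in (1e-5, 1e-4):
    ratio = dt_check / dx_check**2
    print(dt_check, ratio, "stable" if ratio < 0.5 else "unstable")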
Exercise
Check, by changing $\Delta x$ and re-running the simulations, that you see numerical instabilities when this stability bound is violated. You'll only need to take a few tens of timesteps irrespective of the value of $\Delta t$.
Full problem
Finally, we can evaluate the PDE at a specific point and re-arrange the equation. Assuming we know all the data at $t^n$, the only unknowns will be at $t^{n+1}$, giving the algorithm
\begin{align}
{\bf y}^{n+1}_{j} &= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{2 \Delta x} \left( {\bf g}^{n}_{j+1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j+2} - {\bf h}^{n}_{j} \right) - {\bf g}^{n}_{j-1} \frac{1}{2 \Delta x} \left( {\bf h}^{n}_{j} - {\bf h}^{n}_{j-2} \right) \right) \
&= {\bf y}^{n}_{j} + \Delta t \, {\bf f}^{n}_{j} - \frac{\Delta t}{4 \left( \Delta x \right)^2} \left( {\bf g}^{n}_{j+1} {\bf h}^{n}_{j+2} - \left( {\bf g}^{n}_{j+1} + {\bf g}^{n}_{j-1} \right) {\bf h}^{n}_{j} + {\bf g}^{n}_{j-1} {\bf h}^{n}_{j-2} \right)
\end{align}
We'll implement that by writing a function that computes the update term (${\bf y}^{n+1}_j - {\bf y}^n_j$), choosing $n_i = 0.1, G = 0.1$:
End of explanation
dx = 0.05
dt = 1e-7
Nsteps = 10000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
for n in range(Nsteps):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
pyplot.figure(figsize=(10,6))
pyplot.plot(x, y[0,:], label=r"$p$")
pyplot.plot(x, y[1,:], label=r"$n$")
pyplot.legend()
pyplot.xlabel(r"$x$")
pyplot.xlim(0, 1)
pyplot.show()
dx = 0.01
dt = 1e-8
Nsteps = 100000
x = numpy.arange(-dx,1+2*dx,dx)
y = numpy.zeros((2,len(x)))
y[0,:] = ni*(1.0+0.1*numpy.sin(4.0*numpy.pi*x))
y[1,:] = ni*(1.0+0.1*numpy.sin(6.0*numpy.pi*x))
fig = pyplot.figure(figsize=(10,6))
ax = pyplot.axes(xlim=(0,1),ylim=(0.09,0.11))
line1, = ax.plot([], [], label="$p$")
line2, = ax.plot([], [], label="$n$")
pyplot.legend()
pyplot.close()
def init():
ax.set_xlabel(r"$x$")
def update(i, y):
for n in range(1000):
update = update_term(y, dt, dx)
y += update
y[:,1] = y[:,2]
y[:,0] = y[:,1]
y[:,-2] = y[:,-3]
y[:,-1] = y[:,-2]
    y += update_heat(y, dt, dx)  # leftover from the diffusion-equation animation above; on this 2-row array it adds zeros, so it has no effect
line1.set_data(x, y[0,:])
line2.set_data(x, y[1,:])
return line1, line2
animation.FuncAnimation(fig, update, init_func=init, fargs=(y,), frames=100, interval=100, blit=True)
Explanation: Now set the initial data to be
\begin{align}
p &= n_i \left(1 + 0.1 \sin(4 \pi x) \right), \
n &= n_i \left(1 + 0.1 \sin(6 \pi x) \right).
\end{align}
The spatial domain should be $[0, 1]$. The spatial step size should be $0.05$. The timestep should be $10^{-7}$. The evolution should be for $10^5$ steps, to $t=0.01$.
The crucial point is what happens at the boundaries. To discretely represent a Neumann boundary condition where the normal derivative vanishes on the boundary, set the boundary points equal to the first data point in the interior, ie y[:,0] = y[:,1] = y[:,2] and y[:,-1] = y[:,-2] = y[:,-3].
End of explanation |
13,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
1. Installing Python
2. The Language
Expressions
List, Tuple and Dictionary
Strings
Functions
3. Example
Step1: To use the result of an expression in the future,
we assign an expression to a variable.
Type of a variable in python is usually implied.
(duck-typing -- read more on https
Step2: The weirdest expression in Python
Step3: Q
Step4: A list has a length
Step5: We can loop over items in a list.
Step6: A tuple is almost a list, defined with () instead of [].
() can sometimes be omitted.
Step7: But Tuples have a twist.
Items in a tuple is immutable;
Items in a list can change
Let's try it out
Step8: Oops.
Tuple object does not support item assignment.
Tuples are immutable.
Dictionary
A dicionary records a mapping from Keys to Values.
Mathematically a dictionary defines a function on a finite, discrete domain.
Step9: We may write
MyDictionary
Step10: 2.? String
We have seen strings a few times.
String literals can be defined with quotation marks, single or double.
Step11: Q
Step12: Python give us a lot of means to manipulate a string.
Step13: We can look for substring from a string
Step14: Formatting strings with the traditional printf formats
Step15: Conversion between bytes and strings
encode
Step16: Encodings are important if you work with text beyond English.
2.? Functions
A function is a more compact representation of mathematical functions.
(still remember dictionaries)
Step17: Compare this with our dictionary
Step18: The domain of a function is much bigger than a dictionary.
A diciontary only remembers what we told it;
a function reevalutes its body every time it is called.
Step19: Oops. We never told MyDictionary about 10.
3. A Word Count Example
In this section we will analyze some textual data with Python.
We first obtain the data, with a bash cell.
Step20: Reading in a text file is very easy in Python.
Step21: Q
Step22: Let's chop the text off into semantic elements.
Step23: Looks like we read in the file correctly.
Let's visualize this data.
We use some exteral help from a package, wordcloud.
So we will first install the package with pip, the Python Package Manager.
Step24: Oops I have already installed wordcloud. You may see a different message.
Step25: The biggest keyword is Python. Let's get quantatitive
Step26: Seems to be working. Let's make a function.
Step27: The function freq is a mapping between a list and a dictionary,
where each key of the dictionary (output) is associated with the number of occurances
of the key in the list (input).
Step28: Q
Step29: Q
Step30: Using the max function avoids writing an if
Step31: final challenge
Step32: Exporting data
The world of Python has 4 corners.
We need to reach out to other applications.
Export the data from Python.
Step33: Reading file in with Pandas | Python Code:
2 + 3 # Press <Ctrl-Enter to evaluate a cell>
2 + int(3.5 * 4) * float("8")
9 // 2 # Press <Ctrl-Enter to evaluate>
Explanation: Introduction to Python
1. Installing Python
2. The Language
Expressions
List, Tuple and Dictionary
Strings
Functions
3. Example: Word Frequency Analysis with Python
Reading text files
Getting and using python packages : wordcloud
Histograms
Exporting data as text files
1. Installing Python:
Easy way : with a Python distribution, anaconda
https://www.continuum.io/downloads
Hard way : compile it yourself from source. It is open-source after all.
[Not covered here; was the main way in early days, before 2011 or even 2014]
Three Python user interfaces
Python Shell python
[yfeng1@waterfall ~]$ python
Python 2.7.12 (default, Sep 29 2016, 13:30:34)
[GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Jupyter Notebook (in a browser, like this)
IDEs: PyCharm, Spyder, etc.
We use Jupyter Notebook here.
Jupyter Notebook is included in the Anaconda distribution.
2. Python the Language
2.1 Expressions
An expression looks like a math formula
End of explanation
x = 2 + 3
x
Explanation: To use the result of an expression in the future,
we assign an expression to a variable.
Type of a variable in python is usually implied.
(duck-typing -- read more on https://en.wikipedia.org/wiki/Duck_typing)
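For example, the same name can be re-bound to values of different types without any declaration:
x = 2 + 3
print(type(x))   # <class 'int'>
x = "now a string"
print(type(x))   # <class 'str'>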
End of explanation
print(x)
Explanation: The weirdest expression in Python:
End of explanation
MyListOfNumbers = [1,2,3,4,5,6,7]
Explanation: Q: What happens under the hood?
2.2 List, Tuple, Set and Dictionary
A list is a list of expressions.
End of explanation
len(MyListOfNumbers)
Explanation: A list has a length
End of explanation
for num in MyListOfNumbers:
print(num, end=', ')
Explanation: We can loop over items in a list.
End of explanation
MyTupleOfNumbers = (1, 2, 3, 4, 5, 6)
MyTupleOfNumbers = 1, 2, 3, 4, 5, 6
for num in MyTupleOfNumbers:
print(num, end=', ')
Explanation: A tuple is almost a list, defined with () instead of [].
() can sometimes be omitted.
End of explanation
MyListOfNumbers[4] = 99
print(MyListOfNumbers)
MyTupleOfNumbers[4] = 99
Explanation: But Tuples have a twist.
Items in a tuple are immutable;
Items in a list can change
Let's try it out
End of explanation
MyDictionary = {}
MyDictionary[9] = 81
MyDictionary[3] = 9
print(MyDictionary)
Explanation: Oops.
Tuple object does not support item assignment.
Tuples are immutable.
Dictionary
A dictionary records a mapping from Keys to Values.
Mathematically a dictionary defines a function on a finite, discrete domain.
End of explanation
for k, v in MyDictionary.items():
print('Key', k, ":", 'Value', v, end=' | ')
Explanation: We may write
MyDictionary : {9, 3} => R.
We can loop over items in a dictionary, as well
End of explanation
"the hacker within", 'the hacker within', r'the hacker within', u'the hacker within', b'the hacker within'
Explanation: 2.? String
We have seen strings a few times.
String literals can be defined with quotation marks, single or double.
End of explanation
name = "the hacker within"
Explanation: Q: Mind the tuple
If we assign a string literal to a variable,
we get a string variable
End of explanation
print(name.upper())
print(name.split())
print(name.upper().split())
Explanation: Python gives us a lot of means to manipulate a string.
End of explanation
name.find("hack")
name[name.find("hack"):]
Explanation: We can look for a substring in a string
End of explanation
foo = "there are %03d numbers" % 3
print(foo)
Explanation: Formatting strings with the traditional printf formats
End of explanation
bname = name.encode()
print(bname)
print(bname.decode())
Explanation: Conversion between bytes and strings
encode : from string to bytes
decode : from bytes to string
The conversion is called 'encoding'. The default encoding on Unix is UTF-8.
Q: What is the default encoding on Windows and OS X?
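One way to check on your own machine:
import sys, locale
print(sys.getdefaultencoding())       # encoding used by str.encode()/bytes.decode() when none is given
print(locale.getpreferredencoding())  # encoding used by open() for text files by default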
End of explanation
def square_num(num):
return num*num
print(square_num(9))
print(square_num(3))
Explanation: Encodings are important if you work with text beyond English.
2.? Functions
A function is a more compact representation of mathematical functions.
(still remember dictionaries)
End of explanation
print(MyDictionary[9])
print(MyDictionary[3])
Explanation: Compare this with our dictionary
End of explanation
print(square_num(10))
print(MyDictionary[10])
Explanation: The domain of a function is much bigger than a dictionary.
A dictionary only remembers what we told it;
a function re-evaluates its body every time it is called.
End of explanation
%%bash
curl -so titles.tsv https://raw.githubusercontent.com/thehackerwithin/berkeley/master/code_examples/spring17_survey/session_titles.tsv
head -5 titles.tsv
Explanation: Oops. We never told MyDictionary about 10.
3. A Word Count Example
In this section we will analyze some textual data with Python.
We first obtain the data, with a bash cell.
End of explanation
text = open('titles.tsv').read()
Explanation: Reading in a text file is very easy in Python.
End of explanation
with open('titles.tsv') as ff:
text = ff.read()
Explanation: Q : There is a subtle problem.
We usually use a different syntax for reading files: the with statement closes the file for us as soon as the block ends, whereas the bare open() above relies on the file object being garbage-collected.
End of explanation
words = text.split()
lines = text.split("\n")
print(words[::10]) # 1 word every 10
print(lines[::10]) # 1 line every 10
Explanation: Let's chop the text off into semantic elements.
End of explanation
import pip
pip.main(['install', "wordcloud"])
Explanation: Looks like we read in the file correctly.
Let's visualize this data.
We use some external help from a package, wordcloud.
So we will first install the package with pip, the Python Package Manager.
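Note that pip.main was removed in newer versions of pip; in a recent Jupyter/IPython setup the equivalent one-liner is the %pip magic:
%pip install wordcloud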
End of explanation
from wordcloud import WordCloud
wordcloud = WordCloud(width=800, height=300, prefer_horizontal=1, stopwords=None).generate(text)
wordcloud.to_image()
Explanation: Oops I have already installed wordcloud. You may see a different message.
End of explanation
freq_dict = {}
for word in words:
freq_dict[word] = freq_dict.get(word, 0) + 1
print(freq_dict)
print(freq_dict['Python'])
print(freq_dict['CUDA'])
Explanation: The biggest keyword is Python. Let's get quantitative:
Frequency statistics: How many times does each word occur in the file?
For each word, we need to remember a number (number of occurrences)
Use dictionary.
We will examine all words in the file (split into words).
Use loop.
End of explanation
def freq(items):
freq_dict = {}
for word in items:
freq_dict[word] = freq_dict.get(word, 0) + 1
return freq_dict
Explanation: Seems to be working. Let's make a function.
End of explanation
freq_dict = freq(words)
freq_freq = freq(freq_dict.values())
Explanation: The function freq is a mapping between a list and a dictionary,
where each key of the dictionary (output) is associated with the number of occurrences
of the key in the list (input).
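For example:
print(freq(["to", "be", "or", "not", "to", "be"]))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}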
End of explanation
print(freq_freq)
Explanation: Q : what is in freq_freq?
End of explanation
top_word = ""
top_word_freq = 0
for word, freq in freq_dict.items():
if freq > top_word_freq:
top_word = word
top_word_freq = freq
print('word', top_word, 'freq', top_word_freq)
Explanation: Q: Which is the most frequent word?
Answer
End of explanation
most = (0, None)
for word, freq in freq_dict.items():
most = max([most, (freq, word)])
print(most)
Explanation: Using the max function avoids writing an if
End of explanation
next(reversed(sorted((freq, word) for word, freq in freq_dict.items())))
Explanation: final challenge: the 1 liner.
End of explanation
def save(filename, freq_dict):
ff = open(filename, 'w')
for word, freq in sorted(freq_dict.items()):
ff.write("%s %s\n" % (word, freq))
ff.close()
def save(filename, freq_dict):
with open(filename, 'w') as ff:
for word, freq in sorted(freq_dict.items()):
ff.write("%s %s\n" % (word, freq))
save("freq_dict_thw.txt", freq_dict)
!cat freq_dict_thw.txt
save("freq_freq_thw.txt", freq_freq)
!cat freq_freq_thw.txt
Explanation: Exporting data
The world of Python has 4 corners.
We need to reach out to other applications.
Export the data from Python.
End of explanation
import pandas as pd
dataframe = pd.read_table("freq_freq_thw.txt", sep=' ', header=None, index_col=0)
dataframe
%matplotlib inline
dataframe.plot(kind='bar')
import pandas as pd
dataframe = pd.read_table("freq_dict_thw.txt", sep=' ', header=None, index_col=0)
dataframe.plot(kind='bar')
Explanation: Reading file in with Pandas
End of explanation |
13,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
These graphs have all been produced according to appendix A of "Effect of GPS System Biases on Differential
Group Delay Measurements" available here
Step1: This plot is uncorrected, it is a remake of the plot in Anthea's email on Wed, Jun 15, 2016 at 7
Step2: try some stuff out from "An Automatic Editing Algorithm for GPS Data" by Blewitt
files = glob("/home/greg/Documents/Summer Research/rinex files/ma*")
poop=rinexobs(files[6])
plt.figure(figsize=(14,14))
ax1 = plt.subplot(211)
ax1.xaxis.set_major_formatter(fmt)
plt.plot(2.85*(poop[:,23,'P2','data']*1.0E9/3.0E8-poop[:,23,'C1','data']*1.0E9/3.0E8)[10:],
'.',markersize=3,label='pr tec')
plt.plot(2.85E9*((poop[:,23,'L1','data'])/f1-(poop[:,23,'L2','data'])/f2)[10:],
'.',markersize=3,label='ph tec')
plt.title('mah13 sv23, biased')
plt.xlabel('time')
plt.ylabel('TECu')
plt.legend()
plt.grid()
plt.show()
Explanation: These graphs have all been produced according to appendix A of "Effect of GPS System Biases on Differential
Group Delay Measurements" available here: http://www.dtic.mil/dtic/tr/fulltext/u2/a220284.pdf. It is referenced in Anthea Coster's 1992 paper "Ionospheric Monitoring System". Equation 2.3 in the paper states that TEC in TECu is equal to the propogation time difference between the two frequencies. In order to find the propogation time, I divided the distance by the speed of light. I did the same thing with the phase but first I had to multiply the cycles (which is available from the rinex file) by the wavelength.
Basically what I want to explore with this notebook is combining pseudorange and phase data to get better TEC plots. Pseudorange is an absolute measurement but is subject to a lot of noise. Phase is low noise but has an unknown cycle ambiguity. According to the previously mentioned paper, the cycle ambiguity in phase data can be estimated by finding the average difference between the pseudorange and phase. In this script I converted pseudorange into cycles by dividing by wavelength. Then I calculated TEC according to equation 2.3 and plotted it next to the pseudorange calculated TEC and the biased phase calculated TEC.
Shown above is only a short time period free of cycle slips and loss of lock. In order to implement this averaging strategy, I think you have to re-average every time there is a cycle slip or missing data.
End of explanation
sl=200
plt.figure(figsize=(15,15))
ax1=plt.subplot(211)
ax1.xaxis.set_major_formatter(fmt)
plt.plot(2.85E9*(poop[:,23,'P2','data']/3.0E8
-poop[:,23,'C1','data']/3.0E8),'b.',label='prtec',markersize=3)
for i in range(int(len(poop[:,23,'L1','data'])/sl)):
phtec = 2.85E9*(poop[poop.labels[sl*i:sl*(i+1)],23,'L1','data']/f1
-poop[poop.labels[sl*i:sl*(i+1)],23,'L2','data']/f2)
prtec = 2.85E9*(poop[poop.labels[sl*i:sl*(i+1)],23,'P2','data']/3.0E8
-poop[poop.labels[sl*i:sl*(i+1)],23,'C1','data']/3.0E8)
b = np.average((phtec-prtec)[np.logical_not(np.isnan(phtec-prtec))])
plt.plot(phtec-b,'r-',linewidth=3,label='')
plt.axis([poop.labels[10],poop.labels[10000],-50,50])
plt.title('bias corrected phase data')
plt.xlabel('time')
plt.ylabel('TECu')
plt.grid()
plt.legend()
plt.show()
Explanation: This plot is uncorrected, it is a remake of the plot in Anthea's email on Wed, Jun 15, 2016 at 7:06 AM. The phase calculated TEC appears to match the pseudorange calculated TEC in parts, but other parts are offset by a bias. The next script and plot shows how the phase data can be shifted to line up with the average of the pseudorange data. It does so according to how large the slice value is. Basically it slices up the whole data set into a specifiable range of points and does the averaging on each range of points individually in order to fix phase lock cycle slips. This isn't the best way of doing it, the program should check for loss of lock and shift the phase data in chunks based on that. I am going to read more about cycle slips now.
End of explanation
f1 = 1575.42E6
f2 = 1227.6E6
svn = 23
L1 = -1*3.0E8*poop[:,svn,'L1','data']/f1 #(1a)
L2 = -1*3.0E8*poop[:,svn,'L2','data']/f2 #(1b)
P1 = poop[:,svn,'C1','data'] #(1c)
P2 = poop[:,svn,'P2','data'] #(1d)
#wide lane combination
wld = 3.0E8/(f1-f2)
Ld = (f1*L1-f2*L2)/(f1-f2) #(3)
prd = (f1*P1+f2*P2)/(f1+f2) #(4)
bd = (Ld-prd)/wld #(5)
#wide lane cycle slip detection
bdmean = bd[1]
rms = 0
g=np.empty((bd.shape))
g[:]=np.nan
p=np.empty((bd.shape))
p[:]=np.nan
for i in range(2,len(bd)):
if not np.isnan(bd[i]):
g[i] = abs(bd[i]-bdmean)
p[i] = np.sqrt(rms)
rms = rms+((bd[i]-bdmean)**2-rms)/(i-1) #(8b)
bdmean = bdmean+(bd[i]-bdmean)/(i-1) #(8a)
plt.figure(figsize=(12,12))
plt.subplot(211).xaxis.set_major_formatter(fmt)
plt.plot(bd.keys(),g,label='bd[i]-bias average')
plt.plot(bd.keys(),4*p,label='rms')
plt.legend()
plt.grid()
plt.title('if current bd>4*rms then it is a wide-lane cycle slip')
plt.ylabel('cycles')
plt.xlabel('time')
plt.show()
#ionospheric combination
LI = L1-L2 #(6)
PI = P2-P1 #(7)
#ionospheric cycle slip detection
plt.figure(figsize=(10,10))
# get x and y vectors
mask=~np.isnan(PI)
x = np.arange(len(PI[mask]))
y = PI[mask]
# calculate polynomial
z = np.polyfit(x, y, 6)
f = np.poly1d(z)
# calculate new x's and y's
x_new = np.linspace(x[0], x[-1], len(x))
Q = f(x_new)
residual = LI[mask]-Q
plt.plot(residual[1:])
plt.show()
for i in range(1,len(residual)):
if(residual[i]-residual[i-1]>1):
print(i,residual[i]-residual[i-1])
Explanation: try some stuff out from "An Automatic Editing Algorith for GPS Data" by Blewitt
End of explanation |
13,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding the most representative GWAS associated with cell-specific enhancers
(Execution on Google Cloud File System)
In this tutorial we are going to use a GWAS dataset (accessible from this link) together with the whole ENCODE BroadPeak dataset to find which mutations (and their associated traits) are most represented in enhancer regions which are present in a limited set of cells.
As first thing let's download the data.
Step1: In order to run the query on HDFS, we have to put the file there. We will use the bucket for this Terra Notebook.
Step2: Library imports
Step3: Setting the master of the cluster
In this example, the data reside in the HDFS of the spark cluster. Let's say that the cluster is managed by the YARN resource manager. We have therefore to tell PyGMQL to use it.
Step4: Loading of the GWAS dataset
In this example, we have loaded the GMQL repository on the Google Cloud Storage. It is convenient to store in a variable the path of the repository.
Step5: The GWAS data comes from a single TSV file. Therefore we can import it using the load_from_file function.
Notice that we have to specify a parser to properly load our data. Therefore it is wise to take a look at the schema of the downloaded file.
We are mainly interested in the mutation position (11-th and 12-th column) and the associated trait (7-th).
Step6: Inspecting the dataset
We can load a tiny part of the dataset to make sense of the data types and schema. You can inspect the dataset using the head function. This function returns a GDataframe object, which enables the access to regions (regs) and metadata (meta)
Step7: We can also simply look at the schema
Step8: Plotting the traits
We want to get an idea of the trait distribution. In order to do that we have to load the data in memory. Thereofre we can call the materialize function and take the regions.
Step9: We now plot the number of regions for each of the top 30 represented traits.
Step10: Loading of the ENCODE BroadPeak dataset
We now load the ENCODE BroadPeak dataset.
If the data come already in the GDM format, they can be loaded using the load_from_path function. A GDM dataset is stored as a folder having the following structure
Step11: Getting the enhancers
We identify enhancers thanks to the presence of H3K27ac peaks. We therefore select all acetylation peaks from ENCODE thanks to the experiment_target metadata attribute.
Step12: We get the peak region of the Chip-Seq using the reg_project function. The peak position (peak) is given by the center of the region.
$$
peak = \frac{right + left}{2}
$$
Step13: Once we have the peak, we extend the search region to $\pm 1500 bp$. We use again reg_project
Step14: Grouping by cell line and aggregating the signals
We are interested in enhancers which are cell specific. Therefore it is important to group our data by cell line. In addition to this we merge the signals coming from different tracks for the same cell line. We can do both of these actions using the normal_cover function.
Step15: To select only the cell-specific enhancers we can now apply again normal_cover and constraining the maximum number of overlaps between the regions to be a selected threshold.
In this case we select a threshold of 2.
Step16: Mapping mutations to cell specific enhancers
We now map the mutations in the GWAS dataset on the enhancer regions. We store the list of traits associated to each enhancer using the gl.BAG expression.
Step17: Materializing the result
We now can call the materialize function to execute the full query. The result will be collected in a GDataframe object.
Step18: The traits column of the resulting region is the list of traits associated with the cell specific enhancer. The data comes in the form of a string of trait names.
We convert the string to a list.
Step19: Analysis
The final part of the analysis regards the matching of cell lines and traits. We want to understand if a cell line (which is represented by its specific enhancers) has some particular mutation trait associated.
The analysis is performed in Pandas using the result region attributes traits and cell_line.
We build an association matrix between cell lines and traits by firstly converting the result to a list of (cell_line, trait), converting it to a Pandas DataFrame, and finally using the crosstab Pandas function to extract the matrix.
Step20: We finally plot the result as an heatmap. | Python Code:
%%bash
wget -q https://www.ebi.ac.uk/gwas/api/search/downloads/full -O tmp.tsv
cat tmp.tsv | \
awk 'BEGIN {FS="\t";OFS="\t"} {chrom=$12; gsub(chrom,"chr"chrom,$12)}{print $0}' | \
sed s/,//g > gwas.tsv
rm tmp.tsv
myBucket = "gs://fc-cad72548-2d6b-41ce-82aa-975cb7e8b764"
Explanation: Finding the most representative GWAS associated with cell-specific enhancers
(Execution on Google Cloud File System)
In this tutorial we are going to use a GWAS dataset (accessible from this link) together with the whole ENCODE BroadPeak dataset to find which mutations (and their associated traits) are most represented in enhancer regions which are present in a limited set of cells.
As first thing let's download the data.
End of explanation
!gsutil cp ./gwas.tsv $myBucket/
Explanation: In order to run the query on HDFS, we have to put the file there. We will use the bucket for this Terra Notebook.
End of explanation
import gmql as gl
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
Explanation: Library imports
End of explanation
gl.set_master("yarn")
Explanation: Setting the master of the cluster
In this example, the data reside in the HDFS of the spark cluster. Let's say that the cluster is managed by the YARN resource manager. We have therefore to tell PyGMQL to use it.
End of explanation
gmql_repository = "gs://geco_repository/"
gwas_path = myBucket + "/gwas.tsv"
Explanation: Loading of the GWAS dataset
In this example, we have loaded the GMQL repository on the Google Cloud Storage. It is convenient to store in a variable the path of the repository.
End of explanation
gwas = gl.load_from_file(gwas_path,
parser=gl.parsers.RegionParser(
chrPos=11,
startPos=12,
stopPos=12,
otherPos=[(7, "trait", 'string')]))
Explanation: The GWAS data comes from a single TSV file. Therefore we can import it using the load_from_file function.
Notice that we have to specify a parser to properly load our data. Therefore it is wise to take a look at the schema of the downloaded file.
We are mainly interested in the mutation position (11-th and 12-th column) and the associated trait (7-th).
End of explanation
gwas.head().regs
Explanation: Inspecting the dataset
We can load a tiny part of the dataset to make sense of the data types and schema. You can inspect the dataset using the head function. This function returns a GDataframe object, which enables the access to regions (regs) and metadata (meta)
End of explanation
gwas.schema
Explanation: We can also simply look at the schema
End of explanation
gwas_data = gwas.materialize().regs
Explanation: Plotting the traits
We want to get an idea of the trait distribution. In order to do that we have to load the data in memory. Thereofre we can call the materialize function and take the regions.
End of explanation
plt.figure(figsize=(20,5))
sns.countplot(data=gwas_data[gwas_data.trait.isin(
gwas_data.trait.value_counts().iloc[:30].index)], x='trait')
plt.xticks(rotation=90)
plt.title("Top represented GWAS traits", fontsize=20)
plt.show()
Explanation: We now plot the number of regions for each of the top 30 represented traits.
End of explanation
broad = gl.load_from_path(gmql_repository + "HG19_ENCODE_BROAD/")
broad.schema
Explanation: Loading of the ENCODE BroadPeak dataset
We now load the ENCODE BroadPeak dataset.
If the data come already in the GDM format, they can be loaded using the load_from_path function. A GDM dataset is stored as a folder having the following structure:
/path/to/dataset/:
- sample1.gdm
- sample1.gdm.meta
- sample2.gdm
- sample2.gdm.meta
- ...
- schema.xml
The first dataset we load is the one from the GWAS study.
End of explanation
acetyl = broad[broad['experiment_target'] == 'H3K27ac-human']
Explanation: Getting the enhancers
We identify enhancers thanks to the presence of H3K27ac peaks. We therefore select all acetylation peaks from ENCODE thanks to the experiment_target metadata attribute.
End of explanation
peaked = acetyl.reg_project(new_field_dict={
'peak': acetyl.right/2 + acetyl.left/2})
Explanation: We get the peak region of the Chip-Seq using the reg_project function. The peak position (peak) is given by the center of the region.
$$
peak = \frac{right + left}{2}
$$
End of explanation
enlarge = peaked.reg_project(new_field_dict={
'left': peaked.peak - 1500,
'right': peaked.peak + 1500})
Explanation: Once we have the peak, we extend the search region to $\pm 1500 bp$. We use again reg_project
End of explanation
enhancers_by_cell_line = enlarge.normal_cover(1, "ANY",
groupBy=['biosample_term_name'])
Explanation: Grouping by cell line and aggregating the signals
We are interested in enhancers which are cell specific. Therefore it is important to group our data by cell line. In addition to this we merge the signals coming from different tracks for the same cell line. We can do both of these actions using the normal_cover function.
End of explanation
max_overlapping = 2
cell_specific_enhancers = enhancers_by_cell_line.normal_cover(1, max_overlapping)
cell_specific_enhancers.schema
cell_specific_enhancers_by_cell_line = enhancers_by_cell_line.join(
cell_specific_enhancers,
[gl.DLE(0)], 'left',
refName="en", expName="csen")
Explanation: To select only the cell-specific enhancers we can now apply again normal_cover and constraining the maximum number of overlaps between the regions to be a selected threshold.
In this case we select a threshold of 2.
End of explanation
gwas.schema
enhancer_gwas = cell_specific_enhancers_by_cell_line.map(
gwas, refName="csen", expName="gwas",
new_reg_fields={'traits': gl.BAG('trait')})
enhancer_gwas = enhancer_gwas.reg_project(
["count_csen_gwas", "traits"],
new_field_dict={'cell_line': enhancer_gwas['csen.en.biosample_term_name','string']})
Explanation: Mapping mutations to cell specific enhancers
We now map the mutations in the GWAS dataset on the enhancer regions. We store the list of traits associated to each enhancer using the gl.BAG expression.
End of explanation
enhancer_gwas = enhancer_gwas.materialize()
Explanation: Materializing the result
We now can call the materialize function to execute the full query. The result will be collected in a GDataframe object.
End of explanation
enhancer_gwas.regs['traits'] = enhancer_gwas.regs.traits\
.map(lambda x: x.split(",") if pd.notnull(x) else x)
Explanation: The traits column of the resulting region is the list of traits associated with the cell specific enhancer. The data comes in the form of a string of trait names.
We convert the string to a list.
End of explanation
cell_trait = pd.DataFrame.from_records([(k, v) for k, vs in enhancer_gwas.regs[enhancer_gwas.regs.count_csen_gwas > 0]\
.groupby("cell_line").traits.sum().to_dict().items() for v in vs],
columns=['cell_line', 'trait'])
cross = pd.crosstab(cell_trait.cell_line, cell_trait.trait)
Explanation: Analysis
The final part of the analysis regards the matching of cell lines and traits. We want to understand if a cell line (which is represented by its specific enhancers) has some particular mutation trait associated.
The analysis is performed in Pandas using the result region attributes traits and cell_line.
We build an association matrix between cell lines and traits by firstly converting the result to a list of (cell_line, trait), converting it to a Pandas DataFrame, and finally using the crosstab Pandas function to extract the matrix.
End of explanation
plt.figure(figsize=(50, 15))
sns.heatmap(cross[cross.sum(0).sort_values(ascending=False).iloc[:100].index], cmap='Reds', vmax=70, linewidths=1, annot=True, cbar=False)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xlabel("Trait", fontsize=30)
plt.ylabel("Cell line", fontsize=30)
plt.show()
Explanation: We finally plot the result as an heatmap.
End of explanation |
13,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anti-Aliasing Functions in Interferometry
Step1: Test setup
We will use a field of view of 0.004 radian. We place one
source within the field of view ($l=m=0.001$) and another 5 times stronger source just outside ($l=m=0.0025$).
Step2: Simple Imaging
Imaging without convolution with just the first source within field of view
Step3: If we now again do simple imaging with both sources, we see that the strong
source at (0.0025, 0.0025) is getting "aliased" back into the field of view at (-0.0015, -0.0015)
Step4: Anti-aliasing function
This is an example anti-aliasing function to use. It is separable, so we can work equivalently with one- or two-dimensional representations
Step5: After FFT-ing and extracting the middle this is what the oversampled anti-aliasing
kernel looks like in grid space
Step6: Imaginary part is close to nil
Step7: Gridding with anti-aliasing function
This is the image of single source within field of view without correcting the taper. Note that brightness falls off
towards the edges of the picture. This is because applying the anti-aliasing convolution kernel is equivalent to multiplying the picture with the anti-aliasing function shown above.
Step8: However, as the anti-aliasing function never goes to zero, we can easily revert this effect by dividing out the anti-aliasing function
Step9: Now we have restored image performance with just a single source in the field of view. In fact,
imaging is a good deal cleaner than before (and the source slightly stronger), as with
oversampling we are now taking fractional coordinates of visibilities into account.
Bust most critically if we now add back the source outside of the field of view, it gets
suppressed strongly. Because of its strength we still see noise centered around its off-screen
position at (0.0025, 0.0025), but the source itself is gone | Python Code:
%matplotlib inline
import sys
sys.path.append('../..')
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = 12, 10
import numpy
import scipy
import scipy.special
from crocodile.clean import *
from crocodile.synthesis import *
from crocodile.simulate import *
from crocodile.antialias import *
from util.visualize import *
Explanation: Anti-Aliasing Functions in Interferometry
End of explanation
vlas = numpy.genfromtxt("../../data/configurations/VLA_A_hor_xyz.csv", delimiter=",")
uvw = xyz_to_baselines(vlas, numpy.arange(0,numpy.pi,0.04), numpy.pi/4) / 5
yyone = simulate_point(uvw, 0.001, 0.001)
yytwo = yyone + 5*simulate_point(uvw, 0.0025, 0.0025)
Explanation: Test setup
We will use a field of view of 0.004 radian. We place one
source within the field of view ($l=m=0.001$) and another 5 times stronger source just outside ($l=m=0.0025$).
End of explanation
theta = 0.004
lam = 30000
d,_,_=do_imaging(theta, lam, uvw, None, yyone, simple_imaging)
show_image(d, "simple[yyone]", theta)
print(d[40:60,40:60].std())
Explanation: Simple Imaging
Imaging without convolution with just the first source within field of view:
End of explanation
d,_,_=do_imaging(theta, lam, uvw, None, yytwo, simple_imaging)
show_image(d, "simple[yytwo]", theta)
print(d[40:60,40:60].std())
Explanation: If we now again do simple imaging with both sources, we see that the strong
source at (0.0025, 0.0025) is getting "aliased" back into the field of view at (-0.0015, -0.0015):
End of explanation
support = 6
aa = anti_aliasing_function(int(theta*lam), 0, support)
aa2 = numpy.outer(aa, aa)
pylab.rcParams['figure.figsize'] = 7, 5
pylab.plot(theta*coordinates(int(theta*lam)), aa); pylab.show()
show_image(aa2, "aa2", theta)
Explanation: Anti-aliasing function
This is an example anti-aliasing function to use. It is separable, so we can work equivalently with one- or two-dimensional representations:
End of explanation
oversample = 128
r = numpy.arange(-oversample*(support//2), oversample*((support+1)//2)) / oversample
kv=kernel_oversample(aa, oversample, support)
pylab.plot(r, numpy.transpose(kv).flatten().real);
Explanation: After FFT-ing and extracting the middle this is what the oversampled anti-aliasing
kernel looks like in grid space:
End of explanation
pylab.plot(r, numpy.transpose(kv)[::-1].flatten().imag);
Explanation: Imaginary part is close to nil:
End of explanation
d,_,_=do_imaging(theta, lam, uvw, None, yyone, conv_imaging, kv=kv)
pylab.rcParams['figure.figsize'] = 12, 10
show_image(d, "aa_{one}", theta)
print(d[40:60,40:60].std())
Explanation: Gridding with anti-aliasing function
This is the image of single source within field of view without correcting the taper. Note that brightness falls off
towards the edges of the picture. This is because applying the anti-aliasing convolution kernel is equivalent to multiplying the picture with the anti-aliasing function shown above.
End of explanation
show_image(d/numpy.outer(aa, aa), "aa'_{one}", theta)
print((d/aa2)[40:60,40:60].std())
Explanation: However, as the anti-aliasing function never goes to zero, we can easily revert this effect by dividing out the anti-aliasing function:
End of explanation
d,_,_=do_imaging(theta, lam, uvw, None, yytwo, conv_imaging, kv=kv)
show_image(d/numpy.outer(aa, aa), "aa'_{two}", theta)
print((d/aa2)[40:60,40:60].std())
Explanation: Now we have restored image performance with just a single source in the field of view. In fact,
imaging is a good deal cleaner than before (and the source slightly stronger), as with
oversampling we are now taking fractional coordinates of visibilities into account.
Bust most critically if we now add back the source outside of the field of view, it gets
suppressed strongly. Because of its strength we still see noise centered around its off-screen
position at (0.0025, 0.0025), but the source itself is gone:
End of explanation |
13,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 5
Step1: Table of Contents
1.- Cython Basic Usage
2.- Advanced usage
3.- Pure C in Python
<div id='cython' />
1.- Cython Basic Usage
Cython is both a Superset of Python and a Python Library that lets you combine C and Python in various ways. There are two main use-cases
Step2: Let's evaluate the performance for the first version
Step3: And now we write our first Cython version, by just adding %%cython magic in the first line of the cell
Step4: We achieve x2 speed improvement doing (practically) nothing!.
When we add %%cython at the beginning of the cell, the code gets compiled by Cython into a C extension. Then, this extension is loaded, and the compiled function is readily available in the interactive namespace.
Lets help the compiler by explicitly defining the type of the variables with the cdef macro/keyword
Step5: Then
Step6: Numba wins this time! but
Step7: Alternative usage of Cython
Step8: Now let's improve this naive Cython implementation by statically defining the types of the variables
Step9: Typed Memory Views
Typed memoryviews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead. Memoryviews are similar to the current NumPy array buffer support (np.ndarray[np.float64_t, ndim=2]), but they have more features and cleaner syntax.
They can handle a wider variety of sources of array data. For example, they can handle C arrays and the Cython array type (Cython arrays).
Syntaxis
Step10: Compiler optimization
With the -c option we can pass the compiler (gcc) optimization options. Below we use the most common of them
Step11: Compiler directives
Compiler directives are instructions which affect the behavior of Cython code.
cdivision (True / False)
* If set to False, Cython will adjust the remainder and quotient operators C types to match those of Python ints (which differ when the operands have opposite signs) and raise a ZeroDivisionError when the right operand is 0. This has up to a 35% speed penalty. If set to True.
boundscheck (True / False)
* If set to False, Cython is free to assume that indexing operations ([]-operator) in the code will not cause any IndexErrors to be raised. Lists, tuples, and strings are affected only if the index can be determined to be non-negative (or if wraparound is False). Conditions which would normally trigger an IndexError may instead cause segfaults or data corruption if this is set to False. Default is True.
nonecheck (True / False)
* If set to False, Cython is free to assume that native field accesses on variables typed as an extension type, or buffer accesses on a buffer variable, never occurs when the variable is set to None. Otherwise a check is inserted and the appropriate exception is raised. This is off by default for performance reasons. Default is False.
wraparound (True / False)
* In Python arrays can be indexed relative to the end. For example A[-1] indexes the last value of a list. In C negative indexing is not supported. If set to False, Cython will neither check for nor correctly handle negative indices, possibly causing segfaults or data corruption. Default is True.
initializedcheck (True / False)
* If set to True, Cython checks that a memoryview is initialized whenever its elements are accessed or assigned to. Setting this to False disables these checks. Default is True.
For all the compilation directives see here.
Step12: Pure C functions
With the cdef keywork we can realy define C function, as we shown below. In such functions all variable types should be defined and should have a return type, and can't be called directly in Python, i.e, only can be called by functions defined in the same module.
There is a midpoint between def and cdef which automatically creates a Python function with the same name, so the function can be called directly.
Step13: Example of cdef and cpdef
Step14: Function inlining
In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function.
As a rule of thumb
Step15: What about Numba?
Step16: 4.- Other advanced things you can do with Cython
We have seen that with Cython we can implement our algorithms achieving C performance. Moreover it is very versatile and we can do some other advanced thing with it | Python Code:
%matplotlib inline
import numpy as np
import numexpr as ne
import numba
import math
import random
import matplotlib.pyplot as plt
import scipy as sp
import sys
%load_ext Cython
Explanation: <h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 5: Accelerating Python with Cython: Writting C in Python </h2>
Notebook created by Martín Villanueva - [email protected] - DI UTFSM - May2017.
End of explanation
def primes_python(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
primes_python(20)
Explanation: Table of Contents
1.- Cython Basic Usage
2.- Advanced usage
3.- Pure C in Python
<div id='cython' />
1.- Cython Basic Usage
Cython is both a Superset of Python and a Python Library that lets you combine C and Python in various ways. There are two main use-cases:
1. Optimizing your Python code by statically compiling it to C.
2. Wrapping a C/C++ library in Python.
In order to get it properly working, you need Cython and a C compiler:
1. Cython: conda install cython
2. C compiler: Install GNU C compiler with your package manager (Unix/Linux) or install Xcode (OSX).
We will introduce the basic Cython usage by impementing the Eratosthenes Sieve Algorithm, which is an algorithm to find all prime numbers smaller than a given number.
End of explanation
tp = %timeit -o primes_python(10000)
Explanation: Let's evaluate the performance for the first version:
End of explanation
%%cython
def primes_cython1(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
tc1 = %timeit -o primes_cython1(10000)
Explanation: And now we write our first Cython version, by just adding %%cython magic in the first line of the cell:
End of explanation
%%cython
def primes_cython2(int n):
# Note the type declarations below
cdef list primes = [False, False] + [True] * (n - 2)
cdef int i = 2
cdef int k = 0
# The rest of the functions is unchanged
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
tc2 = %timeit -o primes_cython2(10000)
print("Cython version 1 speedup: {0}".format(tp.best/tc1.best))
print("Cython version 2 speedup: {0}".format(tp.best/tc2.best))
Explanation: We achieve x2 speed improvement doing (practically) nothing!.
When we add %%cython at the beginning of the cell, the code gets compiled by Cython into a C extension. Then, this extension is loaded, and the compiled function is readily available in the interactive namespace.
Lets help the compiler by explicitly defining the type of the variables with the cdef macro/keyword:
End of explanation
@numba.jit(nopython=True)
def primes_numba(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
res = []
for i in range(2,n):
if primes[i]: res.append(i)
return res
tn = %timeit -o primes_numba(10000)
Explanation: Then: In general, Cython will be the most efficient when it can compile data structures and operations directly to C by __making as few CPython API calls as possible__. Specifying the types of the variables often leads to greater speed improvements.
Just for curiosity let's see the performance Numba's JIT achieves:
End of explanation
%%cython -a
def primes_cython1(n):
primes = [False, False] + [True] * (n - 2)
i= 2
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
%%cython -a
def primes_cython2(int n):
# Note the type declarations below
cdef list primes = [False, False] + [True] * (n - 2)
cdef int i = 2
cdef int k = 0
# The rest of the functions is unchanged
while i < n:
# We do not deal with composite numbers.
if not primes[i]:
i += 1
continue
k= i+i
# We mark multiples of i as composite numbers.
while k < n:
primes[k] = False
k += i
i += 1
# We return all numbers marked with True.
return [i for i in range(2, n) if primes[i]]
Explanation: Numba wins this time! but: This is not the final form of Cython...
Inspecting Cython bottlenecks with annotations
We can inspect the C code generated by Cython with the -a argument. Let's inspect the code used above.
The non-optimized lines will be shown in a gradient of yellow (white lines are faster, yellow lines are slower), telling you which lines are the least efficiently compiled to C. By clicking on a line, you can see the generated C code corresponding to that line.
End of explanation
# Matrices to use
A = np.random.random((1000,3))
B = np.random.random((500,3))
def dist(a, b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_python(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i],B[j])
return D
%timeit distance_matrix_python(A,B)
%%cython -a
import numpy as np
def dist(a, b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_cython0(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i],B[j])
return D
%timeit distance_matrix_cython0(A,B)
Explanation: Alternative usage of Cython: Outside the notebook
If you want to use Cython outside the notebook (the way it was thought...), you have to do the work of the magic:
1. Write the function into a .pyx file.
2. Cythonize it with cython filename.pyx generating the filename.c file.
3. Compile it with GCC:
gcc -shared -fPIC -fwrapv -O3 -fno-strict-aliasing -I/home/mavillan/anaconda3/include/python3.6m -o primes.so primes.c
<div id='cython++' />
2.- Advanced usage
In this section we will consider the example of computing a distance matrix: Given the matrices $A_{m,3}$ and $B_{n,3}$ (each row is a 3D-position), the distance matrix has entries $D_{i,j} = d(A[i],B[j])$.
NumPy Arrays
You can use NumPy from Cython exactly the same as in regular Python, but by doing so you are losing potentially high speedups because Cython has support for fast access to NumPy arrays.
End of explanation
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
def dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):
return np.sqrt(np.sum((a-b)**2))
def distance_matrix_cython1(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython1(A,B)
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython2(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython2(A,B)
Explanation: Now let's improve this naive Cython implementation by statically defining the types of the variables:
End of explanation
%%cython -a
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython3(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython3(A,B)
Explanation: Typed Memory Views
Typed memoryviews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead. Memoryviews are similar to the current NumPy array buffer support (np.ndarray[np.float64_t, ndim=2]), but they have more features and cleaner syntax.
They can handle a wider variety of sources of array data. For example, they can handle C arrays and the Cython array type (Cython arrays).
Syntaxis: dtype[:,::1] where ::1 indicates the axis where elements are contiguous.
End of explanation
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython4(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i], B[j])
return D
%timeit -n 10 distance_matrix_cython4(A,B)
Explanation: Compiler optimization
With the -c option we can pass the compiler (gcc) optimization options. Below we use the most common of them:
End of explanation
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
def dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython5(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython5(A,B)
Explanation: Compiler directives
Compiler directives are instructions which affect the behavior of Cython code.
cdivision (True / False)
* If set to False, Cython will adjust the remainder and quotient operators C types to match those of Python ints (which differ when the operands have opposite signs) and raise a ZeroDivisionError when the right operand is 0. This has up to a 35% speed penalty. If set to True.
boundscheck (True / False)
* If set to False, Cython is free to assume that indexing operations ([]-operator) in the code will not cause any IndexErrors to be raised. Lists, tuples, and strings are affected only if the index can be determined to be non-negative (or if wraparound is False). Conditions which would normally trigger an IndexError may instead cause segfaults or data corruption if this is set to False. Default is True.
nonecheck (True / False)
* If set to False, Cython is free to assume that native field accesses on variables typed as an extension type, or buffer accesses on a buffer variable, never occurs when the variable is set to None. Otherwise a check is inserted and the appropriate exception is raised. This is off by default for performance reasons. Default is False.
wraparound (True / False)
* In Python arrays can be indexed relative to the end. For example A[-1] indexes the last value of a list. In C negative indexing is not supported. If set to False, Cython will neither check for nor correctly handle negative indices, possibly causing segfaults or data corruption. Default is True.
initializedcheck (True / False)
* If set to True, Cython checks that a memoryview is initialized whenever its elements are accessed or assigned to. Setting this to False disables these checks. Default is True.
For all the compilation directives see here.
End of explanation
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
cdef float64_t dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython6(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython6(A,B)
Explanation: Pure C functions
With the cdef keywork we can realy define C function, as we shown below. In such functions all variable types should be defined and should have a return type, and can't be called directly in Python, i.e, only can be called by functions defined in the same module.
There is a midpoint between def and cdef which automatically creates a Python function with the same name, so the function can be called directly.
End of explanation
%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
cdef float64_t test1(float64_t a, float64_t b):
return a+b
test1(1.,1.)
%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
cpdef float64_t test2(float64_t a, float64_t b):
return a+b
test2(1,1)
Explanation: Example of cdef and cpdef
End of explanation
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
import numpy as np
cimport numpy as cnp
ctypedef cnp.float64_t float64_t
from libc.math cimport sqrt
cdef inline float64_t dist(float64_t[::1] a, float64_t[::1] b):
cdef:
int i = 0
int n = a.shape[0]
float ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return sqrt(ret)
def distance_matrix_cython7(float64_t[:,::1] A, float64_t[:,::1] B):
cdef:
int m = A.shape[0]
int n = B.shape[0]
int i,j
float64_t[:,::1] D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_cython7(A,B)
Explanation: Function inlining
In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function.
As a rule of thumb: Some inlining will improve speed at very minor cost of space, but excess inlining will hurt speed, due to inlined code consuming too much of the instruction cache, and also cost significant space.
End of explanation
@numba.jit(nopython=True)
def dist(a, b):
n = a.shape[0]
ret = 0
for i in range(n):
ret += (a[i]-b[i])**2
return math.sqrt(ret)
@numba.jit(nopython=True)
def distance_matrix_numba(A, B):
m = A.shape[0]
n = B.shape[0]
D = np.empty((m,n))
for i in range(m):
for j in range(n):
D[i,j] = dist(A[i,:], B[j,:])
return D
%timeit -n 10 distance_matrix_numba(A,B)
Explanation: What about Numba?
End of explanation
%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing
#!python
#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False
cdef class A(object):
def d(self): return 0
cdef int c(self): return 0
cpdef int p(self): return 0
def test_def(self, long num):
while num > 0:
self.d()
num -= 1
def test_cdef(self, long num):
while num > 0:
self.c()
num -= 1
def test_cpdef(self, long num):
while num > 0:
self.p()
num -= 1
%%timeit n = 1000000
a1 = A()
a1.test_def(n)
%%timeit n = 1000000
a1 = A()
a1.test_cdef(n)
%%timeit n = 1000000
a1 = A()
a1.test_cpdef(n)
Explanation: 4.- Other advanced things you can do with Cython
We have seen that with Cython we can implement our algorithms achieving C performance. Moreover it is very versatile and we can do some other advanced thing with it:
Object-oriented programming: Classes and methods
To support object-oriented programming, Cython supports writing normal Python classes exactly as in Python.
End of explanation |
13,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyperparameters and Model Validation
<!--BOOK_INFORMATION-->
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
<!--NAVIGATION-->
< Introducing Scikit-Learn | Contents | Feature Engineering >
In the previous section, we saw the basic recipe for applying a supervised machine learning model
Step1: Next we choose a model and hyperparameters.
Here we'll use a k-neighbors classifier with n_neighbors=1.
- This is a very simple and intuitive model
- "the label of an unknown point is the same as the label of its closest training point"
Step2: Then
- we train the model, and
- use it to predict labels for data we already know
Step3: Finally, we compute the fraction of correctly labeled points
Step4: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
In fact, this approach contains a fundamental flaw
Step5: We see here a more reasonable result
Step6: What comes out are two accuracy scores, which
- we could calculate the mean value to get a better measure of the global model performance.
- This particular form of cross-validation is a two-fold cross-validation
- that is, one in which we have split the data into two sets and used each in turn as a validation set.
We could expand on this idea to use even more trials, and more folds in the data—for example, here is a visual depiction of five-fold cross-validation
Step7: Repeating the validation across different subsets of the data gives us an even better idea of the performance of the algorithm.
Scikit-Learn implements a number of useful cross-validation schemes that are useful in particular situations;
- these are implemented via iterators in the cross_validation module.
- For example, we might wish to go to the extreme case in which our number of folds is equal to the number of data points
Step8: Because we have 150 samples, the leave one out cross-validation yields scores for 150 trials, and
- the score indicates either successful (1.0) or unsuccessful (0.0) prediction.
- Taking the mean of these gives an estimate of the error rate
Step9: Other cross-validation schemes can be used similarly.
use IPython to explore the sklearn.cross_validation submodule, or
take a look at Scikit-Learn's online cross-validation documentation.
Selecting the Best Model
Question
Step10: Now let's create some data to which we will fit our model
Step11: We can now visualize our data, along with polynomial fits of several degrees
Step12: Question
The knob controlling model complexity in this case is the degree of the polynomial
what degree of polynomial provides a suitable trade-off between bias (under-fitting) and variance (over-fitting)?
We can make progress in this by visualizing the validation curve for this particular data and model;
- this can be done straightforwardly using the validation_curve convenience routine provided by Scikit-Learn.
- Given a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range
Step13: This shows precisely the qualitative behavior we expect
Step14: Notice that finding this optimal model did not actually require us to compute the training score,
- but examining the relationship between the training score and validation score can give us useful insight into the performance of the model.
Learning Curves
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data.
For example, let's generate a new dataset with a factor of five more points
Step15: We will duplicate the preceding code to plot the validation curve for this larger dataset;
- for reference let's over-plot the previous results as well
Step16: The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset.
It is clear from the validation curve that the larger dataset can support a much more complicated model
Step17: <img src = './img/figures/05.03-learning-curve2.png', width = 800px>
This is a valuable diagnostic
- it gives us a visual depiction of how our model responds to increasing training data.
When your learning curve has already converged
- adding more training data will not significantly improve the fit!
- in the left panel, with the learning curve for the degree-2 model.
The only way to increase the converged score is to use a different (usually more complicated) model.
- in the right panel
Step18: Notice that like a normal estimator, this has not yet been applied to any data.
Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way
Step19: Now that this is fit, we can ask for the best parameters as follows
Step20: Finally, if we wish, we can use the best model and show the fit to our data using code from before | Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
Explanation: Hyperparameters and Model Validation
<!--BOOK_INFORMATION-->
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
In the previous section, we saw the basic recipe for applying a supervised machine learning model:
Choose a class of model
Choose model hyperparameters
Fit the model to the training data
Use the model to predict labels for new data
The choice of model and the choice of hyperparameters are perhaps the most important steps.
- we need a way to validate that our model and our hyperparameters are a good fit to the data.
There are some pitfalls that you must avoid to do this effectively.
Thinking about Model Validation
In principle, model validation is very simple:
- Apply the trained model to test data,
- Compare the predictions to the known values.
This section covers two approaches:
- The use of holdout sets
- The use of cross-validation
Model validation the wrong way
Let's demonstrate the naive approach to validation using the Iris data
End of explanation
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
Explanation: Next we choose a model and hyperparameters.
Here we'll use a k-neighbors classifier with n_neighbors=1.
- This is a very simple and intuitive model
- "the label of an unknown point is the same as the label of its closest training point"
End of explanation
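To make the rule concrete, here is a minimal hand-rolled sketch of a single 1-nearest-neighbor prediction for one hypothetical query point (the estimator above does this, and much more, far more efficiently):
import numpy as np
x_query = X[0] + 0.01                                    # hypothetical query point near a training sample
distances = np.sqrt(((X - x_query) ** 2).sum(axis=1))    # Euclidean distance to every training point
y[np.argmin(distances)]                                  # label of the closest training point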
model.fit(X, y)
y_model = model.predict(X)
Explanation: Then
- we train the model, and
- use it to predict labels for data we already know:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(y, y_model)
Explanation: Finally, we compute the fraction of correctly labeled points:
End of explanation
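Equivalently (a quick sketch of what this score means), the accuracy is simply the mean of the element-wise comparison between predictions and true labels:
import numpy as np
np.mean(y == y_model)   # fraction of correctly labeled points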
from sklearn.model_selection import train_test_split
# split the data with 50% in each set
X1, X2, y1, y2 = train_test_split(X, y, random_state=0,
                                  train_size=0.5, test_size=0.5)
# fit the model on one set of data
model.fit(X1, y1)
# evaluate the model on the second set of data
y2_model = model.predict(X2)
accuracy_score(y2, y2_model)
Explanation: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
In fact, this approach contains a fundamental flaw:
it trains and evaluates the model on the same data.
Furthermore, the nearest neighbor model is an instance-based estimator that simply stores the training data
and predicts labels by comparing new data to these stored points:
except in contrived cases, it will get 100% accuracy every time!
Model validation the right way: Holdout sets
We hold back some subset of the data from the training of the model, and then
use this holdout set to check the model performance.
This splitting can be done using the train_test_split utility in Scikit-Learn:
End of explanation
# Here we do two validation trials,
# alternately using each half of the data as a holdout set.
# Using the split data from before, we could implement it like this:
y2_model = model.fit(X1, y1).predict(X2)
y1_model = model.fit(X2, y2).predict(X1)
accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
Explanation: We see here a more reasonable result:
- the nearest-neighbor classifier is about 90% accurate on this hold-out set.
- The hold-out set is similar to unknown data, because the model has not "seen" it before.
Model validation via cross-validation
One disadvantage of using a holdout set for model validation is that
- we have lost a portion of our data to the model training.
- In the above case, half the dataset does not contribute to the training of the model!
- This is not optimal, and can cause problems
- especially if the initial set of training data is small.
To address this, cross-validation does a sequence of fits where each subset of the data is used both as a training set and as a validation set.
Model validation via cross-validation
Visually, it might look something like this:
figure source in Appendix
End of explanation
# We can use Scikit-Learn's cross_val_score convenience routine to do it succinctly:
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)
Explanation: What comes out are two accuracy scores, which
- we could combine (for example, by taking the mean) to get a better measure of the global model performance.
- This particular form of cross-validation is a two-fold cross-validation
- that is, one in which we have split the data into two sets and used each in turn as a validation set.
We could expand on this idea to use even more trials, and more folds in the data—for example, here is a visual depiction of five-fold cross-validation:
figure source in Appendix
Here we split the data into five groups, and use each of them in turn to evaluate the model fit on the other 4/5 of the data.
End of explanation
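To make explicit what cross_val_score is doing here, a hedged sketch of the roughly equivalent manual loop (for a classifier, an integer cv defaults to stratified folds):
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
manual_scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    fold_model = KNeighborsClassifier(n_neighbors=1).fit(X[train_idx], y[train_idx])
    manual_scores.append(fold_model.score(X[test_idx], y[test_idx]))
manual_scores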
from sklearn.model_selection import LeaveOneOut
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
scores
Explanation: Repeating the validation across different subsets of the data gives us an even better idea of the performance of the algorithm.
Scikit-Learn implements a number of useful cross-validation schemes that are useful in particular situations;
- these are implemented via iterators in the model_selection module.
- For example, we might wish to go to the extreme case in which our number of folds is equal to the number of data points:
- we train on all points but one in each trial.
- This type of cross-validation is known as leave-one-out cross-validation
End of explanation
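As a quick sketch confirming that this scheme produces one fold per sample:
LeaveOneOut().get_n_splits(X), len(scores)   # both equal the number of samples (150 for iris)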
scores.mean()
Explanation: Because we have 150 samples, the leave-one-out cross-validation yields scores for 150 trials, and
- the score indicates either successful (1.0) or unsuccessful (0.0) prediction.
- Taking the mean of these gives an estimate of the accuracy (one minus the error rate):
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
Explanation: Other cross-validation schemes can be used similarly.
For a description of what is available, use IPython to explore the sklearn.model_selection submodule, or
take a look at Scikit-Learn's online cross-validation documentation.
Selecting the Best Model
Question: if our estimator is underperforming, how should we move forward?
Use a more complicated/more flexible model
Use a less complicated/less flexible model
Gather more training samples
Gather more data to add features to each sample
Selecting the Best Model
The answer to this question is often counter-intuitive.
In particular, sometimes
- using a more complicated model will give worse results
- adding more training samples may not improve your results!
The ability to determine what steps will improve your model is what separates the successful machine learning practitioners from the unsuccessful.
The Bias-variance trade-off
Fundamentally, the question of "the best model" is about finding a sweet spot in the tradeoff between bias and variance.
Consider the following figure, which presents two regression fits to the same dataset:
figure source in Appendix
The Bias-variance trade-off
The model on the left
- Because the data are intrinsically more complicated than a straight line, the straight-line model will never be able to describe this dataset well.
- Such a model is said to underfit the data:
- it does not have enough model flexibility to suitably account for all the features in the data;
- the model has high bias.
The Bias-variance trade-off
The model on the right
- Here the model fit has enough flexibility to nearly perfectly account for the fine features in the data,
- but even though it very accurately describes the training data, its precise form seems to be more reflective of the particular noise properties of the data rather than the intrinsic properties of whatever process generated that data.
- Such a model is said to overfit the data:
- it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution;
- the model has high variance.
To look at this in another light, consider what happens if we use these two models to predict the y-value for some new data.
In the following diagrams, the red/lighter points indicate data that is omitted from the training set:
figure source in Appendix
The score used here is the $R^2$ score, or coefficient of determination,
- which measures how well a model performs relative to a simple mean of the target values.
- $R^2=1$ indicates a perfect match
- $R^2=0$ indicates the model does no better than simply taking the mean of the data
- $R^2<0$ indicates an even worse model (the score can be arbitrarily negative).
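For reference, a minimal sketch (on hypothetical arrays) of what this score computes, checked against Scikit-Learn's implementation:
import numpy as np
from sklearn.metrics import r2_score
y_true = np.array([3.0, 5.0, 7.0, 9.0])          # hypothetical target values
y_pred = np.array([2.5, 5.0, 7.5, 9.0])          # hypothetical model predictions
ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares around the mean
1 - ss_res / ss_tot, r2_score(y_true, y_pred)    # the two values agree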
From the scores associated with these two models, we can make an observation that holds more generally:
For high-bias models, the performance of the model on the validation set is similar to the performance on the training set.
For high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.
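A hedged sketch of these two regimes on a small hypothetical 1-D regression dataset (not the data from the figures above), reusing the PolynomialRegression helper defined earlier; typically the high-degree fit scores far better on the training set than on the validation set:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
rng = np.random.RandomState(42)
x_demo = rng.rand(50, 1)                                         # hypothetical inputs
y_demo = 10 - 1. / (x_demo.ravel() + 0.1) + 0.5 * rng.randn(50)  # hypothetical noisy targets
xtr, xval, ytr, yval = train_test_split(x_demo, y_demo, random_state=0)
for deg in [1, 20]:   # a high-bias and a high-variance model
    fit = PolynomialRegression(deg).fit(xtr, ytr)
    print(deg,
          r2_score(ytr, fit.predict(xtr)),     # training score
          r2_score(yval, fit.predict(xval)))   # validation score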
If we imagine that we have some ability to tune the model complexity, we would expect the training score and validation score to behave as illustrated in the following figure:
figure source in Appendix
The diagram is often called a validation curve, and we see the following essential features:
- The training score is everywhere higher than the validation score. This is generally the case:
the model will be a better fit to data it has seen than to data it has not seen.
- For very low model complexity (a high-bias model), the training data is under-fit:
the model is a poor predictor both for the training data and for any previously unseen data.
- For very high model complexity (a high-variance model), the training data is over-fit:
the model predicts the training data very well, but fails for any previously unseen data.
- For some intermediate value, the validation curve has a maximum.
This level of complexity indicates a suitable trade-off between bias and variance.
The means of tuning the model complexity varies from model to model.
Validation curves in Scikit-Learn
Let's use cross-validation to compute the validation curve for a class of models.
Here we will use a polynomial regression model:
- a generalized linear model in which the degree of the polynomial is a tunable parameter.
For example, a degree-1 polynomial fits a straight line to the data; for model parameters $a$ and $b$:
$$
y = ax + b
$$
A degree-3 polynomial fits a cubic curve to the data; for model parameters $a, b, c, d$:
$$
y = ax^3 + bx^2 + cx + d
$$
In Scikit-Learn, we can implement this with a simple linear regression combined with the polynomial preprocessor.
We will use a pipeline to string these operations together (we will discuss polynomial features and pipelines more fully in Feature Engineering):
End of explanation
import numpy as np
def make_data(N, err=1.0, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
X, y = make_data(40)
Explanation: Now let's create some data to which we will fit our model:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # plot formatting
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
plt.scatter(X.ravel(), y, color='black')
axis = plt.axis()
for degree in [1, 3, 5]:
y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
plt.xlim(-0.1, 1.0)
plt.ylim(-2, 12)
plt.legend(loc='best');
Explanation: We can now visualize our data, along with polynomial fits of several degrees:
End of explanation
from sklearn.learning_curve import validation_curve
degree = np.arange(0, 21)
train_score, val_score = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree',
degree, cv=7)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
Explanation: Question
The knob controlling model complexity in this case is the degree of the polynomial
what degree of polynomial provides a suitable trade-off between bias (under-fitting) and variance (over-fitting)?
We can make progress in this by visualizing the validation curve for this particular data and model;
- this can be done straightforwardly using the validation_curve convenience routine provided by Scikit-Learn.
- Given a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range:
End of explanation
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
Explanation: This shows precisely the qualitative behavior we expect:
the training score is everywhere higher than the validation score;
the training score is monotonically improving with increased model complexity;
the validation score reaches a maximum before dropping off as the model becomes over-fit.
The optimal trade-off between bias and variance is found for a third-order polynomial;
- we can compute and display this fit over the original data as follows:
End of explanation
X2, y2 = make_data(200)
plt.scatter(X2.ravel(), y2);
Explanation: Notice that finding this optimal model did not actually require us to compute the training score,
- but examining the relationship between the training score and validation score can give us useful insight into the performance of the model.
Learning Curves
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data.
For example, let's generate a new dataset with a factor of five more points:
End of explanation
degree = np.arange(21)
train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,
'polynomialfeatures__degree', degree, cv=7)
plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
plt.plot(degree, np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
plt.plot(degree, np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
plt.legend(loc='lower center')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
Explanation: We will duplicate the preceding code to plot the validation curve for this larger dataset;
- for reference let's over-plot the previous results as well:
End of explanation
from sklearn.learning_curve import learning_curve
import warnings
warnings.filterwarnings("ignore")
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for i, degree in enumerate([2, 9]):
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),
X, y, cv=7,
train_sizes=np.linspace(0.3, 1, 25))
ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax[i].set_ylim(0, 1)
ax[i].set_xlim(N[0], N[-1])
ax[i].set_xlabel('training size', fontsize = 30)
ax[i].set_ylabel('score', fontsize = 30)
ax[i].set_title('degree = {0}'.format(degree), size=24)
ax[i].legend(loc='best', fontsize = 30)
#fig.savefig('figures/05.03-learning-curve2.png')
Explanation: The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset.
It is clear from the validation curve that the larger dataset can support a much more complicated model:
the peak here is around a degree of 6, but a degree-20 model is not seriously over-fitting the data
the validation and training scores remain very close.
Thus we see that the behavior of the validation curve has not one but two important inputs:
the model complexity
the number of training points.
A plot of the training/validation score with respect to the size of the training set is known as a learning curve.
The general behavior we would expect from a learning curve is this:
A model of a given complexity will overfit a small dataset:
the training score will be relatively high, while the validation score will be relatively low.
A model of a given complexity will underfit a large dataset:
the training score will decrease, but the validation score will increase.
A model will never, except by chance, give a better score to the validation set than the training set:
the curves should keep getting closer together but never cross.
With these features in mind, we would expect a learning curve to look qualitatively like that shown in the following figure:
figure source in Appendix
The notable feature of the learning curve
- The convergence to a particular score as the number of training samples grows.
- once you have enough points that a particular model has converged, adding more training data will not help you!
- The only way to increase model performance in this case is to use another (often more complex) model.
Learning curves in Scikit-Learn
Scikit-Learn offers a convenient utility for computing such learning curves from your models;
here we will compute a learning curve for our original dataset with a second-order polynomial model and a ninth-order polynomial:
End of explanation
from sklearn.grid_search import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
'linearregression__fit_intercept': [True, False],
'linearregression__normalize': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
Explanation: <img src = './img/figures/05.03-learning-curve2.png', width = 800px>
This is a valuable diagnostic
- it gives us a visual depiction of how our model responds to increasing training data.
When your learning curve has already converged
- adding more training data will not significantly improve the fit!
- in the left panel, with the learning curve for the degree-2 model.
The only way to increase the converged score is to use a different (usually more complicated) model.
- in the right panel: by moving to a much more complicated model, we increase the score of convergence (indicated by the dashed line)
- at the expense of higher model variance (indicated by the difference between the training and validation scores).
If we were to add even more data points, the learning curve for the more complicated model would eventually converge.
Plotting a learning curve for your particular choice of model and dataset can help you to make this type of decision about how to move forward in improving your analysis.
Validation in Practice: Grid Search
The trade-off between bias and variance, and its dependence on model complexity and training set size.
In practice, models generally have more than one knob to turn
plots of validation and learning curves change from lines to multi-dimensional surfaces.
such visualizations are difficult
we would rather simply find the particular model that maximizes the validation score.
Validation in Practice: Grid Search
Scikit-Learn provides automated tools to do this in the grid search module.
Here is an example of using grid search to find the optimal polynomial model.
We will explore a three-dimensional grid of model features;
- the polynomial degree,
- the flag telling us whether to fit the intercept
- the flag telling us whether to normalize the problem.
This can be set up using Scikit-Learn's GridSearchCV meta-estimator:
End of explanation
grid.fit(X, y);
Explanation: Notice that like a normal estimator, this has not yet been applied to any data.
Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way:
End of explanation
grid.best_params_
Explanation: Now that this is fit, we can ask for the best parameters as follows:
End of explanation
model = grid.best_estimator_
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = model.fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, hold=True);
plt.axis(lim);
Explanation: Finally, if we wish, we can use the best model and show the fit to our data using code from before:
End of explanation |
13,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
La documentación necesaria para poder superar este ejercicio se encuentra en la documentación de ERPpeek
Tarea 1 - Conexión
Demuestra que sabes conectarte a una instancia de Odoo y listar todas sus bases de datos.
Step1: Tarea 2 - Volcado de datos
Demuestra que puedes imprimir el id y el nombre de todos los usuarios (son registros del modelo res.users).
Step2: Tarea 3 - Crear y configurar una base de datos
Demuestra que sabes crear una base de datos, listar todos los módulos instalados por defecto, y si no está presente un módulo determinado instalarlo.
Step3: Tarea 4 - Explorar un modelo
Demuesta que sabes listar todos los campos del modelo res.users, incluyendo nombre, tipo y etiqueta
Step4: Tarea 5 - Poblar un modelo
Crea el código neccesario para migrar los usuarios de una base de datos a otra base de datos. No es necesario migrar todos los campos. Basta con una prueba de concepto. | Python Code:
client = erppeek.Client(server=SERVER)
for database in client.db.list():
print('Base de datos: %r' % (database,))
Explanation: La documentación necesaria para poder superar este ejercicio se encuentra en la documentación de ERPpeek
Tarea 1 - Conexión
Demuestra que sabes conectarte a una instancia de Odoo y listar todas sus bases de datos.
End of explanation
client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
proxy = client.model('res.users')
users = proxy.browse([])
for user in users:
a = "{user.id} {user.name}".format(user=user)
print(a)
Explanation: Tarea 2 - Volcado de datos
Demuestra que puedes imprimir el id y el nombre de todos los usuarios (son registros del modelo res.users).
End of explanation
DATABASE = 'trabajoPython'
ADMIN_PASSWORD = 'admin'
client = erppeek.Client(server=SERVER)
if not DATABASE in client.db.list():
print("La BD no existe y se va a crear...")
client.create_database(ADMIN_PASSWORD, DATABASE)
print("Base de datos creada")
# Procedemos a listar todos los modulos instalados por defecto
installed_modules = client.modules(installed=True)
print("Lista de modulos instalados:")
for module in installed_modules['installed']:
print(module)
# Se comprueba si esta presente el modulo CRM
print("Comprobando modulo CRM...")
modules = client.modules('crm', installed=False)
if 'crm' in modules['uninstalled']:
# Si no esta instalado se instala
client.install('crm')
print("Modulo CRM instalado.")
else:
# El modulo ya esta instalado
print("El modulo CRM ya estaba instalado...")
Explanation: Tarea 3 - Crear y configurar una base de datos
Demuestra que sabes crear una base de datos, listar todos los módulos instalados por defecto, y si no está presente un módulo determinado instalarlo.
End of explanation
DATABASE = 'desarrollo'
client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
proxy = client.model('res.users')
users = proxy.browse([])
for user in users:
print("Usuario: {user.name}, Tipo: {user.type}, Etiqueta (alias): {user.alias_id}".format(user=user))
Explanation: Tarea 4 - Explorar un modelo
Demuesta que sabes listar todos los campos del modelo res.users, incluyendo nombre, tipo y etiqueta
End of explanation
DATABASE1 = 'desarrollo'
DATABASE2 = 'sandbox'
USERNAME1 = '[email protected]'
PASSWORD1 = 'platano-1'
USERNAME2 = 'admin'
PASSWORD2 = 'admin'
origen = erppeek.Client(SERVER, DATABASE1, USERNAME1, PASSWORD1)
destino = erppeek.Client(SERVER, DATABASE2, USERNAME2, PASSWORD2)
proxy1 = origen.model('res.users')
proxy2 = destino.model('res.users')
users = proxy1.browse([])
print("Migrando usuarios entre origen y destino...")
for user in users:
login = user.login
name = user.name
password = user.password
proxy2.create({'login': login, 'name': name, 'password' : password})
print("Usuario: " + name + ". Creado correctamente")
print("Se han migrado los usuarios correctamente.")
Explanation: Tarea 5 - Poblar un modelo
Crea el código neccesario para migrar los usuarios de una base de datos a otra base de datos. No es necesario migrar todos los campos. Basta con una prueba de concepto.
End of explanation |
13,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building deep retrieval models
Learning Objectives
Converting raw input examples into feature embeddings.
Splitting the data into a training set and a testing set.
Configuring the deeper model with losses and metrics.
Introduction
In the featurization tutorial we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our user model incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies
Step1: NOTE
Step2: NOTE
Step3: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
Step4: In this tutorial we will use the models from the featurization tutorial to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
Step5: We also do some housekeeping to prepare feature vocabularies.
Step6: Model definition
Query model
We start with the user model defined in the featurization tutorial as the first layer of our model, tasked with converting raw input examples into feature embeddings.
Step9: Defining deeper models will require us to stack mode layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern
Step10: The layer_sizes parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
Candidate model
We can adopt the same approach for the movie model. Again, we start with the MovieModel from the featurization tutorial
Step13: And expand it with hidden layers
Step14: Combined model
With both QueryModel and CandidateModel defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
Step15: Training the model
Prepare the data
We first split the data into a training set and a testing set.
Step16: Shallow model
We're ready to try out our first, shallow, model!
NOTE
Step17: This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
Deeper model
What about a deeper model with two layers?
NOTE
Step18: The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this
Step19: Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers
Step20: In fact, we don't see improvement over the shallow model | Python Code:
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
Explanation: Building deep retrieval models
Learning Objectives
Converting raw input examples into feature embeddings.
Splitting the data into a training set and a testing set.
Configuring the deeper model with losses and metrics.
Introduction
In the featurization tutorial we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our user model incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.
Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.
Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful hyperparameter tuning. For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Preliminaries
We first import the necessary packages.
End of explanation
!pip install tensorflow==2.5.0
Explanation: NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.
End of explanation
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
Explanation: NOTE: Please ignore any incompatibility warnings and errors.
NOTE: Restart your kernel to use updated packages.
End of explanation
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
Explanation: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
Explanation: In this tutorial we will use the models from the featurization tutorial to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
End of explanation
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
Explanation: We also do some housekeeping to prepare feature vocabularies.
End of explanation
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
Explanation: Model definition
Query model
We start with the user model defined in the featurization tutorial as the first layer of our model, tasked with converting raw input examples into feature embeddings.
End of explanation
class QueryModel(tf.keras.Model):
Model for encoding user queries.
def __init__(self, layer_sizes):
Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
Explanation: Defining deeper models will require us to stack mode layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
+----------------------+
| 128 x 64 |
+----------------------+
| relu
+--------------------------+
| 256 x 128 |
+--------------------------+
| relu
+------------------------------+
| ... x 256 |
+------------------------------+
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.
We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
End of explanation
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
Explanation: The layer_sizes parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
Candidate model
We can adopt the same approach for the movie model. Again, we start with the MovieModel from the featurization tutorial:
End of explanation
class CandidateModel(tf.keras.Model):
Model for encoding movies.
def __init__(self, layer_sizes):
Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
Explanation: And expand it with hidden layers:
End of explanation
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
Explanation: Combined model
With both QueryModel and CandidateModel defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
End of explanation
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
Explanation: Training the model
Prepare the data
We first split the data into a training set and a testing set.
End of explanation
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
Explanation: Shallow model
We're ready to try out our first, shallow, model!
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
Explanation: This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
Deeper model
What about a deeper model with two layers?
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
Explanation: The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this:
End of explanation
# Model extends the depth to three layers
# TODO 3a -- your code goes here
Explanation: Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers:
NOTE: The below cell will take approximately 15~20 minutes to get executed completely.
End of explanation
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
Explanation: In fact, we don't see improvement over the shallow model:
End of explanation |
13,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step2: The estimation game
Root mean squared error is one of several ways to summarize the average error of an estimation process.
Step4: The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
Step6: Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors.
Estimating variance
The obvious way to estimate the variance of a population is to compute the variance of the sample, $S^2$, but that turns out to be a biased estimator; that is, in the long run, the average error doesn't converge to 0.
The following function computes the mean error for a collection of estimates.
Step7: The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and two estimates for each sample, $S^2$ and $S_{n-1}^2$.
Step8: The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters.
The sampling distribution
The following function simulates experiments where we estimate the mean of a population using $\bar{x}$, and returns a list of estimates, one from each experiment.
Step9: Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
Step10: The mean of the sample means is close to the actual value of $\mu$.
Step11: An interval that contains 90% of the values in the sampling disrtribution is called a 90% confidence interval.
Step12: And the RMSE of the sample means is called the standard error.
Step13: Confidence intervals and standard errors quantify the variability in the estimate due to random sampling.
Estimating rates
The following function simulates experiments where we try to estimate the mean of an exponential distribution using the mean and median of a sample.
Step14: The RMSE is smaller for the sample mean than for the sample median.
But neither estimator is unbiased.
Exercises
Exercise
Step15: Exercise
Step17: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import brfss
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
def RMSE(estimates, actual):
Computes the root mean squared error of a sequence of estimates.
estimate: sequence of numbers
actual: actual value
returns: float RMSE
e2 = [(estimate-actual)**2 for estimate in estimates]
mse = np.mean(e2)
return np.sqrt(mse)
Explanation: The estimation game
Root mean squared error is one of several ways to summarize the average error of an estimation process.
End of explanation
import random
def Estimate1(n=7, iters=1000):
Evaluates RMSE of sample mean and median as estimators.
n: sample size
iters: number of iterations
mu = 0
sigma = 1
means = []
medians = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for _ in range(n)]
xbar = np.mean(xs)
median = np.median(xs)
means.append(xbar)
medians.append(median)
print('Experiment 1')
print('rmse xbar', RMSE(means, mu))
print('rmse median', RMSE(medians, mu))
Estimate1()
Explanation: The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
End of explanation
def MeanError(estimates, actual):
Computes the mean error of a sequence of estimates.
estimate: sequence of numbers
actual: actual value
returns: float mean error
errors = [estimate-actual for estimate in estimates]
return np.mean(errors)
Explanation: Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors.
Estimating variance
The obvious way to estimate the variance of a population is to compute the variance of the sample, $S^2$, but that turns out to be a biased estimator; that is, in the long run, the average error doesn't converge to 0.
The following function computes the mean error for a collection of estimates.
End of explanation
def Estimate2(n=7, iters=1000):
mu = 0
sigma = 1
estimates1 = []
estimates2 = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for i in range(n)]
biased = np.var(xs)
unbiased = np.var(xs, ddof=1)
estimates1.append(biased)
estimates2.append(unbiased)
print('mean error biased', MeanError(estimates1, sigma**2))
print('mean error unbiased', MeanError(estimates2, sigma**2))
Estimate2()
Explanation: The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and two estimates for each sample, $S^2$ and $S_{n-1}^2$.
End of explanation
def SimulateSample(mu=90, sigma=7.5, n=9, iters=1000):
xbars = []
for j in range(iters):
xs = np.random.normal(mu, sigma, n)
xbar = np.mean(xs)
xbars.append(xbar)
return xbars
xbars = SimulateSample()
Explanation: The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters.
The sampling distribution
The following function simulates experiments where we estimate the mean of a population using $\bar{x}$, and returns a list of estimates, one from each experiment.
End of explanation
cdf = thinkstats2.Cdf(xbars)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Sample mean',
ylabel='CDF')
Explanation: Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
End of explanation
np.mean(xbars)
Explanation: The mean of the sample means is close to the actual value of $\mu$.
End of explanation
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
Explanation: An interval that contains 90% of the values in the sampling disrtribution is called a 90% confidence interval.
End of explanation
stderr = RMSE(xbars, 90)
stderr
Explanation: And the RMSE of the sample means is called the standard error.
End of explanation
def Estimate3(n=7, iters=1000):
lam = 2
means = []
medians = []
for _ in range(iters):
xs = np.random.exponential(1.0/lam, n)
L = 1 / np.mean(xs)
Lm = np.log(2) / thinkstats2.Median(xs)
means.append(L)
medians.append(Lm)
print('rmse L', RMSE(means, lam))
print('rmse Lm', RMSE(medians, lam))
print('mean error L', MeanError(means, lam))
print('mean error Lm', MeanError(medians, lam))
Estimate3()
Explanation: Confidence intervals and standard errors quantify the variability in the estimate due to random sampling.
Estimating rates
The following function simulates experiments where we try to estimate the mean of an exponential distribution using the mean and median of a sample.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: The RMSE is smaller for the sample mean than for the sample median.
But neither estimator is unbiased.
Exercises
Exercise: In this chapter we used $\bar{x}$ and median to estimate µ, and found that $\bar{x}$ yields lower MSE. Also, we used $S^2$ and $S_{n-1}^2$ to estimate σ, and found that $S^2$ is biased and $S_{n-1}^2$ unbiased.
Run similar experiments to see if $\bar{x}$ and median are biased estimates of µ. Also check whether $S^2$ or $S_{n-1}^2$ yields a lower MSE.
End of explanation
# Solution goes here
# Solution goes here
Explanation: Exercise: Suppose you draw a sample with size n=10 from an exponential distribution with λ=2. Simulate this experiment 1000 times and plot the sampling distribution of the estimate L. Compute the standard error of the estimate and the 90% confidence interval.
Repeat the experiment with a few different values of n and make a plot of standard error versus n.
End of explanation
def SimulateGame(lam):
Simulates a game and returns the estimated goal-scoring rate.
lam: actual goal scoring rate in goals per game
goals = 0
t = 0
while True:
time_between_goals = random.expovariate(lam)
t += time_between_goals
if t > 1:
break
goals += 1
# estimated goal-scoring rate is the actual number of goals scored
L = goals
return L
# Solution goes here
# Solution goes here
Explanation: Exercise: In games like hockey and soccer, the time between goals is roughly exponential. So you could estimate a team’s goal-scoring rate by observing the number of goals they score in a game. This estimation process is a little different from sampling the time between goals, so let’s see how it works.
Write a function that takes a goal-scoring rate, lam, in goals per game, and simulates a game by generating the time between goals until the total time exceeds 1 game, then returns the number of goals scored.
Write another function that simulates many games, stores the estimates of lam, then computes their mean error and RMSE.
Is this way of making an estimate biased?
End of explanation |
13,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpretation and data acquisition strategies of seismic refraction data
In the <a href="https
Step1: Data
Below, we show 3 plots
Step2: Setup for the seismic refraction survey
Consider a shot gather for seismic refraction survey, which means we have one shot (source), and multiple receivers (12). Shot location is fixed at x=0. There are two survey parameters
Step3: Interpretation of seismic refraction data
Assume that you have seismic refraction data. The structure of the earth is unknown and you may want to obtain useful information about the subsurface. We will assume that the subsurface in the survey area has a three-layer structure and that the velocities increase with depth.
Thus, there can be four unknowns | Python Code:
plotWavelet()
Explanation: Interpretation and data acquisition strategies of seismic refraction data
In the <a href="https://www.3ptscience.com/app/SeismicRefraction">3pt Science app</a>, you explored the expected arrival times for refractions and reflections from a two-layer over a half-space model.
In this notebook, we will use synthetic seismic data to examine the impact of survey parameters on the expected seismic data.
Source
In an ideal case, the source wavelet would be an impulse (ie. an instantaneous spike). However, in reality, the source energy is spread in space and in time (see the <a href="http://gpg.geosci.xyz/content/seismic/wave_basics.html#waves-and-rays">GPG: Waves and Rays</a>). The source wavelet used for these examples is shown below.
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(15,6))
ax[0].set_title('Expected Arrival Times')
ax[1].set_title('Clean Data')
ax[2].set_title('Noisy Data')
ax[0]=viewTXdiagram(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[0])
ax[1]=plotWiggleTX(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[1])
ax[2]=plotWiggleTX(x0=1., dx=8, v1=400., v2=1000., v3=1500., z1=5., z2=15., ax=ax[2], noise=True)
plt.show()
Explanation: Data
Below, we show 3 plots:
- left: expected arrival times for the direct, refracted waves and reflection from the first layer
- center: clean data - the wavelet arriving at the expected arrival time. Each line represents what would be recorded by an ideal geophone.
- right: noisy data - clean data + random noise.
The model used is the same as is in the lab write-up:
- v1 = 400 m/s
- v2 = 1000 m/s
- v3 = 1500 m/s
- z1 = 5m (depth to layer 1)
- z2 = 15m (depth to layer 2)
End of explanation
makeinteractSeisRefracSurvey()
Explanation: Setup for the seismic refraction survey
Consider a shot gather for seismic refraction survey, which means we have one shot (source), and multiple receivers (12). Shot location is fixed at x=0. There are two survey parameters:
x0: offset between shot and the first geophone
dx: spacing between two consecutive geophones
In the widget below you can alter x0 or dx to change your survey setup. Run the next cell then try to change x0 and dx in the cell below that. Note that the next two cells are designed to help you visualize the survey layout. The x0 and dx parameter adjustment sliders here are not linked to the widget at the end of this notebook.
End of explanation
makeinteractTXwigglediagram()
Explanation: Interpretation of seismic refraction data
Assume that you have seismic refraction data. The structure of the earth is unknown and you may want to obtain useful information about the subsurface. We will assume that the subsurface in the survey area has a three-layer structure and that the velocities increase with depth.
Thus, there can be four unknowns:
v1: velocity of the first layer (m/s)
v2: velocity of the second layer (m/s)
v3: velocity of the third layer (m/s)
z1: depth of the first layer (m)
z2: depth of the second layer (m)
Based on the above information, we may expect to have up to four arrivals at a geophone, related to
Direct
Reflected: interface 1
Refraction: interface 1
Refraction: interface 2
The widget below will allow you to estimate the layer depths and velocities. The parameters for the widget are:
x0: offset between shot and the first geophone
dx: spacing between two consecutive geophones
Fit: checking this activates fittting function (if you click this red line will show up)
tI: intercept time for a line function (s)
v: inverse slope of the line function (m/s; which can be velocity of either direct and critically refracted wave)
Run below widget and find useful subsurface information!
End of explanation |
13,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy for Economists
Scipy provides many user-friendly and efficient numerical routines, e.g. numerical integration and optimization. The full documentation is available at http
Step1: Let us have a look at the relationship.
Step2: Estimation using Linear Algebra Tools
We can determine the $\hat{\beta} = (X^T X)^{-1}X^T Y$ using basic linear algebra tools from NumPy.
Step3: Estimation using Optimization Tools
Let us now determine $\hat{\beta}$ using Maximum Likelihood (ML) estimation. So, we need to maximize the following log-likelihood function
Step4: Formatting | Python Code:
# standard library
import numpy as np
# Parametrization
num_agents = 1000
num_covars = 3
betas_true = np.array([0.22, 0.30, -0.1]).T
sd_true = 0.01
# Sampling of observables
np.random.seed(123)
X = np.random.rand(num_agents, num_covars)
X[:,0] = 1
# Sampling disturbances
eps = np.random.normal(loc=0.0, scale=sd_true, size=num_agents)
# Create endogenous outcome
idx_true = np.dot(X, betas_true)
Y = idx_true + eps
# Checks
assert (X.dtype == 'float')
assert (Y.dtype == 'float')
assert (np.all(np.isfinite(X)))
assert (np.all(np.isfinite(Y)))
assert (X.shape == (num_agents, num_covars))
assert (Y.shape == (num_agents, ))
assert (np.all(X[:,0] == 1.0))
Explanation: SciPy for Economists
Scipy provides many user-friendly and efficient numerical routines, e.g. numerical integration and optimization. The full documentation is available at http://docs.scipy.org/doc/scipy/reference/.
We will use the provided tools to simulate and estimate and Ordinary Least Squares (OLS) regression: $Y = X\beta + \epsilon$
We will proceed in three steps:
Simulated Sample
Estimate Model usig Linear Algebra Tools (NumPy)
Estimate Model usig Optimization Tools (SciPy)
Of course, OLS and other statistical models are readily available in the StatsModels Library http://statsmodels.sourceforge.net/.
Simulate Sample
End of explanation
%pylab inline
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylabel(r'$Y$'), ax.set_xlabel(r'$ X\beta $')
ax.plot(idx_true, Y, 'o')
Explanation: Let us have a look at the relationship.
End of explanation
# Let us get the estimates.
beta_hat = np.dot(np.dot(np.linalg.inv(np.dot(X.T,X)), X.T), Y)
sd_hat = np.sqrt(np.var(Y - np.dot(X, beta_hat)))
# Let us have a look now.
print('Results for beta', beta_hat, ' Results for sd', sd_hat)
Explanation: Estimation using Linear Algebra Tools
We can determine the $\hat{\beta} = (X^T X)^{-1}X^T Y$ using basic linear algebra tools from NumPy.
End of explanation
# standard library
from scipy.optimize import minimize
from scipy.stats import norm
# Auxiliary functions.
def sample_likelihood(paras, X, Y):
''' Construct sample likelihood.
'''
# Antibugging.
assert (isinstance(paras, np.ndarray))
assert (paras.dtype == 'float')
assert (X.ndim == 2), (Y.ndim == 2)
# Auxiliary objects.
num_agents = Y.shape[0]
# Summing over the sample.
contribs = 0.0
for i in range(num_agents):
contrib = individual_likelihood(paras, X[i,:], Y[i])
contribs += contrib
# Modifications.
contribs = np.mean(contribs)
# Finishing.
return contribs
def individual_likelihood(paras, x, y):
''' This function determines the an individual's contribution to the sample likelihood.
'''
# Antibugging.
assert (isinstance(paras, np.ndarray))
assert (paras.dtype == 'float')
assert (x.ndim == 1), (y.ndim == 1)
# Distribute parameters.
betas, sd = paras[:-1], paras[-1]
# Calculate likelihood contribution.
resid = (y - np.dot(x, betas))/sd
contrib = (1.0/sd)*norm.pdf(resid)
# Modifications.
contrib = np.clip(contrib, 1e-20, np.inf)
contrib = -np.log(contrib)
# Finishing.
return contrib
''' Main calculations.
'''
# Construct parameters.
paras = np.concatenate((betas_true, [sd_true]))
# Single individual.
individual_likelihood(paras, X[1,:], Y[1])
# Full sample.
sample_likelihood(paras, X, Y)
# Optimization.
x0 = paras
#x0 = [0.0, 0.0, 0.0, 1.0]
for optimizer in ['BFGS', 'Nelder-Mead']:
rslt = minimize(sample_likelihood, x0, args=(X, Y), method=optimizer)
Explanation: Estimation using Optimization Tools
Let us now determine $\hat{\beta}$ using Maximum Likelihood (ML) estimation. So, we need to maximize the following log-likelihood function:
$$L(\beta, \sigma) = \sum_{i = 1, ..., N} \log\left(\frac{1}{\sigma}\phi\left(\frac{Y_i - X_i\beta}{\sigma}\right)\right)$$
SciPy offers a convenient interface to alternative optimization algorithms. Let us check it out online.
End of explanation
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1Ki3iXw').read())
Explanation: Formatting
End of explanation |
13,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Realistic example using outputs from MITgcm
This example requires the understanding of xgcm.grid and xmitgcm.open_mdsdataset.
Step1: One year of daily-averaged output from MITgcm.
Step2: Discrete Fourier Transform
We chunk the data along the time and Z axes to allow parallelized computation and detrend and window the data before taking the DFT along the horizontal axes.
Step3: Power spectrum
We compute the surface eddy kinetic energy spectrum.
Step4: Isotropic wavenumber spectrum
We now isotropize the spectrum
Step5: We plot $u$, $v$, $\hat{u}^2+$
Step6: Cross Spectrum
We calculate the cross correlation between vertical velocity ($w$) and buoyancy ($b$) | Python Code:
import numpy as np
import xarray as xr
import os.path as op
import xrft
from dask.diagnostics import ProgressBar
from xmitgcm import open_mdsdataset
from xgcm.grid import Grid
from matplotlib import colors, ticker
import matplotlib.pyplot as plt
%matplotlib inline
ddir = '/swot/SUM05/takaya/MITgcm/channel/runs/'
Explanation: Realistic example using outputs from MITgcm
This example requires the understanding of xgcm.grid and xmitgcm.open_mdsdataset.
End of explanation
ys20, dy20 = (60,1)
dt = 8e2
df = 108
ts = int(360*86400*ys20/dt+df)
te = int(360*86400*(ys20+dy20)/dt+df)
ds = open_mdsdataset(op.join(ddir,'zerores_20km_MOMbgc'), grid_dir=op.join(ddir,'20km_grid'),
iters=range(ts,te,df), prefix=['MOMtave'], delta_t=dt
).sel(YC=slice(5e5,15e5), YG=slice(5e5,15e5))
ds
grid = Grid(ds, periodic=['X'])
u = ds.UVEL #zonal velocity
v = ds.VVEL #meridional velocity
w = ds.WVEL #vertical velocity
phi = ds.PHIHYD #hydrostatic pressure
Explanation: One year of daily-averaged output from MITgcm.
End of explanation
b = grid.diff(phi,'Z',boundary='fill')/grid.diff(phi.Z,'Z',boundary='fill')
with ProgressBar():
what = xrft.dft(w.chunk({'time':1,'Zl':1}),
dim=['XC','YC'], detrend='linear', window=True).compute()
bhat = xrft.dft(b.chunk({'time':1,'Zl':1}),
dim=['XC','YC'], detrend='linear', window=True).compute()
bhat
Explanation: Discrete Fourier Transform
We chunk the data along the time and Z axes to allow parallelized computation and detrend and window the data before taking the DFT along the horizontal axes.
End of explanation
with ProgressBar():
uhat2 = xrft.power_spectrum(grid.interp(u,'X')[:,0].chunk({'time':1}),
dim=['XC','YC'], detrend='linear', window=True).compute()
vhat2 = xrft.power_spectrum(grid.interp(v,'Y',boundary='fill')[:,0].chunk({'time':1}),
dim=['XC','YC'], detrend='linear', window=True).compute()
ekehat = .5*(uhat2 + vhat2)
ekehat
Explanation: Power spectrum
We compute the surface eddy kinetic energy spectrum.
End of explanation
with ProgressBar():
uiso2 = xrft.isotropic_powerspectrum(grid.interp(u,'X')[0,0],
dim=['XC','YC'], detrend='linear', window=True).compute()
viso2 = xrft.isotropic_powerspectrum(grid.interp(v,'Y',boundary='fill')[0,0],
dim=['XC','YC'], detrend='linear', window=True).compute()
ekeiso = .5*(uiso2 + viso2)
ekeiso
Explanation: Isotropic wavenumber spectrum
We now isotropize the spectrum:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(20,4))
fig.set_tight_layout(True)
u[0,0].plot(ax=axes[0])
v[0,0].plot(ax=axes[1])
im = axes[2].pcolormesh(ekehat.freq_XC*1e3, ekehat.freq_YC*1e3, ekehat[0],
norm=colors.LogNorm())
axes[3].plot(ekeiso.freq_r*1e3, ekeiso)
cbar = fig.colorbar(im, ax=axes[2])
cbar.set_label(r'[m$^2$ s$^{-2}$]')
axes[3].set_xscale('log')
axes[3].set_yscale('log')
axes[2].set_xlabel(r'k [cpkm]')
axes[2].set_ylabel(r'l [cpkm]')
axes[3].set_xlabel(r'k$_r$ [cpkm]')
axes[3].set_ylabel(r'[m$^3$ s$^{-2}$]')
Explanation: We plot $u$, $v$, $\hat{u}^2+$
End of explanation
with ProgressBar():
whatbhat = xrft.cross_spectrum(w.chunk({'time':1,'Zl':1}), b.chunk({'time':1,'Zl':1}),
dim=['XC','YC'], detrend='linear', window=True, density=False).compute()
whatbhat
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(11,4))
fig.set_tight_layout(True)
(what*np.conjugate(bhat)).real[:,:8].mean(['time','Zl']).plot(ax=ax1)
whatbhat[:,:8].mean(['time','Zl']).plot(ax=ax2)
Explanation: Cross Spectrum
We calculate the cross correlation between vertical velocity ($w$) and buoyancy ($b$):
End of explanation |
13,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook forms part of a series on computational optical radiometry. The notebooks can be downloaded from GitHub. These notebooks are constantly revised and updated, so please revisit from time to time.
This notebook was written several IPython/Jupyter generations ago, so some information may no longer be applicable.
1 Jupyter / IPython notebook hints and tips
The date of this document and the module versions used are given at the end of the file.
Feedback is appreciated.
Step1: See Lorena Barba's tutorial notebook for more detail on the Jupyter notebook.
Why use the IPython notebook?
The IPython notebook is an effective means to capture technical story lines or flow-of-thought; it was initially conceived as a lab book for science and technology investigations. It is now also used for slides and as a lecturing medium.
The IPython notebook can never replace formal documentation, at least in its present form. The notebook should also not be used as a primary software development environment. The notebook lives alongside other forms of documentation and coding. Having said that, there is a definite place for the IPython notebook in several environments such as engineering research and development, teaching, and scientific research and experimentation (which it was initially developed for).
The IPython notebook is a very effective means to capture your work as you progress through an investigation comprising research, coding, and record keeping. It is also an excellent way to develop slides or teaching material where one wants to combine code, text and results in a story-line. It is used widely in Python conferences and lectures. The notebook is also a convenient means to do homework assignments.
http
Step2: IPython works by starting a web server on your PC and then serving pages from this server. This web server has an IP address of 127.0.0.1 (localhost), on port 8888, i.e., browser address 127.0.0.1:8888.
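For reference, a minimal way to start the server from a command window (current installations use the jupyter command; older IPython versions used ipython notebook for the same job):
jupyter notebook
The server then prints the local address to open in the browser.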
Step3: You can see all the IPython options by typing ipython --help.
Starting more than one IPython notebook server
From the manual: more than one notebook server can be started at the same time, for example to work on notebooks in different directories.
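In practice a second server is simply started on another port; a minimal sketch (the port number 9999 is an arbitrary choice):
jupyter notebook --port=9999
The second server is then reached in the browser at 127.0.0.1:9999.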
Step4: Once in the Sophos control panel, select the following menu option
<Configure><Anti-virus><Authorization...>.
The following window should appear
Step5: If the 127.0.0.1 loopback IP address does not appear in the list, add it as shown above.
Now close any IPython notebooks that you have open, as well as the page at 127.0.0.1:8888.
Step6: Notebook cells
The notebook comprises a 'behind-the-scenes' kernel that does all the processing and a 'front-man' rendering in the browser window. It is important to understand that the web page browser does very little work; it only renders the information under instruction from the kernel. The kernel contains all the code, text and data.
The notebook consists of a number of cells which can contain code, text or other information. Each cell is 'executed' or 'run' by clicking on the cell and pressing the Shift-Enter key combination. Cells can also be run in sequence from the Cell menu entry. Running a cell does at least two things: it sends the cell contents to the kernel for execution, and it renders the returned results in the browser window.
Step7: Command-line conversion
On the command line IPython has a convert subcommand to convert from the IPython format to other formats. In the command window, in the directory where the file is located, type the conversion command.
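For illustration, assuming a notebook file named mynotebook.ipynb (a placeholder name), the conversion commands look like this; older IPython versions used ipython nbconvert instead of jupyter nbconvert:
jupyter nbconvert --to html mynotebook.ipynb
jupyter nbconvert --to slides mynotebook.ipynb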
Step8: There are 'magics' to support a number of other languages in the IPython notebook. From the IPython notebook website
Step9: To learn more about a magic execute ? followed by the magic as in
?%pastebin
it will open a new panel where the docstring is displayed.
Step10: BTW, you can also display the docstring of any python function by
import numpy as np
?np.asarray
Remove variables from the IPython notebook by using the %reset and %reset_selective varname magics. %reset removes all variables (i.e., cleans out the whole lot), whereas with %reset_selective varname you can specify which variables must be cleared. Note that varname is a regular expression, such that if a single letter is given all variables starting with that letter will be erased. So to ensure that only a single variable is removed, anchor both ends as shown below with ^ (beginning of line) and $ (end of line).
Step11: The %who command without any arguments will list all variables that exist in the global scope. Passing a parameter like str will list only variables of that type.
Step12: Use the %%writefile magic to write contents to a file. The file can be read in normal Python or it can be read and popped up in a window by %pycat.
Step13: The %edit magic is supposed to start up an editor from within IPython - I never got that to work under windows.
https
Step14: The %run magic runs a script along the stated path and prints the results to the cell output. It is useful for running external scripts not coded in the notebook itself; just be sure to copy the script with the notebook. The next cell writes a script to the current directory and then the following cell executes it.
Step15: %run can also run Jupyter/IPython notebooks and insert the output of the notebook into the current result cell.
Step16: The %store command lets you pass variables between two different notebooks.
In the PlayStats.ipynb the table variable is stored as follows
Step18: % pastebin 'file.py' to upload code to pastebin and get the url returned.
% bash to run cell with bash in a subprocess.
%mprun & %memit
Step19: Magics for using IPython for numeric and scientific work
Early tutorials advised that you start the IPython kernel with the --pylab=inline option to load a bunch of packages and see Matplotlib graphs in the browser. As from IPython 2.0, the server advises you not to use this option, because it pre-loads a number of modules and packages that may not be required and may even interfere with your intended work.
Don't use %pylab either
Step20: Writing functions in other languages
This example is taken from Josh Devlin, thanks Josh!
You can write functions in cython or fortran and use those directly from python code. First you’ll need to install
Step21: UK English spell checker
<font color="red"> I have not yet had the time to figure out how to do this for Jupyter 4</font>
Marco Pinto maintains a UK English hunspell-style list here. To implement the UK dictionary in an Anaconda Jupyter installation on Windows
Step22: You can also access the built in Python help system, by typing help(objectname), and then execute the cell.
Step23: Tab-completion help is also available, just press TAB after the period
Step24: You can also obtain the docstring by prepending a function with a question mark and then executing.
?str.replace()
The IPython Help menu item has links to IPython and the scientific packages
Step25: <a name="NotebooksRemember">
<a id='Notebooks Remember'></a> Notebooks remember, but not the way you might think
The notebook visible in the browser is not really the notebook, it is only a rendering of the notebook. The actual notebook and its data resides in the IPython kernel. The browser is connected to the kernel via a zmq channel, for the purpose of rendering and input. You can close all the browser windows, but the code and data still resides in the kernel process. As long as the kernel is running you can open a new browser window at 127.0.0.1
Step26: A similar error occurs in Pandas if the first cell creates a dataframe, and the second cell adds a new column to the dataframe. If the second cell is executed a number of times, many columns will be added, which was not the intention with the code if executed linearly from start to end.
Therefore be careful with cells that modify their own input data - such data is not static and changes with each invocation of the cell.
Reloading imports
Python's import facility loads a file only once; if the import is encountered again for the same module, the import is ignored. If you are actively developing the module to be imported, its contents change all the time and these changes must be imported to see their effect.
In this case you want to force another import execution. This can be done with the %load_ext autoreload magic command. If the extension is already loaded, Ipython may complain, so I use %reload_ext autoreload, which attempts to load or reload, as appropriate.
http
Step27: Clearing the IPython memory
If for some reason you need to clear out all memory in your notebook this can be done by selecting the Kernel Restart menu option. This will clear the kernel memory, including the memory of all open notebooks. So use this avenue with care. After restarting the kernel, all notebooks must be re-run from start to build the information again.
You can use the Cell | All Output | Clear menu option to remove all output from a notebook. It will be much smaller, but also empty of any output or embedded media. To see the full notebook with all calculation results, you would have to run all cells again.
Markdown (MD) language syntax
Markdown is a simplified syntax for simple text layout. It allows you to embed simple formatting commands in a regular text file. In an ASCII text editor you will see the markup but not formatted. You can see the formatted version in a local or online markdown editor.
https
Step28: By just importing seaborn, the Matplotlib graphs are given a different style. If seaborn is not installed, do conda install seaborn.
Step29: Matplotlib in qt window
Use the IPython magic command %pylab (or preferably %matplotlib) to control the graphing backend; switch between inline and qt as required.<br>
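For example (run these in separate cells as required; switching to qt assumes a qt backend is installed - this is a sketch, not the original code of this notebook):
%matplotlib qt
# ... figures now open in a separate qt window ...
%matplotlib inline
# ... figures render inline in the notebook again ...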
http
Step30: Images can also be included as markdown by using the following format

Images can also be included as markdown by using the following format
<img src="images/ipythonhelp.png" width="200">
<img src="images/ipythonhelp.png" width="200">
Embedded vs non-embedded images
As of IPython 0.13, images are embedded by default for compatibility with QtConsole, and the ability to still be displayed offline.
http
Step31: Embedding other media
SVG graphic.
Step32: Embed a video from YouTube.
Step33: Embed an external web page.
Step35: Embed a video from local file system
The following cell shows a recording of the Mayavi display. The file is locally saved in the notebook. It seems that the format must be webm; the other formats (avi, mp4) did not work.
Step36: Interactive widgets
IPython includes an architecture for interactive widgets that tie together Python code running in the kernel and JavaScript/HTML/CSS running in the browser. These widgets enable users to explore their code and data interactively. For details see
http
Step37: The following two cells illustrate how a slider is used in the widgets.interactive function to test the value of the slider and then do something with the value. The example below shows how to pass 'fixed' or non-widget parameters to the function. Any number of such widgets may be passed, but they must all be named.
For more examples see the links shown above. An example of interactive image segmentation is shown in notebook '10-ImageUtilities' in this series.
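A minimal sketch of such an interactive call is shown below (it assumes ipywidgets is installed; the function, slider limits and fixed value are made up purely for illustration):
import ipywidgets as widgets
from ipywidgets import interactive, fixed
from IPython.display import display
def scale(x, factor):
    # x is driven by the slider; factor is passed in as a fixed (non-widget) parameter
    print(x * factor)
w = interactive(scale, x=widgets.IntSlider(min=0, max=10, value=5), factor=fixed(3.0))
display(w)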
Step39: The following is an example by Ketcheson, Ahmadia and Granger taken from
http
Step40: An example at https
Step41: https
Step42: Interdependent widgets
The softmax function is used in neural networks.
Suppose we have a network with four neurons, and four corresponding weighted inputs, which we'll denote $z_{1}^{L}, z_{2}^{L}, z_{3}^{L}$, and $z_{4}^{L}$.
According to this function, the activation $a^L_j$ of the $j$th output neuron is
\begin{equation}
a_{j}^{L}=\frac{e^{z_{j}^{L}}}{\sum_{k} e^{z_{k}^{L}}}
\label{eq
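A small numpy sketch of this softmax activation (illustrative only, not part of the original widget code):
import numpy as np
def softmax(z):
    # subtract the maximum for numerical stability; the outputs are positive and sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()
print(softmax(np.array([1.0, 2.0, 3.0, 4.0])))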
Step43: The following information is somewhat esoteric, you need not go into this
Simple progress bar
Note that clear_output wipes the entire cell output, including previous output
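A minimal progress-bar sketch along these lines (the loop length and delay are arbitrary):
import time
from IPython.display import clear_output
for i in range(10):
    clear_output(wait=True)  # wipes the previous output of this cell before the new line is printed
    print('progress: {}/10'.format(i + 1))
    time.sleep(0.2)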
https
Step44: Test
Step45: Notebook file format
From http
Step47: Running notebook servers
This document describes how you can secure a notebook server and how to run it on a public interface
Step48: HTML formatting in dynamic output display
Use HTML to format the output of your code
http
Step50: Displaying tables in HTML
http
Step51: Fine-tuning IPython typographic output appearance
Changing the fonts, colours and layout to suit your own style.
http
Step52: But you can instruct IPython to use default display as follows
Step53: Making slides from IPython notebooks
IPython is the tool of choice for presentations at Python conferences today - you hardly see a slideshow that was not made with IPython.
http
Step54: Class Descriptors
https
Step55: Class and Instance Attributes
https
Step56: Python and module versions, and dates | Python Code:
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
Explanation: This notebook forms part of a series on computational optical radiometry. The notebooks can be downloaded from Github. These notebooks are constantly revised and updated, please revisit from time to time.
This notebook was written several IPython/Jupyter generations ago; some information may no longer be applicable.
1 Jupyter / IPython notebook hints and tips
The date of this document and module versions used in this document are given at the end of the file.
Feedback is appreciated: neliswillers at gmail dot com.
Jupyter and IPython
From http://ipython.org/:
"IPython is a growing project, with increasingly language-agnostic components. IPython 3.x will be the last monolithic release of IPython, containing the notebook server, qtconsole, etc. The language-agnostic parts of the project: the notebook format, message protocol, qtconsole, notebook web application, etc. will move to new projects under the name Jupyter. IPython itself will return to being focused on interactive Python, part of which will be providing a Python kernel for Jupyter. IPython 3.0 contains some indications of the project transition, including the logo in the notebook web UI being that of Jupyter."
In this document, all references to IPython refer to the IPython kernel, running on top of Jupyter.
IPython versions 2.x use the nbformat 3 file format.
IPython versions 3.x use the nbformat 4 file format.
To convert from nb4 to nb3 format (from with Jupyter IPython 3 installed) type:
ipython nbconvert --to notebook --nbformat 3 mynotebook.ipynb
http://ipython.org/ipython-doc/3/notebook/nbformat.html#nbformat
http://ipython.org/ipython-doc/3/whatsnew/version3.html
<a name="Overview"></a> Overview
This notebook provides a brief summary of how to start up and use the IPython notebook.
Introductions are given to magic commands, help functionality, using IPython for scientific work, cell memory, markdown, citations, embedding media files, and a few lesser used functions.
End of explanation
display(Image(filename='images/portalpage.png'))
Explanation: Lorena Barba's tutorial, see this notebook for more detail on the Jupyter notebook.
Why use the IPython notebook?
<p><font size="3" color="red">The IPython notebook is an effective means to capture technical story lines or flow-of-thought; <br>being initially conceived as a lab book for science and technology investigations.
It is now also used for slides and as a lecturing medium.
</font> </p>
The IPython notebook can never replace formal documentation, at least in its present form. The notebook should also not be used as a primary software development environment. The notebook lives alongside other forms of documentation and coding. Having said that, there is a definite place for the IPython notebook in several environments such as engineering research and development, teaching, and scientific research and experimentation (which it was initially developed for).
The IPython notebook is a very effective means to capture your work as you progress through an investigation comprising research, coding, and record keeping. It is also an excellent way to develop slides or teaching material where one wants to combine code, text and results in a story-line. It is used widely in Python conferences and lectures. The notebook is also a convenient means to do homework assignments.
http://nbviewer.ipython.org/urls/raw.github.com/ellisonbg/talk-strata2013/master/StrataIPythonSlides.ipynb - why use notebooks?
http://nbviewer.ipython.org/ - gallery of notebooks
https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks - a gallery of notebooks
Getting started in IPython
During early 2015 the IPython 2.x tool was upgraded to IPython 3.0. When installing, use the latest version you can find.
If you installed the Anaconda Python distribution (http://docs.continuum.io/anaconda/install.html), it already has IPython, no need to download it.
If you are not using Anaconda, follow the steps outlined on these sites:
http://ipython.org/ - start here, download the latest version from here. <br>
http://ipython.org/ipython-doc/dev/interactive/notebook.html - short and concise, up and running quickly.<br>
http://nbviewer.ipython.org/urls/raw.github.com/Unidata/tds-python-workshop/master/ipython-notebook.ipynb - simple intro <br>
http://blog.safaribooksonline.com/2013/12/12/start-ipython-notebook/ linux install and general use instructions
http://ipython.org/ipython-doc/stable/notebook/index.html The IPython notebook.
http://www.astro.washington.edu/users/vanderplas/Astr599/notebooks/03_IPython_intro
For a good overview of the history and key user guide tips, see
1. https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook#gs.KMCTS4w.
2. https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/
Diffing and Merging Notebooks
https://nbdime.readthedocs.io/en/latest/
nbdime provides 'content-aware' diffing and merging of Jupyter notebooks. It understands the structure of notebook documents. Therefore, it can make intelligent decisions when diffing and merging notebooks.
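For example, once nbdime is installed (pip install nbdime), its command-line tools can be used roughly as follows; the notebook file names below are placeholders:
nbdiff notebook_before.ipynb notebook_after.ipynb
nbdiff-web notebook_before.ipynb notebook_after.ipynb
nbmerge base.ipynb local.ipynb remote.ipynb > merged.ipynb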
Working in a Command Window
IPython must be started from a command window (in Windows) or a terminal console in Linux. If you use Linux, you already know how to do this. If you are using Windows, and you don't know learn here:
http://www.cs.princeton.edu/courses/archive/spr05/cos126/cmd-prompt.html
http://www.bleepingcomputer.com/tutorials/windows-command-prompt-introduction/
IPython 2.x allows you to open notebooks only from the present directory (getcwd) and in nested subdirectories. So, open the command window somewhere where your notebooks can be reached from.
This link will help you to create a context menu in Explorer to start a command window in a given directory:
http://stackoverflow.com/questions/1077814/assigning-a-shortcut-to-open-cmd-here
https://stackoverflow.com/questions/60904/how-can-i-open-a-cmd-window-in-a-specific-location
Scroll to the bottom of the pages. Alternatively just download this file and double click on it:
https://raw.githubusercontent.com/NelisW/ComputationalRadiometry/master/cmd-here/cmd-window-here.reg
Starting IPython
Command line
IPython files are json-format files, with the file extension .ipynb.
After you installed IPython you must start the server in a command window. The current version of IPython expects the notebook files to be in the same directory where it was started up (or below). So if you want to work in c:\myfiles then change to that directory and start IPython in the required directory.
Open a command window (DOS box), and create a directory where you want to work with the notebooks. Type the following commands in the command window (pressing Enter after each line):
cd \
mkdir myfiles [only do this the first time]
cd c:\myfiles
The command window should now display c:\myfiles>. Then type the following command (followed by Enter):
ipython notebook
Serving IPython pages
After a while IPython opens your web browser and display the IPython portal page.
Side note on browsers: Microsoft Internet Explorer does not run IPython very nicely. Consider using Firefox or Chrome.
End of explanation
display(Image(filename='images/ipaddress.png'))
Explanation: IPython works by starting a web server on your PC and then serving pages from this server. This web server has an IP address of 127.0.0.1 (localhost), on port 8888, i.e., browser address 127.0.0.1:8888 or localhost:8888. You can at any time open a new browser tab and enter this address and the server portal page should appear. This is sometimes convenient when you close all IPython tabs in the browser, but the server is still running.
End of explanation
display(Image(filename='images/sophos01.png'))
Explanation: You can see all the IPython options by typing ipython --help.
Starting more than one IPython notebook server
From the manual: "You can start more than one notebook server at the same time, if you want to work on notebooks in different directories. By default the first notebook server starts on port 8888, and later notebook servers search for ports near that one. You can also manually specify the port with the --port option."
http://ipython.org/ipython-doc/dev/interactive/notebook.html#starting-the-notebook-server
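For example, with the default server already occupying port 8888, a second server could be started in another directory on an explicit port (the port number below is arbitrary):
ipython notebook --port=9999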
IPython notebook security
The IPython notebook server acts as a generic file server for files inside the same tree as your notebooks. Access is not granted outside the notebook folder so you have strict control over what files are visible. It is recommended that you do not run the notebook server with a notebook directory at a high level in your file system (e.g. your home directory).
When you run the notebook in a password-protected manner, local file access is restricted to authenticated users unless read-only views are active.
More information is required here
Live Connection to the Internet
IPython uses MathJax to render LaTeX, as in $y=ax^2+bx+c$. The default mode is to access MathJax online, live, as you are working. This means that you must be working on a PC connected to the Internet in order to render LaTeX. This is only required when LaTeX rendering is requested, otherwise you don't need Internet access. Once the LaTeX is rendered it is embedded as an image in the document and you do not need to be connected to MathJax to view the previously rendered image.
One catch is that if you live behind a server that requires you to authenticate before logging in to the Internet, this means that you must authenticate your Internet access prior to starting the ipython server.
IPython and your Firewall
If IPython does not work, it may be because your antivirus software or firewall prevents it from working. The firewall might not be set up to grant execution rights to the default IPython notebook server operating at 127.0.0.1:8888. In this case the cells will simply not execute, with no warning, and you don't receive any output from the cells.
Using localhost to bypass the firewall
This approach does not require any changes to your firewall. Some firewalls are set up to grant localhost execution rights. In this case the server can be started with the command
ipython notebook --ip=localhost
Once started, the pages are served from
http://localhost:8888/
and not from http://127.0.0.1:8888/.
Setting up the Sophos firewall
Here is a description of how to fix this for Sophos - it may be necessary to do this after each reboot. Other firewalls will have similar functionality.
Sophos blocks the browser from executing code as required by IPython. To enable the IPython functionality in the browser, change the authorisation on your local loopback IP address 127.0.0.1.
For this to work, you must have Administrator rights, or at least belong to the Admin group (call your ICT sysadmin if you don't have these rights). Start by opening the Sophos application from the system tray by right-clicking on the S-shield icon and selecting the
End of explanation
display(Image(filename='images/sophos02.png'))
Explanation: Once in the Sophos control panel, select the following menu option
<Configure><Anti-virus><Authorization...>.
The following window should appear:
End of explanation
display(Image(filename='images/ipython-newnotebook.png'))
Explanation: If the 127.0.0.1 loopback IP address does not appear in the list, add it as shown above.
Now close any ipython notebooks that you have, also the page at 127.0.0.1:8888. Close the command window and re-open a new one. Open a new tab and enter the local address 127.0.0.1:8888. This should display the notebooks in the current directory and once opened, they should work.
Starting a new notebook
To create a new page click on the New Notebook button on the IPython notebook portal.
You can also make a copy of an existing notebook by selecting the appropriate menu option under the File menu.
End of explanation
HTML('<img src="images/convertfile.png" width=600 height=300/>')
Explanation: Notebook cells
The notebook comprises a 'behind-the-scene' kernel that does all the processing and a 'front-man' rendering in the browser window. It is important to understand that the web page browser does very little work, it only renders the information under instruction from the kernel. The kernel contains all the code, text and data.
The notebook consists of a number of cells which can contain code, text or other information. Each cell is 'executed' or 'run' by clicking on the cell and pressing the Shift-Enter key combination. Cells can also be run in sequence from the Cell menu entry. Running a cell does at least two things: (1) the cell changes/updates its data in the kernel and (2) the updated data is rendered in the browser window.
Writing the notebook becomes the process of creating cells and adding text or code to the cell.
Once created the cells must be executed. The cell execution must be in the order entered in the notebook (from start to end). You can do this manually or use the menu entry to run all cells consecutively.
It is possible and often done that cells are moved up or down, changing their location in the sequence. When the cell sequence is changed you must keep track by executing the cells in their new locations.
If the execution in strict sequence is not followed it can lead to all sorts of difficulties, see <a href="#NotebooksRemember">Notebooks Remember</a>
Saving and closing the Notebook
The notebook is saved in its json format by saving from the IPython menu. Click on the save button (leftmost button with the mouse-over message 'Save and Checkpoint') or select the File Save and Checkpoint menu option.
The IPython notebook file is saved and closed by selecting the File Close and Halt menu option.
Never use a Python exit() function in your notebook, because the notebook will interpret this as an instruction to exit its own process, closing down the server.
Converting the notebook to other formats
IPython versions and notebook versions
IPython version 2.x writes files in notebook format nbformat 3.
IPython version 3.x writes files in notebook format nbformat 4.
IPython 3 can read nbformat 3 files, converting on opening. When saved the file will be in nbformat 4.
To convert a file from nbformat 4 to nbformat 3, use the following command line command:
ipython nbconvert --to notebook --nbformat 3 MyNotebook.ipynb
IPython built-in conversion
Starting from IPython 2, the notebook can be converted to a Python file (code with embedded comments), an HTML file or a ReST file. On the File menu, use Download as.
The notebook can also be saved to an HTML file by saving from your browser's menu. This saved HTML file is, however, not fully self-contained (images and JavaScript files are in a directory). So this is not the ideal way to save the file.
End of explanation
import os
# test to see if this is Linux or Windows
if os.sep == '/':
!ls *.ipynb
else:
!dir *.ipynb
Explanation: Command-line conversion
On the command line IPython has a convert subcommand to convert from the IPython format to other formats. In the command window, where the file is located type:
ipython nbconvert filename.ipynb
This will create a fully self-contained HTML file that you can print or mail as a single file. The only problem is that HTML does not print very well, especially with large figures.
The notebook can be converted to LaTeX and then compiled to a PDF format with the following command:
ipython nbconvert --to latex --post filename.ipynb
The LaTeX document style can be specified as follows:
ipython nbconvert --to latex filename.ipynb --template=article
once the tex document is ready run pdflatex:
pdflatex filename.tex
http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html
http://blog.fperez.org/2012/09/blogging-with-ipython-notebook.html
http://nbviewer.ipython.org/url/www.damian.oquanta.info/posts/blogging-with-nikola-and-ipython.ipynb - blogging with Nikola
http://www.slideviper.oquanta.info/tutorial/slideshow_tutorial_slides.html?transition=none#/ - slides
http://www.damian.oquanta.info/ - a couple of links
http://nbviewer.ipython.org/github/Carreau/posts/blob/master/06-NBconvert-Doc-Draft.ipynb - using the nbconvert API
Tips to use IPython for publication ready work: http://blog.juliusschulz.de/blog/ultimate-ipython-notebook
<a id='LaTeX docs with template control'></a>LaTeX docs with template control
The standard LaTeX converter does not provide good control over the style of the resultant document. If you require better control of the document style look at my ipnb2tex script. The script supports floating figures, tables and citations.
See this PDF for an example of the output from this converter. Using this converter you can fully control the LaTeX template to achieve the document format you require.
Making publication ready Python Notebooks provides very useful information on preparing notebooks such that it can be used for final publications.
High-quality graphics in LaTeX
By default the Matplotlib backend for IPython generates png files. The quality of these files is not all that good for publication. LaTeX traditionally uses Encapsulated PostScript (eps) files for publications, or PDF files in PDFLaTeX. It is possible to instruct the backend to render in Scalable Vector Graphics (svg) format by using:
%config InlineBackend.figure_format = 'svg'
The svg file is, however, not renderable in LaTeX and must be converted to PDF for use in PDFLaTeX. In order to do the conversion you must have Inkscape on your PC (and on Windows the path to the inkscape executable must be on your PATH). The nbconvert process will convert the svg files created by Matplotlib/IPython to PDF files using Inkscape on your PC.
http://ipython.org/ipython-doc/rel-1.0.0/interactive/nbconvert.html
https://stackoverflow.com/questions/19659864/ipython-nbconvert-and-latex-use-eps-instead-of-png-for-plots
https://stackoverflow.com/questions/19600234/nbconvert-pdf-latex-page-formatting-ipython
https://stackoverflow.com/questions/19524554/suppress-code-in-nbconvert-ipython
Notebook viewer on the internet
There is a website nbviewer.ipython.org/ that will convert a notebook from ipynb format to html and display it in your browser. Just browse over to http://nbviewer.ipython.org/ and enter the URL of the notebook you want to view.
A particularly useful feature of http://nbviewer.ipython.org/ is that you can add the GitHub username of someone and then can view all the notebooks by that user.
The nbviewer site keeps a cache of the most recently calculated version of your notebook. To force an updated calculation add this to the end of the URL:
?flush_cache=true.
Alternatively, you can build a composite URL as follows to view a notebook.
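A typical composite URL points nbviewer at a notebook hosted on GitHub; the user, repository and notebook names below are placeholders:
http://nbviewer.ipython.org/github/<username>/<repository>/blob/master/<notebook>.ipynb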
Keyboard Shortcuts
Jupyter stores a list of keyboard shortcuts under the menu at the top: Help > Keyboard Shortcuts.
Another way to access keyboard shortcuts, and a handy way to learn them, is to use the command palette: Ctrl + Shift + C or Ctrl + Shift + P, but note that this key-press combination may be hardwired in the browser.
http://ipython.readthedocs.io/en/stable/config/shortcuts/index.html
Josh Devlin provides the following summary of useful keyboard shortcuts (but study the full set as described above):
Esc will take you into command mode where you can navigate around your notebook with arrow keys.
While in command mode:
A to insert a new cell above the current cell,
B to insert a new cell below.
M to change the current cell to Markdown,
Y to change it back to code
D + D (press the key twice) to delete the current cell
Enter will take you from command mode back into edit mode for the given cell.
Shift + Tab will show you the Docstring (documentation) for the object you have just typed in a code cell - you can keep pressing this shortcut to cycle through a few modes of documentation. This operation requires pyreadline to be installed.
Ctrl + Shift + - will split the current cell into two from where your cursor is.
Esc + F Find and replace on your code but not the outputs.
Esc + O Toggle cell output.
Select Multiple Cells:
Shift + J or Shift + Down selects the next cell in a downwards direction.
Shift + K or Shift + Up selects cells in an upwards direction.
Once cells are selected, you can then delete / copy / cut / paste / run them as a batch. This is helpful when you need to move parts of a notebook.
You can also use Shift + M to merge multiple cells.
Pretty Print all cell outputs
Normally only the last output in the cell will be printed. For everything else, you have to manually add print(), which is fine but not super convenient. You can change that by adding this at the top of the notebook:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Any time you want to go back to the original setting, just run
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last_expr"
Just be aware that you have to run the setting change in a separate cell for it to take effect for the next cell run.
Magics and System Commands
IPython has a set of predefined 'magic functions' that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes. Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.
http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/notebooks/Cell%20Magics.ipynb
http://ipython.org/ipython-doc/dev/interactive/tutorial.html#magic-functions
http://damontallen.github.io/IPython-quick-ref-sheets/
System commands (as you would normally type in a command window) can be executed by pre-pending with an exclamation mark.
End of explanation
%lsmagic
Explanation: There are 'magics' to support a number of other languages in the IPython notebook. From the IPython notebook website:
"We ship the official IPython kernel, but kernels for other languages such as Julia and Haskell are actively developed and used. Additionally, the IPython kernel supports multi-language integration, letting you for example mix Python code with Cython, R, Octave, and scripting in Bash, Perl or Ruby."
http://andrew.gibiansky.com/blog/ipython/ipython-kernels/
http://ipython.org/ipython-doc/stable/development/messaging.html
Magics for general use
Thanks to Josh Devlin for some of the examples shown here.
List all the magics currently available
End of explanation
?%timeit
Explanation: To learn more about a magic execute ? followed by the magic as in
?%pastebin
it will open a new panel where the docstring is displayed.
End of explanation
#remove only variable b
a=1; b=2; c=3; b1m=4; b2m=5; b3m=6; b4m=7; b2s=8
%reset_selective -f ^b$
%who_ls
#remove all variables starting with the letter b
a=1; b=2; c=3; b1m=4; b2m=5; b3m=6; b4m=7; b2s=8
%reset_selective -f b
%who_ls
Explanation: BTW, you can also display the docstring of any python function by
import numpy as np
?np.asarray
Remove variables from the IPython notebook by using the %reset and %reset_selective varname magics. %reset removes all variables (i.e., cleans out the whole lot), whereas with %reset_selective varname you can specify which variables must be cleared. Note that varname is a regular expression, such that if a single letter is given all variables starting with that letter will be erased. So to ensure that only a single variable is removed, anchor both ends as shown below with ^ (beginning of line) and $ (end of line).
End of explanation
one = "for the money"
two = "for the show"
three = "to get ready now go cat go"
%who str
Explanation: The %who command without any arguments will list all variables that exist in the global scope. Passing a parameter like str will list only variables of that type.
End of explanation
%%writefile test.txt
This is a test file!
It can contain anything I want...
more...
#open the file and read its contents
with open('test.txt', 'r') as fi:
print('{}'.format(' '.join(fi.readlines())))
%pycat test.txt
Explanation: Use the %%writefile magic to write contents to a file. The file can be read in normal Python or it can be read and popped up in a window by %pycat.
End of explanation
# The line below sets the environment variable OMP_NUM_THREADS
%env OMP_NUM_THREADS=4
Explanation: The %edit magic is supposed to start up an editor from within IPython - I never got that to work under windows.
https://ipython.org/ipython-doc/dev/interactive/magics.html#magic-edit
http://ipython.org/ipython-doc/1/config/editors.html
http://stackoverflow.com/questions/15681153/external-editor-for-ipython-notebook
http://stackoverflow.com/questions/3438531/ipython-workflow-edit-run
This works ok if you have a fast editor:
http://stackoverflow.com/questions/28309430/edit-ipython-cell-in-an-external-editor
Just replace gvim with your editor's exe and make sure it is on the path.
You can manage environment variables of your notebook without restarting the jupyter server process, %env is the most convenient way. Running %env without any arguments lists all environment variables.
End of explanation
%%file helloipython.py
print('Hello IPython!')
%run helloipython.py
Explanation: The %run magic runs a script along the stated path and prints the results to the cell output. It is useful for running external scripts not coded in the notebook itself; just be sure to copy the script with the notebook. The next cell writes a script to the current directory and then the following cell executes it.
End of explanation
%run ./PlayStats.ipynb
Explanation: %run can also run Jupyter/IPython notebooks and insert the output of the notebook into the current result cell.
End of explanation
%timeit range(100)
Explanation: The %store command lets you pass variables between two different notebooks.
In the PlayStats.ipynb the table variable is stored as follows:
# store the variable in the server for another notebook to read it
%store table
If the other notebook has been executed in the present Jupyter session, the data can be retrieved in this notebook by
%store -r table
print(table)
The %timeit magic can time the execution of an expression.
It can be a one-line or a multi-line statement; in a one-liner, multiple statements can be separated by semicolons.
http://pynash.org/2013/03/06/timing-and-profiling.html
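For example, the cell-magic form can time a small multi-line snippet (a sketch):
%%timeit
total = 0
for i in range(1000):
    total += i
In the line-magic form several statements can be chained with semicolons, as in %timeit a = 1; b = a + 1.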
End of explanation
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
Explanation: % pastebin 'file.py' to upload code to pastebin and get the url returned.
% bash to run cell with bash in a subprocess.
%mprun & %memit: See how much memory a script uses (line-by-line, or averaged over a bunch of runs)
%prun statement_name will give you an ordered table showing you the number of times each internal function was called within the statement, the time each call took as well as the cumulative time of all runs of the function.
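As a short sketch of %prun (the functions below are made up purely for illustration):
import time
def slow_part():
    time.sleep(0.1)
def fast_part():
    return sum(range(10000))
def work():
    slow_part()
    fast_part()
%prun work()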
%%HTML: to render the cell as HTML. So you can even embed an image or other media in your notebook
%%HTML
<img src="http://storage.proboards.com/6578018/thumbnailer/cQSQRzgbljQLzGPJ3hfZ.jpg">
%%latex to render cell contents as LaTeX, see here
$$
\begin{aligned}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{aligned}
$$
End of explanation
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
5
a = 6
7
Explanation: Magics for using IPython for numeric and scientific work
Early tutorials advised that you start the IPython kernel with the --pylab=inline option to load a bunch of packages and see Matplotlib graphs in the browser. As from IPython 2.0, the server advises you not to use this option, because it pre-loads a number of modules and packages that may not be required and may even interfere with your intended work.
Don't use %pylab either:
Then there is the %pylab magic command, which essentially does the same as the --pylab=inline option. The full command is %pylab [--no-import-all] [gui].
When using this magic, IPython loads numpy and matplotlib. The following libraries are imported in this magic:
import numpy
import matplotlib
from matplotlib import pylab, mlab, pyplot
np = numpy
plt = pyplot
from IPython.display import display
from IPython.core.pylabtools import figsize, getfigs
from pylab import *
from numpy import *
Clearly the last two imports could potentially cause namespace conflicts with other modules. This is because import * is not good practice. If you use the form %pylab --no-import-all the last two * imports will not be executed.
So the %pylab magic command could be something like %pylab --no-import-all inline to get inline plots in the notebook.
But this method still clutters the IPython interactive namespace with global pylab names, potentially causing problems.
Use %matplotlib instead:
Do use the %matplotlib [gtk|gtk3|inline|osx|qt|qt4|tk|wx] magic to define the Matplotlib plotting backend without importing anything into the IPython interactive namespace. For the full discussion see here. So the preferred method to get Matplotlib graphics inline in the notebook is to use the magic command
%matplotlib inline
After using this magic, you still have to import numpy manually.
Note that the file format to which Matplotlib renders a graphic can be set by the following magic (svg, png or high resolution png):
%config InlineBackend.figure_format = 'svg'
%config InlineBackend.figure_format = 'png'
%config InlineBackend.figure_format = 'retina'
https://stackoverflow.com/questions/17582137/ipython-notebook-svg-figures-by-default
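Putting this advice together, a typical first cell of a notebook could look like the sketch below (a suggestion, not a prescription):
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt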
Results display in Results Cell
Normally IPython will display the last unassigned result from the cell in the result cell. You can modify the ast_node_interactivity kernel option to make Jupyter do this for all unassigned variables.
If you want to set this behaviour for all instances of Jupyter (Notebook and Console), simply create a file ~/.ipython/profile_default/ipython_config.py with the lines below.
c = get_config()
# Run all nodes interactively
c.InteractiveShell.ast_node_interactivity = "all"
https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/
End of explanation
from IPython.display import YouTubeVideo
# a talk about the ICalico spell checker extension
YouTubeVideo('Km3AtRynWFQ')
Explanation: Writing functions in other languages
This example is taken from Josh Devlin, thanks Josh!
You can write functions in cython or fortran and use those directly from python code. First you’ll need to install:
pip install cython fortran-magic
Then
%load_ext Cython
%%cython
def myltiply_by_2(float x):
return 2.0 * x
Then in some further-down cell:
myltiply_by_2(23.)
Or
%load_ext fortranmagic
%%fortran
subroutine compute_fortran(x, y, z)
real, intent(in) :: x(:), y(:)
real, intent(out) :: z(size(x, 1))
z = sin(x + y)
end subroutine compute_fortran
Then in some further-down cell:
compute_fortran([1, 2, 3], [4, 5, 6])
http://arogozhnikov.github.io/2015/11/29/using-fortran-from-python.html
IPython notebook extensions
contrib nbextensions
The following repository contains a collection of extensions that add functionality to the Jupyter notebook. There are several extensions, best visit the repo for more information.
https://github.com/ipython-contrib/jupyter_contrib_nbextensions
https://github.com/ipython-contrib/IPython-notebook-extensions
https://github.com/ipython/ipython/wiki/Extensions-Index
The following commands will install the extensions, as well as a menu based configurator that will help you browse and enable the extensions from the main Jupyter notebook screen.
Method 1
https://github.com/Jupyter-contrib/jupyter_nbextensions_configurator
Method 2
The install instructions are taken from Josh Devlin. There is a risk that the following installation may not succeed on your Jupyter installation, depending on software version status.
pip install --upgrade https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tarball/master
pip install --upgrade jupyter_nbextensions_configurator
jupyter contrib nbextension install --user
jupyter nbextensions_configurator enable --user
You can install Nbextensions any time from your command line like this
conda install -c conda-forge jupyter_contrib_nbextensions
conda install -c conda-forge jupyter_nbextensions_configurator
jupyter contrib nbextension install --user
Once they’re installed, you’ll see an Nbextensions tab. Explore away!
General notes
The IPython architecture supports the installation of extension packages to add new functionality to notebooks.
http://ipython.org/ipython-doc/dev/config/extensions/
Several extensions exist to access other software tools from within IPython:
http://jupyter.cs.brynmawr.edu/hub/dblank/public/Jupyter%20Help.ipynb
http://ipython.org/ipython-doc/dev/config/extensions/
http://nbviewer.ipython.org/github/jrjohansson/ipython-asymptote/blob/master/Asymptote-examples.ipynb for Asymptote
http://www2.ipp.mpg.de/~mkraus/python/tikzmagic.py and http://www2.ipp.mpg.de/~mkraus/python/tikzmagic_test.ipynb for tikz.
On Windows, when using Anaconda, the notebook extensions are installed here:
C:\Anaconda\share\jupyter\nbextensions
On raw Python installations the notebook extensions appear to be installed here:
C:\Users\YourUserName\.ipython\nbextensions
C:\Users\YourUserName\.ipython\profile_default\static\custom
ICalico spell checker
The ICalico spell checker (thanks Doug Blank!) checks spelling and underlines words that appear incorrect. The spell checker is implemented in JavaScript and works on markdown cells in edit mode. It points out spelling errors but does not offer corrections at present. The word list is US English, so it is not very friendly towards UK English. The dictionary can be changed, see below.
To use the spell checker open a markdown cell for editing and click on the 'tickmark' button on the toolbar. The tickmark button will only be present if you installed and activated the spell checker, and then restarted the jupyter server.
The instructions are somewhat confusing because of different versions of the Jupyter notebook and different versions (and repository locations) of the extension.
To install and activate the extension follow the YouTube video below, or the instructions at http://calicoproject.org/Icalico and modified here. The installation requires at least the first two steps:
Download the extension (do this once only) - I am not sure which to use for the different combinations of Jupyter and repos:
Let Jupyter download it for you:
!jupyter nbextension install https://github.com/Calysto/notebook-extensions/archive/master.zip
or
!jupyter nbextension install https://github.com/Calysto/notebook-extensions
Download manually:
Clone the repo at https://github.com/Calysto/notebook-extensions into the local directory C:\ProgramData\jupyter\nbextensions\notebook-extensions-master. Note that you need to clone to a different directory name than the repo name.
Make a copy of the files shown below, to one level lower than where you cloned, in C:\ProgramData\jupyter\nbextensions
Activate it by executing the following:
!jupyter nbextension enable calico-document-tools
To activate the spell checker for Jupyter 4.x notebooks, edit the file
C:\Users\YourUserName\.jupyter\nbconfig\notebook.json in your Jupyter profile and confirm that the following load_extensions commands are present (add in the appropriate place if necessary):
{
"load_extensions": {"calico-spell-check":true,
"calico-document-tools": true,
"calico-cell-tools":true
}
}
The ICalico spell checker discussion takes place here:
https://github.com/ipython/ipython/issues/3216
End of explanation
from collections import defaultdict
# defaultdict?
display(Image(filename='images/introspection.png'))
Explanation: UK English spell checker
<font color="red"> I have not yet had the time to figure out how to do this for Jupyter 4</font>
Marco Pinto maintains a UK English hunspell-style list here. To implement the UK dictionary in an Anaconda Jupyter installation on Windows:
Download the two files en-GB.aff and en-GB.dic into the (new) folder C:\Anaconda\share\jupyter\nbextensions\typo\dictionaries\en_GB\.
Rename the two files to use underscore instead of dash/hyphen (look at the en_US equivalent). Change en-GB.aff to en_GB.aff and change en-GB.dic to en_GB.dic.
Edit the file C:\Anaconda\share\jupyter\nbextensions\calico-spell-check.js to replace var lang = "en_US"; with var lang = "en_GB";.
It should be a simple matter to find the equivalent directories in Linux.
So now at least we have a UK dictionary; the remaining work is to add new words. Marco Pinto's GUI-based tool at http://marcoagpinto.cidadevirtual.pt/proofingtoolgui.html is an exceptionally suitable tool for this purpose. Doug Blank also pointed out: "Also, one can add words to the calico-spell-check extension by making a JSON object in a file named words.json and putting it next to calico-spell-check.js."
Multicursor editor support
Jupyter supports multiple cursors, similar to Sublime Text. Simply click and drag your mouse while holding down Alt.
Optimising with IPython
https://robotwhale.wordpress.com/2014/08/17/optimization-with-ipython/
Local files
If you have local files in your Notebook directory, you can refer to these files in Markdown cells via relative URLs that are prefixed with files/ as in:
files/[subdirectory/]filename
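For example, a markdown cell could embed a local image with the line below (the image path is hypothetical):
![a local image](files/images/myplot.png)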
Help
Introspection help is available by typing the object's name followed by a question mark and then executing the cell.
It will print details about the object, including docstrings, function call arguments, and class constructor details.
Double click on the divider to close the help console.
End of explanation
# help(defaultdict)
display(Image(filename='images/python-help.png'))
Explanation: You can also access the built in Python help system, by typing help(objectname), and then execute the cell.
End of explanation
# defaultdict.
display(Image(filename='images/tabcompletion.png'))
Explanation: Tab-completion help is also available, just press TAB after the period
End of explanation
display(Image(filename='images/ipythonhelp.png'))
Explanation: You can also obtain the docstring by prepending a function with a question mark and then executing.
?str.replace()
The IPython Help menu item has links to IPython and the scientific packages
End of explanation
#run this cell once
a = 5
#run this cell several times
a = a + 1
print(a)
Explanation: <a name="NotebooksRemember">
<a id='Notebooks Remember'></a> Notebooks remember, but not the way you might think
The notebook visible in the browser is not really the notebook, it is only a rendering of the notebook. The actual notebook and its data resides in the IPython kernel. The browser is connected to the kernel via a zmq channel, for the purpose of rendering and input. You can close all the browser windows, but the code and data still resides in the kernel process. As long as the kernel is running you can open a new browser window at 127.0.0.1:8888 and the notebook will be displayed.
The results (including graphs and images) from previous runs are recorded in the notebook, so when it is rendered the previously stored results are shown. The previous results are overwritten only if the cell is executed again.
Most important, the results from a previous cell execution remain in the kernel, even if the cell is removed or moved up in the notebook. It happens from time to time that you execute a cell in a given location, creating some data in the kernel. The next cell uses this information in a calculation. So far, so good. Now you decide to re-organise the notebook and move the 'next cell' to a position earlier in the notebook, even before the cell that created the information in the first place. The newly moved cell still works in its new location, because the information is still in memory. At the end of work, you save and close the notebook and exit the kernel.
Tomorrow you start a new kernel and load the notebook. To ensure that all is fresh and well, you decide to execute all cells in the notebook. But now the moved cell does not work, because its input information is not yet created - it is only created a few cells into the future. Yet it did work yesterday (because the results still remained in memory after moving the cell). But today it fails because there is no memory of something that is yet to come.
It is important to understand that the concept of the non-existence of data prior to cell execution does not apply if you move cells around in the notebook. All results are remembered and accessible by cells linearly before the cell that created the information.
Similarly, remember that a cell changes information for posterity, also for subsequent runs of the same cell. For example, suppose we assign a value to a variable in the next cell and in the cell thereafter increment the value. If the first cell is executed once and the second cell is executed repeatedly, the value will increase much more than it would if the program was executed from start to finish with each cell executed once only.
End of explanation
%reload_ext autoreload
%autoreload 2
import numpy as np
Explanation: A similar error occurs in Pandas if the first cell creates a dataframe, and the second cell adds a new column to the dataframe. If the second cell is executed a number of times, many columns will be added, which was not the intention with the code if executed linearly from start to end.
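A small sketch of this pitfall (purely illustrative):
import pandas as pd
# cell 1: run once
df = pd.DataFrame({'a': [1, 2, 3]})
# cell 2: run several times - every execution adds yet another column
df[len(df.columns)] = df['a'] * 2
print(df.columns.tolist())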
Therefore be careful with cells that modify their own input data - such data is not static and changes with each invocation of the cell.
Reloading imports
Python's import facility loads a file only once; if the import is encountered again for the same module, the import is ignored. If you are actively developing the module to be imported, its contents change all the time and these changes must be imported to see their effect.
In this case you want to force another import execution. This can be done with the %load_ext autoreload magic command. If the extension is already loaded, Ipython may complain, so I use %reload_ext autoreload, which attempts to load or reload, as appropriate.
http://ipython.org/ipython-doc/rel-1.1.0/config/extensions/autoreload.html
End of explanation
%matplotlib inline
import pylab as pl
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*np.pi*t)
pl.plot(t, s)
pl.xlabel('time (s)')
pl.ylabel('voltage (mV)')
pl.title('About as simple as it gets, folks')
pl.grid(True)
# savefig("test.png")
# show()
Explanation: Clearing the IPython memory
If for some reason you need to clear out all memory in your notebook this can be done by selecting the Kernel Restart menu option. This will clear the kernel memory, including the memory of all open notebooks. So use this avenue with care. After restarting the kernel, all notebooks must be re-run from start to build the information again.
You can use the Cell | All Output | Clear menu option to remove all output from a notebook. It will be much smaller, but also empty of any output or embedded media. To see the full notebook with all calculation results, you would have to run all cells again.
Markdown (MD) language syntax
Markdown is a simplified syntax for simple text layout. It allows you to embed simple formatting commands in a regular text file. In an ASCII text editor you will see the markup but not formatted. You can see the formatted version in a local or online markdown editor.
https://en.wikipedia.org/wiki/Markdown
https://daringfireball.net/projects/markdown/
https://help.github.com/articles/markdown-basics/
You can write markdown in any text editor or in a dedicated markdown editor:
https://notepad-plus-plus.org/
https://atom.io/
http://www.sitepoint.com/best-markdown-editors-windows/
http://www.sublimetext.com/ (paid software)
http://markdownpad.com/ (paid software)
IPython uses the markdown syntax for text in its non-code cells. When IPython creates a new cell it is a code cell by default, you must remember to change it to a markdown cell on the menu.
You can also edit markdown online here:
https://github.com/benweet/stackedit
http://dillinger.io/
http://markdownlivepreview.com/
One confusing matter is the fact there are different variants of markdown. For more details on the syntax see
https://stackoverflow.com/editing-help
http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/notebooks/Part%204%20-%20Markdown%20Cells.ipynb
http://johnmacfarlane.net/pandoc/demo/example9/pandocs-markdown.html
http://johnmacfarlane.net/pandoc/README.html
http://daringfireball.net/projects/markdown/
http://daringfireball.net/projects/markdown/basics
http://daringfireball.net/projects/markdown/syntax
https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
http://support.iawriter.com/help/kb/general-questions/markdown-syntax-reference-guide
You can also use embedded raw HTML into a markdown page, but you need to type a lot more than in pure markdown.
IPython uses the Pandoc converter to convert between different markup languages, so my guess is that it is prudent to work with the Pandoc variant of MD syntax (which is close to the original). The other major MD variant is the github MD syntax - which is somewhat different.
To force a newline you must end the current line with two spaces.
If your markdown does not want to work right, first try to leave a blank line before the offending text.
Markdown Cells
Type this to get the output shown below:
Markdown basics: lists, markup and code
* list item
* list item
* nested <font size="3" color="red">list item</font>
* *italics*
* **bold**
* `fixed_font`
You can embed code meant for illustration instead of execution in Python - use four spaces before:
def hello_ipython():
print "Hello IPython!"
Markdown basics: lists, markup and code
list item
list item
nested <font size="5" color="red">list item</font>
italics
bold
fixed_font
You can embed code meant for illustration instead of execution in Python - use four spaces before:
def hello_ipython():
print "Hello IPython!"
<font color="red">A key and important point is that IPython markdown can take embedded HTML of any form.</font> <BR>Therefore in applications where the markdown syntax is too weak, use HTML. Be aware however that not all of the HTML constructs can be converted to some of the conversion output formats (e.g., embedding video in a LaTeX document).
Type this to get the output shown below:
Markdown cells can also contain HTML
<p>Markdown basics: lists, markup and code</p>
<ul>
<li>list item</li>
<li><p>list item</p>
<ul>
<li>nested <font size="3" color="red">list item</font></li>
</ul></li>
<li><p><em>italics</em></p></li>
<li><strong>bold</strong></li>
<li><code>fixed_font</code></li>
</ul>
<p>Code examples:</p>
<pre><code>def hello_ipython():
print "Hello IPython!"
</code></pre>
Markdown cells can also contain HTML
<p>Markdown basics: lists, markup and code</p>
<ul>
<li>list item</li>
<li><p>list item</p>
<ul>
<li>nested <font size="3" color="red">list item</font></li>
</ul></li>
<li><p><em>italics</em></p></li>
<li><strong>bold</strong></li>
<li><code>fixed_font</code></li>
</ul>
<p>Code examples:</p>
<pre><code>def hello_ipython():
print "Hello IPython!"
</code></pre>
Type this to get the output shown below:
Using math mode (anything delimited before and after by single or double `$` symbols is interpreted as LaTeX math):
$$ D_{KL}(P||Q) = \sum\limits_{i}ln (\frac{P(i)}{Q(i)}) P(i)$$
Using math mode (anything delimited before and after by single or double $ symbols is interpreted as LaTeX math):
$$ D_{KL}(P||Q) = \sum\limits_{i}ln (\frac{P(i)}{Q(i)}) P(i)$$
$$e^{i\pi} + 1 = 0
$$
Github markdown can also be used
The Notebook webapp support Github flavored markdown meaning that you can use triple backticks for code blocks
```python
print "Hello World"
```
```javascript
console.log("Hello World")
```
gives
python
print "Hello World"
javascript
console.log("Hello World")
And a table like this :
| This | is |
|------|------|
| a | table|
gives
| This | is |
|------|------|
| a | table|
Citations and links
Markdown syntax does not have any means to provide citations as often required in formal documentation.
Markdown hyperlinks to external websites: pyradi Python package. You can provide the display text, the link address and a mouse-over attribute.
HTML anchors can be used to create links to other locations in the same file. Put an anchor definition in the place you want to link to, using this format: <a name="MyAnchor">. It seems the best place to put this anchor is in a dedicated markdown cell immediately in front of where you want to link to; it does not work in a cell with other text.
From elsewhere in the file link to this anchor by using the following format
<a href="#MyAnchor">my anchor</a>
It is also possible to attach anchors to headers as explained in https://stackoverflow.com/questions/16630969/ipython-notebook-anchor-link-to-refer-a-cell-directly-from-outside
http://nbviewer.ipython.org/github/ipython/nbconvert-examples/blob/master/citations/Tutorial.ipynb - LaTeX citations in the notebook.
The <a href="#LaTeX docs with template control">LaTeX docs with template control</a> script provides full citation support when creating PDF documents.
Cross linking and table of contents
IPython does not yet support the automatic creation of a table of contents, but it can be added manually by placing an HTML anchor immediately before the section heading and then linking to the anchor. Note that in IPython the browser's 'back arrow' won't navigate back to the previous location in the page.
# Table of Contents
- [Overview](#Overview)
- [Learning Python and hints and tips](#LearningPythonandhintsandtips)
- [Introducing Python for scientific work](#IntroducingPythonforscientificwork)
<a name="Overview"></a>
##Overview
This notebook provides a brief summary ....
<a name="LearningPythonandhintsandtips"></a>
##Learning Python, and hints and tips
There are many free ....
<a name="IntroducingPythonforscientificwork"></a>
##Introducing Python for scientific work
A very good introduction to Python for scientific work ....
Table of Contents
Overview
Learning Python and hints and tips
Introducing Python for scientific work
IPython's rich display system
IPython notebook can use all of the modern browsers' capabilities. This is called the rich display system, see here.
Plotting
Plotting packages and options
Scientific Plotting in Python
Interactive Plotting in IPython Notebook (Part 1/2): Bokeh
Interactive Plotting in IPython Notebook (Part 2/2): Plotly
Overview of Python Visualization Tools
Comparing Python web visualization libraries
matplotlib
matplotlib
Gallery
Examples
docs
bokeh
Welcome to bokeh
Gallery
Quickstart
Tutorials
User Guide
plotly
What is Plotly.js
Plotly.js Open-Source Announcement
Getting Started: Plotly for Python
Plotly Offline
User Guide
Plotly's Python API User Guide
Example 3-D plot
Matplotlib in IPython notebook
IPython can inline matplotlib plots.
http://nbviewer.ipython.org/urls/raw.github.com/jakevdp/matplotlib_pydata2013/master/notebooks/05_Animations.ipynb - animations
http://nbviewer.ipython.org/urls/raw.github.com/jakevdp/matplotlib_pydata2013/master/notebooks/03_Widgets.ipynb - interactivity
http://nbviewer.ipython.org/github/dpsanders/matplotlib-examples/blob/master/colorline.ipynb - multi-colour lines
End of explanation
import numpy as np
import matplotlib.pyplot as pl
import seaborn as sns
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*np.pi*t)
pl.plot(t, s)
pl.xlabel('time (s)')
pl.ylabel('voltage (mV)')
pl.title('About as simple as it gets, folks')
pl.grid(True)
# savefig("test.png")
# show()
Explanation: By just importing seaborn, the Matplotlib graph is given a different style. If seaborn is not installed do conda install seaborn.
End of explanation
HTML('<img src="images/ipythonhelp.png" width=400 height=200/>')
display(Image(filename='images/ipythonhelp.png', width=250, height=250))
Explanation: Matplotlib in qt window
Use the ipython magic command pylab to control the graphing backend, switch between inline and qt as required.<br>
http://stackoverflow.com/questions/14261903/how-can-i-open-the-interactive-matplotlib-window-in-ipython-notebook/14277370#14277370
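A minimal sketch of switching between the two backends (these are IPython magic commands, so they only work inside IPython/Jupyter; the qt option additionally requires a Qt binding such as PyQt to be installed):

%matplotlib inline
# ... figures now render inline in the notebook ...
%matplotlib qt
# ... figures now open in separate, interactive Qt windows ...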
Seaborn distribution plotting
The seaborn package provides special means to plot distributions
http://nbviewer.ipython.org/github/mwaskom/seaborn/blob/master/examples/plotting_distributions.ipynb
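A minimal sketch of a distribution plot (function names vary between seaborn versions; the classic distplot has since been replaced by histplot/displot):

import numpy as np
import seaborn as sns
data = np.random.randn(1000)
sns.kdeplot(data)    # smoothed kernel density estimate of the sample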
Including images into the notebook
There are (at least) two ways to include images in the notebook.
display(Image(filename='images/portalpage.png'))
HTML('<img src="files/images/portalpage.png" width=600 height=600/>')
The first form includes the image in its natural size, but the size can be adjusted by width and height function parameters. The second form injects HTML code and, likewise, allows you to set the image size.
End of explanation
# by default Image data are embedded
picUrl = 'https://raw.githubusercontent.com/NelisW/pyradi/master/pyradi/doc/_images/pyradi.png'
Embed = Image(picUrl)
display(Embed)
# if kwarg `url` is given, the embedding is assumed to be false
# SoftLinked = Image(url=picUrl)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=picUrl, embed=True)
Explanation: Images can also be included as markdown by using the following format

Images can also be included as markdown by using the following format
<img src="images/ipythonhelp.png" width="200">
<img src="images/ipythonhelp.png" width="200">
Embedded vs non-embedded images
As of IPython 0.13, images are embedded by default for compatibility with QtConsole, and the ability to still be displayed offline.
http://www.slideviper.oquanta.info/nbcreveal/sky_test.html?theme=sky#/7
End of explanation
from IPython.display import SVG
SVG(filename='images/solidangleflatplate.svg')
Explanation: Embedding other media
SVG graphic.
End of explanation
from IPython.display import YouTubeVideo
# a talk about IPython at Sage Days at U. Washington, Seattle.
# Video credit: William Stein.
if False:
    YouTubeVideo('1j_HxD4iLn8')
Explanation: Embed a video from YouTube.
End of explanation
if False:
    HTML('<iframe src=https://en.wikipedia.org/wiki/Einstein width=700 height=350></iframe>')
Explanation: Embed an external web page.
End of explanation
# display a locally saved video file.
# it seems that only webm format works here
import io
import base64
from IPython.core.display import HTML
filename = './images/interpolationSphere.webm'
# video = io.open(filename, 'r+b').read()
# encoded = base64.b64encode(video)
# HTML(data='''<video alt="Data set video" controls>
# <source src="data:video/mp4;base64,{0}" type="video/mp4" />
# </video>'''.format(encoded.decode('ascii')))
HTML('''
<div align="middle">
<video width="40%" controls>
<source src="{}" type="video/mp4">
</video></div>'''.format(filename))
Explanation: Embed a video from local file system
The following cell shows a recording of the Mayavi display. The file is saved locally next to the notebook. It seems that the format must be webm; the other formats (avi, mp4) did not work.
End of explanation
# import IPython.html.widgets as widgets
from IPython.display import display
import ipywidgets
from ipywidgets import widgets
[n for n in dir(ipywidgets) if n[0] == n[0].upper() and not n[0] == '_']
xx = widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
y = ipywidgets.Checkbox(
description='Check me',
value=True,
)
w = ipywidgets.Dropdown(
options = [ 'test 1', 'option 2', 'selection 3',],
value='option 2',
description='Number:',
)
#use ordered dic to get required sorting sequence
from collections import OrderedDict
foclens = [ 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1., 1.1, 1.2, 1.3, 1.4, 1.5]
m = ipywidgets.Dropdown(
options = OrderedDict([(str(x), str(x)) for x in foclens]) ,
value='0.4',
description='Focal length:',
)
from IPython.display import display
display(xx)
display(y)
display(w)
display(m)
# http://stackoverflow.com/questions/28529157/dynamically-changing-dropdowns-in-ipython-notebook-widgets-and-spyre
# from IPython.html import widgets
from IPython.display import display
geo={'USA':['CHI','NYC'],'Russia':['MOW','LED']}
def print_city(city):
    print(city)

def select_city(country):
    cityW.options = geo[country]
scW = ipywidgets.Select(options=geo.keys())
init = scW.value
cityW = ipywidgets.Select(options=geo[init])
j = ipywidgets.interactive(print_city, city=cityW)
i = ipywidgets.interactive(select_city, country=scW)
display(i)
display(j)
Explanation: Interactive widgets
IPython includes an architecture for interactive widgets that tie together Python code running in the kernel and JavaScript/HTML/CSS running in the browser. These widgets enable users to explore their code and data interactively. For details see
http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Interactive%20Widgets/Index.ipynb
https://github.com/ipython/ipython/tree/master/examples/Interactive%20Widgets
https://www.youtube.com/watch?v=VaV10VNZCLA
https://www.youtube.com/watch?v=vE_CJTen15M
https://www.youtube.com/watch?v=o7Tb7YhJZR0
https://www.youtube.com/watch?v=wxVx54ax47s
https://github.com/ipython/ipython/wiki/Widgets
The following examples are taken from
https://github.com/ipython/ipython-in-depth/tree/master/examples/Interactive%20Widgets
Upgrading widgets from IPython 2 to 3:
http://ipython.org/ipython-doc/3/whatsnew/version3_widget_migration.html
IPython widgets in Jupyter/IPython 4
https://keminglabs.com/print-the-docs-pdfs/1518024.pdf
https://github.com/ipython/ipywidgets/blob/master/examples/notebooks/Widget%20Basics.ipynb
The widget ecosystem changes frequently, the following code may not work....
End of explanation
def doSomething(scale, thx):
    print('scale={} thx={} product={}'.format(scale, thx, scale * thx))
    return (scale, thx)
scale = 5.0
v = ipywidgets.interactive(doSomething, scale=ipywidgets.fixed(scale),
thx=ipywidgets.FloatSlider(value=128, min=0.0, max=255.0, step=1))
display(v)
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
students = widgets.VBox(visible=True, children=[
widgets.Checkbox(description="Student1:", value=False),
widgets.Checkbox(description="Student2:", value=False),
])
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, students, school_info, pet]
display(form)
def on_student_toggle(name, value):
    if value:
        school_info.visible = True
    else:
        school_info.visible = False

student.observe(on_student_toggle, 'value')
students.children[0].observe(on_student_toggle, 'value')
students.children[1].observe(on_student_toggle, 'value')
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
    if value:
        school_info.visible = True
    else:
        school_info.visible = False
student.observe(on_student_toggle, 'value')
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of it's children.
def on_string_change(name, value):
    print(value)
# string.on_trait_change(on_string_change,'value')
string.on_submit(on_string_change,'value')
Explanation: The following two cells illustrate how a slider is used in the widgets.interactive function to test the value of the slider and then do something with the value. The example below shows how to pass 'fixed' or non-widget parameters to the function. Any number of such widgets may be passed, but they must all be named.
For more examples see the links shown above. An example of interactive image segmentation is shown in notebook '10-ImageUtilities' in this series.
End of explanation
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Import IPython's interact function which is used below to
# build the interactive widgets
from ipywidgets import interact#, interactive, fixed, interact_manual
# import ipywidgets as widgets
def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
    """Plot discrete samples of a sine wave on the interval ``[0, 1]``."""
    x = np.linspace(0, 1, grid_points + 2)
    y = np.sin(2 * frequency * np.pi * x)
    xf = np.linspace(0, 1, 1000)
    yf = np.sin(2 * frequency * np.pi * xf)
    fig, ax = plt.subplots(figsize=(8, 6))
    ax.set_xlabel('x')
    ax.set_ylabel('signal')
    ax.set_title('Aliasing in discretely sampled periodic signal')
    if plot_original:
        ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
    ax.plot(x, y, marker='o', linewidth=2)
# The interact function automatically builds a user interface for exploring the
# plot_sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 60, 1), plot_original=True);
Explanation: The following is an example by Ketcheson, Ahmadia and Granger taken from
http://www.nature.com/news/ipython-interactive-demo-7.21492?article=1.16261
It demonstrates aliasing during sampling of a signal. To see the effects of aliasing:
Run the next cell, then set the grid_points slider to 13.
Move the frequency slider to values above 10.
As the frequency increases, the measured signal (blue) has a lower frequency than the real one (red).
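A rough back-of-the-envelope check of what you should see (not part of the original demo): with grid_points = 13 the signal is evaluated at 15 points on $[0, 1]$, i.e. a sampling rate of $f_s = 14$ samples per unit interval and a Nyquist frequency of $f_s/2 = 7$. Any input frequency $f$ above that folds back to approximately
$$ f_{obs} = \left| f - f_s \, \mathrm{round}(f / f_s) \right| $$
so, for example, an input at $f = 12$ shows up looking like a 2-cycle wave.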
End of explanation
n_weights = 10
weight_sliders = [widgets.FloatSlider(value=0,min=-2,max=2,step=0.1,description=f's{i}',
disabled=False,continuous_update=False,orientation='horizontal',
readout=True,readout_format='.2f') for i in range(n_weights)]
def PlotSuper(**kwargs):
    def f(x):
        y = 0
        for i, weight in enumerate(kwargs.values()):
            if i == 0:
                y += weight
            else:
                y += weight * np.sin(x * i)
        return y
    vf = np.vectorize(f)
    xx = np.arange(0, 6, 0.1)
    plt.plot(xx, vf(xx))
    plt.gca().set_ylim(-5, 5)
kwargs = {f's{i}':slider for i,slider in enumerate(weight_sliders)}
interact(PlotSuper,**kwargs)
Explanation: An example at https://github.com/NelisW/ComputationalRadiometry/blob/master/10-ImageUtilities.ipynb shows how to use interactive widgets when segmenting an image.
https://stackoverflow.com/questions/47102564/how-to-set-interact-arguments-programmatically
End of explanation
from ipywidgets import Button, Layout
b = Button(description='(50% width, 80px height) button',
layout=Layout(width='50%', height='80px'))
b
c = Button(description='Another button with the same layout', layout=b.layout)
c
from ipywidgets import Button, HBox, VBox
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
left_box = VBox([items[0], items[1]])
right_box = VBox([items[2], items[3]])
HBox([left_box, right_box])
from ipywidgets import IntSlider, Label
IntSlider(description=r'\(\int_0^t f\)')
from ipywidgets import Layout, Button, Box
items_layout = Layout( width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='50%')
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=word, layout=items_layout, button_style='danger') for word in words]
box = Box(children=items, layout=box_layout)
box
from ipywidgets import Layout, Button, Box, VBox
# Items flex proportionally to the weight and the left over space around the text
items_auto = [
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
Button(description='weight=3; auto', layout=Layout(flex='3 1 auto', width='auto'), button_style='danger'),
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
]
# Items flex proportionally to the weight
items_0 = [
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
Button(description='weight=3; 0%', layout=Layout(flex='3 1 0%', width='auto'), button_style='danger'),
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
width='70%')
box_auto = Box(children=items_auto, layout=box_layout)
box_0 = Box(children=items_0, layout=box_layout)
VBox([box_auto, box_0])
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
form_items = [
Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
Box([Label(value='Egg style'),
Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),
Box([Label(value='Ship size'),
FloatText()], layout=form_item_layout),
Box([Label(value='Information'),
Textarea()], layout=form_item_layout)
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'
))
form
from ipywidgets import Layout, Button, Box
item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]
box_layout = Layout(overflow_x='scroll',
border='3px solid black',
width='500px',
height='',
flex_flow='row',
display='flex')
carousel = Box(children=items, layout=box_layout)
VBox([Label('Scroll horizontally:'), carousel])
def makeplot(title, display_trend, marker, amplitude, step_size, periods, noise_scale, offset, trend):
    pass

def interact_hookup(f, controls):
    from ipywidgets import Output
    out = Output()
    def observer(change):
        out.clear_output()
        kwargs = {k: v.value for k, v in controls.items()}
        with out:
            f(**kwargs)
    for k, w in controls.items():
        w.observe(observer, 'value')
    observer(None)
    return out
w = dict(
title=widgets.Text(value='Hello World', placeholder='Type something', description='Title:', disabled=False),
display_trend=widgets.ToggleButton(value=False, description='Display Trend', icon='check'),
marker=widgets.RadioButtons(options=['x', 'o', '.'], value='x', description='Marker:'),
amplitude=widgets.FloatSlider(value=1, min=-5, max=5, description='Amplitude:'),
step_size=widgets.FloatSlider(value=0.1, min=0.01, max=0.1, step=0.01, description='Step size:'),
periods=widgets.FloatSlider(value=5, min=1, max=20, description='Periods:'),
noise_scale=widgets.FloatSlider(value=0.1, min=0.01, max=2, description='Noise:'),
offset=widgets.FloatSlider(value=0, min=-5, max=5, description='Offset:'),
trend=widgets.FloatSlider(value=1, min=-5, max=5, description='Trend:'),
)
output = interact_hookup(makeplot, w)
UI = VBox([
HBox([
VBox([
w['title'],
w['display_trend'],
w['marker'],
]),
VBox([
w['amplitude'],
w['step_size'],
w['periods'],
w['noise_scale'],
w['offset'],
w['trend'],
])
]),
output
])
display(UI)
Explanation: https://github.com/jupyter-widgets/ipywidgets/blob/master/docs/source/examples/Widget%20Styling.ipynb
shows how to style the widgets using css.
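A minimal sketch of the two programmatic options (the attribute and CSS class names below are from recent ipywidgets / classic-notebook releases and may differ in other versions):

from IPython.display import display
import ipywidgets as widgets

b = widgets.Button(description='Styled button',
                   layout=widgets.Layout(width='200px', border='2px solid red'))
b.style.button_color = 'lightgreen'     # per-widget style attribute
display(b)

# arbitrary CSS can also be injected into the page via an HTML widget
display(widgets.HTML('<style> .widget-label { font-weight: bold; } </style>'))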
End of explanation
import numpy as np
from ipywidgets import HBox,VBox,Button,FloatSlider,FloatProgress,interactive
# set up the widgets with precalculated values
# these sliders and prog bars are visible and are updated below in the softmax function
sliders = {'1':[2.5,0.31], '2':[-1.,0.009], '3':[3.2,0.633], '4':[0.5,0.043]}
sld = {key:FloatSlider(min=-5.0, max=+5.0, value=f'{sliders[key][0]}', step=0.05,description=f'$z^L_{key}$') for key in sliders}
prb = {key:FloatProgress(value=f'{sliders[key][1]}',min=0,max=1.0,step=0.01,description=f'$a^L_{key}$',bar_style='info',orientation='horizontal') for key in sliders}
# build and display the widget grid in pairs of sliders and prog bars
lstD = [HBox([sld[key], prb[key]]) for key in sld]
display(VBox(lstD))
# function is invoked if any of the sliders change
# and the result is used to change the progress bar
def softmax(**lstZ):
    total = 0
    for key in lstZ:
        total += np.exp(lstZ[key])
    for key in lstZ:
        prb[key].value = np.exp(lstZ[key]) / total
# `interactive` does not display/show the widgets, already done above.
w = interactive(softmax, **sld )
Explanation: Interdependent widgets
The softmax function is used in neural networks.
Suppose we have a network with four neurons, and four corresponding weighted inputs, which we'll denote $z_{1}^{L}, z_{2}^{L}, z_{3}^{L}$, and $z_{4}^{L}$.
According to this function, the activation $a^L_j$ of the $j$ output neuron is
\begin{equation}
a_{j}^{L}=\frac{e^{z_{j}^{L}}}{\sum_{k} e^{z_{k}^{L}}}
\label{eq:c03-78}
\end{equation}
where in the denominator we sum over all the inputs $z^L_j$.
As you increase any one of the inputs, its corresponding output activation will increase.
Shown below are adjustable sliders showing possible values for the weighted inputs, and a graph of the corresponding output activations. A good place to start exploration is by using the bottom slider to increase $z_{4}^{L}$.
As you increase $z_{4}^{L}$, you'll see an increase in the corresponding output activation, $a_{4}^{L}$, and a decrease in the other output activations. Similarly, if you decrease $z_{4}^{L}$ then $a_{4}^{L}$ will decrease, and all the other output activations will increase. In fact, if you look closely, you'll see that in both cases the total change in the other activations exactly compensates for the change in $a_{4}^{L}$. The reason is that the output activations are guaranteed to always sum up to 1.
In the code below there is a direct match between each slider and progress bar next to it.
The sliders and progress bars are created in a dict comprehension, using the same keys for sliders and progress bars.
These widgets have global scope and are available inside the softmax function.
The widgets are displayed manually (not automatically in interact) with the idea that these will be updated later.
ipywidgets.interact automatically displays the widgets when invoked, but we already displayed the widgets. Hence, the interactive function is called rather than interact, because interactive does not display/show the widgets.
End of explanation
def update_progress(progress, bar_length=20):
    from IPython.display import clear_output
    if isinstance(progress, int):
        progress = float(progress)
    if not isinstance(progress, float):
        progress = 0
    if progress < 0:
        progress = 0
    if progress >= 1:
        progress = 1
    block = int(round(bar_length * progress))
    clear_output(wait=True)
    text = "Progress: [{0}] {1:.1f}%".format("#" * block + "-" * (bar_length - block), progress * 100)
    print(text)
Explanation: The following information is somewhat esoteric, you need not go into this
Simple progress bar
Note that clear_output wipes the entire cell output, including previous output
https://mikulskibartosz.name/how-to-display-a-progress-bar-in-jupyter-notebook-47bd4c2944bf
End of explanation
import time
print('before')
#Replace this with a real computation
number_of_elements = 10
for i in range(number_of_elements):
    time.sleep(0.1)
    # progress must be a float between 0 and 1
    update_progress((i+1) / number_of_elements, bar_length=40)
print('after')
import pyradi.ryutils as ryutils
import time
print('before')
#Replace this with a real computation
number_of_elements = 10
for i in range(number_of_elements):
    time.sleep(0.1)
    # progress must be a float between 0 and 1
    ryutils.update_progress((i+1) / number_of_elements, bar_length=40)
print('after')
Explanation: Test:
End of explanation
import nbformat
nb = nbformat.read('01-IPythonHintsAndTips.ipynb', as_version=4)
nb.cells[0:5]
Explanation: Notebook file format
From http://ipython.org/ipython-doc/stable/notebook/nbconvert.html#notebook-json-file-format
Binary data such as figures are also saved directly in the JSON file. This provides convenient single-file portability, but means that the files can be large; a diff of binary data is also not very meaningful. Since the binary blobs are encoded in a single line, they affect only one line of the diff output, but they are typically very long lines. You can use the Cell | All Output | Clear menu option to remove all output from a notebook prior to committing it to version control, if this is a concern.
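Outputs can also be stripped programmatically before committing. A minimal sketch with nbformat (the dedicated nbstripout tool does the same job and can be installed as a git filter):

import nbformat
nb = nbformat.read('01-IPythonHintsAndTips.ipynb', as_version=4)
for cell in nb.cells:
    if cell.cell_type == 'code':
        cell.outputs = []          # drop figures, text output, etc.
        cell.execution_count = None
nbformat.write(nb, '01-IPythonHintsAndTips-stripped.ipynb')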
Reading json files in IPython
The following code reads this file and prints the first five cells.
End of explanation
import markdown
class MD(str):
    def _repr_html_(self):
        return markdown.markdown(self)

import math
a = 2
MD('''
Dynamic demonstration
--------------
This is a mixture of markdown **and** html:<br>
The square root of {0} <font color="green">used to be</font> somewhere near {1}'''.format(a, math.sqrt(a)))
Explanation: Running notebook servers
This document describes how you can secure a notebook server and how to run it on a public interface:
http://ipython.org/ipython-doc/rel-1.1.0/interactive/public_server.html
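A typical pattern when the server runs on another machine (a sketch only; the exact command name and flags depend on the IPython/Jupyter version, and the linked page describes adding a password and SSL):

# on the remote machine: start a server without opening a browser
jupyter notebook --no-browser --port=8889     # 'ipython notebook' in older versions
# on the local machine: forward the port over ssh, then browse to http://localhost:8889
ssh -N -L 8889:localhost:8889 user@remote-host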
Markdown formatting in dynamic output display
http://catherinedevlin.blogspot.com/2013/06/easy-html-output-in-ipython-notebook.html
For this to work, you first have to install the Python markdown package (assuming you have the pip python package installer):
pip install markdown
Use the following function to render a Python string in markdown syntax to display in IPython:
End of explanation
from IPython.display import display, HTML
for x in range(3):
    display(HTML("<p><i>Length</i> <b>" + str(x) + "</b>"))
Explanation: HTML formatting in dynamic output display
Use HTML to format the output of your code
http://python.6.x6.nabble.com/Printing-HTML-within-IPython-Notebook-IPython-specific-prettyprint-td5016624.html
End of explanation
class ListTable(list):
    """Overridden list class which takes a 2-dimensional list of
    the form [[1,2,3],[4,5,6]], and renders an HTML Table in
    IPython Notebook."""
    def _repr_html_(self):
        html = ["<table>"]
        for row in self:
            html.append("<tr>")
            for col in row:
                html.append("<td>{0}</td>".format(col))
            html.append("</tr>")
        html.append("</table>")
        return ''.join(html)
import random
table = ListTable()
table.append(['x', 'y', 'x-y', '(x-y)**2'])
for i in range(7):
    x = random.uniform(0, 10)
    y = random.uniform(0, 10)
    table.append([x, y, x-y, (x-y)**2])
table
Explanation: Displaying tables in HTML
http://calebmadrigal.com/display-list-as-table-in-ipython-notebook/
End of explanation
def poly2latex(p):
    terms = ['%.2g' % p.coef[0]]
    if len(p) > 1:
        term = 'x'
        c = p.coef[1]
        if c != 1:
            term = ('%.2g ' % c) + term
        terms.append(term)
    if len(p) > 2:
        for i in range(2, len(p)):
            term = 'x^%d' % i
            c = p.coef[i]
            if c != 1:
                term = ('%.2g ' % c) + term
            terms.append(term)
    px = '$P(x)=%s$' % '+'.join(terms)
    dom = r', domain: $[%.2g,\ %.2g]$' % tuple(p.domain)
    return px + dom
import numpy as np
p = np.polynomial.Polynomial([1,2,3], [-10, 10])
from IPython.display import Latex
Latex(poly2latex(p))
Explanation: Fine-tuning IPython typographic output appearance
Changing the fonts, colours and layout to suit your own style.
http://slendrmeans.wordpress.com/2012/12/05/better-typography-for-ipython-notebooks/
http://zulko.wordpress.com/2013/04/14/customize-your-ipython-notebook-with-css/
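A quick way to experiment before committing to a custom.css file (a sketch; the CSS class names below are those of the classic notebook and will differ in JupyterLab):

from IPython.core.display import HTML
HTML('''
<style>
div.text_cell_render { font-family: Georgia, serif; font-size: 110%; }
div.input_area       { border: 1px solid #cccccc; }
</style>
''')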
Adding IPython display output to existing objects
http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Custom%20Display%20Logic.ipynb
For example, define a function that pretty-prints a polynomial as a LaTeX string:
End of explanation
ip = get_ipython()
latex_formatter = ip.display_formatter.formatters['text/latex']
latex_formatter.for_type_by_name('numpy.polynomial.polynomial',
'Polynomial', poly2latex)
p2 = np.polynomial.Polynomial([-20, 71, -15, 1])
p2
Explanation: But you can also register the function with IPython's display formatter, so that it becomes the default display for Polynomial objects:
End of explanation
htmlContent = ''
def header(text):
    raw_html = '<h1>' + str(text) + '</h1>'
    return raw_html

def box(text):
    raw_html = '<div style="border:1px dotted black;padding:2em;">' + str(text) + '</div>'
    return raw_html

def addContent(raw_html):
    global htmlContent
    htmlContent += raw_html
# Example
addContent( header("This is an autogenerated header") )
addContent( box("This is some text in a box") )
from IPython.core.display import HTML
HTML(htmlContent)
Explanation: Making slides from IPython notebooks
IPython is the tool of choice for presentations at Python conferences today - you hardly see a slideshow that was not made with IPython.
http://www.slideviper.oquanta.info/tutorial/slideshow_tutorial_slides.html#/
http://www.damian.oquanta.info/posts/make-your-slides-with-ipython.html
http://nbviewer.ipython.org/github/fperez/nb-slideshow-template/blob/master/install-support.ipynb
https://hannes-brt.github.io/blog/2013/08/11/ipython-slideshows-will-change-the-way-you-work/
Now one of the coolest new features are the Reveal.js based slideshows. Here is an example by the developer of the slideshow feature Damián Avila which shows how to turn any IPython Notebook into a slideshow and how to include math, images, videos, tables, etc.
http://www.slideviper.oquanta.info/tutorial/slideshow_tutorial_slides.html?transition=none#/
http://hannes-brt.github.io/blog/2013/08/11/ipython-slideshows-will-change-the-way-you-work/ - hiding code in slide shows.
http://nbviewer.ipython.org/urls/gist.github.com/damianavila/5970218/raw/766d41eab9a16850a2a4447f14e93e7ed88f6b08/using_local_reveal.ipynb - local copy of reveal.js
https://www.youtube.com/watch?v=rBS6hmiK-H8 - youtube video
First (1) create the IPython notebook as a regular notebook, then (2) change each cell's metadata to set its slideshow status, then (3) save the notebook, (4) convert the notebook to the slideshow format, and (5) serve the slideshow html file on an http server. Some of these steps are described next.
Step (2): Click on the "Cell Toolbar" dropdown combobox: select the "Slideshow" option. On the top-right side of each cell will appear a dropdown combobox where you can define the slideshow status of that specific cell. Select the appropriate type for each cell.
Steps (4) and (5): The slide show runs in a reveal.js javascript environment, but requires that the file be served on an http server. In order to convert the notebook to slide show and serve in a browser, type the following command:<br>
ipython nbconvert --to slides --post serve filename<br>
where filename is the name of the ipython notebook you want to convert to a slide show.
Blogging with IPython
IPython is also used to created blogging pages:
http://blog.fperez.org/2012/09/blogging-with-ipython-notebook.html
http://www.damian.oquanta.info/
http://brunettoziosi.eu/posts/blogging-with-nikola-ipython-github.html
http://www.davidketcheson.info/2012/10/11/blogging_ipython_notebooks_with_jekyll.html
Customising the IPython notebook
The notebook is primarily rendered in HTML in the browser and when exported to HTML. As an HTML product it can be customised in terms of layout, font, colours and other elements of style. Likewise the exports to other formats, such as LaTeX, can also be similarly customised to a particular look and feel.
http://slendermeans.org/better-typography-for-ipython-notebooks.html
http://zulko.wordpress.com/2013/04/14/customize-your-ipython-notebook-with-css/
http://nbviewer.ipython.org/github/Carreau/posts/blob/master/Blog1.ipynb
Publishing your notebooks
If a notebook file (.ipynb) is available somewhere on the web, you can paste the URL into a text box on this website
http://nbviewer.ipython.org/ and it will render the notebook for you, returning a URL to the HTML rendered file. This new URL can be embedded as a hyperlink in an HTML file - when the user clicks on the link, the browser will display the rendered notebook. This is how the notebooks referred to in the next section are rendered.
http://developer.rackspace.com/blog/bookstore-for-ipython-notebooks.html
More HTML formatting
http://stackoverflow.com/questions/13770394/ipython-notebook-make-output-cells-like-markdown
End of explanation
class C:
    def method(self):
        pass

C.method is C.method

class C:
    @classmethod
    def method(cls):
        pass
print(C.method is C.method)
print(id(C.method)==id(C.method))
a = C.method
b = C.method
print(id(a)==id(b))
C.__dict__
print(type(C.method))
print(type(C.__dict__['method']))
Explanation: Class Descriptors
https://twitter.com/jakevdp/status/1121873857973870592/photo/1
The instance method (aka plain function) remains unbound when retrieved from the class so C.method.__get__ just returns itself. By contrast, the class method gets bound to the class in that case, so it returns a new bound method object. In the first case both calls point to the same thing. In the second case, since it is a classmethod, every attribute access creates a new instance of the method. I think this kind of explains the whole idea behind classmethods.
In the first cell, C.method evaluates to C.__dict__["method"]; in the second cell it evaluates to C.__dict__["method"].__get__(C, None), which returns a new object each time.
In the first case, C.method points to the method() function.
In the second case, because the method() function is decorated with a @classmethod decorator, C.method returns an instance of bound method object. Every time you write C.method a new object is created.
To add to your confusion, id(Example.clsmethod) == id(Example.clsmethod), because method objects use a free list. After the first id() call, the object gets deallocated and the second call can reuse the same object.
End of explanation
class Foo:
    _num_instances = 0
    def __init__(self):
        self._num_instances += 1
        # self.__class__._num_instances += 1
f1 = Foo()
f2 = Foo()
print(Foo._num_instances)
Explanation: Class and Instance Attributes
https://twitter.com/jakevdp/status/1120898594519650304?s=09
Tricky Python bug I just hit due to an incorrect mental model of class & instance attributes
Solution is to use self.__class__._num_instances += 1.
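A minimal corrected version (the only change is the explicit self.__class__ reference, which increments the shared class attribute instead of silently creating a per-instance attribute):

class Foo:
    _num_instances = 0
    def __init__(self):
        self.__class__._num_instances += 1

f1 = Foo()
f2 = Foo()
print(Foo._num_instances)   # 2, as intended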
End of explanation
try:
    import pyradi.ryutils as ryutils
    print(ryutils.VersionInformation('matplotlib,numpy,pyradi,scipy,pandas'))
except ImportError:
    print("pyradi.ryutils not found")
Explanation: Python and module versions, and dates
End of explanation |
13,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AdaptiveMD
Example 2 - Running of Tasks
Step1: Let's open our test project by its name. If you completed the previous example this should all work out of the box.
Step2: Open all connections to the MongoDB and Session so we can get started.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
Step3: Now restore our old ways to generate tasks by loading the previously used generators.
Step4: Run simulations
Now we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage. RP takes care of most of this very elegantly, and hence a Task is designed to cover the same capabilities in a simpler and more pythonic way.
For example there is a RPC Python Call Task that allows you to execute a function remotely and pull back the results.
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists).
If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the function you return will return True when called.
Step5: Pass a generator of your function to the FunctionalEvent: use strategy() (the called function) and not strategy (the function object)
Step6: and execute the event inside your project
Step7: after some time you will have 10 more trajectories. Just like that.
Step8: Tasks
To actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks or, more precisely, you can use it to submit tasks which will be converted to ComputeUnitDescriptions and executed on the cluster previously chosen.
Step9: Now we are good to go and can run a first simulation
This works by creating a Trajectory object with a filename, a length and an initial frame. Then the engine will take this information and create a real trajectory with exactly this name, this initil frame and the given length.
Since this is such a common task you can also submit just a Trajectory without the need tp convert it to a Task first (which the engine can also do).
Out project can create new names automatically and so we want 4 new trajectories of length 100 and starting at the existing pdb_file we use to initialize the engine.
Step10: Let's submit and see
Step11: Once the trajectories exist these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually work with these trajectories like referencing even before they exist.
This would allow us to write a function that triggers when the trajectory comes into existence. But we are not doing this right now.
Wait is dangerous since it is blocking and you cannot do anything until all tasks are finished. Normally you do not need it. Especially in interactive sessions.
Step12: Look at all the files our project now contains.
Step13: Great! That was easy (I hope you agree).
Next we want to run a simple analysis.
Step14: Let's look at the model we generated
Step15: And pick some information
Step16: Next example will demonstrate on how to write a full adaptive loop
Events
A new concept. Tasks are great and do work for us. But so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with this, events exist. These are basically a task_generator coupled with conditions on when it should be executed.
Let's write a little task generator (in essence a function that returns tasks)
Step17: Now create an event.
Step18: .on specifies when something should be executed. In our case, when the project has 20 trajectories. This is not yet the case, so this event will not do anything unless we simulate more trajectories.
.do specifies the function to be called.
The concept is borrowed from event based languages like often used in JavaScript.
You can build quite complex execution patterns with this. An event for example also knows when it is finished and this can be used as another trigger.
Step19: All events and tasks run parallel or at least get submitted and queue for execution in parallel. RP takes care of the actual execution.
Step20: So for now lets run more trajectories and schedule computation of models in regular intervals.
Step21: .repeat means to redo the same task when the last is finished (it will just append an infinite list of conditions to keep on running).
.until specifies a termination condition. The event will not be executed once this condition is met. Makes most sense if you use .repeat or if the trigger condition and stopping should be independent. You might say, run 100 times unless you have a good enough model.
Step22: Strategies (aka the brain)
The brain is just a collection of events. This makes it reuseable and easy to extend. | Python Code:
import sys, os
# stop RP from printing logs until severe
# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')
os.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'
from adaptivemd import (
Project,
Event, FunctionalEvent
)
from adaptivemd.engine.openmm import OpenMMEngine
from adaptivemd.analysis.pyemma import PyEMMAAnalysis
Explanation: AdaptiveMD
Example 2 - Running of Tasks
End of explanation
project = Project('test')
Explanation: Let's open our test project by its name. If you completed the previous example this should all work out of the box.
End of explanation
print project.files
print project.generators
print project.models
Explanation: Open all connections to the MongoDB and Session so we can get started.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
def strategy():
    # create a new scheduler
    local_scheduler = project.get_scheduler(cores=2)
    # run 10 trajs of length 100 in parallel
    tasks = local_scheduler.submit(project.new_ml_trajectory(
        length=100, number=10))
    # continue (all tasks need to be done)
    yield tasks.is_done()
    # close scheduler when job is done
    local_scheduler.exit()
Explanation: Run simulations
Now we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage. RP takes care of most of this very elegantly, and hence a Task is designed to cover the same capabilities in a simpler and more pythonic way.
For example there is an RPC Python Call Task that allows you to execute a function remotely and pull back the results.
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists).
If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the function you return will return True when called.
End of explanation
ev = FunctionalEvent(strategy())
Explanation: Pass a generator of your function to the FunctionalEvent: use strategy() (the called function) and not strategy (the function object)
End of explanation
project.add_event(ev)
Explanation: and execute the event inside your project
End of explanation
print '# of files', len(project.trajectories)
Explanation: after some time you will have 10 more trajectories. Just like that.
End of explanation
scheduler = project.get_scheduler(cores=2) # get the default scheduler using 2 cores
Explanation: Tasks
To actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks or, more precisely, you can use it to submit tasks which will be converted to ComputeUnitDescriptions and executed on the cluster previously chosen.
End of explanation
trajs = project.new_trajectory(pdb_file, 100, 4)
Explanation: Now we are good to go and can run a first simulation
This works by creating a Trajectory object with a filename, a length and an initial frame. Then the engine will take this information and create a real trajectory with exactly this name, this initial frame and the given length.
Since this is such a common task you can also submit just a Trajectory without the need to convert it to a Task first (which the engine can also do).
Our project can create new names automatically and so we want 4 new trajectories of length 100 and starting at the existing pdb_file we use to initialize the engine.
End of explanation
scheduler.submit(trajs)
Explanation: Let's submit and see
End of explanation
scheduler.wait()
Explanation: Once the trajectories exist these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually work with these trajectories, for example reference them, even before they exist.
This would allow us to write a function that triggers when the trajectory comes into existence. But we are not doing this right now.
Wait is dangerous since it is blocking and you cannot do anything until all tasks are finished. Normally you do not need it. Especially in interactive sessions.
End of explanation
print '# of files', len(project.files)
Explanation: Look at all the files our project now contains.
End of explanation
t = modeller.execute(list(project.trajectories))
scheduler(t)
scheduler.wait()
Explanation: Great! That was easy (I hope you agree).
Next we want to run a simple analysis.
End of explanation
print project.models
Explanation: Let's look at the model we generated
End of explanation
print project.models.last.data['msm']['P']
Explanation: And pick some information
End of explanation
def task_generator():
    return [
        engine.task_run_trajectory(traj) for traj in
        project.new_ml_trajectory(100, 4)]
task_generator()
Explanation: Next example will demonstrate on how to write a full adaptive loop
Events
A new concept. Tasks are great and do work for us. But so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with this, events exist. These are basically a task_generator coupled with conditions on when it should be executed.
Let's write a little task generator (in essence a function that returns tasks)
End of explanation
ev = Event().on(project.on_ntraj(range(20,22,2))).do(task_generator)
Explanation: Now create an event.
End of explanation
def hello():
    print 'DONE!!!'
    return [] # todo: allow for None here
finished = Event().on(ev.on_done).do(hello)
scheduler.add_event(ev)
scheduler.add_event(finished)
Explanation: .on specifies when something should be executed. In our case, when the project has 20 trajectories. This is not yet the case, so this event will not do anything unless we simulate more trajectories.
.do specifies the function to be called.
The concept is borrowed from event based languages like often used in JavaScript.
You can build quite complex execution patterns with this. An event for example also knows when it is finished and this can be used as another trigger.
End of explanation
print '# of files', len(project.files)
Explanation: All events and tasks run in parallel, or at least get submitted and queued for execution in parallel. RP takes care of the actual execution.
End of explanation
ev1 = Event().on(project.on_ntraj(range(30, 70, 4))).do(task_generator)
ev2 = Event().on(project.on_ntraj(38)).do(lambda: modeller.execute(list(project.trajectories))).repeat().until(ev1.on_done)
scheduler.add_event(ev1)
scheduler.add_event(ev2)
len(project.trajectories)
len(project.models)
Explanation: So for now let's run more trajectories and schedule computation of models in regular intervals.
End of explanation
print project.files
Explanation: .repeat means to redo the same task when the last is finished (it will just append an infinite list of conditions to keep on running).
.until specifies a termination condition. The event will not be executed once this condition is met. Makes most sense if you use .repeat or if the trigger condition and stopping should be independent. You might say, run 100 times unless you have a good enough model.
End of explanation
project.close()
Explanation: Strategies (aka the brain)
The brain is just a collection of events. This makes it reusable and easy to extend.
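A minimal sketch of that idea, using only calls that already appear above (the brain function and its event list are made-up names for illustration; adaptivemd may provide a more formal construct for this):

def brain(scheduler):
    # a 'brain' is nothing more than a reusable collection of events
    events = [
        Event().on(project.on_ntraj(20)).do(task_generator),
        Event().on(project.on_ntraj(30)).do(
            lambda: modeller.execute(list(project.trajectories))),
    ]
    for ev in events:
        scheduler.add_event(ev)
    return events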
End of explanation |
13,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An introduction to Gaussian Processes
Step1: The Gaussian Distribution
In this notebook, we'll go over the very basics of Gaussian Processes (GPs) and how to construct and draw samples from them. But first, let's review some stuff about the Gausssian distribution $-$ the familiar bell curve $-$ itself.
We're probably all familiar with numpy's built-in np.random.randn() function, which draws a sample from the standard normal $\mathcal{N}(0, 1)$, i.e., a Gaussian with zero mean and unit variance
Step2: To draw from a Gaussian distribution with a different mean $\mu$, we simply add $\mu$ to $u$, and to draw from a distribution with different variance $\sigma^2$, we simply multiply $u$ by $\sigma$. We can therefore draw from a Gaussian with mean (say) $10.0$ and variance $5.0^2$ by running
Step3: As I mentioned in my lecture, in a single dimension, the probability density function for a Gaussian with mean $\mu$ and variance $\sigma^2$ is
$$
p(y | \mu, \sigma^2) =
\frac{1}{\sqrt{2\pi\sigma^2}}
\exp\left[
-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2
\right]
$$
In Exercise 1 below, we'll derive the expression for a Gaussian in $N$ dimensions. We'll do it for the case where there isn't any covariance across the $N$ dimensions, in which case the probability density is obtained by multiplying the expression above $N$ times.
<div style="background-color
Step5: Even though we derived the expression for the probability of a multidimensional Gaussian assuming no covariance between the different dimensions, the expression in Exercise 1 is general. In the most general case, the covariance matrix is allowed to have off-diagonal elements
Step7: <div style="background-color
Step8: Don't worry about the tprime keyword for now -- we'll use it later in Exercise 7.
<div style="background-color
Step10: These curves are samples from the (infinitely large) family of functions described by your GP!
<div style="background-color
Step12: Don't worry about sigma yet -- we'll discuss this below.
<div style="background-color
Step13: Since this is the probability of our data conditioned on our model, it is often referred to as a likelihood. Specifically, it is a marginal likelihood, since it's actually the likelihood of the data marginalized (integrated) over all the infinitely many curves in the family of functions defined by our GP.
<div style="background-color
Step14: Compute the log GP likelihood for this data, conditioned on different values of the lengthscale of the GP. Specifically, compute it for each of the following values
Step16: Our task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively. Initially, we are going to assume there is no correlated noise. Our model for the $n^\mathrm{th}$ datapoint is thus
$$
\begin{align}
y_n \sim \mathcal{N}(m t_n + b, \sigma_n\mathbf{I})
\end{align}
$$
and the probability of the data given the model can be computed by calling our GP log-likelihood function
Step17: Once the chain finishes running, we're going to plot several draws from it on top of the data. We'll also plot the true line that generated the dataset (given by the variables m_true and b_true).
Step18: Let's also plot the corner plot to see how well we inferred the slope and the intercept | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from matplotlib import rcParams
rcParams["figure.dpi"] = 100
rcParams["figure.figsize"] = 12, 4
Explanation: An introduction to Gaussian Processes
End of explanation
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
u = np.random.randn(100000)
plt.hist(u, bins=50, density=1);
plt.axvline(0, color="C1", ls="--")
y = np.exp(-0.5) / (np.sqrt(2 * np.pi))
plt.plot([-1, 1], [y, y], "C1-")
plt.text(0.1, 0.05, r"$\mu$", color="C1", fontsize=20);
plt.text(0.3, 0.26, r"$\sigma$", color="C1", fontsize=20);
plt.text(-0.5, 0.26, r"$\sigma$", color="C1", fontsize=20);
Explanation: The Gaussian Distribution
In this notebook, we'll go over the very basics of Gaussian Processes (GPs) and how to construct and draw samples from them. But first, let's review some stuff about the Gausssian distribution $-$ the familiar bell curve $-$ itself.
We're probably all familiar with numpy's built-in np.random.randn() function, which draws a sample from the standard normal $\mathcal{N}(0, 1)$, i.e., a Gaussian with zero mean and unit variance:
End of explanation
np.random.seed(1)
u = np.random.randn(100000)
plt.hist(10.0 + 5.0 * u, bins=50, density=1);
Explanation: To draw from a Gaussian distribution with a different mean $\mu$, we simply add $\mu$ to $u$, and to draw from a distribution with different variance $\sigma^2$, we simply multiply $u$ by $\sigma$. We can therefore draw from a Gaussian with mean (say) $10.0$ and variance $5.0^2$ by running
End of explanation
N = 5
y = np.array([1.0, 2.0, 0.5, 0.75, 1.1])
mu = np.array([0.7, 1.5, 0.8, 0.7, 1.3])
sigma = np.array([0.3, 0.5, 0.5, 0.4, 0.5])
Explanation: As I mentioned in my lecture, in a single dimension, the probability density function for a Gaussian with mean $\mu$ and variance $\sigma^2$ is
$$
p(y | \mu, \sigma^2) =
\frac{1}{\sqrt{2\pi\sigma^2}}
\exp\left[
-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2
\right]
$$
In Exercise 1 below, we'll derive the expression for a Gaussian in $N$ dimensions. We'll do it for the case where there isn't any covariance across the $N$ dimensions, in which case the probability density is obtained by multiplying the expression above $N$ times.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1</h1>
</div>
Show (numerically) that the probability function for an uncorrelated Gaussian in $N$ dimensions
$$
p(\mathbf{y} | \boldsymbol{\mu}, \mathbf{\Sigma}) =
\prod_{n=0}^{N-1}
\frac{1}{\sqrt{2\pi\sigma_n^2}}
\exp\left[
-\frac{1}{2}\left(\frac{y_n - \mu_n}{\sigma_n}\right)^2
\right]
$$
can be written in vector form as
$$
p(\mathbf{y} | \boldsymbol{\mu}, \mathbf{\Sigma}) =
\frac{1}{\sqrt{(2\pi)^N \mathrm{det}(\mathbf{\Sigma})}}
\exp\left[
-\frac{1}{2}
(\mathbf{y} - \boldsymbol{\mu})^\top
\mathbf{\Sigma}^{-1}
(\mathbf{y} - \boldsymbol{\mu})
\right]
$$
where
$$
\mathbf{\Sigma} =
\begin{pmatrix}
\sigma_0^2 & 0 & 0 \\
0 & \sigma_1^2 & 0 \\
0 & 0 & \ddots
\end{pmatrix}
$$
is a diagonal covariance matrix. You can use the following random inputs to test the equivalence of the two expressions:
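One possible numerical check, a sketch that uses the arrays defined in the code cell (scipy.stats.multivariate_normal could serve as a third cross-check):

# product of N independent 1-D Gaussians
p_prod = np.prod(np.exp(-0.5 * ((y - mu) / sigma) ** 2)
                 / np.sqrt(2 * np.pi * sigma ** 2))

# equivalent vector form with a diagonal covariance matrix
Sigma = np.diag(sigma ** 2)
r = y - mu
p_vec = np.exp(-0.5 * r @ np.linalg.solve(Sigma, r)) \
        / np.sqrt((2 * np.pi) ** N * np.linalg.det(Sigma))

print(p_prod, p_vec, np.allclose(p_prod, p_vec))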
End of explanation
def draw_from_gaussian(mu, S, ndraws=1):
    """Generate samples from a multivariate gaussian
    specified by covariance ``S`` and mean ``mu``."""
    return np.random.multivariate_normal(mu, S, (ndraws,))
Explanation: Even though we derived the expression for the probability of a multidimensional Gaussian assuming no covariance between the different dimensions, the expression in Exercise 1 is general. In the most general case, the covariance matrix is allowed to have off-diagonal elements:
$$
\mathbf{\Sigma} =
\begin{pmatrix}
\mathrm{Var(x_0)} & \mathrm{Cov(x_0, x_1)} & \cdots & \mathrm{Cov(x_0, x_N)} \\
\mathrm{Cov(x_1, x_0)} & \mathrm{Var(x_1)} & \cdots & \mathrm{Cov(x_1, x_N)} \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{Cov(x_N, x_0)} & \mathrm{Cov(x_N, x_1)} & \cdots & \mathrm{Var(x_N)}
\end{pmatrix}
$$
Here, $\mathrm{Var(x_i)}$ is the variance, or square of the standard deviation, along the $i^\mathrm{th}$ dimension, and $\mathrm{Cov(x_i, x_j)}$ is the covariance $-$ a measure of how two variables vary jointly $-$ between the $i^\mathrm{th}$ and $j^\mathrm{th}$ dimensions.
It's hard to picture a multivariate Gaussian, but it helps to imagine what the contours of equal probability (or equipotential surfaces) look like. For a two dimensional standard normal with no covariance ($\mathbf{\Sigma} = \mathbf{I}$), these are just circles. As I add to the off-diagonal terms of the matrix, I introduce a preferred direction in the space, since now the dimensions are correlated. The circle becomes an ellipse whose elongation increases with the covariance. In the general case of a multidimensional Gaussian, the contours form a multidimensional ellipsoid, elongated by different amounts along different dimensions.
So. How do we draw samples from a multivariate Gaussian given $\mathbf{\Sigma}$? We use np.random.multivariate_normal.
Here's a function draw_from_gaussian(mu, S, ndraws=1) that returns ndraws samples from a multivariate Gaussian with mean mu and covariance matrix S. The shape of the output is (ndraws, ndim) where ndim is the dimension of the Gaussian:
End of explanation
def ExpSquaredCovariance(t, A=1.0, l=1.0, tprime=None):
    """Return the ``N x M`` exponential squared
    covariance matrix.
    """
if tprime is None:
tprime = t
TPrime, T = np.meshgrid(tprime, t)
return A ** 2 * np.exp(-0.5 * (T - TPrime) ** 2 / l ** 2)
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1>
</div>
Use the function above to draw 10,000 samples from a zero-mean Gaussian with covariance
$$
\mathbf{\Sigma} =
\begin{pmatrix}
1 & 0.5 \\
0.5 & 1
\end{pmatrix}
$$
Plot the "corner" plot for your samples using the corner package (!pip install corner):
```python
from corner import corner
fig = corner(samples);
```
Vary the terms in the covariance matrix (recalling that it must be symmetric!) to get a sense of how they affect the joint distribution of samples.
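A possible sketch for this exercise (the names `mu2`, `S2` and `samples` are illustrative, and the `corner` package must be installed):

```python
# Hedged sketch for Exercise 2: sample a correlated 2-D Gaussian and
# inspect the joint distribution with a corner plot.
from corner import corner

mu2 = np.zeros(2)
S2 = np.array([[1.0, 0.5],
               [0.5, 1.0]])
samples = draw_from_gaussian(mu2, S2, ndraws=10000)
fig = corner(samples)
```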
Parametrizing the covariance
As the number of dimensions increases, and in particular as it becomes infinite (which it will, when we get to actual Gaussian Processes), it's no longer convenient to describe the covariance in terms of every single entry in the covariance matrix. Instead, it's useful to introduce the notion of a kernel, a function of just a few parameters that describes the overall structure of the covariance matrix.
In general, the covariance matrix can have any structure, but quite often in timeseries analysis data points close to each other in time are strongly correlated. This is true whether your timeseries contains photometric variability from a rotating star, the rise and fall of a supernova light curve, a gravitational wave signal, or PSF shape changes due to temperature fluctuations on the detector. This is because each of these processes has characteristic timescales over which it operates, and on timescales shorter than that, all measurements you make are likely to be relatively close to each other. But if you wait long enough, what your system will do down the line will be pretty decoupled from what it is doing right now.
So we can imagine defining a kernel that looks something like this:
<div><img src="kernel.png"></img></div>
where the covariance peaks at zero time lag and drops smoothly to zero as the time lag increases. This particular kernel is extremely useful and has a name: the Squared Exponential Kernel, defined as
$$
\begin{align}
k(t_i, t_j) = A^2 \exp\left(-\frac{\left(t_i - t_j\right)^2}{2l^2}\right)
\end{align}
$$
Here is a function ExpSquaredCovariance that returns the covariance matrix described by the squared exponential kernel for a timeseries t given an amplitude A and a timescale l:
End of explanation
t = np.linspace(0, 10, 1000)
mu = np.zeros_like(t)
S = ExpSquaredCovariance(t, A=1.0, l=1.0)
np.random.seed(1)
for i in range(30):
samples = draw_from_gaussian(mu, S)
plt.plot(t, samples.T, color="k", alpha=0.1)
mu_ = np.zeros_like(t)
plt.plot(t, mu_, color="C0")
plt.xlabel("time")
plt.ylabel("GP")
std_ = np.sqrt(S.diagonal())
plt.fill_between(t, mu_ - std_, mu_ + std_, color="C0", alpha=0.3);
plt.fill_between(t, mu_ - 2 * std_, mu_ + 2 * std_, color="C0", alpha=0.15);
Explanation: Don't worry about the tprime keyword for now -- we'll use it later in Exercise 7.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 3</h1>
</div>
For t = np.linspace(0, 2, 11), draw 10,000 samples from the Gaussian described by the covariance matrix above and plot the corresponding corner plot, labeling each subplot with the time point it represents. Comment on the structure of the plot and the correlations among the different dimensions.
From Gaussians to Gaussian Processes
So far we've been taking about Gaussians distributions and how they can jointly model a few random variables at a time. But what happens as the dimensionality of the Gaussian increases and eventually becomes infinite? The Gaussian no longer represents a collection of random variables, but instead the behaviour of a continuous function. And that's exactly what a Gaussian Process is: a distribution over functions with infinitely many points. We can't technically model a function with infinitely many points on a computer, but we can investigate what happens as the number of points becomes very large.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 4</h1>
</div>
Construct a 1000 x 1000 covariance matrix using the squared exponential kernel for a timeseries spanning 10 time units, and plot an image of it using plt.imshow. Comment on the structure of the matrix.
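A possible sketch for this exercise (the names `t_grid` and `K` are illustrative):

```python
# Hedged sketch for Exercise 4: a 1000 x 1000 Squared Exponential
# covariance matrix over 10 time units, displayed as an image.
t_grid = np.linspace(0, 10, 1000)
K = ExpSquaredCovariance(t_grid, A=1.0, l=1.0)
plt.imshow(K, origin="lower", extent=(0, 10, 0, 10))
plt.colorbar(label="covariance")
plt.xlabel("time")
plt.ylabel("time");
```

The banded structure reflects the fact that only points separated by less than a few times $l$ are significantly correlated.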
Below, we draw 30 samples from the zero-mean Gaussian described by the covariance matrix you plotted in the previous exercise. These are shown as the transluscent black curves. The mean of the process is the solid blue line, and the 1 and 2 sigma levels are indicated with the blue shading.
End of explanation
def compute_gp(t_train, y_train, t_test, sigma=0, **kwargs):
# Compute the required matrices
Stt = ExpSquaredCovariance(t_train, **kwargs)
Stt += sigma ** 2 * np.eye(Stt.shape[0])
Spp = ExpSquaredCovariance(t_test, **kwargs)
Spt = ExpSquaredCovariance(t_test, tprime=t_train, **kwargs)
# Compute the mean and covariance of the GP
mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))
return mu, S
Explanation: These curves are samples from the (infinitely large) family of functions described by your GP!
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 5</h1>
</div>
In the code above, vary the hyperparameters of the GP, $A$ and $l$, and comment on how they change the behavior of the GP.
Conditioning on data
If you made it this far, you have actually implemented your own Gaussian Process! But it's still pretty simple, because so far we are drawing only from our prior: we know the Gaussian has a certain covariance and a certain mean (zero, in the case above), but nothing else about how it's supposed to behave. To apply this to a real dataset, we need to condition the GP on observations.
Say I make two perfectly noise-free observations:
$$
\begin{align}
y(t = 2.5) &= 1.0 \\
y(t = 7.5) &= -1.0
\end{align}
$$
Again, $y$ could be anything: the flux received from a star, the strain due to a gravitational wave, etc. I want to draw new samples from my GP, but not just any samples: I explicitly want to draw functions that go through those two data points. One (very inefficient) way to do this is to draw a ton of samples and discard any samples that don't agree with the data. (In this particular case, that would be literally impossible, since I know $y(t)$ at those two points exactly: I will never draw at random a function whose value is exactly what I want it to be. But note that rejection sampling can actually be useful in other cases. Anyways.)
A better way to do this is to consider the joint distribution of the observed data $y(t)$ (usually called the "training set") and the points where I'm trying to predict the value of the function $y_\star(t_\star)$ (the "test set"):
$$
\begin{align}
\begin{pmatrix}
y \\
y_\star
\end{pmatrix}
&=
\mathcal{N}
\left[
\mathbf{0},
\begin{pmatrix}
\mathbf{\Sigma}(t, t) & \mathbf{\Sigma}(t, t_\star)\\
\mathbf{\Sigma}(t_\star, t) & \mathbf{\Sigma}(t_\star, t_\star)
\end{pmatrix}
\right]
\end{align}
$$
where $\mathbf{\Sigma}(t, t)$ is the $N_\mathrm{train} \times N_\mathrm{train}$ covariance matrix evaluated at all pairs of training points, $\mathbf{\Sigma}(t_\star, t_\star)$ is the $N_\mathrm{test} \times N_\mathrm{test}$ covariance matrix evaluated at all pairs of test points, and the remaining two entries are the (rectangular) covariance matrices evaluated at all pairs of (test, training) points.
Given this joint distribution, we can compute the conditional distribution of $y_\star$ given $y$ with some linear algebra (for a derivation, see Appendix A.2 of Rasmussen & Williams (2006) and references therein):
$$
\begin{align}
y_\star | y \sim \mathcal{N}\left(
\mathbf{\Sigma}(t_\star, t) \mathbf{\Sigma}(t, t)^{-1}y,
\mathbf{\Sigma}(t_\star, t_\star) -
\mathbf{\Sigma}(t_\star, t) \mathbf{\Sigma}(t, t)^{-1} \mathbf{\Sigma}(t, t_\star)
\right)
\end{align}
$$
In other words, given $y$, the distribution for $y_\star$ is still Gaussian, but this time with mean equal to
$$
\begin{align}
\mathbf{\Sigma}(t_\star, t) \mathbf{\Sigma}(t, t)^{-1}y
\end{align}
$$
and covariance
$$
\begin{align}
\mathbf{\Sigma}(t_\star, t_\star) - \mathbf{\Sigma}(t_\star, t) \mathbf{\Sigma}(t, t)^{-1} \mathbf{\Sigma}(t, t_\star)
\end{align}
$$
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 6</h1>
</div>
Convince yourself that when $t_\star = t$, the mean and covariance of the GP for $y_\star$ give you what you'd expect. When $t_\star \neq t$, what can you say about the variance of the GP after versus before data was collected?
Below, we define a function compute_gp that returns the mean and covariance of the Gaussian process at the test points (y_test(t_test)) conditioned on the values at the training points (y_train(t_train)). The **kwargs are for additional optional keyword arguments passed directly to the kernel (in this simple case, A and l).
End of explanation
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """Return the log of the GP likelihood for a dataset y(t)
    with uncertainties sigma, modeled with a Squared Exponential
    Kernel with amplitude A and lengthscale l.
    """
# The covariance and its determinant
npts = len(t)
K = ExpSquaredCovariance(t, A=A, l=l) + sigma ** 2 * np.eye(npts)
# The log marginal likelihood
log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
log_like -= 0.5 * np.linalg.slogdet(K)[1]
log_like -= 0.5 * npts * np.log(2 * np.pi)
return log_like
Explanation: Don't worry about sigma yet -- we'll discuss this below.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 7</h1>
</div>
Given
```python
t_train = np.array([2.5, 7.5])
y_train = np.array([1.0, -1.0])
t_test = np.linspace(0, 10, 1000)
```
compute the mean and covariance of the GP. Draw 30 samples from it as we did above. Plot the mean function and shade the 1- and 2-$\sigma$ levels. Comment on the behavior of the GP. Does the GP correctly predict the training points? How does the variance behave close to and far from the training points? How does the GP behave very far from the training points?
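A possible starting point for this exercise, assuming the arrays defined in the snippet above:

```python
# Hedged sketch for Exercise 7: condition on the two training points
# and draw a few samples from the resulting (conditioned) GP.
mu_pred, S_pred = compute_gp(t_train, y_train, t_test, A=1.0, l=1.0)

# (np.random.multivariate_normal may warn that S_pred is not exactly
#  positive semi-definite due to round-off; that is expected here.)
for sample in draw_from_gaussian(mu_pred, S_pred, ndraws=5):
    plt.plot(t_test, sample, color="k", alpha=0.3)
plt.plot(t_train, y_train, "C1o")
plt.xlabel("time")
plt.ylabel("GP");
```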
The last thing we'll do regarding drawing from a GP is to account for noise in the observations. If the measurement error on the data points is Gaussian and uncorrelated (white), as we usually assume it to be, we simply add a term to the joint distribution of the training and test data:
$$
\begin{align}
\begin{pmatrix}
y \\
y_\star
\end{pmatrix}
&=
\mathcal{N}
\left[
\mathbf{0},
\begin{pmatrix}
\mathbf{\Sigma}(t, t) + \sigma_n^2 \mathbf{I} & \mathbf{\Sigma}(t, t_\star)\\
\mathbf{\Sigma}(t_\star, t) & \mathbf{\Sigma}(t_\star, t_\star)
\end{pmatrix}
\right]
\end{align}
$$
where $\sigma_n$ is the standard deviation for the $n^\mathrm{th}$ data point in the training set and $\mathbf{I}$ is the identity matrix. If you look at our function compute_gp above, you'll see that it accepts a sigma keyword, and that it adds its square to the $\mathbf{\Sigma}(t, t)$ term in the covariance, as shown above.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 8</h1>
</div>
Re-plot the figure from Exercise 7, this time given an observational uncertainty $\sigma = 0.25$ at both training points. Comment on the new behavior of the GP.
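A possible one-line change for this exercise, building on the conditioning sketch above (only the sigma keyword differs):

```python
# Hedged sketch for Exercise 8: same conditioning, but with an
# observational uncertainty of 0.25 on both training points.
mu_noisy, S_noisy = compute_gp(t_train, y_train, t_test, sigma=0.25, A=1.0, l=1.0)
```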
GP optimization
In the previous section, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to regression and inference: given a dataset $D$ and a model $M(\theta)$, what are the values of the model parameters $\theta$ that are consistent with $D$?
A very common use of GPs is to model things you don't have an explicit physical model for, so quite often they are used to model "nuisances" in the dataset. But just because you don't care about these nuisances doesn't mean they don't affect your inference: in fact, unmodelled correlated noise can often lead to strong biases in the parameter values you infer. In this notebook, we'll learn how to compute likelihoods of Gaussian Processes so that we can marginalize over the nuisance parameters (given suitable priors) and obtain unbiased estimates for the physical parameters we care about.
Let's return to the definition of the probability density function for a multivariate Gaussian:
$$
p(\mathbf{y} | \boldsymbol{\mu}, \mathbf{\Sigma}) =
\frac{1}{\sqrt{(2\pi)^N \mathrm{det}(\mathbf{\Sigma})}}
\exp\left[
-\frac{1}{2}
(\mathbf{y} - \boldsymbol{\mu})^\top
\mathbf{\Sigma}^{-1}
(\mathbf{y} - \boldsymbol{\mu})
\right]
$$
It's usually easier to work in log-probability space since probabilities can typically be very small. Let's therefore take the log of this:
$$
\begin{align}
\ln p(\mathbf{y} | \boldsymbol{\mu}, \mathbf{\Sigma}) = -\frac{1}{2}(\mathbf{y} - \boldsymbol{\mu})^\top \mathbf{\Sigma}^{-1} (\mathbf{y} - \boldsymbol{\mu}) - \frac{1}{2}\ln \mathrm{det}\,\mathbf{\Sigma} - \frac{N}{2} \ln 2\pi
\end{align}
$$
We can now define a function to compute this log probability given our Squared Exponential covariance:
End of explanation
import matplotlib.pyplot as plt
t, y, sigma = np.loadtxt("data/sample_data.txt", unpack=True)
plt.plot(t, y, "k.", alpha=0.5, ms=3)
plt.xlabel("time")
plt.ylabel("data");
Explanation: Since this is the probability of our data conditioned on our model, it is often referred to as a likelihood. Specifically, it is a marginal likelihood, since it's actually the likelihood of the data marginalized (integrated) over all the infinitely many curves in the family of functions defined by our GP.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 9</h1>
</div>
Consider the following dataset:
End of explanation
t, y, sigma = np.loadtxt("data/sample_data_line.txt", unpack=True)
m_true, b_true, A_true, l_true = np.loadtxt("data/sample_data_line_truths.txt", unpack=True)
plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed")
plt.plot(t, m_true * t + b_true, color="C0", label="truth")
plt.legend(fontsize=12)
plt.xlabel("time")
plt.ylabel("data");
Explanation: Compute the log GP likelihood for this data, conditioned on different values of the lengthscale of the GP. Specifically, compute it for each of the following values:
```python
l = np.linspace(0.2, 0.4, 300)
```
Assume you know the amplitude A to be unity (the default), and use an observational uncertainty sigma of 0.05. Then plot the likelihood as a function of $l$. To compute the likelihood from the log likelihood, do
```python
like = np.exp(lnlike - np.max(lnlike))
```
This will normalize things so that the maximum likelihood is unity. (If you don't do this, you might run into floating point underflow errors, since the numbers we're dealing with are extremely small!)
Comment on your results. The true timescale of the dataset is $l = 0.3$. Were you able to correctly infer that?
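A possible sketch for this exercise (the dataset is reloaded into `t9`, `y9` so it doesn't clash with the line dataset defined just above; the names are illustrative):

```python
# Hedged sketch for Exercise 9: scan the GP log likelihood over the
# kernel timescale l for the data in data/sample_data.txt.
t9, y9, _ = np.loadtxt("data/sample_data.txt", unpack=True)
l_grid = np.linspace(0.2, 0.4, 300)
lnlike = np.array([ln_gp_likelihood(t9, y9, sigma=0.05, A=1.0, l=l) for l in l_grid])
like = np.exp(lnlike - np.max(lnlike))

plt.plot(l_grid, like)
plt.axvline(0.3, color="C1", ls="--", label="true l")
plt.xlabel("timescale l")
plt.ylabel("relative likelihood")
plt.legend();
```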
You just performed your first GP inference problem! It was quite simple, since we assumed the data could be solely modeled with a GP. In practice, we usually have a base model (usually something that depends on physics, like a transit model) whose parameters we want to learn about; the GP is included as an additional model to capture the nuisance, correlated noise that's standing in the way of our measurement. We'll consider a problem like that in the next section.
Inference with a GP
The timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\sigma$ (white noise), there is a fair bit of correlated (red) noise, which we will assume is well described
by the squared exponential covariance with a certain (unknown) amplitude $A$ and timescale $l$.
End of explanation
def lnprob(p):
m, b = p
if (m < 0) or (m > 10):
return -np.inf
elif (b < 0) or (b > 30):
return -np.inf
model = m * t + b
lnlike = ln_gp_likelihood(t, y - model, sigma, A=0, l=1)
return lnlike
import emcee
print("Using emcee version {0}".format(emcee.__version__))
initial = [4.0, 15.0]
ndim = len(initial)
nwalkers = 32
p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
print("Running burn-in...")
p0, _, _ = sampler.run_mcmc(p0, 500, progress=True)
sampler.reset()
print("Running production...")
sampler.run_mcmc(p0, 1000, progress=True);
Explanation: Our task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively. Initially, we are going to assume there is no correlated noise. Our model for the $n^\mathrm{th}$ datapoint is thus
$$
\begin{align}
y_n \sim \mathcal{N}(m t_n + b, \sigma_n\mathbf{I})
\end{align}
$$
and the probability of the data given the model can be computed by calling our GP log-likelihood function:
python
def lnprob(params):
m, b = params
model = m * t + b
return ln_gp_likelihood(t, y - model, sigma, A=0, l=1)
Note, importantly, that we are passing the residual vector, $y - (mt + b)$, to the GP, since above we coded up a zero-mean Gaussian process. We are therefore using the GP to model the residuals of the data after applying our physical model (the equation of the line).
To estimate the values of $m$ and $b$ we could generate a fine grid in those two parameters and compute the likelihood at every point, as we did above. But since we'll soon be fitting for four parameters (in the next part), we might as well upgrade our inference scheme and use the emcee package to do Markov Chain Monte Carlo (MCMC). If you haven't used emcee before, check out the first few tutorials on the documentation page. The basic setup for the problem is this:
```python
import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
print("Running burn-in...")
p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do
sampler.reset()
print("Running production...")
sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do
```
where nwalkers is the number of walkers (something like 20 or 30 is fine), ndim is the number of dimensions (2 in this case), and lnprob is the log-probability function for the data given the model. Finally, p0 is a list of starting positions for each of the walkers. We're going to pick eyeballed values for $m$ and $b$, then add a small random number to each to generate different initial positions for each walker. This will initialize all walkers in a ball centered on some point, and as the chain progresses they'll diffuse out and begin to explore the posterior.
End of explanation
# Plot the data
plt.errorbar(t, y, yerr=sigma, fmt=".k", capsize=0)
# The positions where the prediction should be computed
x = np.linspace(0, 10, 500)
# Plot 24 posterior samples
samples = sampler.flatchain
for s in samples[np.random.randint(len(samples), size=24)]:
m, b = s
model = m * x + b
plt.plot(x, model, color="#4682b4", alpha=0.3)
# Plot the truth
plt.plot(x, m_true * x + b_true, "C1", label="truth")
plt.ylabel("data")
plt.xlabel("time")
plt.title("fit assuming uncorrelated noise");
Explanation: Once the chain finishes running, we're going to plot several draws from it on top of the data. We'll also plot the true line that generated the dataset (given by the variables m_true and b_true).
End of explanation
import corner
labels = ["slope", "intercept"]
truths = [m_true, b_true]
corner.corner(sampler.flatchain, truths=truths, labels=labels);
Explanation: Let's also plot the corner plot to see how well we inferred the slope and the intercept:
End of explanation |
13,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantum SVM (variational method)
The QSVM_Kernel notebook here demonstrates a kernel based approach. This notebook shows a variational method.
For further information please see
Step1: First we prepare the dataset, which is used for training, testing and, finally, prediction.
Note
Step2: With the dataset ready we initialize the necessary inputs for the algorithm
Step3: With everything setup, we can now run the algorithm.
For the testing, the result includes the details and the success ratio.
For the prediction, the result includes the predicted labels. | Python Code:
from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import run_algorithm
Explanation: Quantum SVM (variational method)
The QSVM_Kernel notebook here demonstrates a kernel based approach. This notebook shows a variational method.
For further information please see: https://arxiv.org/pdf/1804.11326.pdf
This notebook shows the SVM implementation based on the variational method.
End of explanation
n = 2 # dimension of each data point
training_dataset_size = 20
testing_dataset_size = 10
sample_Total, training_input, test_input, class_labels = ad_hoc_data(training_size=training_dataset_size,
test_size=testing_dataset_size,
n=n, gap=0.3, PLOT_DATA=True)
datapoints, class_to_label = split_dataset_to_data_and_labels(test_input)
print(class_to_label)
Explanation: First we prepare the dataset, which is used for training, testing and, finally, prediction.
Note: You can easily switch to a different dataset, such as the Breast Cancer dataset, by replacing 'ad_hoc_data' with 'Breast_cancer' below.
End of explanation
params = {
'problem': {'name': 'svm_classification', 'random_seed': 10598},
'algorithm': {'name': 'QSVM.Variational', 'override_SPSA_params': True},
'backend': {'name': 'qasm_simulator', 'shots': 1024},
'optimizer': {'name': 'SPSA', 'max_trials': 200, 'save_steps': 1},
'variational_form': {'name': 'RYRZ', 'depth': 3},
'feature_map': {'name': 'SecondOrderExpansion', 'depth': 2}
}
algo_input = get_input_instance('SVMInput')
algo_input.training_dataset = training_input
algo_input.test_dataset = test_input
algo_input.datapoints = datapoints[0]
Explanation: With the dataset ready we initialize the necessary inputs for the algorithm:
- the input dictionary (params)
- the input object containing the dataset info (algo_input).
End of explanation
result = run_algorithm(params, algo_input)
print("testing success ratio: ", result['testing_accuracy'])
print("predicted classes:", result['predicted_classes'])
Explanation: With everything setup, we can now run the algorithm.
For the testing, the result includes the details and the success ratio.
For the prediction, the result includes the predicted labels.
End of explanation |
13,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accessing and Plotting Meshes
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: The 'protomesh'
The 'protomesh' is the mesh of each star in its own reference frame at periastron. The coordinates are defined such that the x-axis points towards the other component in the parent orbit.
To build the protomesh, set 'protomesh' to be True, either in the compute options or directly as a keyword argument when calling run_compute.
Step3: You'll see that the resulting model has a single dataset kind ('mesh') and with a dataset tag of 'protomesh'.
Step4: Now let's look at the parameters in the protomesh
Step5: The 'pbmesh'
'pbmesh' is an automatically-created dataset in the returned model which stores the mesh at every time-point at which it was required to be built by other existing datasets.
Again, these will only be stored in the returned model if pbmesh=True is passed during run_compute or is True in the passed compute options.
Step6: Our model now has dataset kinds for both the 'mesh' and 'lc' and has dataset tags for our newly-created 'lc01' dataset as well as the 'pbmesh' datasets in the model created only because pbmesh=True.
Step7: This time let's look at the parameters in the 'pbmesh' dataset of the model.
Step8: As you can see, the intensities are not available here - their dataset tags match the dataset of the light curve. Instead let's access the mesh by dataset-kind
Step9: To plot the intensities as color on the mesh, we can just plot the mesh and then reference the correct column by using twig access
Step10: The 'Mesh' Dataset Kind
If you want to force the plot itself to build at specific times but not have any observables (necessarily) computed or filled at those times, you can create a mesh dataset.
Let's create a mesh dataset that fills in the missing times from our lc dataset.
Step11: Now let's run_compute with protomesh and pbmesh set to False (these will default to the values in the compute options - which themselves are defaulted to False).
Step12: As expected, the resulting model has dataset kinds for both mesh and lc, as well as datasets for 'mesh01' and 'lc01' - but the 'pbmesh' and 'protomesh' entries are no longer created (since protomesh and pbmesh are both False).
Step13: The meshes are only stored at the times of the mesh dataset - not at the times of the lc dataset.
Step14: Since there was no lc requested at these times, the 'intensities' columns will be empty.
Step15: But we can still plot any of the dataset-independent quantities
Step16: If you want the meshes stored at both the times in the 'mesh' dataset and all other datasets, simply set pbmesh to True. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Accessing and Plotting Meshes
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.run_compute(protomesh=True)
Explanation: The 'protomesh'
The 'protomesh' is the mesh of each star in its own reference frame at periastron. The coordinates are defined such that the x-axis points towards the other component in the parent orbit.
To build the protomesh, set 'protomesh' to be True, either in the compute options or directly as a keyword argument when calling run_compute.
End of explanation
print b['model'].kinds
print b['model'].datasets
Explanation: You'll see that the resulting model has a single dataset kind ('mesh') and with a dataset tag of 'protomesh'.
End of explanation
b.filter(dataset='protomesh', context='model')
b.filter(dataset='protomesh', context='model', component='primary')
b.get_value(dataset='protomesh', context='model', component='primary', qualifier='teffs')
axs, artists = b.filter(dataset='protomesh', context='model', component='secondary').plot(facecolor='teffs', edgecolor=None)
Explanation: Now let's look at the parameters in the protomesh
End of explanation
b.add_dataset('lc', times=[0,1,2], dataset='lc01')
b.run_compute(pbmesh=True)
Explanation: The 'pbmesh'
'pbmesh' is an automatically-created dataset in the returned model which stores the mesh at every time-point at which it was required to be built by other existing datasets.
Again, these will only be stored in the returned model if pbmesh=True is passed during run_compute or is True in the passed compute options.
End of explanation
print b['model'].kinds
print b['model'].datasets
Explanation: Our model now has dataset kinds for both the 'mesh' and 'lc' and has dataset tags for our newly-created 'lc01' dataset as well as the 'pbmesh' datasets in the model created only because pbmesh=True.
End of explanation
b.filter(dataset='pbmesh', context='model')
b.filter(dataset='pbmesh', context='model', component='primary')
Explanation: This time let's look at the parameters in the 'pbmesh' dataset of the model.
End of explanation
b.filter(kind='mesh', context='model', component='primary')
b.filter(dataset='lc01', kind='mesh', context='model', component='primary')
Explanation: As you can see, the intensities are not available here - their dataset tags match the dataset of the light curve. Instead let's access the mesh by dataset-kind:
End of explanation
axs, artists = b.filter(kind='mesh', context='model', time=1.0).plot(facecolor='intensities@lc01', edgecolor='teffs')
Explanation: To plot the intensities as color on the mesh, we can just plot the mesh and then reference the correct column by using twig access:
End of explanation
b.get_value('times@lc01@dataset')
b.add_dataset('mesh', times=[0.5, 1.5], dataset='mesh01')
Explanation: The 'Mesh' Dataset Kind
If you want to force the plot itself to build at specific times but not have any observables (necessarily) computed or filled at those times, you can create a mesh dataset.
Let's create a mesh dataset that fills in the missing times from our lc dataset.
End of explanation
b.run_compute(protomesh=False, pbmesh=False)
Explanation: Now let's run_compute with protomesh and pbmesh set to False (these will default to the values in the compute options - which themselves are defaulted to False).
End of explanation
print b['model'].kinds
print b['model'].datasets
Explanation: As expected, the resulting model has dataset kinds for both mesh and lc, as well as datasets for 'mesh01' and 'lc01' - but the 'pbmesh' and 'protomesh' entries are no longer created (since protomesh and pbmesh are both False).
End of explanation
b.filter(kind='mesh', context='model').times
Explanation: The meshes are only stored at the times of the mesh dataset - not at the times of the lc dataset.
End of explanation
b.get_value(kind='mesh', context='model', dataset='lc01', time=0.5, qualifier='intensities', component='primary')
Explanation: Since there was no lc requested at these times, the 'intensities' columns will be empty.
End of explanation
b.filter(dataset='mesh01', kind='mesh', context='model', component='primary', time=0.5)
axs, artists = b.filter(dataset='mesh01', kind='mesh', context='model', time=0.5).plot(facecolor='teffs', edgecolor=None)
Explanation: But we can still plot any of the dataset-independent quantities
End of explanation
b.run_compute(pbmesh=True)
b.filter(kind='mesh', context='model').times
Explanation: If you want the meshes stored at both the times in the 'mesh' dataset and all other datasets, simply set pbmesh to True.
End of explanation |
13,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neal's funnel
This notebook introduces a toy distribution introduced by Radford Neal and is the $d+1$ dimensional,
$p(\boldsymbol{x},\nu) = \left[\prod_{i=1}^{d} \mathcal{N}(x_i|0,e^{\nu / 2})\right] \mathcal{N}(\nu|0,3),$
which has shown to cause problems for samplers owing to its "funnel" shaped geometry in the marginals $(x_i,\nu)$,
$p(x_i,\nu) = \mathcal{N}(x_i|0,e^{\nu / 2})\mathcal{N}(\nu|0,3),$
which we now plot.
Step1: We can also sample independently from this toy LogPDF, and add that to the visualisation
Step2: We now try to sample from the distribution with MCMC
Step3: The adaptive covariance fails to get into the funnel region.
Step4: Now check how close the result is to the expected result, using the Kullback-Leibler divergence, and compare this to the result from sampling directly.
Step5: Hamiltonian Monte Carlo fares much better on this curved density.
Step6: Hamiltonian Monte Carlo does better than adaptive but still not great.
Step7: Visualising the path of one of the chains the sampler struggles to explore both the neck and the outside region efficiently. | Python Code:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf
log_pdf = pints.toy.NealsFunnelLogPDF()
# Plot marginal density
levels = np.linspace(-7, -1, 20)
x = np.linspace(-10, 10, 100)
y = np.linspace(-10, 10, 100)
X, Y = np.meshgrid(x, y)
Z = [[log_pdf.marginal_log_pdf(i, j) for i in x] for j in y]
plt.contour(X, Y, Z, levels = levels)
plt.xlabel('x_i')
plt.ylabel('nu')
plt.show()
Explanation: Neal's funnel
This notebook introduces a toy distribution introduced by Radford Neal and is the $d+1$ dimensional,
$p(\boldsymbol{x},\nu) = \left[\prod_{i=1}^{d} \mathcal{N}(x_i|0,e^{\nu / 2})\right] \mathcal{N}(\nu|0,3),$
which has shown to cause problems for samplers owing to its "funnel" shaped geometry in the marginals $(x_i,\nu)$,
$p(x_i,\nu) = \mathcal{N}(x_i|0,e^{\nu / 2})\mathcal{N}(\nu|0,3),$
which we now plot.
End of explanation
direct = log_pdf.sample(1500)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(direct[:, 0], direct[:, 9], alpha=0.2)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
Explanation: We can also sample independently from this toy LogPDF, and add that to the visualisation:
End of explanation
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform(-25, 25, size=(3, 10))
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HaarioBardenetACMC)
# Stop after 3000 iterations
mcmc.set_max_iterations(3000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
Explanation: We now try to sample from the distribution with MCMC:
End of explanation
stacked = np.vstack(chains)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(stacked[:, 0], stacked[:, 9], alpha=0.2)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
Explanation: The adaptive covariance fails to get into the funnel region.
End of explanation
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
Explanation: Now check how close the result is to the expected result, using the Kullback-Leibler divergence, and compare this to the result from sampling directly.
End of explanation
# Create a Hamiltonian Monte Carlo routine
x0 = np.random.uniform(0, 10, size=(3, 10))
sigma0 = np.repeat(0.25, 10)
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Stop after 500 iterations
mcmc.set_max_iterations(500)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
stacked = np.vstack(chains)
Explanation: Hamiltonian Monte Carlo fares much better on this curved density.
End of explanation
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
Explanation: Hamiltonian Monte Carlo does better than adaptive but still not great.
End of explanation
divergent_transitions = mcmc.samplers()[0].divergent_iterations()
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.plot(chains[2][:, 1], chains[2][:, 9], alpha=0.5)
plt.scatter(chains[0][divergent_transitions, 0], chains[0][divergent_transitions, 1], color='red')
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
Explanation: Visualising the path of one of the chains the sampler struggles to explore both the neck and the outside region efficiently.
End of explanation |
13,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Fold Quick Start
TensorFlow Fold is a library for turning complicated Python data structures into TensorFlow Tensors.
Step1: The basic elements of Fold are blocks. We'll start with some blocks that work on simple data types.
Step2: Blocks are functions with associated input and output types.
Step3: We can use eval() to see what a block does with its input
Step4: Not very exciting. We can compose simple blocks together with Record, like so
Step5: We can see that Fold's type system is a bit richer than vanilla TF; we have tuple types! Running a record block does what you'd expect
Step6: One useful thing you can do with blocks is wire them up to create pipelines using the >> operator, which performs function composition. For example, we can take our two tuple tensors and compose it with Concat, like so
Step7: Note that because Python dicts are unordered, Fold always sorts the outputs of a record block by dictionary key. If you want to preserve order you can construct a Record block from an OrderedDict.
The whole point of Fold is to get your data into TensorFlow; the Function block lets you convert a TITO (Tensors In, Tensors Out) function to a block
Step8: This is all very cute, but where's the beef? Things start to get interesting when our inputs contain sequences of indeterminate length. The Map block comes in handy here
Step9: There's no TF type for sequences of indeterminate length, but Fold has one
Step10: Right, but you've done the TF RNN Tutorial and even poked at seq-to-seq. You're a wizard with dynamic rnns. What does Fold offer?
Well, how about jagged arrays?
Step11: The Fold type system is fully compositional; any block you can create can be composed with Map to create a sequence, or Record to create a tuple, or both to create sequences of tuples or tuples of sequences
Step12: Most of the time, you'll eventually want to get one or more tensors out of your sequence, for wiring up to your particular learning task. Fold has a bunch of built-in reduction functions for this that do more or less what you'd expect
Step13: The general form of such functions is Reduce
Step14: If the order of operations is important, you should use Fold instead of Reduce (but if you can use Reduce you should, because it will be faster)
Step15: Now, let's do some learning! This is the part where "magic" happens; if you want a deeper understanding of what's happening here you might want to jump right to our more formal blocks tutorial or learn more about running blocks in TensorFlow
Step16: The reduce_net_block function creates a block (net_block) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor.
One thing to notice here is that we are calling tf.squeeze with axis=1, even though the Fold output type of td.FC(1, activation=None) (and hence the input type of the enclosing Function block) is a TensorType with shape (1). This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via dynamic batching. It is important to bear this in mind when creating Function blocks that wrap functions that are not applied elementwise.
Step17: The random_example function generates training data consisting of (example, fn(example)) pairs, where example is a random list of numbers, e.g.
Step18: Now we're going to train a neural network to approximate a reduction function of our choosing. Calling eval() repeatedly is super-slow and cannot exploit batch-wise parallelism, so we create a Compiler. See our page on running blocks in TensorFlow for more on Compilers and how to use them effectively.
Step19: Breaking news
Step20: Oh noes! What went wrong? Note that we trained our network to compute min on positive numbers; negative numbers are outside of its input distribution. | Python Code:
# boilerplate
import random
import tensorflow as tf
sess = tf.InteractiveSession()
import tensorflow_fold as td
Explanation: TensorFlow Fold Quick Start
TensorFlow Fold is a library for turning complicated Python data structures into TensorFlow Tensors.
End of explanation
scalar_block = td.Scalar()
vector3_block = td.Vector(3)
Explanation: The basic elements of Fold are blocks. We'll start with some blocks that work on simple data types.
End of explanation
def block_info(block):
print("%s: %s -> %s" % (block, block.input_type, block.output_type))
block_info(scalar_block)
block_info(vector3_block)
Explanation: Blocks are functions with associated input and output types.
End of explanation
scalar_block.eval(42)
vector3_block.eval([1,2,3])
Explanation: We can use eval() to see what a block does with its input:
End of explanation
record_block = td.Record({'foo': scalar_block, 'bar': vector3_block})
block_info(record_block)
Explanation: Not very exciting. We can compose simple blocks together with Record, like so:
End of explanation
record_block.eval({'foo': 1, 'bar': [5, 7, 9]})
Explanation: We can see that Fold's type system is a bit richer than vanilla TF; we have tuple types! Running a record block does what you'd expect:
End of explanation
record2vec_block = record_block >> td.Concat()
record2vec_block.eval({'foo': 1, 'bar': [5, 7, 9]})
Explanation: One useful thing you can do with blocks is wire them up to create pipelines using the >> operator, which performs function composition. For example, we can take our two tuple tensors and compose it with Concat, like so:
End of explanation
negative_block = record2vec_block >> td.Function(tf.negative)
negative_block.eval({'foo': 1, 'bar': [5, 7, 9]})
Explanation: Note that because Python dicts are unordered, Fold always sorts the outputs of a record block by dictionary key. If you want to preserve order you can construct a Record block from an OrderedDict.
The whole point of Fold is to get your data into TensorFlow; the Function block lets you convert a TITO (Tensors In, Tensors Out) function to a block:
End of explanation
map_scalars_block = td.Map(td.Scalar())
Explanation: This is all very cute, but where's the beef? Things start to get interesting when our inputs contain sequences of indeterminate length. The Map block comes in handy here:
End of explanation
block_info(map_scalars_block)
Explanation: There's no TF type for sequences of indeterminate length, but Fold has one:
End of explanation
jagged_block = td.Map(td.Map(td.Scalar()))
block_info(jagged_block)
Explanation: Right, but you've done the TF RNN Tutorial and even poked at seq-to-seq. You're a wizard with dynamic rnns. What does Fold offer?
Well, how about jagged arrays?
End of explanation
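For instance, a quick usage sketch (the exact return formatting may differ slightly between versions):

```python
# Evaluate the jagged block on ragged input: a list of lists of
# different lengths, which has no natural dense-tensor representation.
jagged_block.eval([[1, 2], [3], [4, 5, 6]])
```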
seq_of_tuples_block = td.Map(td.Record({'foo': td.Scalar(), 'bar': td.Scalar()}))
seq_of_tuples_block.eval([{'foo': 1, 'bar': 2}, {'foo': 3, 'bar': 4}])
tuple_of_seqs_block = td.Record({'foo': td.Map(td.Scalar()), 'bar': td.Map(td.Scalar())})
tuple_of_seqs_block.eval({'foo': range(3), 'bar': range(7)})
Explanation: The Fold type system is fully compositional; any block you can create can be composed with Map to create a sequence, or Record to create a tuple, or both to create sequences of tuples or tuples of sequences:
End of explanation
((td.Map(td.Scalar()) >> td.Sum()).eval(range(10)),
(td.Map(td.Scalar()) >> td.Min()).eval(range(10)),
(td.Map(td.Scalar()) >> td.Max()).eval(range(10)))
Explanation: Most of the time, you'll eventually want to get one or more tensors out of your sequence, for wiring up to your particular learning task. Fold has a bunch of built-in reduction functions for this that do more or less what you'd expect:
End of explanation
(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.multiply))).eval(range(1,10))
Explanation: The general form of such functions is Reduce:
End of explanation
((td.Map(td.Scalar()) >> td.Fold(td.Function(tf.divide), tf.ones([]))).eval(range(1,5)),
(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.divide), tf.ones([]))).eval(range(1,5))) # bad, not associative!
Explanation: If the order of operations is important, you should use Fold instead of Reduce (but if you can use Reduce you should, because it will be faster):
End of explanation
def reduce_net_block():
net_block = td.Concat() >> td.FC(20) >> td.FC(1, activation=None) >> td.Function(lambda xs: tf.squeeze(xs, axis=1))
return td.Map(td.Scalar()) >> td.Reduce(net_block)
Explanation: Now, let's do some learning! This is the part where "magic" happens; if you want a deeper understanding of what's happening here you might want to jump right to our more formal blocks tutorial or learn more about running blocks in TensorFlow
End of explanation
def random_example(fn):
length = random.randrange(1, 10)
data = [random.uniform(0,1) for _ in range(length)]
result = fn(data)
return data, result
Explanation: The reduce_net_block function creates a block (net_block) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor.
One thing to notice here is that we are calling tf.squeeze with axis=1, even though the Fold output type of td.FC(1, activation=None) (and hence the input type of the enclosing Function block) is a TensorType with shape (1). This is because all Fold blocks actually run on TF tensors with an implicit leading batch dimension, which enables execution via dynamic batching. It is important to bear this in mind when creating Function blocks that wrap functions that are not applied elementwise.
End of explanation
random_example(sum)
random_example(min)
def train(fn, batch_size=100):
net_block = reduce_net_block()
compiler = td.Compiler.create((net_block, td.Scalar()))
y, y_ = compiler.output_tensors
loss = tf.nn.l2_loss(y - y_)
train = tf.train.AdamOptimizer().minimize(loss)
sess.run(tf.global_variables_initializer())
validation_fd = compiler.build_feed_dict(random_example(fn) for _ in range(1000))
for i in range(2000):
sess.run(train, compiler.build_feed_dict(random_example(fn) for _ in range(batch_size)))
if i % 100 == 0:
print(i, sess.run(loss, validation_fd))
return net_block
Explanation: The random_example function generates training data consisting of (example, fn(example)) pairs, where example is a random list of numbers, e.g.:
End of explanation
sum_block = train(sum)
sum_block.eval([1, 1])
Explanation: Now we're going to train a neural network to approximate a reduction function of our choosing. Calling eval() repeatedly is super-slow and cannot exploit batch-wise parallelism, so we create a Compiler. See our page on running blocks in TensorFlow for more on Compilers and how to use them effectively.
End of explanation
min_block = train(min)
min_block.eval([2, -1, 4])
Explanation: Breaking news: deep neural network learns to calculate 1 + 1!!!!
Of course we've done something a little sneaky here by constructing a model that can only represent associative functions and then training it to compute an associative function. The technical term for being sneaky in machine learning is inductive bias.
End of explanation
min_block.eval([0.3, 0.2, 0.9])
Explanation: Oh noes! What went wrong? Note that we trained our network to compute min on positive numbers; negative numbers are outside of its input distribution.
End of explanation |
13,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
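As a hedged illustration only (the choice below is a placeholder, not a statement about any particular model), a completed cell for a 1.1 ENUM simply records one of the valid choices listed above:
# Hypothetical example: pick the single valid choice that matches your model
DOC.set_value("moving grid")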
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
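Since this property has cardinality 1.N, more than one approximation can be recorded. A hedged sketch (the values are placeholders) following the comment convention above would call DOC.set_value once per applicable choice:
# Hypothetical example: record each approximation actually used by the model
DOC.set_value("SIA")
DOC.set_value("SAA")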
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
13,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Create TensorFlow DNN model </h1>
This notebook illustrates
Step1: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
Step2: Next, define the feature columns
Step3: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
Step4: Finally, train! | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
Explanation: <h1> Create TensorFlow DNN model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using the high-level Estimator API
</ol>
End of explanation
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
TRAIN_STEPS = 1000
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size=10*batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
return _input_fn
Explanation: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
End of explanation
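As an optional smoke test (a sketch, assuming train.csv exists locally as listed by the ls cell above), the returned input function can be called to build the tf.data.Dataset and one small batch can be pulled through a TF 1.x one-shot iterator:
# Sketch only: peek at one batch to confirm parsing works
train_input_fn = read_dataset('train.csv', tf.estimator.ModeKeys.TRAIN, batch_size = 4)
features, label = train_input_fn().make_one_shot_iterator().get_next()
with tf.Session() as sess:
    f, l = sess.run([features, label])
    print(l)          # a few weight_pounds labels
    print(f.keys())   # the remaining CSV columns served as features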
# Define feature columns
def get_categorical(name, values):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(name, values))
def get_cols():
# Define column types
return [\
get_categorical('is_male', ['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
get_categorical('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
Explanation: Next, define the feature columns
End of explanation
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNRegressor(
model_dir = output_dir,
feature_columns = get_cols(),
hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
End of explanation
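For reference, a single prediction instance for this serving signature is just one value per placeholder defined in serving_input_fn. The values below are made-up placeholders, not data from this notebook:
# Hypothetical example instance matching the serving_input_fn inputs
example_instance = {
    'is_male': 'True',
    'mother_age': 26.0,
    'plurality': 'Single(1)',
    'gestation_weeks': 39.0
}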
# Run the model
shutil.rmtree('babyweight_trained', ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate('babyweight_trained')
Explanation: Finally, train!
End of explanation |
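Optionally (assuming the run above completed), the SavedModel directory written by the LatestExporter can be listed, and TensorBoard can be pointed at the output directory to inspect the loss and evaluation curves:
%%bash
ls babyweight_trained/export/exporter
# launch separately if desired:
# tensorboard --logdir=babyweight_trained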
13,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Effect of September-11 Terrorist Attack on Hate Crimes Against Muslims in the United States
Course
Step1: Data Import (2005 - 2015)
Files for years 2005-2015 are available in excel format. The files are downloaded into the working directory, and then imported using the pandas library. For the above-mentioned years, the data is available in different tables and these tables categorise the data differently. We have used Table1, since it categorises the hate crimes against religion into different religions, which is most appropriate for this project. For example, for year 2015, we will get Table 1 from the source.
2015 Data
Download Source = https
Step2: 2014 Data
Download Source = https
Step3: 2013 Data
Download Source = https
Step4: 2012 Data
Download Source = https
Step5: 2011 Data
Download Source = https
Step6: 2010 Data
Download Source = https
Step7: 2009 Data
Download Source = https
Step8: 2007 Data
Download Source = https
Step9: 2006 Data
Download Source = https
Step10: 2005 Data
Download Source = https
Step11: Data Web Scraping
Step12: Data Import 1996 - 2004
Files for years 1995-2003, Table 1s are available within pdf reports, and will require a separate importation technique as compared to the excel files. The relevant data is manually copied from each pdf file separately, and stored into separate local files.
Steps for Data Collection using example of 2003
Step13: DataFrame Description for a particular year
Headers
Step14: Combining DataFrames for all years into one DataFrame
all_years is the list of the DataFrames for all years.
We want to combine the data for all the years into one DataFrame so that it can be used for analysis.
Following Steps Are taken for Combining the Data
Step15: Q
Step16: Answer
Step17: Q
Step18: Answer
Step19: Answer
Step20: Answer
The ratio being discussed is shown above. Hate crimes targetting Muslims as a ratio of the hate-crimes motivated by religion has increased a lot in 2011 because of September-11 terrorist attack.Before, 2011, it was below 3% consistently, and in 2011 it went beyond 25%. After 2011, it never went down to its pre-2011 number. This also shows that the September-11 incident has increased the general sentiment against Muslims, and even after over a decade, the effect of September-11 on hate crimes against Muslims is clearly evident. Moreover, we can also see that the ratio has been rising in recent years showing that even though the number of hate-crimes against religions as a whole are decreasing, among those numbers, the ratio of attacks on Muslims in increaseing.
Q | Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
import requests
from bs4 import BeautifulSoup
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
plt.style.use('ggplot')
Explanation: Effect of September-11 Terrorist Attack on Hate Crimes Against Muslims in the United States
Course: Data Bootcamp
Report Author: Muaz Ahmad
Report Date: May 12, 2017
New York University
Leonard N. Stern School of Business
Image Source = http://www.huffingtonpost.com/entry/hate-crimes-muslims-since-911_us_57e00644e4b04a1497b59970
Research Question:
Main Research Question:
Q: Did September-11 terrorist attack have an impact on hate crimes against Muslims? How much and what impact did the incident have?
Analysis Questions: I will be using the data to answer the following questions. Answers to the following questions will be used to answer the main research question.
How have the hate crimes against Muslims changed in terms of number of incidents per year?
Did the September-11 Terrorist Attack have an impact on the hate crimes against Muslims? If so, how much impact did the September-11 Terrorist Attack have?
How have the hate crimes against All religion changed in terms of number of incidents per year?
What percentage of hate crimes motivated by religion identity target Muslims every year?
On average, what percentage of attacks motivated by religion targeted Muslims before and after the September-11 Terrorist Attack?
Data Source
The project focuses on the effect of the 9/11 incident on the change in hate crimes against Muslims in the United States. The data used for the project has been collected as Hate Crime Statistics through the Uniform Crime Reporting (UCR) Program of the Federal Bureau of Investigation (FBI). The data is available on the FBI's website, where it is reported on a yearly basis.
The data is available for years 1995 to 2015, except 2009.
For each year: The data has been divided into different tables, based on the following aspects:
Incidents and Offenses
Victims
Offenders
Location Type
Hate Crime by Jurisdiction
The project uses the data categorised based on Incidents and Offenses, since the data is categorised into different types of hate crimes including Anti-Religion. The project utilises the data from years 1995-2015.
Limitations of the Data:
The data does not include the statistics for Hawaii. The data is not reported for Hawaii in the records.
The FBI collects data from independent law-enforcing agencies in different towns, cities, counties, metropolitan areas and university areas. Therefore, the data is contingent upon their reporting.
There is no data available for 2009.
Preliminaries
End of explanation
url = 'table1_2015.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2015 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E", headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2015 = data_2015[12:27]
#columns = ['Incidents','Offenses', 'Victims','Known Offenders']
#original_2015 = religion_2015.copy(deep=True)
#for col in columns:
# new_val = religion_2015.iloc[5][col] + religion_2015[col][7:14].sum()
#print(new_val)
#religion_2015 = religion_2015.set_value('Anti-Other Religion',col,new_val)
#religion_2015.ix['Anti-Other Religion',col] = new_val
religion_2015
Explanation: Data Import (2005 - 2015)
Files for years 2005-2015 are available in excel format. The files are downloaded into the working directory, and then imported using the pandas library. For the above-mentioned years, the data is available in different tables and these tables categorise the data differently. We have used Table1, since it categorises the hate crimes against religion into different religions, which is most appropriate for this project. For example, for year 2015, we will get Table 1 from the source.
2015 Data
Download Source = https://ucr.fbi.gov/hate-crime/2015/tables-and-data-declarations/1tabledatadecpdf
The file is saved locally as "table1_2015.xls"
End of explanation
url = 'table1_2014.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2014 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2014 = data_2014[9:17]
Explanation: 2014 Data
Download Source = https://ucr.fbi.gov/hate-crime/2014/tables/table-1
The file is saved locally as "table1_2014.xls"
End of explanation
url = 'table1_2013.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2013 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2013 = data_2013[9:17]
Explanation: 2013 Data
Download Source = https://ucr.fbi.gov/hate-crime/2013/tables/1tabledatadecpdf/table_1_incidents_offenses_victims_and_known_offenders_by_bias_motivation_2013.xls
The file is saved locally as "table1_2013.xls"
End of explanation
url = 'table1_2012.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2012 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2012 = data_2012[8:16]
Explanation: 2012 Data
Download Source = https://ucr.fbi.gov/hate-crime/2012/tables-and-data-declarations/1tabledatadecpdf/table_1_incidents_offenses_victims_and_known_offenders_by_bias_motivation_2012.xls
The file is saved locally as "table1_2012.xls"
End of explanation
url = 'table1_2011.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2011 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2011 = data_2011[8:16]
Explanation: 2011 Data
Download Source = https://ucr.fbi.gov/hate-crime/2011/tables/table-1
The file is saved locally as "table1_2011.xls"
End of explanation
url = 'table1_2010.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2010 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2010 = data_2010[7:15]
Explanation: 2010 Data
Download Source = https://ucr.fbi.gov/hate-crime/2010/tables/table-1-incidents-offenses-victims-and-known-offenders-by-bias-motivation-2010.xls
The file is saved locally as "table1_2010.xls"
End of explanation
url = 'table1_2008.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2008 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2008 = data_2008[7:15]
Explanation: 2009 Data
Download Source = https://ucr.fbi.gov/hate-crime/2010/tables/table-1-incidents-offenses-victims-and-known-offenders-by-bias-motivation-2010.xls
The file is saved locally as "table1_2010.xls"
2008 Data
Download Source = https://ucr.fbi.gov/hate-crime/2008
The file is saved locally as "table1_2008.xls"
End of explanation
url = 'table1_2007.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2007 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2007 = data_2007[7:15]
Explanation: 2007 Data
Download Source = https://ucr.fbi.gov/hate-crime/2007
The file is saved locally as "table1_2007.xls"
End of explanation
url = 'table1_2006.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2006 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2006 = data_2006[7:15]
Explanation: 2006 Data
Download Source = https://ucr.fbi.gov/hate-crime/2006
The file is saved locally as "table1_2006.xls"
End of explanation
url = 'table1_2005.xls'
headers = ['Incidents','Offenses','Victims1','Known offenders2']
data_2005 = pd.read_excel(url, skiprows=3, skipfooter=3, parse_cols="A,B,C,D,E",headers = None, names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])
religion_2005 = data_2005[7:15]
Explanation: 2005 Data
Download Source = https://ucr.fbi.gov/hate-crime/2005
The file is saved locally as "table1_2005.xls"
End of explanation
target = 'source_2004.txt'
target = open(target, "w")
url = "https://www2.fbi.gov/ucr/hc2004/hctable1.htm"
data_2004 = requests.get(url)
data_2004_soup = BeautifulSoup(data_2004.content, 'html.parser')
data_2004_soup
religion_part = data_2004_soup.find_all('tr')
for row_number in range(9,17):
row = religion_part[row_number]
tmp_string = ''
table_header = row.find('th')
table_values = row.find_all('td')
tmp_string += table_header.text + ' '
for tb in table_values:
tmp_string += tb.text + ' '
tmp_string = tmp_string[:-1].replace('\n','') + '\n'
target.write(tmp_string)
target.close()
Explanation: Data Web Scraping: 2004
Download Source = https://www2.fbi.gov/ucr/hc2004/hctable1.htm
The data for 2004 is available as a HTML table on the website.
Following steps are followed in data collection for 2004:
Request the content of the source page using the python requests library.
Format the imported webpage content using the python BeautifulSoup library's 'html.parser'.
Since the relevant data is available in html table, loop through the relevant table rows, and extract the data values.
For each row, write the extracted data values into a local file, named 'source_2004.txt'
End of explanation
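A quick, optional check (a sketch that just re-reads the file written above) confirms the scraped rows look as expected before moving on to years 1996-2003:
# Peek at the first few scraped 2004 rows
with open('source_2004.txt') as f:
    for line in f.readlines()[:3]:
        print(line.rstrip())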
# Global Variables
all_years = [] # list of all the DataFrames.
sourcenames = ["source_"+str(year)+".txt" for year in range(1996,2005)] # list of source files names for 1996-2003, to be converted to .csv
targetnames = ["table1_"+str(year)+".csv" for year in range(1996,2005)] # List of name of all .csv files, to be imported in DataFrames
datanames = ["religion_"+str(year) for year in range(1996,2005)] # List of name of all dataframes, to be created e.g religion_1998,religion1999
'''
Steps for cleaning and converting the files to .csv format,
and loading them in pandas DataFrames, using year 2003 as example:
'''
# Loop through the years 1996 to 2003 and repeat the same steps.
for i in range(9):
source = sourcenames[i]
target = targetnames[i]
try:
#Open the source file e.g source_2003
source = open(source,"r",)
except:
print("Could not open the source file")
else:
# Open the target file e.g table1_2003.csv
target = open(target, "w")
lines = source.readlines();
rows = len(lines)
cols = 5
# Loop through each line in the source file:
for line in lines:
# Remove the endline character i.e '\n'
line = line.replace('\n','')
# Remove all the commas ',' from the line.
line = line.replace(",","")
# Split the line into an array, using empty space as split character
line_elements= line.split(' ')
# Check if the number of array elements are greater than. If so, array[:-4] are part of the index in the table: join these elements into one element.
if len(line_elements) > 5:
# join the resulting elemets into a string using ',' as join character, and ending the string with newline character '\n'.
new_line = " ".join(line_elements[:-4]) + ',' + ','.join(line_elements[-4:]) + '\n'
else:
# join the resulting elemets into a string using ',' as join character, and ending the string with newline character '\n'.
new_line = ','.join(line_elements) + '\n'
# write the resutling string to target file.
target.write(new_line)
# Close the target and source files.
source.close()
target.close()
url = targetnames[i]
# Use pandas_readcsv(filename) method to read the .csv file into DataFrames. Set DataFrame headers to ["Motivation","Incidents","Offenses","Victims","Known Offenders"]. Name the returned DataFrame as religion_2003.
exec('%s = pd.read_csv(url, engine = "python", names = ["Motivation","Incidents","Offenses","Victims","Known Offenders"])' % (datanames[i]))
# Save religion_2003 to all_years array of DataFrames.
exec('all_years.append(%s)' % (datanames[i]))
# adding DataFrames for years 2005-2015 excluding 2009 into the all_years list
all_years.extend([religion_2005,religion_2006,religion_2007,religion_2008,religion_2010,religion_2011,religion_2012,religion_2013,religion_2014])
print('Variable dtypes:\n', religion_2000.dtypes, sep='')
religion_1996
rel = religion_1996['Motivation']
rel
Explanation: Data Import 1996 - 2004
Files for years 1995-2003, Table 1s are available within pdf reports, and will require a separate importation technique as compared to the excel files. The relevant data is manually copied from each pdf file separately, and stored into separate local files.
Steps for Data Collection using example of 2003:
The pdf data report is available on FBI's website.
Copy the relevant table rows representing the religion section from Table 1 in the report.
Paste the copied data into a local file named 'source_2003.txt'
Save the file.
Repeat the same steps for year 1996-2002 too.
Once the Data has been saved locally, it is cleaned and converted into .csv format, such that it can be directly imported into DataFrames afterwards.
Steps for cleaning and converting the files to .csv format, and loading them in pandas DataFrames, using year 2003 as example:
Open the source file e.g source_2003
Open the target file e.g table1_2003.csv
Loop through each line in the source file:
Remove the endline character i.e '\n'
Remove all the commas ',' from the line.
Split the line into an array, using empty space as split character.
Check if the number of array elements is greater than 5. If so, array[:-4] are part of the index in the table: join these elements into one element.
Join the resulting elements into a string using ',' as the join character, ending the string with the newline character '\n'.
Write the resulting string to the target file.
Close the target and source files.
Use pandas_readcsv(filename) method to read the .csv file into DataFrames. Set DataFrame headers to ["Motivation","Incidents","Offenses","Victims","Known Offenders"]. Name the returned DataFrame as religion_2003.
Save religion_2003 to all_years array of DataFrames.
Loop through the years 1996 to 2003 and repeat the same steps.
End of explanation
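An optional sanity check (a sketch using the same column names as above) re-reads one of the converted files and confirms that the eight religion rows survived the cleaning:
check = pd.read_csv('table1_2003.csv',
                    names = ["Motivation", "Incidents", "Offenses", "Victims", "Known Offenders"])
print(check.shape)
check.head()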
religion_2003
Explanation: DataFrame Description for a particular year
Headers:
Motivation: The Motivation behind the hate crime. Anti-Islamic means hate crimes motivated by sentiment againsts Islam/Muslims(followers of Islam)
Incidents: Total Number of reported incidents of hate crimes, for a particular motivation
Offenses: Total Number of reported offenses of hate crimes, for a particular motivation
Victims : Total Number of reported victims of hate crimes, for a particular motivation
Known Offender: Total Number of reported known offenses of hate crimes, for a particular motivation
Indexes : Motivation (Following motivations have been recorded)
Religion: (Total Number for All Religions)
Anti-Jewish
Anti-Catholic
Anti-Protestant
Anti-Islamic
Anti-Other Religious Group (Total Number for Other religius groups)
Anti-Multi-Religious Group (Total Number for crimes which targetted multiple religions together)
Anti-Atheism/Agnosticism/etc.
Example for Year 2003 is shown below
End of explanation
#Variables and Description
# List of Indices (Motivation) in a DataFrame for a particular yaer
header_rows = ['All Religion','Anti-Jewish','Anti-Catholic','Anti-Protestants','Anti-Islamic','Anti-Other Religion','Anti-Multiple Religion,Group','Anti-Atheism/Agnosticism/etc.']
# List of headers in a DataFrame for particular yaer
columns = ['Incidents','Offenses', 'Victims','Known Offenders']
# List of headers for the new DataFrame
all_years_headers = []
#List of list of all values in the DataFrames for all years
all_years_list=[]
# List of the new indices, representing all reported years, for the new DataFrams.
all_years_keys = []
'''
Folloing Steps Are taken for Combining the Data:
'''
'''
Combine 8 Motivations with the different data values' headers:
* Use the 8 motivations : ['All Religion','Anti-Jewish','Anti-Catholic','Anti-Protestants','Anti-Islamic','
Anti-Other Religion','Anti-Multiple Religion,Group','Anti-Atheism/Agnosticism/etc.']
* Use the 4 Data Values headers = ['Incidents','Offenses', 'Victims','Known Offenders']
* Create 32 headers such that for each motivation, there are 4 different headers for the different data values.
* E.g for 'Anti-Jewish' motivation, the resulting headers will be Anti-Jewish: Incidents,Anti-Jewish: Offenses,
Anti-Jewish: Victims', and Anti-Jewish: Known Offenders.
* all_years_headers is the list of all the generated headers.
'''
for row in header_rows:
for col in columns:
header_val = row + ': ' + str(col)
all_years_headers.append(header_val)
'''
Generate a list called all_years_keys, which will correspond to the indices of the new DataFrame.
'''
for i in list(range(1996,2009)) + list(range(2010, 2015)):
all_years_keys.append(str(i))
count = 0
'''
Create the combined DataFrame:
'''
# Loop through all_year - the list of the DataFrames representing each year *
for single_year in all_years:
tmp_list =[]
# Within each DataFrameLoop through all rows :
for row in range(8):
current_row = single_year.iloc[row]
# Within each row, loop through all column values
for col in columns:
# add the column values into a temporary list
tmp_list.append(current_row[col])
# Add the temporary list cosisting of all the data values of the data frame into all_years_list.
all_years_list.append(tmp_list)
count+=1
'''
Create the DataFrame using all_years_list as data, all_years_keys as indices, all_years_headers as headers.
Name this DataFrame hc, representing hate crimes
'''
hc = pd.DataFrame(all_years_list, columns= all_years_headers, index = all_years_keys)
hc
Explanation: Combining DataFrames for all years into one DataFrame
all_years is the list of the DataFrames for all years.
We want to combine the data for all the years into one DataFrame so that it can be used for analysis.
Following Steps Are taken for Combining the Data:
Combine 8 Motivations with the different data values' headers:
Use the 8 motivations : ['All Religion','Anti-Jewish','Anti-Catholic','Anti-Protestants','Anti-Islamic','Anti-Other Religion','Anti-Multiple Religion,Group','Anti-Atheism/Agnosticism/etc.']
Use the 4 Data Values headers = ['Incidents','Offenses', 'Victims','Known Offenders']
Create 32 headers such that for each motivation, there are 4 different headers for the different data values.
E.g for 'Anti-Jewish' motivation, the resulting headers will be Anti-Jewish: Incidents,Anti-Jewish: Offenses,Anti-Jewish: Victims', and Anti-Jewish: Known Offenders.
all_years_headers is the list of all the generated headers.
Generate a list called all_years_keys, which will correspond to the indices of the new DataFrame.
all_years_keys = ['1996',
'1997',
'1998',
'1999',
'2000',
'2001',
'2002',
'2003',
'2004',
'2005',
'2006',
'2007',
'2008',
'2010',
'2011',
'2012',
'2013',
'2014']
Create the combined DataFrame:
Loop through all_year - the list of the DataFrames representing each year.
Within each DataFrame, loop through all rows:
Within each row, loop through all column values
add the column values into a temporary list
Add the temporary list consisting of all the data values of the data frame into all_years_list. all_years_list is the double-nested list of data values for all years.
Create the DataFrame using all_years_list as data, all_years_keys as indices, all_years_headers as headers. Name this DataFrame hc, representing hate crimes
End of explanation
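Optionally, the combined frame can be checked and written out for reuse; the file name below is arbitrary:
print(hc.shape)   # 18 reported years x 32 motivation/measure columns
hc.to_csv('hate_crimes_combined.csv')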
anti_islam = hc['Anti-Islamic: Incidents']
anti_islam.plot(kind='line',
grid = True,
title = 'Anti-Islam Hate Crimes',
sharey = True,
sharex = True,
use_index = True,
legend = True,
fontsize = 10
)
Explanation: Q: 1. How have the hate crimes against Muslims changed in terms of number of incidents per year?
End of explanation
print(anti_islam)
Explanation: Answer:
The number of hate crime incidents against Muslims has fluctuated a lot over the years. The most striking number of incidents took place in 2001, as shown by the graph above. Before 2001, the maximum number of incidents was 32 and the minimum was 21. In 2001, the number of incidents was 481. After 2001, the maximum was 156 and the minimum was 105. In recent years, the hate crimes against Muslims have started to rise again. From 1996 to 2014, the number of incidents changed as shown below:
End of explanation
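A small companion sketch makes the year-over-year swings explicit by computing the percentage change of the same series:
# Year-over-year percentage change in anti-Islamic incidents
print((anti_islam.pct_change() * 100).round(1))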
# Index 5 of the combined frame corresponds to 2001 (the year of the September-11 attack);
# indices 4 and 6 are 2000 and 2002.
anti_islam_2001 = anti_islam[5]
anti_islam_2000 = anti_islam[4]
anti_islam_2002 = anti_islam[6]
percentage_change_2001 = (((anti_islam_2001 - anti_islam_2000)/anti_islam_2000)*100)
percentage_change_2002 = (((anti_islam_2002 - anti_islam_2001)/anti_islam_2001)*100)
print("Hate Crimes against Muslims growth in 2001 from 2000: ", percentage_change_2001, '%')
print("Hate Crimes against Muslims growth in 2002 from 2001: ", percentage_change_2002, '%')
anti_islam_before_2001 = anti_islam[:5].mean()
anti_islam_after_2001 = anti_islam[6:].mean()
print('Average hate crimes against Muslims before 2001: ', anti_islam_before_2001)
print('Average hate crimes against Muslims after 2001: ', anti_islam_after_2001)
avg = (((anti_islam_after_2001 - anti_islam_before_2001)/anti_islam_before_2001)*100)
print('Percentage increase in the average number of hate crimes against Muslims after 2001: ', avg)
Explanation: Q : Did the September-11 Terrorist Attack have an impact on the hate crimes against Muslims? If so, how much impact did the September-11 Terrorist Attack have?
End of explanation
anti_religion = hc['All Religion: Incidents']
anti_religion.plot(kind='line',
title = 'Hate Crimes Against All Religion',
sharey = True,
sharex = True,
use_index = True,
legend = True)
# Index 5 of the combined frame is 2001; [:5] covers 1996-2000 and [6:] covers 2002 onwards.
anti_religion_2001 = anti_religion[5]
anti_religion_2000 = anti_religion[4]
anti_religion_2002 = anti_religion[6]
avg_before_2001 = anti_religion[:5].mean()
avg_after_2001 = anti_religion[6:].mean()
avg_after_2008 = anti_religion[13:].mean()
print('Average Number of Crimes before 2001 : ', avg_before_2001)
print('Average Number of Crimes after 2001 : ', avg_after_2001)
print('Average Number of Crimes after 2008 : ', avg_after_2008)
print('Hate Crimes in 2001 : ', anti_religion_2001)
Explanation: Answer:
The September-11 Terrorist Attack had a huge impact on the number of hate crimes against Muslims. The attack took place in 2001, a year in which there were 481 hate crimes against Muslims, as opposed to 28 in 2000. The number of hate crimes against Muslims increased by more than 16 times (1672%) in 2001 as compared to 2000. In the following year (2002), the number of hate crimes against Muslims decreased by almost 68%. The average number of hate crimes against Muslims was 27 before 2001, and after 2001 it increased to an average of 142. The average number of hate crimes against Muslims increased by more than 4 times (421%).
Q: How have the hate crimes against All religion changed in terms of number of incidents per year?
End of explanation
anti_muslim_percentage= (hc['Anti-Islamic: Incidents']/hc['All Religion: Incidents'])*100
anti_muslim_percentage.plot(kind = 'line',
title = 'Percentage of Hate Crimes Against Muslims Among All Religion',
sharey = True,
sharex = True,
use_index = True)
Explanation: Answer:
As shown in the graph above, the number of hate crimes against all religions fluctuated up and down between 1996 and 2008, with a very high peak in 2001. Since 2008, the number has seen a consistent and steady decrease; 2008 is also the year Barack Obama was elected President of the United States. The average number of crimes before 2001 was 1412, and after 2001 it was 1288. In 2001, there were 1828, and most of that stark increase can be attributed to the sharp rise in hate crimes against Muslims.
Q: What percentage of hate crimes motivated by religion identity target Muslims every year?
End of explanation
# Average the share of religion-motivated hate crimes targeting Muslims before and after 2001.
# Index 5 of anti_muslim_percentage is 2001, which is excluded from both averages.
avg_before_2001 = anti_muslim_percentage[:5].mean()
avg_after_2001 = anti_muslim_percentage[6:].mean()
perc_increase = (((avg_after_2001 - avg_before_2001)/avg_before_2001)*100)
print(avg_before_2001, avg_after_2001, perc_increase)
growth_list = []
Explanation: Answer
The ratio being discussed is shown above. Hate crimes targeting Muslims, as a share of all hate crimes motivated by religion, increased sharply in 2001 because of the September-11 terrorist attack. Before 2001 it was consistently below 3%, and in 2001 it went beyond 25%. After 2001, it never went back down to its pre-2001 level. This also shows that the September-11 incident increased the general sentiment against Muslims, and even after over a decade, the effect of September-11 on hate crimes against Muslims is clearly evident. Moreover, the ratio has been rising in recent years, showing that even though the number of hate crimes against religions as a whole is decreasing, the share of attacks targeting Muslims within those numbers is increasing.
Q: On average, what percentage of attacks motivated by religion targeted Muslims before and after the September-11 Terrorist Attack?
End of explanation |
13,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
X Inactivation
I'd like to explore the state of genes on the X chromosome and see to what degree
the iPSCs reactivate their inactive Xs.
Step1: Inactivation for Single Sample
Let's take a look at one sample.
Step2: This figure shows the distribution of gene major allele frequencies for
genes on the X chromosome and for genes on the autosomes. We can see that
autosomal genes often have a MajAF near 50% with few genes near 100%. However,
many X chromosome genes are near 100%. These genes are likely still inactivated.
Step3: We can see that the X chromosome is enriched for genes with strong ASE
likely due to incomplete X reactivation.
Inactivation for All Samples
Let's take a look at all female samples.
Step4: NANOG involved in the regulation of Xist (more NANOG expression -> less XIST expression though supposedly this doesn't affect reactivation). OCT4 and SOX2 also supposed to affect XIST expression. cMYC, REX1, and KLF4 affect TSIX (this is all mouse stuff I think) (Minkovsky).
Step5: These heatmaps are all aligned by row (so the first row across all heatmaps
is the same sample, the second row across all heatmaps is the same sample, etc.).
The heatmaps are ordered by each sample's XIST expression (shown in the second heatmap).
The X MajAF distribution heatmap is the same as the histogram above for one sample
but now we are stacking up the histograms for all samples. We can see that many
samples have genes that are inactive although the amount of inactivation varies
between samples and is highly correlated with XIST.
The X chromosome is clearly enriched for having ASE which is probably due
to incomplete X reactivation.
Step6: X chromosome gene expression
Are reactivated genes expressed more highly?
Step7: The histogram above shows that genes have higher average expression
in samples where the gene does not have significant ASE.
Step8: Reactivation percent
Only some genes show residual inactivation. Are these the same genes across samples?
Step9: The above histogram shows what percentage of samples were significant
for ASE. Note that a gene in a sample is
included here only if it was tested by MBASED for ASE. I restricted to
genes that were tested by MBASED in at least 20% of the samples.
Reactivation across the X chromosome
I'd like to look at how genes are reactivated across the X chromosome.
For instance, is reactivation correlated with replication timing or
L1 density? I can consider things like
replication timing
distance from centromere or telomere
XIST binding motif density (does it have a binding motif?)
distance from X inactivation center
L1 density
Step10: Replication timing
Step11: L1 elements
I downloaded the repeat masker database from the table browser (group
Step12: Combine features
Step13: I downloaded the banding track from the table browser (group
Step14: Whole chromosome
LINE 1 elements
Step15: Position
Step16: Replicating timing
Step17: p arm
LINE 1 elements
Step18: Position
Step19: Replication timing
Step20: q arm
LINE 1 elements
Step21: Position
Step22: Replication timing
Step23: Reactivation QTLs
Step25: I'm going to subtract the mean from each sample (column) to account for
differences in overall reactivation.
Step26: The following genes are nominally significant but don't pass the FDR correction. | Python Code:
import cPickle
import datetime
import glob
import os
import random
import re
import subprocess
import cdpybio as cpb
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pybedtools as pbt
import scipy
import scipy.stats as stats
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels as sms
import cardipspy as cpy
import ciepy
%matplotlib inline
%load_ext rpy2.ipython
import socket
if socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':
pbt.set_tempdir('/frazer01/home/cdeboever/tmp')
outdir = os.path.join(ciepy.root, 'output',
'x_inactivation')
cpy.makedir(outdir)
private_outdir = os.path.join(ciepy.root, 'private_output',
'x_inactivation')
cpy.makedir(private_outdir)
sns.set_style('whitegrid')
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rsem_tpm.tsv')
tpm = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rnaseq_metadata.tsv')
rna_meta = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'subject_metadata.tsv')
subject_meta = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'wgs_metadata.tsv')
wgs_meta = pd.read_table(fn, index_col=0)
gene_info = pd.read_table(cpy.gencode_gene_info, index_col=0)
genes = pbt.BedTool(cpy.gencode_gene_bed)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'cnvs.tsv')
cnvs = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_major_allele_freq.tsv')
maj_af = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_p_val_ase.tsv')
ase_pval = pd.read_table(fn, index_col=0)
locus_p = pd.Panel({'major_allele_freq':maj_af, 'p_val_ase':ase_pval})
locus_p = locus_p.swapaxes(0, 2)
snv_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'mbased_snv',
'*_snv.tsv'))
count_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'allele_counts',
'*mbased_input.tsv'))
snv_res = {}
for fn in snv_fns:
snv_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
count_res = {}
for fn in count_fns:
count_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
snv_p = pd.Panel(snv_res)
# We'll keep female subjects with no CNVs on the X chromosome.
sf = subject_meta[subject_meta.sex == 'F']
meta = sf.merge(rna_meta, left_index=True, right_on='subject_id')
s = set(meta.subject_id) & set(cnvs.ix[cnvs.chr == 'chrX', 'subject_id'])
meta = meta[meta.subject_id.apply(lambda x: x not in s)]
meta = meta.ix[[x for x in snv_p.items if x in meta.index]]
snv_p = snv_p.ix[meta.index]
a = meta.shape[0]
b = len(set(meta.subject_id))
print('Using {} samples from {} female donors.'.format(a, b))
snv_p = snv_p.ix[meta.index]
locus_p = locus_p.ix[meta.index]
# Filter and take log.
tpm_f = tpm[meta[meta.sex == 'F'].index]
tpm_f = tpm_f[(tpm_f != 0).sum(axis=1) > 0]
log_tpm = np.log10(tpm_f + 1)
# Mean center.
log_tpm_c = (log_tpm.T - log_tpm.mean(axis=1)).T
# Variance normalize.
log_tpm_n = (log_tpm_c.T / log_tpm_c.std(axis=1)).T
Explanation: X Inactivation
I'd like to explore the state of genes on the X chromosome and see to what degree
the iPSCs reactivate their inactive Xs.
End of explanation
df = locus_p.ix[meta.index[0], :, :].dropna()
x_single = df[gene_info.ix[df.index, 'chrom'] == 'chrX']
notx_single = df[gene_info.ix[df.index, 'chrom'] != 'chrX']
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
ax = axs[0]
x_single.major_allele_freq.hist(ax=ax, bins=np.arange(0.5, 1.05, 0.05))
ax.set_xlim(0.5, 1)
ax.set_title('X chromosome genes')
ax.set_ylabel('Number of genes')
ax.set_xlabel('Major allele frequency')
ax = axs[1]
notx_single.major_allele_freq.hist(ax=ax, bins=np.arange(0.5, 1.05, 0.05))
ax.set_xlim(0.5, 1)
ax.set_title('Autosomal genes')
ax.set_ylabel('Number of genes')
ax.set_xlabel('Major allele frequency')
plt.tight_layout();
#plt.savefig(os.path.join(outdir, 'single_sample_majaf.pdf'))
Explanation: Inactivation for Single Sample
Let's take a look at one sample.
End of explanation
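A quick numeric companion to the figure above (the 0.9 cutoff is an arbitrary choice for illustration): the fraction of genes with near-complete monoallelic expression on the X versus the autosomes for this sample.
print('chrX genes with MajAF > 0.9: {:.1%}'.format((x_single.major_allele_freq > 0.9).mean()))
print('autosomal genes with MajAF > 0.9: {:.1%}'.format((notx_single.major_allele_freq > 0.9).mean()))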
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
ax = axs[0]
(-np.log10(x_single.p_val_ase + x_single.p_val_ase[x_single.p_val_ase != 0].min())).hist(ax=ax)
ax.set_title('X chromosome genes')
ax.set_ylabel('Number of genes')
ax.set_xlabel('$-\log_{10}$ $p$-value')
ax = axs[1]
(-np.log10(notx_single.p_val_ase + notx_single.p_val_ase[notx_single.p_val_ase != 0].min())).hist(ax=ax)
ax.set_title('Autosomal genes')
ax.set_ylabel('Number of genes')
ax.set_xlabel('$-\log_{10}$ $p$-value')
plt.tight_layout();
Explanation: This figure shows the distribution of gene major allele frequencies for
genes on the X chromosome and for genes on the autosomes. We can see that
autosomal genes often have a MajAF near 50% with few genes near 100%. However,
many X chromosome genes are near 100%. These genes are likely still inactivated.
End of explanation
t = locus_p.ix[:, :, 'major_allele_freq']
x_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index), :]
notx_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom != 'chrX'].index), :]
Explanation: We can see that the X chromosome is enriched for genes with strong ASE
likely due to incomplete X reactivation.
Inactivation for All Samples
Let's take a look at all female samples.
End of explanation
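As a rough per-sample summary (a sketch using the panels built above, where the columns of the sliced frames are samples), the mean MajAF on the X can be compared with the autosomal mean for each sample:
x_means = x_all.ix[:, :, 'major_allele_freq'].mean()
auto_means = notx_all.ix[:, :, 'major_allele_freq'].mean()
print((x_means - auto_means).describe())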
genes_to_plot = ['XIST', 'TSIX', 'NANOG', 'POU5F1', 'SOX2', 'MYC', 'ZFP42', 'KLF4']
t = pd.Series(gene_info.index, index=gene_info.gene_name)
exp = log_tpm_n.ix[t[genes_to_plot]].T
exp.columns = genes_to_plot
sns.corrplot(exp);
genes_to_plot = ['XIST', 'TSIX']
t = pd.Series(gene_info.index, index=gene_info.gene_name)
exp = log_tpm_n.ix[t[genes_to_plot]].T
exp.columns = genes_to_plot
# exp = log_tpm_n.ix[[gene_info[gene_info.gene_name == 'XIST'].index[0],
# gene_info[gene_info.gene_name == 'TSIX'].index[0]]].T
# exp.columns = ['XIST', 'TSIX']
exp = exp.ix[x_all.items].sort_values(by='XIST', ascending=False)
fig = plt.figure(figsize=(12, 4))
gs = gridspec.GridSpec(1, 4, width_ratios=[0.5, 1.5, 3, 3])
ax = plt.subplot(gs[0])
sns.heatmap(np.array([meta.ix[exp.index, 'passage'].values]).T,
xticklabels=False, yticklabels=False, ax=ax)
ax.set_ylabel('')
ax.set_title('Passage')
ax = plt.subplot(gs[1])
sns.heatmap(exp, yticklabels=False, ax=ax)
ax.set_ylabel('')
ax.set_title('Expression')
#for t in ax.get_xticklabels():
# t.set_rotation(30)
ax = plt.subplot(gs[2])
r = x_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
sns.heatmap(r.ix[exp.index], yticklabels=False, ax=ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), rotation=30)
ax.set_ylabel('')
ax.set_title('X MajAF distribution')
ax = plt.subplot(gs[3])
r = notx_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
sns.heatmap(r.ix[exp.index], yticklabels=False, ax=ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), rotation=30)
ax.set_ylabel('')
ax.set_title('Autosomal MajAF distribution')
gs.tight_layout(fig)
fig.savefig(os.path.join(outdir, 'x_reactivation_heatmap.png'), dpi=600)
Explanation: NANOG involved in the regulation of Xist (more NANOG expression -> less XIST expression though supposedly this doesn't affect reactivation). OCT4 and SOX2 also supposed to affect XIST expression. cMYC, REX1, and KLF4 affect TSIX (this is all mouse stuff I think) (Minkovsky).
End of explanation
t = x_all.ix[:, :, 'p_val_ase']
freq = (t < 0.005).sum() / (t.isnull() == False).sum()
print('{:.2f}% of genes per sample have significant ASE on chrX.'.format(freq.mean() * 100))
t = notx_all.ix[:, :, 'p_val_ase']
freq = (t < 0.005).sum() / (t.isnull() == False).sum()
print('{:.2f}% of genes per sample have significant ASE on autosomes.'.format(freq.mean() * 100))
xist_gene_id = gene_info[gene_info.gene_name == 'XIST'].index[0]
tsix_gene_id = gene_info[gene_info.gene_name == 'TSIX'].index[0]
r = stats.spearmanr(meta.passage, log_tpm_n.ix[xist_gene_id, meta.index])
print('Passage and XIST expression are correlated (r={:.2f}) with p={:.3f}.'.format(
r.correlation, r.pvalue))
r = stats.spearmanr(meta.passage, log_tpm_n.ix[tsix_gene_id, meta.index])
print('Passage and TSIX expression are correlated (r={:.2f}) with p={:.3f}.'.format(
r.correlation, r.pvalue))
percent_ase = ((x_all.ix[:, :, 'p_val_ase'] < 0.005).sum() /
(x_all.ix[:, :, 'p_val_ase'].isnull() == False).sum())
r = stats.spearmanr(percent_ase, meta.ix[percent_ase.index, 'passage'])
print('Percent ASE and passage are not correlated (r={:.2f}, p={:.3f}).'.format(
r.correlation, r.pvalue))
r = stats.spearmanr(percent_ase, log_tpm_n.ix[xist_gene_id, percent_ase.index])
print('Percent ASE and XIST expression are correlated (r={:.2f}) with p={:.3e}.'.format(
r.correlation, r.pvalue))
r = stats.spearmanr(percent_ase, log_tpm_n.ix[tsix_gene_id, percent_ase.index])
print('Percent ASE and TSIX expression are correlated (r={:.2f}) with p={:.3e}.'.format(
r.correlation, r.pvalue))
Explanation: These heatmaps are all aligned by row (so the first row across all heatmaps
is the same sample, the second row across all heatmaps is the same sample, etc.).
The heatmaps are ordered by each sample's XIST expression (shown in the second heatmap).
The X MajAF distribution heatmap is the same as the histogram above for one sample
but now we are stacking up the histograms for all samples. We can see that many
samples have genes that are inactive although the amount of inactivation varies
between samples and is highly correlated with XIST.
The X chromosome is clearly enriched for having ASE which is probably due
to incomplete X reactivation.
End of explanation
fn = os.path.join(outdir, 'x_ase_exp.tsv')
if not os.path.exists(fn):
t = locus_p.ix[:, :, 'p_val_ase']
t = t.ix[set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index)]
t = t[t.isnull().sum(axis=1) <= 0.8 * t.shape[1]]
t = t.ix[set(log_tpm_n.index) & set(t.index)]
rows = []
for i in t.index:
se = t.ix[i]
se = se[se.isnull() == False]
a = log_tpm_n.ix[i, se[se <= 0.005].index]
b = log_tpm_n.ix[i, se[se > 0.005].index]
rows.append([se.shape[0], a.shape[0], b.shape[0], a.mean(), b.mean()])
x_exp = pd.DataFrame(rows, columns=['num_samples', 'num_sig', 'num_not_sig',
'mean_sig_exp', 'mean_not_sig_exp'],
index=t.index)
x_exp = x_exp[(x_exp.num_sig >= 5) & (x_exp.num_not_sig >= 5)]
x_exp.to_csv(fn, sep='\t')
else:
x_exp = pd.read_table(fn, index_col=0)
(x_exp.mean_sig_exp - x_exp.mean_not_sig_exp).hist()
plt.ylabel('Number of genes')
plt.xlabel('Mean expression difference (sig - not sig)')
xmin, xmax = plt.xlim()
plt.xlim(-max(abs(xmin), abs(xmax)), max(abs(xmin), abs(xmax)))
ymin, ymax = plt.ylim()
plt.vlines(0, ymin, ymax);
Explanation: X chromosome gene expression
Are reactivated genes expressed more highly?
End of explanation
plt.scatter(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp)
# xmin,xmax = plt.xlim()
# ymin,ymax = plt.ylim()
# plt.plot([-1, 2], [-1, 2])
# plt.xlim(-1, 1.75)
# plt.ylim(-1, 1.75)
len(set([x[0] for x in no_sig_ase.index]))
len(set([x[0] for x in sig_ase.index]))
t = locus_p.ix[:, :, 'p_val_ase']
t = t.ix[set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index)]
t = t[t.isnull().sum(axis=1) <= 0.8 * t.shape[1]]
t = t.ix[set(log_tpm_n.index) & set(t.index)]
exp = log_tpm_n.ix[t.index, t.columns]
no_sig_ase = exp[t > 0.005].stack()
sig_ase = exp[t < 0.005].stack()
pdfs = pd.DataFrame(index=np.arange(-5, 5 + 0.1, 0.1))
density = scipy.stats.gaussian_kde(no_sig_ase)
pdfs['no_ase'] = density(pdfs.index)
density = scipy.stats.gaussian_kde(sig_ase)
pdfs['ase'] = density(pdfs.index)
pdfs.to_csv(os.path.join(outdir, 'expression_densities.tsv'), sep='\t')
pdfs.plot()
plt.ylabel('Density')
plt.xlabel('$\log$ TPM $z$-score');
Explanation: The histogram above shows that genes have higher average expression
in samples where the gene does not have significant ASE.
End of explanation
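A hedged follow-up on that claim: a paired test of the per-gene difference summarized in the histogram (x_exp comes from the cell above).
stat, p = stats.wilcoxon(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp)
print('Wilcoxon signed-rank p = {:.2e}'.format(p))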
t = locus_p.ix[:, :, 'p_val_ase']
t = t.ix[set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index)]
t = t[t.isnull().sum(axis=1) <= 0.8 * t.shape[1]]
freq = (t[t.isnull() == False] < 0.005).sum(axis=1) / (t.isnull() == False).sum(axis=1)
freq.hist()
plt.ylabel('Number of genes')
plt.xlabel('Percentage of samples with ASE ($p$ < 0.005)');
tt = locus_p.ix[:, t.index, 'major_allele_freq']
tt.mean(axis=1).hist()
plt.ylabel('Number of genes')
plt.xlabel('Average major allele frequency per gene');
Explanation: Reactivation percent
Only some genes show residual inactivation. Are these the same genes across samples?
End of explanation
t = gene_info.ix[x_all.major_axis]
r = ((t.end - t.start) / 2).astype(int)
start = (t.start + r - (((t.end - t.start) / 2 % 1) == 0)).astype(int).astype(str)
end = (t.end - r).astype(int).astype(str)
s = '\n'.join(t.chrom + '\t' + start + '\t' + end + '\t' +
pd.Series(t.index, index=t.index)) + '\n'
xgenes_center_bt = pbt.BedTool(s, from_string=True)
xgenes_center_bt = xgenes_center_bt.sort()
Explanation: The above histogram shows what percentage of samples were significant
for ASE. Note that a gene in a sample is
included here only if it was tested by MBASED for ASE. I restricted to
genes that were tested by MBASED in at least 20% of the samples.
Reactivation across the X chromosome
I'd like to look at how genes are reactivated across the X chromosome.
For instance, is reactivation correlated with replication timing or
L1 density? I can consider things like
replication timing
distance from centromere or telomere
XIST binding motif density (does it have a binding motif?)
distance from X inactivation center
L1 density
End of explanation
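Once the per-gene features below are computed, each can be tested against the per-gene ASE fraction in the same way. A minimal sketch (assuming freq from the reactivation-percent cell above and, for example, rt_by_gene from the next cell):
# Spearman correlation between residual inactivation and a per-gene feature
common = freq.index.intersection(rt_by_gene.index)
r = stats.spearmanr(freq[common], rt_by_gene[common].astype(float))
print('r = {:.2f}, p = {:.3f}'.format(r.correlation, r.pvalue))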
# Replication timing data.
rt = pd.read_table('/publicdata/replication_domain_db_20151103/RD_sm300_2936763_hFibiPS4p72.hg19.txt',
low_memory=False, skiprows=15, index_col=0)
rt = rt[rt.Chromosome == 'chrX']
s = '\n'.join(rt.Chromosome + '\t' + rt.Start_Position.astype(str) +
'\t' + rt.End_Position.astype(str) + '\t' + rt.Data_Value.astype(str))
rt_bt = pbt.BedTool(s, from_string=True)
rt_bt = rt_bt.sort()
res = xgenes_center_bt.closest(rt_bt, d=True)
df = res.to_dataframe()
rt_by_gene = pd.Series(df.thickEnd.values, index=df.name)
Explanation: Replication timing
End of explanation
fn = os.path.join(outdir, 'line_one_elements.tsv')
if not os.path.exists(fn):
rmsk = os.path.join(outdir, 'rmsk_db.txt.gz')
repeat_db = pd.read_table(rmsk, low_memory=False)
line_one_elements = repeat_db[repeat_db.repFamily == 'L1']
line_one_elements = line_one_elements[line_one_elements.genoName == 'chrX']
line_one_elements.to_csv(fn, sep='\t')
else:
line_one_elements = pd.read_table(fn, index_col=0)
fn = os.path.join(outdir, 'line_one_elements.bed')
if not os.path.exists(fn):
repeats = pd.read_table(os.path.join(outdir, 'rmsk.bed'), header=None, low_memory=False)
repeats = repeats[repeats[0] == 'chrX']
repeat_db = pd.read_table(os.path.join(outdir, 'rmsk_db.txt.gz'), low_memory=False)  # build the path here so this cell runs even if the earlier cell skipped defining rmsk
se = pd.Series(dict(zip(repeat_db.repName, repeat_db.repFamily)))
r = repeats[repeats[3].apply(lambda x: se[x] == 'L1')]
s = '\n'.join(['\t'.join(x) for x in r.astype(str).values]) + '\n'
line_one_bt = pbt.BedTool(s, from_string=True)
line_one_bt = line_one_bt.sort()
line_one_bt.saveas(fn)
else:
line_one_bt = pbt.BedTool(fn)
res = xgenes_center_bt.window(line_one_bt, w=100000)
df = res.to_dataframe()
line_one_by_gene = df.name.value_counts()
line_one_bedgraph = os.path.join(outdir, 'line_one.bedGraph')
if not os.path.exists(line_one_bedgraph):
res = line_one_bt.genome_coverage(g=pbt.genome_registry.hg19, bg=True)
res.saveas(line_one_bedgraph)
line_one_bw = os.path.join(outdir, 'line_one.bw')
if not os.path.exists(line_one_bw):
!bedGraphToBigWig {line_one_bedgraph} /software/bedtools-2.25.0/genomes/human.hg19.genome {line_one_bw}
line_one_bam = os.path.join(outdir, 'line_one_uniq_sorted.bam')
if not os.path.exists(line_one_bam):
df = line_one_bt.to_dataframe()
df.name = df.name + '_' + pd.Series(range(df.shape[0])).astype(str)
fn = os.path.join(outdir, 'line_one_uniq.bed')
df.to_csv(fn, header=None, index=None, sep='\t')
out = os.path.join(outdir, 'line_one_uniq.bam')
!bedToBam -i {fn} -g /frazer01/software/bedtools-2.25.0/genomes/human.hg19.genome > {out}
!sambamba sort -o {line_one_bam} {out}
!sambamba index {line_one_bam}
!rm {out}
fn = os.path.join(outdir, 'alu_elements.tsv')
if not os.path.exists(fn):
rmsk = os.path.join(outdir, 'rmsk_db.txt.gz')
repeat_db = pd.read_table(rmsk, low_memory=False)
alu_elements = repeat_db[repeat_db.repFamily == 'Alu']
alu_elements.to_csv(fn, sep='\t')
else:
alu_elements = pd.read_table(fn, index_col=0)
fn = os.path.join(outdir, 'alu_elements.bed')
if not os.path.exists(fn):
repeats = pd.read_table(os.path.join(outdir, 'rmsk.bed'), header=None, low_memory=False)
repeat_db = pd.read_table(os.path.join(outdir, 'rmsk_db.txt.gz'), low_memory=False)  # path built locally so rmsk need not be defined in this cell
se = pd.Series(dict(zip(repeat_db.repName, repeat_db.repFamily)))
r = repeats[repeats[3].apply(lambda x: se[x] == 'Alu')]
s = '\n'.join(['\t'.join(x) for x in r.astype(str).values]) + '\n'
alu_bt = pbt.BedTool(s, from_string=True)
alu_bt.saveas(fn)
else:
alu_bt = pbt.BedTool(fn)
res = xgenes_center_bt.window(alu_bt, w=100000)
df = res.to_dataframe()
alu_by_gene = df.name.value_counts()
fn = os.path.join(outdir, 'line_two_elements.tsv')
if not os.path.exists(fn):
rmsk = os.path.join(outdir, 'rmsk_db.txt.gz')
repeat_db = pd.read_table(rmsk, low_memory=False)
line_two_elements = repeat_db[repeat_db.repFamily == 'L2']
line_two_elements.to_csv(fn, sep='\t')
else:
line_two_elements = pd.read_table(fn, index_col=0)
fn = os.path.join(outdir, 'line_two_elements.bed')
if not os.path.exists(fn):
repeats = pd.read_table(os.path.join(outdir, 'rmsk.bed'), header=None, low_memory=False)
repeat_db = pd.read_table(os.path.join(outdir, 'rmsk_db.txt.gz'), low_memory=False)  # path built locally so rmsk need not be defined in this cell
se = pd.Series(dict(zip(repeat_db.repName, repeat_db.repFamily)))
r = repeats[repeats[3].apply(lambda x: se[x] == 'L2')]
s = '\n'.join(['\t'.join(x) for x in r.astype(str).values]) + '\n'
line_two_bt = pbt.BedTool(s, from_string=True)
line_two_bt.saveas(fn)
else:
line_two_bt = pbt.BedTool(fn)
res = xgenes_center_bt.window(line_two_bt, w=100000)
df = res.to_dataframe()
line_two_by_gene = df.name.value_counts()
Explanation: L1 elements
I downloaded the repeat masker database from the table browser (group: Repeats,
track: RepeatMasker, output format: all fields from selected table) to the file
rmsk_db.txt.gz. I also downloaded the database as a bed file from the table browser
to the file rmsk.bed. I put both of these in the output directory.
End of explanation
df = xgenes_center_bt.to_dataframe()
pos = pd.Series(df.start.values, index=df.name)
features = pd.DataFrame({'rep_timing':rt_by_gene, 'line1':line_one_by_gene, 'pos':pos,
'alu':alu_by_gene, 'line2':line_two_by_gene})
Explanation: Combine features
End of explanation
cyto = pd.read_table(os.path.join(outdir, 'cytoBand_db.txt'))
cyto.columns = [x.replace('#', '') for x in cyto.columns]
cyto.ix[(cyto.chrom == 'chrX') & (cyto.gieStain == 'acen')]
%%R
suppressPackageStartupMessages(library(lme4))
lmm_features = pd.DataFrame(x_all.ix[:, :, 'major_allele_freq'].stack(),
columns=['major_allele_freq'])
lmm_features.index.names = ['gene_id', 'sample_id']
lmm_features = lmm_features.reset_index()
t = pd.DataFrame(x_all.ix[:, :, 'p_val_ase'].stack(),
columns=['p_val_ase'])
t.index.names = ['gene_id', 'sample_id']
t = t.reset_index()
lmm_features['p_val_ase'] = t['p_val_ase']
lmm_features = lmm_features.merge(features[['line1', 'alu', 'line2', 'pos', 'rep_timing']],
left_on='gene_id', right_index=True)
t = log_tpm_n.stack()
t = t.reset_index()
t.columns = ['gene_id', 'sample_id', 'exp']
t.index = t.gene_id + ':' + t.sample_id
lmm_features['exp'] = t.ix[lmm_features.gene_id + ':' + lmm_features.sample_id, 'exp'].values
lmm_features = lmm_features.dropna()
# random.seed('5454')
# rand = [random.random() for x in range(lmm_features.shape[0])]
# lmm_features['random'] = rand
random.seed('5454')
rand = [random.random() for x in set(lmm_features.gene_id)]
se = pd.Series(rand, index=set(lmm_features.gene_id))
lmm_features['random'] = se[lmm_features.gene_id].values
lmm_features_p = lmm_features[lmm_features.pos < 58100000]
lmm_features_q = lmm_features[lmm_features.pos > 63000000]
lmm_features.pos = lmm_features.pos - lmm_features.pos.min()
lmm_features.pos = lmm_features.pos / lmm_features.pos.max()
lmm_features.line1 = lmm_features.line1 - lmm_features.line1.min()
lmm_features.line1 = lmm_features.line1 / lmm_features.line1.max()
lmm_features.alu = lmm_features.alu - lmm_features.alu.min()
lmm_features.alu = lmm_features.alu / lmm_features.alu.max()
lmm_features.line2 = lmm_features.line2 - lmm_features.line2.min()
lmm_features.line2 = lmm_features.line2 / lmm_features.line2.max()
lmm_features_p = lmm_features.ix[lmm_features_p.index]
lmm_features_q = lmm_features.ix[lmm_features_q.index]
%%R -i lmm_features
model = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id), data=lmm_features)
summary(model)
%%R -i lmm_features_p
model = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id), data=lmm_features_p)
summary(model)
%%R -i lmm_features_q
model = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id), data=lmm_features_q)
summary(model)
Explanation: I downloaded the banding track from the table browser (group: Mapping and Sequencing,
track: Chromosome Band, table: cytoBand) to cytoBand_db.txt in the output directory. The acen (centromere) rows provide the boundary coordinates used to split genes into p-arm (position < 58,100,000) and q-arm (position > 63,000,000) subsets below.
End of explanation
%%R
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + pos + rep_timing + (1|sample_id),
data=lmm_features, REML=FALSE)
anova(model.null, model.full)
Explanation: Whole chromosome
LINE 1 elements
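Each comparison in this section is a likelihood-ratio test between nested mixed models (both fit with REML=FALSE): the statistic is $\chi^2 = 2(\log L_{full} - \log L_{null})$, referred to a $\chi^2$ distribution with degrees of freedom equal to the number of fixed effects dropped from the full model (one here).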
End of explanation
%%R
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + rep_timing + (1|sample_id),
data=lmm_features, REML=FALSE)
anova(model.null, model.full)
Explanation: Position
End of explanation
%%R
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + pos + (1|sample_id),
data=lmm_features, REML=FALSE)
anova(model.null, model.full)
Explanation: Replication timing
End of explanation
%%R -i lmm_features_p
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_p, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + pos + rep_timing + (1|sample_id),
data=lmm_features_p, REML=FALSE)
anova(model.null, model.full)
Explanation: p arm
LINE 1 elements
End of explanation
%%R -i lmm_features_p
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_p, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + rep_timing + (1|sample_id),
data=lmm_features_p, REML=FALSE)
anova(model.null, model.full)
Explanation: Position
End of explanation
%%R -i lmm_features_p
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_p, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + pos + (1|sample_id),
data=lmm_features_p, REML=FALSE)
anova(model.null, model.full)
Explanation: Replication timing
End of explanation
%%R -i lmm_features_q
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_q, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + pos + rep_timing + (1|sample_id),
data=lmm_features_q, REML=FALSE)
anova(model.null, model.full)
Explanation: q arm
LINE 1 elements
End of explanation
%%R -i lmm_features_q
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_q, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + rep_timing + (1|sample_id),
data=lmm_features_q, REML=FALSE)
anova(model.null, model.full)
Explanation: Position
End of explanation
%%R -i lmm_features_q
model.full = lmer(major_allele_freq ~ exp + line1 + pos + rep_timing + (1|sample_id),
data=lmm_features_q, REML=FALSE)
model.null = lmer(major_allele_freq ~ exp + line1 + pos + (1|sample_id),
data=lmm_features_q, REML=FALSE)
anova(model.null, model.full)
rt_data = rt[['Chromosome', 'Start_Position', 'End_Position', 'Data_Value']]
rt_data.columns = ['chromosome', 'start', 'end', 'Replication Timing']
%%R
suppressPackageStartupMessages(library(Gviz))
%%R -i rt_data
ideoTrack <- IdeogramTrack(genome = "hg19", chromosome = "chrX")
rtTrack <- DataTrack(range=rt_data, genome="hg19", type=c("polygon"),
chromosome="chrX", name="Replication Timing")
%%R -i line_one_bw
#bamFile <- system.file(line_one_bam, package = "Gviz")
lineTrack <- DataTrack(range=line_one_bw, genome="hg19", type="l", window=-1,
chromosome="chrX", name="L1 Elements", )
lmm_features = lmm_features.merge(gene_info[['chrom', 'start', 'end']],
left_on='gene_id', right_index=True)
lmm_features.columns = [c.replace('chrom', 'chromosome') for c in lmm_features.columns]
t = x_all.ix[:, :, 'major_allele_freq']
r = gene_info.ix[t.index, ['start', 'end']]
%%R -i t,r
mafTrack <- DataTrack(range=r, data=t, genome="hg19", type=c("smooth", "p"), alpha=0.75, lwd=8,
span=0.05,
chromosome="chrX", name="Major Allele Frequency")
%%R
plotTracks(c(ideoTrack, rtTrack, mafTrack), from=63000000, to=155270560)
%%R
plotTracks(c(ideoTrack, rtTrack, lineTrack, mafTrack), from=63000000, to=155270560)
%%R
plotTracks(c(ideoTrack, rtTrack, lineTrack, mafTrack), from=0, to=58100000)
%%R
plotTracks(c(ideoTrack, rtTrack, lineTrack, mafTrack), from=0, to=58100000)
%%R -i rt_bedgraph,sig_bedgraph,mean_freq_z_bedgraph,line_one_bam
ideoTrack <- IdeogramTrack(genome = "hg19", chromosome = "chrX")
# sig <- system.file(sig_bedgraph, package = "Gviz")
sigTrack <- DataTrack(range=sig_bedgraph, genome="hg19", type=c("smooth", "p"),
chromosome="chrX", name="Percent Significant ASE")
# mfz <- system.file(sig_bedgraph, package = "Gviz")
mfzTrack <- DataTrack(range=mean_freq_z_bedgraph, genome="hg19", type=c("smooth", "p"),
chromosome="chrX", name="Mean Frequency z Score")
#rt <- system.file(rt_bedgraph, package = "Gviz")
rtTrack <- DataTrack(range=rt_bedgraph, genome="hg19", type="l",
chromosome="chrX", name="Replication Timing")
lineTrack <- DataTrack(range=line_one_bam, genome="hg19", type="l", window=-1,
chromosome="chrX", name="L1 Elements")
Explanation: Replication timing
End of explanation
g = set(gene_info[gene_info.chrom == 'chrX'].index) & set(ase_pval.index)
s = meta[(meta.sex == 'F') & (meta.in_eqtl)].index
fx_maj_af = maj_af.ix[g, s]
fx_maj_af_f = fx_maj_af[(fx_maj_af.isnull() == False).sum(axis=1) >= 40]
fx_maj_af_f.shape
g = set(gene_info[gene_info.chrom == 'chrX'].index) & set(ase_pval.index)
s = meta[(meta.sex == 'F') & (meta.in_eqtl)].index
fx_maj_af = maj_af.ix[g, s]
fx_maj_af_f = fx_maj_af[(fx_maj_af.isnull() == False).sum(axis=1) >= 40]
cpy.makedir(os.path.join(outdir, 'inact_qtl'))
fn = os.path.join(ciepy.root, 'output', 'eqtl_input', 'gene_to_regions.p')
gene_to_regions = cPickle.load(open(fn, 'rb'))
fx_maj_af_f.mean().hist()
plt.ylabel('Number of samples')
plt.xlabel('Mean major allele frequency');
Explanation: Reactivation QTLs
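For each gene, significance is summarized with an empirical permutation p-value: if $p_{min}^{obs}$ is the smallest observed EMMAX p-value for the gene and $p_{min,i}^{perm}$ ($i = 1, \ldots, N$) are the minimum p-values from the permutations, then $p = (1 + \#\{i : p_{min,i}^{perm} \leq p_{min}^{obs}\}) / (N + 1)$, which matches the calculation performed a few cells further on.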
End of explanation
fx_maj_af_f = fx_maj_af_f - fx_maj_af_f.mean()
fx_maj_af_f.shape
def run_emmax_sge(gene_id, mem=4):
"""Run EMMAX for X inactivation eQTL."""
se = fx_maj_af_f.ix[gene_id].dropna()
se = cpb.general.transform_standard_normal(se)
wgs = cpy.get_best_wgs_sample(meta.ix[se.index, 'subject_id'])
se.index = wgs.wgs_id
se = se[sorted(se.index)]
toutdir = os.path.join(outdir, 'inact_qtl', gene_id)
cpy.makedir(toutdir)
samples = os.path.join(toutdir, 'emmax_samples.tsv')
with open(samples, 'w') as f:
f.write('\n'.join(se.index) + '\n')
exp = os.path.join(toutdir, 'maj_af_std_norm.tsv')
pd.DataFrame(se, columns=[gene_id]).T.to_csv(exp, sep='\t')
vcf = '/projects/CARDIPS/pipeline/WGS/mergedVCF/CARDIPS_201512.femaleX.PASS.vcf.gz'
regions = ','.join([x[3:] for x in gene_to_regions[gene_id]])
kin = os.path.join(ciepy.root, 'output', 'eqtl_input', 'wgs.kin')
res = datetime.datetime.now()
date = re.sub(r'\D', '_', str(res))
fn = os.path.join(toutdir, '{}_{}.sh'.format(gene_id, date))
with open(fn, 'w') as f:
f.write('#!/bin/bash\n\n')
f.write('#$ -N emmax_{}_{}_x\n'.format(gene_id, date))
num_threads = 4
f.write('#$ -l short\n')
f.write('#$ -l h_vmem={}G\n'.format(mem / num_threads))
f.write('#$ -pe smp {}\n'.format(num_threads))
f.write('#$ -S /bin/bash\n')
f.write('#$ -o {}/emmax_{}_{}_x.out\n'.format(toutdir, gene_id, date))
f.write('#$ -e {}/emmax_{}_{}_x.err\n\n'.format(toutdir, gene_id, date))
f.write('module load cardips/1\n')
f.write('source activate cie\n\n')
cpy.makedir(toutdir)
c = 'python {} \\\n\t'.format(os.path.join(ciepy.root, 'scripts', 'run_emmax.py'))
c += ' \\\n\t'.join([
gene_id,
vcf,
regions,
exp,
samples,
kin,
toutdir,
])
f.write(c + '\n\n')
subprocess.check_call('qsub {}'.format(fn), shell=True)
if not os.path.exists(os.path.join(outdir, 'inact_qtl')):
for g in fx_maj_af_f.index:
run_emmax_sge(g)
dys = glob.glob(os.path.join(outdir, 'inact_qtl', '*'))
gene_ids = []
pvalues = []
for dy in dys:
gene_id = os.path.split(dy)[1]
res_fn = os.path.join(os.path.join(dy, '{}.tsv'.format(gene_id)))
res = ciepy.read_emmax_output(res_fn)
min_fn = os.path.join(os.path.join(dy, 'minimum_pvalues.tsv'))
min_pvals = pd.read_table(min_fn, header=None, squeeze=True)
pvalues.append((1 + sum(min_pvals <= res.PVALUE.min())) / float(min_pvals.shape[0] + 1))
gene_ids.append(gene_id)
pvalues = pd.Series(pvalues, index=gene_ids)
pvalues.hist()
plt.xlabel('$p$-value')
plt.ylabel('Number of genes');
(-np.log10(pvalues)).hist()
plt.xlabel('$-\log_{10} p$-value')
plt.ylabel('Number of genes');
r = sms.sandbox.stats.multicomp.multipletests(pvalues, method='fdr_bh')
pvalues_bh = pd.Series(r[1], index=pvalues.index)
sum(pvalues_bh < 0.05)
pvalues = pvalues.sort_values()
Explanation: I'm going to subtract the mean from each sample (column) to account for
differences in overall reactivation.
End of explanation
gene_info.ix[pvalues[pvalues < 0.05].index]
Explanation: The following genes are nominally significant but don't pass the FDR correction.
End of explanation |
13,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow
Step1: We see that this function just takes a numeric input and returns its squared value.
Step2: What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a MapNode.
iterfield
The MapNode constructor has a field called iterfield, which tells it what inputs should be expecting a list.
Step3: Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists.
Step4: But not every input needs to be an iterfield.
Step5: As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
In more advanced applications it is useful to be able to iterate over items of nested lists (for example [[1,2],[3,4]]). MapNode allows you to do this with the "nested=True" parameter. Outputs will preserve the same nested structure as the inputs.
Why is this important?
Let's consider we have multiple functional images (A) and each of them should be motioned corrected (B1, B2, B3,..). But afterward, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. Iterables can't do that. They would split up the pipeline. Therefore, we need MapNodes.
<img src="../static/images/mapnode.png" width="300">
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes
Step6: If we try to specify the input for the Gunzip node with a simple Node, we get the following error
Step7: bash
TraitError
Step8: Now, we just have to create a workflow, connect the nodes and we can run it
Step9: Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.
Step10: let's print all nodes
Step11: the final result should be 10
Step12: we can also check the results of two other nodes | Python Code:
from nipype import Function
def square_func(x):
return x ** 2
square = Function(["x"], ["f_x"], square_func)
Explanation: MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a MapNode. A MapNode is quite similar to a normal Node, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). MapNode can solve this problem. Imagine you have the following workflow:
<img src="../static/images/mapnode.png" width="325">
Node A outputs a list of files, but node B accepts only one file. Additionally, C expects a list of files. What you would like is to run B for every file in the output of A and collect the results as a list and feed it to C. Something like this:
```python
from nipype import Node, MapNode, Workflow
a = Node(interface=A(), name="a")
b = MapNode(interface=B(), name="b", iterfield=['in_file'])
c = Node(interface=C(), name="c")
my_workflow = Workflow(name="my_workflow")
my_workflow.connect([(a,b,[('out_files','in_file')]),
(b,c,[('out_file','in_files')])
])
```
Let's demonstrate this with a simple function interface:
End of explanation
square.run(x=2).outputs.f_x
Explanation: We see that this function just takes a numeric input and returns its squared value.
End of explanation
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])
square_node.inputs.x = [0, 1, 2, 3]
res = square_node.run()
res.outputs.f_x
Explanation: What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a MapNode.
iterfield
The MapNode constructor has a field called iterfield, which tells it what inputs should be expecting a list.
End of explanation
def power_func(x, y):
return x ** y
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = [0, 1, 2, 3]
res = power_node.run()
print(res.outputs.f_xy)
Explanation: Because iterfield can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists.
End of explanation
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = 3
res = power_node.run()
print(res.outputs.f_xy)
Explanation: But not every input needs to be an iterfield.
End of explanation
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype import Node, MapNode, Workflow
# Here we specify a list of files (for this tutorial, we just add the same file twice)
files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',
'/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz']
realign = Node(Realign(register_to_mean=True),
name='motion_correction')
Explanation: As in the case of iterables, each underlying MapNode execution can happen in parallel. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
In more advanced applications it is useful to be able to iterate over items of nested lists (for example [[1,2],[3,4]]). MapNode allows you to do this with the "nested=True" parameter. Outputs will preserve the same nested structure as the inputs.
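A minimal sketch of the nested behaviour, reusing the square interface defined earlier in this notebook (the values in the comment are what we would expect, not captured output):
```python
square_nested = MapNode(square, name="square_nested",
                        iterfield=["x"], nested=True)
square_nested.inputs.x = [[0, 1], [2, 3]]
res = square_nested.run()
res.outputs.f_x  # expected: [[0, 1], [4, 9]], mirroring the nested input structure
```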
Why is this important?
Let's consider we have multiple functional images (A) and each of them should be motion corrected (B1, B2, B3, ...). But afterward, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. Iterables can't do that. They would split up the pipeline. Therefore, we need MapNodes.
<img src="../static/images/mapnode.png" width="300">
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:
- Gunzip, to unzip the files (plural)
- Realign, to do the motion correction
End of explanation
gunzip = Node(Gunzip(), name='gunzip',)
try:
gunzip.inputs.in_file = files
except(Exception) as err:
if "TraitError" in str(err.__class__):
print("TraitError:", err)
else:
raise
else:
raise
Explanation: If we try to specify the input for the Gunzip node with a simple Node, we get the following error:
End of explanation
gunzip = MapNode(Gunzip(), name='gunzip',
iterfield=['in_file'])
gunzip.inputs.in_file = files
Explanation: Assigning the list of files to the plain Node fails with:
TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'] <class 'list'> was specified.
But if we do it with a MapNode, it works:
End of explanation
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = '/output'
mcflow.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Now, we just have to create a workflow, connect the nodes and we can run it:
End of explanation
#write your solution here
from nipype import Workflow, Node, MapNode, Function
import os
def range_fun(n_min, n_max):
return list(range(n_min, n_max+1))
def factorial(n):
# print("FACTORIAL, {}".format(n))
import math
return math.factorial(n)
def summing(terms):
return sum(terms)
wf_ex1 = Workflow('ex1')
wf_ex1.base_dir = os.getcwd()
range_nd = Node(Function(input_names=['n_min', 'n_max'],
output_names=['range_list'],
function=range_fun),
name='range_list')
factorial_nd = MapNode(Function(input_names=['n'],
output_names=['fact_out'],
function=factorial),
iterfield=['n'],
name='factorial')
summing_nd = Node(Function(input_names=['terms'],
output_names=['sum_out'],
function=summing),
name='summing')
range_nd.inputs.n_min = 0
range_nd.inputs.n_max = 3
wf_ex1.add_nodes([range_nd])
wf_ex1.connect(range_nd, 'range_list', factorial_nd, 'n')
wf_ex1.connect(factorial_nd, 'fact_out', summing_nd, "terms")
eg = wf_ex1.run()
Explanation: Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.:
$$\sum _{k=n_{min}}^{n_{max}} k! = 0! + 1! + 2! + 3! + \cdots$$
if $n_{min}=0$ and $n_{max}=3$
$$\sum _{k=0}^{3} k! = 0! + 1! +2! + 3! = 1 + 1 + 2 + 6 = 10$$
Use Node for a function that creates a list of integers and a function that sums everything at the end. Use MapNode to calculate factorials.
End of explanation
eg.nodes()
Explanation: let's print all nodes:
End of explanation
list(eg.nodes())[2].result.outputs
Explanation: the final result should be 10:
End of explanation
print(list(eg.nodes())[0].result.outputs)
print(list(eg.nodes())[1].result.outputs)
Explanation: we can also check the results of two other nodes:
End of explanation |
13,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing Low-Level TensorFlow Code
Learning Objectives
Practice defining and performing basic operations on constant Tensors
Use Tensorflow's automatic differentiation capability
Learn how to train a linear regression from scratch with TensorFLow
Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either contant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variables values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x use one of the following method
Step2: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does
Step3: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
Step4: You can convert a native TF tensor to a NumPy array using .numpy()
Step5: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function
Step6: Let's also create a test dataset to evaluate our models
Step7: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set
Step8: Using mean squared error, our loss is
Step9: This values for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model
Step10: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of tf.GradientTape instance which will reccord gradient information
Step11: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Lab Task #3
Step12: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set
Step13: This is indeed much better!
Bonus
Try modelling a non-linear function such as | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print(tf.__version__)
Explanation: Writing Low-Level TensorFlow Code
Learning Objectives
Practice defining and performing basic operations on constant Tensors
Use Tensorflow's automatic differentiation capability
Learn how to train a linear regression from scratch with TensorFLow
Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name="my_variable")
x.assign(45.8)
x
x.assign_add(4)
x
x.assign_sub(3)
x
Explanation: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x use one of the following methods:
x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
End of explanation
# TODO 1a
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1b
a = # TODO -- Your code here.
b = # TODO -- Your code here.
c = # TODO -- Your code here.
d = # TODO -- Your code here.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = # TODO -- Your code here.
b = # TODO -- Your code here.
print("b:", b)
Explanation: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
tf.add allows us to add the components of a tensor
tf.multiply allows us to multiply the components of a tensor
tf.subtract allows us to subtract the components of a tensor
tf.math.* contains the usual math operations to be applied on the components of a tensor
and many more...
Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
Lab Task #1: Performing basic operations on Tensors
1. Compute the sum of the constants a and b below using tf.add and + and verify both operations produce the same values.
2. Compute the product of the constants a and b below using tf.multiply and * and verify both operations produce the same values.
3. Compute the exponential of the constant a using tf.math.exp. Note, you'll need to specify the type for this operation.
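One possible way to fill in TODOs 1a–1c (a sketch only; the integer constants used for a and b are placeholders, not values prescribed by the lab):
```python
# Task 1a: addition with tf.add and the overloaded + operator
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b

# Task 1b: multiplication with tf.multiply and *
c = tf.multiply(a, b)
d = a * b

# Task 1c: the exponential needs a float dtype
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
```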
End of explanation
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
a_tf.numpy()
Explanation: You can convert a native TF tensor to a NumPy array using .numpy()
End of explanation
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print(f"X:{X}")
print(f"Y:{Y}")
Explanation: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print(f"X_test:{X_test}")
print(f"Y_test:{Y_test}")
Explanation: Let's also create a test dataset to evaluate our models:
End of explanation
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
Explanation: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
End of explanation
errors = (Y_hat - Y) ** 2
loss = tf.reduce_mean(errors)
loss.numpy()
Explanation: Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
End of explanation
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
Explanation: This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
End of explanation
# TODO 2
def compute_gradients(X, Y, w0, w1):
# TODO -- Your code here.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
Explanation: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a tf.GradientTape instance, which will record gradient information:
```python
with tf.GradientTape() as tape:
    loss = # computation
```
This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable:
```python
gradients = tape.gradient(loss, [w0, w1])
```
We illustrate this procedure by computing the loss gradients with respect to the model weights:
Lab Task #2: Complete the function below to compute the loss gradients with respect to the model weights w0 and w1.
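One possible completion for compute_gradients, combining the two snippets shown (the GradientTape context plus tape.gradient):
```python
def compute_gradients(X, Y, w0, w1):
    # Record operations on the tape so gradients can be taken afterwards.
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    # Partial derivatives of the loss with respect to w0 and w1.
    return tape.gradient(loss, [w0, w1])
```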
End of explanation
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = # TODO -- Your code here.
if step % 100 == 0:
loss = # TODO -- Your code here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Lab Task #3: Complete the for loop below to train a linear regression.
1. Use compute_gradients to compute dw0 and dw1.
2. Then, re-assign the value of w0 and w1 using the .assign_sub(...) method with the computed gradient values and the LEARNING_RATE.
3. Finally, for every 100th step, we'll compute and print the loss. Use the loss_mse function we created above to compute the loss.
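A possible completion of the training loop, following the three steps listed (a sketch; it reuses STEPS, LEARNING_RATE, MSG, w0 and w1 from the TODO cell):
```python
for step in range(0, STEPS + 1):
    # 1. Gradients of the loss with respect to the weights.
    dw0, dw1 = compute_gradients(X, Y, w0, w1)
    # 2. Gradient-descent update of each weight.
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)
    # 3. Report the loss every 100 steps.
    if step % 100 == 0:
        loss = loss_mse(X, Y, w0, w1)
        print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
```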
End of explanation
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
Explanation: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:
End of explanation
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-(X**2))
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
loss = loss_mse(Xf, Y, W)
return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = 0.02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
dW = compute_gradients(X, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print(f"STEP: {STEPS} MSE: {loss_mse(Xf, Y, W)}")
plt.figure()
plt.plot(X, Y, label="actual")
plt.plot(X, predict(Xf, W), label="predicted")
plt.legend()
Explanation: This is indeed much better!
Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
End of explanation |
13,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run from bootstrap paths
Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.
Tasks covered in this notebook
Step1: Loading things from storage
First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.
Step2: A lot of information can be recovered from the old storage, and so we don't have the recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
Step3: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
Step4: initialize engine
if we do not select a platform the fastest possible will be chosen but we explicitly request to use the one in the config file
Step5: Running RETIS
Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.
Step6: Before we can sample we still need to set the actual MoveScheme which determines the
set of moves to apply to our set of samples and effectively doing the steps in
replica (sampleset) space. We pick the default scheme for mstis and feed it with
the engine to be used.
Step7: and finally generate the PathSampler object to conduct the simulation.
Step8: Now everything is ready | Python Code:
%matplotlib inline
import openpathsampling as paths
import numpy as np
import math
# the openpathsampling OpenMM engine
import openpathsampling.engines.openmm as eng
Explanation: Run from bootstrap paths
Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.
Tasks covered in this notebook:
* Loading OPS objects from storage
* Ways of assigning initial trajectories to initial samples
* Setting up a path sampling simulation with various move schemes
* Visualizing trajectories while the path sampling is running
End of explanation
old_store = paths.AnalysisStorage("ala_mstis_bootstrap.nc")
Explanation: Loading things from storage
First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.
End of explanation
print("PathMovers:", len(old_store.pathmovers))
print("Engines:", len(old_store.engines))
print("Samples:", len(old_store.samples))
print("Trajectories:", len(old_store.trajectories))
print("Ensembles:", len(old_store.ensembles))
print("SampleSets:", len(old_store.samplesets))
print("Snapshots:", len(old_store.snapshots))
print("Networks:", len(old_store.networks))
Explanation: A lot of information can be recovered from the old storage, and so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
End of explanation
# template = old_store.snapshots[0]
engine = old_store.engines['default']
mstis = old_store.networks[0]
sset = old_store.tag['sampleset']
Explanation: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
End of explanation
#platform = 'CUDA'
#engine.initialize(platform)
print('Engine uses platform `%s`' % engine.platform)
sset.sanity_check()
Explanation: initialize engine
If we do not select a platform, the fastest possible one will be chosen, but here we explicitly request to use the one in the config file.
End of explanation
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
storage = paths.storage.Storage("ala_mstis_production.nc", "w")
storage.snapshots.save(old_store.snapshots[0]);
Explanation: Running RETIS
Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.
End of explanation
scheme = paths.DefaultScheme(mstis, engine)
Explanation: Before we can sample we still need to set the actual MoveScheme which determines the
set of moves to apply to our set of samples and effectively doing the steps in
replica (sampleset) space. We pick the default scheme for mstis and feed it with
the engine to be used.
End of explanation
mstis_calc = paths.PathSampling(
storage=storage,
sample_set=sset,
move_scheme=scheme
)
mstis_calc.save_frequency = 10
#mstis_calc.save_frequency = 1
Explanation: and finally generate the PathSampler object to conduct the simulation.
End of explanation
mstis_calc.run(10000)
print(len(storage.steps))
# commented out during development, so we can "run all" and then do more
storage.close()
Explanation: Now everything is ready: let's run the simulation! The first step takes a little while since all
necessary information, i.e. the engines, topologies, initial snapshots, ..., needs to be
stored. Then the Monte Carlo steps will be performed.
End of explanation |
13,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab
Step1: Create Google Cloud Storage bucket for storing Vertex Pipeline artifacts
Step2: Import libraries
Step3: Create BigQuery dataset
Step4: Create BigQuery dataset for ML classification task
Step5: Create a Vertex AI managed dataset resource for pipeline dataset lineage tracking
Initialize Vertex AI Python SDK
Step6: Create Vertex managed tabular dataset
Step7: Create a TFX pipeline
Step8: Write model code
Step9: Write pipeline definition with the TFX SDK
Step10: Run your TFX pipeline on Vertex Pipelines
Create a Artifact Registry on Google Cloud for your pipeline container image
Step11: Set the pipeline configurations for the Vertex AI run
Step12: Build the TFX pipeline container image
Step13: Compile the TFX pipeline
Step14: Extracting pipeline run metadata
Step16: Upload trained model from Google Cloud Storage to Vertex AI | Python Code:
GOOGLE_CLOUD_PROJECT_ID = !(gcloud config get-value core/project)
GOOGLE_CLOUD_PROJECT_ID = GOOGLE_CLOUD_PROJECT_ID[0]
GOOGLE_CLOUD_REGION = 'us-central1'
BQ_DATASET_NAME = 'chicago_taxifare_tips'
BQ_TABLE_NAME = 'chicago_taxi_tips_ml'
BQ_LOCATION = 'US'
BQ_URI = f"bq://{GOOGLE_CLOUD_PROJECT_ID}.{BQ_DATASET_NAME}.{BQ_TABLE_NAME}"
DATASET_DISPLAY_NAME = 'chicago-taxifare-tips'
MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier'
PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-train-pipeline'
Explanation: Lab: Chicago taxifare tip prediction on Google Cloud Vertex Pipelines using the TFX SDK
Learning objectives
Define a machine learning pipeline to predict taxi fare tips using the TFX SDK.
Compile and run a TFX pipeline on Google Cloud's Vertex Pipelines.
Dataset
The Chicago Taxi Trips dataset is one of the public datasets hosted with BigQuery, which includes taxi trips from 2013 to the present, reported to the City of Chicago in its role as a regulatory agency. The task is to predict whether a given trip will result in a tip > 20%.
Setup
Define constants
End of explanation
GCS_LOCATION = f"gs://{GOOGLE_CLOUD_PROJECT_ID}-tfx"
!gsutil mb -l $GOOGLE_CLOUD_REGION $GCS_LOCATION
Explanation: Create Google Cloud Storage bucket for storing Vertex Pipeline artifacts
End of explanation
import os
import tensorflow as tf
import tfx
import kfp
from google.cloud import bigquery
from google.cloud import aiplatform as vertex_ai
print(f"tensorflow: {tf.__version__}")
print(f"tfx: {tfx.__version__}")
print(f"kfp: {kfp.__version__}")
print(f"Google Cloud Vertex AI Python SDK: {vertex_ai.__version__}")
Explanation: Import libraries
End of explanation
!bq --location=$BQ_LOCATION mk -d \
$GOOGLE_CLOUD_PROJECT_ID:$BQ_DATASET_NAME
Explanation: Create BigQuery dataset
End of explanation
SAMPLE_SIZE = 20000
YEAR = 2020
sql_script = '''
CREATE OR REPLACE TABLE `@PROJECT_ID.@DATASET.@TABLE`
AS (
WITH
taxitrips AS (
SELECT
trip_start_timestamp,
trip_seconds,
trip_miles,
payment_type,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
tips,
fare
FROM
`bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE 1=1
AND pickup_longitude IS NOT NULL
AND pickup_latitude IS NOT NULL
AND dropoff_longitude IS NOT NULL
AND dropoff_latitude IS NOT NULL
AND trip_miles > 0
AND trip_seconds > 0
AND fare > 0
AND EXTRACT(YEAR FROM trip_start_timestamp) = @YEAR
)
SELECT
trip_start_timestamp,
EXTRACT(MONTH from trip_start_timestamp) as trip_month,
EXTRACT(DAY from trip_start_timestamp) as trip_day,
EXTRACT(DAYOFWEEK from trip_start_timestamp) as trip_day_of_week,
EXTRACT(HOUR from trip_start_timestamp) as trip_hour,
trip_seconds,
trip_miles,
payment_type,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1)
) AS pickup_grid,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1)
) AS dropoff_grid,
ST_Distance(
ST_GeogPoint(pickup_longitude, pickup_latitude),
ST_GeogPoint(dropoff_longitude, dropoff_latitude)
) AS euclidean,
CONCAT(
ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickup_longitude,
pickup_latitude), 0.1)),
ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropoff_longitude,
dropoff_latitude), 0.1))
) AS loc_cross,
IF((tips/fare >= 0.2), 1, 0) AS tip_bin,
IF(ABS(MOD(FARM_FINGERPRINT(STRING(trip_start_timestamp)), 10)) < 9, 'UNASSIGNED', 'TEST') AS ml_use
FROM
taxitrips
LIMIT @LIMIT
)
'''
sql_script = sql_script.replace(
'@PROJECT_ID', GOOGLE_CLOUD_PROJECT_ID).replace(
'@DATASET', BQ_DATASET_NAME).replace(
'@TABLE', BQ_TABLE_NAME).replace(
'@YEAR', str(YEAR)).replace(
'@LIMIT', str(SAMPLE_SIZE))
bq_client = bigquery.Client(project=GOOGLE_CLOUD_PROJECT_ID, location=BQ_LOCATION)
job = bq_client.query(sql_script)
_ = job.result()
%%bigquery
SELECT ml_use, COUNT(*)
FROM chicago_taxifare_tips.chicago_taxi_tips_ml
GROUP BY ml_use
Explanation: Create BigQuery dataset for ML classification task
End of explanation
vertex_ai.init(project=GOOGLE_CLOUD_PROJECT_ID, location=GOOGLE_CLOUD_REGION)
Explanation: Create a Vertex AI managed dataset resource for pipeline dataset lineage tracking
Initialize Vertex AI Python SDK
End of explanation
tabular_dataset = vertex_ai.TabularDataset.create(display_name=f"{DATASET_DISPLAY_NAME}", bq_source=f"{BQ_URI}")
tabular_dataset.gca_resource
Explanation: Create Vertex managed tabular dataset
End of explanation
PIPELINE_DIR="tfx_taxifare_tips"
Explanation: Create a TFX pipeline
End of explanation
%%writefile {PIPELINE_DIR}/model_training/features.py
%%writefile {PIPELINE_DIR}/model_training/preprocessing.py
%%writefile {PIPELINE_DIR}/model_training/model.py
Explanation: Write model code
End of explanation
%%writefile {PIPELINE_DIR}/pipeline.py
%%writefile {PIPELINE_DIR}/runner.py
Explanation: Write pipeline definition with the TFX SDK
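The %%writefile cells hold the lab's real pipeline module; purely as an illustration of the overall shape of a TFX pipeline definition (function and argument names here are illustrative, and the components mirror imports used elsewhere in this notebook), a minimal sketch could look like:
```python
from tfx.v1.dsl import Pipeline
from tfx.v1.extensions.google_cloud_big_query import BigQueryExampleGen
from tfx.v1.components import StatisticsGen

def create_pipeline(pipeline_name, pipeline_root, query):
    # Pull training examples straight from BigQuery.
    example_gen = BigQueryExampleGen(query=query)
    # Compute dataset statistics over the generated examples.
    statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
    return Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[example_gen, statistics_gen],
    )
```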
End of explanation
ARTIFACT_REGISTRY="tfx-taxifare-tips"
# TODO: create a Docker Artifact Registry using the gcloud CLI.
# Documentation link: https://cloud.google.com/sdk/gcloud/reference/artifacts/repositories/create
!gcloud artifacts repositories create {ARTIFACT_REGISTRY} \
--repository-format=docker \
--location={GOOGLE_CLOUD_REGION} \
--description="Artifact registry for TFX pipeline images for Chicago taxifare prediction."
IMAGE_NAME="tfx-taxifare-tips"
IMAGE_TAG="latest"
IMAGE_URI=f"{GOOGLE_CLOUD_REGION}-docker.pkg.dev/{GOOGLE_CLOUD_PROJECT_ID}/{ARTIFACT_REGISTRY}/{IMAGE_NAME}:{IMAGE_TAG}"
Explanation: Run your TFX pipeline on Vertex Pipelines
Create an Artifact Registry on Google Cloud for your pipeline container image
End of explanation
os.environ["DATASET_DISPLAY_NAME"] = DATASET_DISPLAY_NAME
os.environ["MODEL_DISPLAY_NAME"] = MODEL_DISPLAY_NAME
os.environ["PIPELINE_NAME"] = PIPELINE_NAME
os.environ["GOOGLE_CLOUD_PROJECT_ID"] = GOOGLE_CLOUD_PROJECT_ID
os.environ["GOOGLE_CLOUD_REGION"] = GOOGLE_CLOUD_REGION
os.environ["GCS_LOCATION"] = GCS_LOCATION
os.environ["TRAIN_LIMIT"] = "5000"
os.environ["TEST_LIMIT"] = "1000"
os.environ["BEAM_RUNNER"] = "DataflowRunner"
os.environ["TRAINING_RUNNER"] = "vertex"
os.environ["TFX_IMAGE_URI"] = IMAGE_URI
os.environ["ENABLE_CACHE"] = "1"
from tfx_taxifare_tips.tfx_pipeline import config
import importlib
importlib.reload(config)
for key, value in config.__dict__.items():
if key.isupper(): print(f'{key}: {value}')
Explanation: Set the pipeline configurations for the Vertex AI run
End of explanation
!echo $TFX_IMAGE_URI
# !docker build . -t test-image
!gcloud builds submit --tag $TFX_IMAGE_URI . --timeout=20m --machine-type=e2-highcpu-8
Explanation: Build the TFX pipeline container image
End of explanation
import tfx_taxifare_tips
# importlib.reload(tfx_taxifare_tips)
PIPELINE_DEFINITION_FILE = f'{config.PIPELINE_NAME}.json'
from tfx_taxifare_tips.tfx_pipeline import pipeline_runner
pipeline_definition = pipeline_runner.compile_training_pipeline(PIPELINE_DEFINITION_FILE)
pipeline_job = vertex_ai.pipeline_jobs.PipelineJob(
display_name=config.PIPELINE_NAME,
template_path=PIPELINE_DEFINITION_FILE,
pipeline_root=os.path.join(config.ARTIFACT_STORE_URI,config.PIPELINE_NAME)
)
pipeline_job.run(sync=False)
Explanation: Compile the TFX pipeline
End of explanation
pipeline_df = vertex_ai.get_pipeline_df(PIPELINE_NAME)
pipeline_df = pipeline_df[pipeline_df.pipeline_name == PIPELINE_NAME]
pipeline_df.T
Explanation: Extracting pipeline run metadata
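Because the job was launched with sync=False, you can optionally block on it or poll its status from the same handle; a small sketch, assuming the pipeline_job object created above:
```python
pipeline_job.wait()        # block until the Vertex Pipelines run finishes
print(pipeline_job.state)  # inspect the final run state
```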
End of explanation
"""Pipeline definition code."""
import os
import sys
import logging
from typing import Text
import tensorflow_model_analysis as tfma
from tfx.proto import example_gen_pb2, transform_pb2, pusher_pb2
from tfx.v1.types.standard_artifacts import Model, ModelBlessing, Schema
from tfx.v1.extensions.google_cloud_big_query import BigQueryExampleGen
from tfx.v1.extensions.google_cloud_ai_platform import Trainer as VertexTrainer
from tfx.v1.dsl import Pipeline, Importer, Resolver, Channel
from tfx.v1.dsl.experimental import LatestBlessedModelStrategy
from tfx.v1.components import (
StatisticsGen,
ExampleValidator,
Transform,
Evaluator,
Pusher,
)
from tfx_taxifare_tips.tfx_pipeline import config
from tfx_taxifare_tips.model_training import features, bq_datasource_utils
import os, time
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "../../../data"
PIPELINE_NAME = "tfx-covertype-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
import_schema = Importer(
source_uri="tfx_taxifare_tips/raw_schema",
artifact_type=Schema,
).with_id("SchemaImporter")
context.run(import_schema)
import_schema.outputs["result"].get()[0].uri
examplevalidator = ExampleValidator(
statistics=statisticsgen.outputs["statistics"],
schema=import_schema.outputs["result"],
).with_id("ExampleValidator")
Explanation: Upload trained model from Google Cloud Storage to Vertex AI
End of explanation |
13,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Digit Recognizer
A BEGINNER'S GUIDE
Using
- Multi-layer Perceptron Model (MLP)
- Convolutional Neural Network (CNN) Model
- Keras
Import Libraries
Step1: Loading Train and Test datasets
Step2: Get values of data
Step3: Viewing shape and content of data
Step4: Plotting images and their class values
Step5: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
Step6: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
Step7: Splitting train dataset into training and validation set
We split the train dataset into two parts in a 9:1 ratio.
Step8: Define Simple Perceptron Model
Generally, neural networks have the following properties
Step9: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Validation dataset is used for validation. The epochs may be increased to improve accuracy.
Finally, validation dataset is used to evaluate the model by calculating the model's classification accuracy.
Step10: Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our validation dataset.
Step11: Confusion Matrix
Step12: The above confusion matrix heatmap shows that
Step13: Get values of data
Step14: View shape and content of data
Step15: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
Step16: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
Step17: Splitting train dataset into training and validation set
We split the train dataset into two parts in a 9:1 ratio.
Step18: Reshaping images
The image dimension expected by Keras for 2D (two-dimensional) convolution is in the format of [pixels][width][height].
For RGB color image, the first dimension (pixel) value would be 3 for the red, green and blue components. It's like having 3 image inputs for every single color image. In our case (for MNIST handwritten images), we have gray scale images. Hence, the pixel dimension is set as 1.
Step19: Define Convolutional Neural Network (CNN) Model
Convolution Layer
- We define 32 feature maps with the size of 5x5 matrix
- We use ReLU (Rectified Linear Units) as the activation function
- This layer expects input image size of 1x28x28 ([pixels][height][weight])
Max Pooling Layer
- It has a pool size of 2x2
Dropout Layer
- Configured to randomly exclude 20% of neurons in the layer to reduce overfitting
Flatten
- Flattens the image into a single dimensional vector which is required as input by the fully connected layer
Fully connected Layer
- Contains 128 neurons
- relu is used as an activation function
- Output layer has num_classes=10 neurons for the 10 classes
- softmax activation function is used in the output layer
- adam gradient descent algorithm is used as optimizer to learn and update weights
Step20: To compile the model, there are different optimizers present in Keras like Stochastic Gradient Descent optimizer, Adam optimizer, RMSprop optimizer, etc.
Step21: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Validation data is used as validation set. The epochs may be increased to improve accuracy.
Finally, validation data is used to evaluate the model by calculating the model's classification accuracy.
Step22: Accuracy (98.61%) of Convolution Neural Network (CNN) model has improved as compared to the accuracy (97.95%) of Multi-layer Perceptron (MLP) model.
The accuracy of CNN model can be further increased by
Step23: Confusion Matrix
Step24: Using Multi-layer Perceptron (MLP) Model, we had the following heatmap outcome
Step25: Accuracy has improved from 98.61% to 98.83%.
Submission to Kaggle | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # setting seaborn default for plots
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from keras.utils import np_utils
from keras.datasets import mnist
# for Multi-layer Perceptron (MLP) model
from keras.models import Sequential
from keras.layers import Dense
# for Convolutional Neural Network (CNN) model
from keras.layers import Dropout, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
# fix for issue: https://github.com/fchollet/keras/issues/2681
from keras import backend as K
K.set_image_dim_ordering('th')
Explanation: Digit Recognizer
A BEGINNER'S GUIDE
Using
- Multi-layer Perceptron Model (MLP)
- Convolutional Neural Network (CNN) Model
- Keras
Import Libraries
End of explanation
train = pd.read_csv('train.csv')
print (train.shape)
train.head()
test = pd.read_csv('test.csv')
print (test.shape)
test.head()
y_train = train['label']
X_train = train.drop(labels=['label'], axis=1)
X_test = test
print (y_train.value_counts())
sns.countplot(y_train)
X_train.head()
# check for corrupted images in the datasets
# i.e. check if there are any empty pixel values
print (X_train.isnull().any().sum())
print (X_test.isnull().any().sum())
Explanation: Loading Train and Test datasets
End of explanation
X_train = X_train.values.astype('float32') # pixel values of all images in train set
y_train = y_train.values.astype('int32') # labels of all images
X_test = test.values.astype('float32') # pixel values of all images in test set
Explanation: Get values of data
End of explanation
print (X_train.shape)
print (y_train.shape)
print (y_train[0])
print (X_train[0])
Explanation: Viewing shape and content of data
End of explanation
plt.figure(figsize=[20,8])
for i in range(6):
plt.subplot(1,6,i+1)
# Here, we reshape the 784 pixels vector values into 28x28 pixels image
plt.imshow(X_train[i].reshape(28, 28), cmap='gray', interpolation='none')
plt.title("Class {}".format(y_train[i]))
# fix random seed for reproducibility
random_seed = 7
np.random.seed(random_seed)
Explanation: Plotting images and their class values
End of explanation
# pixel values are gray scale between 0 and 255
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
print (X_train[1])
Explanation: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
End of explanation
print (y_train.shape)
print (y_train[0])
# one hot encode outputs
# note that we have new variables with capital Y
# Y_train is different than y_train
Y_train = np_utils.to_categorical(y_train)
num_classes = Y_train.shape[1]
print (y_train.shape, Y_train.shape)
print (y_train[0], Y_train[0])
Explanation: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
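As a quick illustration (a hypothetical snippet, not part of the original notebook), to_categorical produces exactly these 10-element rows:
```python
from keras.utils import np_utils

# one-hot encode three example labels into 10 classes
np_utils.to_categorical([0, 1, 2], 10)
# -> rows [1,0,0,0,0,0,0,0,0,0], [0,1,0,0,0,0,0,0,0,0], [0,0,1,0,0,0,0,0,0,0]
```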
End of explanation
# Split the entire training set into two separate sets: Training set and Validation set
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size = 0.10, random_state=random_seed)
print (X_train.shape, Y_train.shape, X_val.shape, Y_val.shape)
num_pixels = X_train.shape[1]
print (Y_val)
# converting one-hot format of digits to normal values/labels
y_val = np.argmax(Y_val, 1) # reverse of to_categorical
print (y_val)
# Note that: capital Y_val contains values in one-hot format and small y_val contains normal digit values
Explanation: Splitting train dataset into training and validation set
We split the train dataset into two parts in 9:1 ratio. 90% will be the actual training set and the remaining 10% will be the validation/testing set. We train our model using the training set and test the accuracy of the model using the validation set.
End of explanation
def baseline_model():
# create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Explanation: Define Simple Perceptron Model
Generally, neural networks have the following properties:
- an input layer as a single vector
- zero or multiple hidden layers after input layer
- an output layer after hidden layers which represents class scores in classification problem
- each neuron in a hidden layer is fully connected to all neurons in the previous layer
- neurons in a single layer function independently and do not have any connection with other neurons of the same layer
A single-layer perceptron model is the simplest kind of neural network where there are only two layers: input layer and output layer. The inputs are directly fed into the outputs via a series of weights. It's a feed-forward network where the information moves in only one direction, i.e. forward direction from input nodes to output nodes.
A multi-layer perceptron model is the other kind of neural network where there are one or more hidden layers in between input and output layers. The information flows from input layer to hidden layers and then to output layers. These models can be of feed-forward type or they can also use back-propagation method. In back-propagation, the error is calculated in the output layer by computing the difference of actual output and predicted output. The error is then distributed back to the network layers. Based on this error, the algorithm will adjust the weights of each connection in order to reduce the error value. This type of learning is also referred as deep learning.
We create a simple neural network model with one hidden layer with 784 neurons. Our input layer will also have 784 neurons as we have flattened out training dataset into a single 784 dimensional vector.
softmax activation is used in the output layer.
adam gradient descent optimizer is used to learn weights.
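To make the size of this model concrete, here is a rough parameter count for its two Dense layers (an illustrative calculation, not part of the original notebook):
```python
# hidden layer: 784 inputs -> 784 neurons (weights + biases)
hidden_params = 784 * 784 + 784    # 615,440
# output layer: 784 -> 10 (weights + biases)
output_params = 784 * 10 + 10      # 7,850
print(hidden_params + output_params)  # 623,290 trainable parameters
```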
End of explanation
model = baseline_model()
model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=5, batch_size=200, verbose=1)
model.summary()
scores = model.evaluate(X_val, Y_val, verbose=0)
print (scores)
print ('Score: {}'.format(scores[0]))
print ('Accuracy: {}'.format(scores[1]))
Explanation: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Validation dataset is used for validation. The epochs may be increased to improve accuracy.
Finally, validation dataset is used to evaluate the model by calculating the model's classification accuracy.
End of explanation
# get predicted values
predicted_classes = model.predict_classes(X_val)
# get index list of all correctly predicted values
correct_indices = np.nonzero(np.equal(predicted_classes, y_val))[0]
# get index list of all incorrectly predicted values
incorrect_indices = np.nonzero(np.not_equal(predicted_classes, y_val))[0]
print ('Correctly predicted: %i' % np.size(correct_indices))
print ('Incorrectly predicted: %i' % np.size(incorrect_indices))
plt.figure(figsize=[20,8])
for i, correct in enumerate(correct_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_val[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], y_val[correct]))
plt.figure(figsize=[20,8])
for i, incorrect in enumerate(incorrect_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_val[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], y_val[incorrect]))
Explanation: Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our validation dataset.
End of explanation
# we have digit labels from 0 to 9
# we can either manually create a class variable with those labels
# class_names = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# or, we can take unique values from train dataset's labels
class_names = np.unique(y_train)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_val, predicted_classes)
np.set_printoptions(precision=2)
print ('Confusion Matrix in Numbers')
print (cnf_matrix)
print ('')
cnf_matrix_percent = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
print ('Confusion Matrix in Percentage')
print (cnf_matrix_percent)
print ('')
true_class_names = class_names
predicted_class_names = class_names
df_cnf_matrix = pd.DataFrame(cnf_matrix,
index = true_class_names,
columns = predicted_class_names)
df_cnf_matrix_percent = pd.DataFrame(cnf_matrix_percent,
index = true_class_names,
columns = predicted_class_names)
plt.figure(figsize = (8,6))
#plt.subplot(121)
ax = sns.heatmap(df_cnf_matrix, annot=True, fmt='d')
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
ax.set_title('Confusion Matrix in Numbers')
'''
plt.subplot(122)
ax = sns.heatmap(df_cnf_matrix_percent, annot=True)
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
'''
Explanation: Confusion Matrix
End of explanation
train = pd.read_csv('train.csv')
print (train.shape)
train.head()
test = pd.read_csv('test.csv')
print (test.shape)
test.head()
y_train = train['label']
X_train = train.drop(labels=['label'], axis=1)
X_test = test
Explanation: The above confusion matrix heatmap shows that:
- Digit 2 was most often confused with 7: 6 images of digit 2 were predicted as 7.
- Similarly, 6 images of digit 9 were predicted as 7.
- The third highest wrong prediction was of number 5. 5 images of digit 5 were predicted as 3.
The accuracy of the model may improve if we increase the epoch/iteration number while fitting the model. Currently, it is set as 5. We can increase it to 10 and see the accuracy output.
Improve Accuracy using Convolution Neural Network (CNN) Model
Convolutional Neural Networks (CNN) are similar to Multi-layer Perceptron Neural Networks. They are also made up of neurons that have learnable weights and biases. CNNs have been successfully applied to analyzing visual imagery. They are mostly being applied in image and video recognition, recommender systems and natural language processing.
A CNN consists of multiple hidden layers. The hidden layers are either convolutional, pooling or fully connected.
Convolution layer: Feature extraction is done in this layer. It applies a convolution operation to the input and passes the result to the next layer. In the image classification problem, a weight matrix is defined in the convolution layer. A dot product is computed between the weight matrix and a small patch of the input image (the same size as the weight matrix). The weight matrix slides across the image so that all the pixels are covered at least once, giving a convolved output.
The weight matrix behaves like a filter, extracting particular information from the original image matrix.
One weight combination might extract edges, another might extract a particular color, and another might just blur the unwanted noise.
The weights are learnt so that the loss function is minimized, similar to a Multi-layer Perceptron.
Therefore the weights are learnt to extract features from the original image that help the network make correct predictions.
When we have multiple convolutional layers, the initial layers extract more generic features, while as the network gets deeper, the features extracted by the weight matrices become more complex and more suited to the problem at hand.
Reference: Architecture of Convolutional Neural Networks (CNNs) demystified
Stride: While computing the dot product, if the weight matrix moves 1 pixel at a time then we call it a stride of 1. Size of the image keeps on reducing as we increase the stride value.
Padding: Padding one or more layers of zeros around the image helps to resolve the output-size reduction caused by stride. The initial size of the image is retained after padding is applied.
Pooling layer: This layer reduces the number of feature parameters. When the image size is too large, a pooling layer is placed in between two convolution layers. It reduces the number of trainable parameters, and its sole purpose is to reduce the spatial size of the image. It is also used to control overfitting.
- Max pooling: uses the maximum value from each cluster of the prior layer
- Average pooling: uses the average value from each cluster of the prior layer
Fully connected layer: This layer comes after the convolution and pooling layers. It connects each neuron in one layer to every neuron in the next layer, similar to the layer connections in a Multi-layer Perceptron model. The error is computed in the output layer as the difference between the actual output and the predicted output. After that, back-propagation is used to update the weights and biases to reduce the error and loss.
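A small worked example of the size arithmetic described above, using the usual formula output = (W - F + 2P) / S + 1 (this sketch is illustrative and not part of the original notebook):
```python
def conv_output_size(W, F, P=0, S=1):
    # W: input width/height, F: filter size, P: padding, S: stride
    return (W - F + 2 * P) // S + 1

after_conv = conv_output_size(28, 5)  # 24: a 5x5 filter on a 28x28 image gives 24x24 feature maps
after_pool = after_conv // 2          # 12: a 2x2 max pool halves each spatial dimension
print(after_conv, after_pool)         # 24 12
```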
Load train and test data
Let's again load the train and test datasets.
End of explanation
X_train = X_train.values.astype('float32') # pixel values of all images in train set
y_train = y_train.values.astype('int32') # labels of all images
X_test = test.values.astype('float32') # pixel values of all images in test set
Explanation: Get values of data
End of explanation
print (X_train.shape)
print (y_train.shape)
print (X_train[1])
Explanation: View shape and content of data
End of explanation
# pixel values are gray scale between 0 and 255
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
print (X_train[1])
Explanation: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
End of explanation
print (y_train.shape)
print (y_train[0])
# one hot encode outputs
# note that we have new variables with capital Y
# Y_train is different than y_train
Y_train = np_utils.to_categorical(y_train)
num_classes = Y_train.shape[1]
print (y_train.shape, Y_train.shape)
print (y_train[0], Y_train[0])
Explanation: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
End of explanation
# Split the entire training set into two separate sets: Training set and Validation set
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size = 0.10, random_state=random_seed)
print (X_train.shape, Y_train.shape, X_val.shape, Y_val.shape)
num_pixels = X_train.shape[1]
print (Y_val)
# converting one-hot format of digits to normal values/labels
y_val = np.argmax(Y_val, 1) # reverse of to_categorical
print (y_val)
# Note that: capital Y_val contains values in one-hot format and small y_val contains normal digit values
Explanation: Splitting train dataset into training and validation set
We split the train dataset into two parts in 9:1 ratio. 90% will be the actual training set and the remaining 10% will be the validation/testing set. We train our model using the training set and test the accuracy of the model using the validation set.
End of explanation
# reshape to be [samples][pixels][width][height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
X_val = X_val.reshape(X_val.shape[0], 1, 28, 28).astype('float32')
print (num_pixels, X_train.shape, X_test.shape, X_val.shape)
print (X_train[1])
Explanation: Reshaping images
The image dimension expected by Keras for 2D (two-dimensional) convolution is in the format of [pixels][width][height].
For RGB color image, the first dimension (pixel) value would be 3 for the red, green and blue components. It's like having 3 image inputs for every single color image. In our case (for MNIST handwritten images), we have gray scale images. Hence, the pixel dimension is set as 1.
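A quick shape check for this channels-first layout (illustrative only):
```python
# each flattened 784-pixel vector becomes a single-channel 28x28 image,
# so X_train goes from (n_samples, 784) to (n_samples, 1, 28, 28)
assert 1 * 28 * 28 == 784
```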
End of explanation
# baseline model for CNN
def baseline_model():
# create model
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Explanation: Define Convolutional Neural Network (CNN) Model
Convolution Layer
- We define 32 feature maps with the size of 5x5 matrix
- We use ReLU (Rectified Linear Units) as the activation function
- This layer expects input image size of 1x28x28 ([pixels][height][weight])
Max Pooling Layer
- It has a pool size of 2x2
Dropout Layer
- Configured to randomly exclude 20% of neurons in the layer to reduce overfitting
Flatten
- Flattens the image into a single dimensional vector which is required as input by the fully connected layer
Fully connected Layer
- Contains 128 neurons
- relu is used as an activation function
- Output layer has num_classes=10 neurons for the 10 classes
- softmax activation function is used in the output layer
- adam gradient descent algorithm is used as optimizer to learn and update weights
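For reference, the tensor shapes produced by this stack work out roughly as follows (an illustrative trace under the channels-first setting used above):
```python
# input:            (1, 28, 28)
# Conv2D 32 @ 5x5:  (32, 24, 24)  since 28 - 5 + 1 = 24
# MaxPooling 2x2:   (32, 12, 12)
# Dropout(0.2):     (32, 12, 12)  shape unchanged
# Flatten:          (4608,)       since 32 * 12 * 12 = 4608
# Dense(128):       (128,)
# Dense(10):        (10,)         softmax over the 10 digit classes
```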
End of explanation
# Example of using RMSprop optimizer
#from keras.optimizers import RMSprop, SGD
#model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])
#model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001), metrics=['accuracy'])
Explanation: To compile the model, there are different optimizers present in Keras like Stochastic Gradient Descent optimizer, Adam optimizer, RMSprop optimizer, etc.
End of explanation
model = baseline_model()
history = model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=5, batch_size=200, verbose=1)
history_dict = history.history
history_dict.keys()
plt.figure(figsize=[10,4])
plt.subplot(121)
plt.plot(range(1, len(history_dict['val_acc'])+1), history_dict['val_acc'])
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.subplot(122)
plt.plot(range(1, len(history_dict['val_loss'])+1), history_dict['val_loss'])
plt.xlabel('Epochs')
plt.ylabel('Loss')
model.summary()
scores = model.evaluate(X_val, Y_val, verbose=0)
print (scores)
print ('Score: {}'.format(scores[0]))
print ('Accuracy: {}'.format(scores[1]))
Explanation: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Validation data is used as validation set. The epochs may be increased to improve accuracy.
Finally, validation data is used to evaluate the model by calculating the model's classification accuracy.
End of explanation
# get predicted values
predicted_classes = model.predict_classes(X_val)
# get index list of all correctly predicted values
correct_indices = np.nonzero(np.equal(predicted_classes, y_val))[0]
# get index list of all incorrectly predicted values
incorrect_indices = np.nonzero(np.not_equal(predicted_classes, y_val))[0]
print ('Correctly predicted: %i' % np.size(correct_indices))
print ('Incorrectly predicted: %i' % np.size(incorrect_indices))
plt.figure(figsize=[20,8])
for i, correct in enumerate(correct_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_val[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], y_val[correct]))
plt.figure(figsize=[20,8])
for i, incorrect in enumerate(incorrect_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_val[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], y_val[incorrect]))
Explanation: Accuracy (98.61%) of Convolution Neural Network (CNN) model has improved as compared to the accuracy (97.95%) of Multi-layer Perceptron (MLP) model.
The accuracy of CNN model can be further increased by:
- increasing the epoch number while fitting the model
- adding more convolution and pooling layers to the model
Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our test dataset.
End of explanation
# we have digit labels from 0 to 9
# we can either manually create a class variable with those labels
# class_names = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# or, we can take unique values from train dataset's labels
class_names = np.unique(y_train)
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_val, predicted_classes)
np.set_printoptions(precision=2)
print ('Confusion Matrix in Numbers')
print (cnf_matrix)
print ('')
cnf_matrix_percent = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
print ('Confusion Matrix in Percentage')
print (cnf_matrix_percent)
print ('')
true_class_names = class_names
predicted_class_names = class_names
df_cnf_matrix = pd.DataFrame(cnf_matrix,
index = true_class_names,
columns = predicted_class_names)
df_cnf_matrix_percent = pd.DataFrame(cnf_matrix_percent,
index = true_class_names,
columns = predicted_class_names)
plt.figure(figsize = (8,6))
#plt.subplot(121)
ax = sns.heatmap(df_cnf_matrix, annot=True, fmt='d')
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
ax.set_title('Confusion Matrix in Numbers')
'''
plt.subplot(122)
ax = sns.heatmap(df_cnf_matrix_percent, annot=True)
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
'''
Explanation: Confusion Matrix
End of explanation
def baseline_model():
# create model
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5, 5), input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# build the model
model = baseline_model()
# fit the model
history = model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=10, batch_size=200)
history_dict = history.history
history_dict.keys()
plt.figure(figsize=[10,4])
plt.subplot(121)
plt.plot(range(1, len(history_dict['val_acc'])+1), history_dict['val_acc'])
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.subplot(122)
plt.plot(range(1, len(history_dict['val_loss'])+1), history_dict['val_loss'])
plt.xlabel('Epochs')
plt.ylabel('Loss')
model.summary()
scores = model.evaluate(X_val, Y_val, verbose=0)
print (scores)
print ('Score: {}'.format(scores[0]))
print ('Accuracy: {}'.format(scores[1]))
Explanation: Using Multi-layer Perceptron (MLP) Model, we had the following heatmap outcome:
- Digit 2 was most often confused with 7: 6 images of digit 2 were predicted as 7.
- Similarly, 6 images of digit 9 were predicted as 7.
- The third highest wrong prediction was of number 5. 5 images of digit 5 were predicted as 3.
Using Convolutional Neural Network (CNN) Model, we had the following improvements:
- Number 2 predicted as 7 has been reduced from 6 to 4.
- Number 9 predicted as 7 has been reduced from 6 to 3.
- Number 5 predicted as 3 has been reduced from 5 to 2.
The accuracy of CNN model can be further increased by:
- increasing the epoch/iteration number while fitting the model
- adding more convolution and pooling layers to the model
Improving accuracy using multiple CNN layers
Let's try adding multiple convolution layers (Conv2D) and multiple fully-connected layers (Dense) as well.
The second convolution layer will have 64 filters with a 3x3 kernel.
The additional fully-connected layers will have 512 and 1024 neurons, each followed by dropout.
We also use 10 epochs this time instead of 5.
End of explanation
# get predicted values for test dataset
predicted_classes = model.predict_classes(X_test)
submissions = pd.DataFrame({'ImageId': list(range(1, len(predicted_classes) + 1)),
"Label": predicted_classes})
submissions.to_csv("submission.csv", index=False, header=True)
Explanation: Accuracy has improved from 98.61% to 98.83%.
Submission to Kaggle
End of explanation |
13,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background
Unlike Issue-Label Bot which predicts generic bug, feature-request and question labels, we are attempting to build the capability to predict repo-specific labels. One of the primary challenges of doing this is a dearth of labeled examples for a particular repo. Therefore, we attempt to generate features via transfer learning from a language model trained over a large corpus of GitHub issues. These features are then fed downstream to a classifier with the goal of enabling the classifier to predict personalized issue labels based upon existing hand-labeled issues present in a repository.
As an initial test, we will evaluate the ability to predict sig/ labels on the Kubernetes/Kubernetes repo.
In order to measure the efficacy of these embeddings, we will use DataRobot as a benchmark to see if adding embeddings from transfer learning improves model performance relative to TF-IDF n-gram featurization of the text.
SQL Query In BigQuery
```sql
#standardSQL
SELECT *
FROM (
  SELECT
    updated_at
    , MAX(updated_at) OVER (PARTITION BY url) as last_time
    , FORMAT("%T", ARRAY_CONCAT_AGG(labels)) as labels
    , repo, url, title, body, len_labels
  FROM(
    SELECT
      TIMESTAMP(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.updated_at'), "\"", "")) as updated_at
      , REGEXP_EXTRACT(JSON_EXTRACT(payload, '$.issue.url'), r'https://api.github.com/repos/(.*)/issues') as repo
      , JSON_EXTRACT(payload, '$.issue.url') as url
      -- extract the title and body removing parentheses, brackets, and quotes
      , LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.title'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as title
      , LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.body'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as body
      , REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color') as labels
      , ARRAY_LENGTH(REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color')) as len_labels
    FROM githubarchive.month.20*
    WHERE
      _TABLE_SUFFIX BETWEEN '1601' and '1912'
      and type="IssuesEvent"
  )
  WHERE
    repo = 'kubernetes/kubernetes'
  GROUP BY updated_at, repo, url, title, body, len_labels
)
WHERE last_time = updated_at and len_labels >= 1
```
Step1: Explore The Data
Question from @cblecker
@Hamel Husain that's how often a PR/issue has two different sig labels on it?
Step2: Count Labels
Step3: Top 50 Labels
Step4: Sig/ Labels
Step5: See correlation among labels
Step6: Obtain Baseline With Automated Machine Learning
Step7: Get Embeddings and Repeat
Step8: Compare Transfer Learning vs. Regular Methods | Python Code:
import pandas as pd
import numpy as np
from random import randint
from matplotlib import pyplot as plt
import re
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv')
df.labels = df.labels.apply(lambda x: eval(x))
df.head()
#remove target leakage from kubernetes which are the bot commands
df['body'] = df.body.apply(lambda x: re.sub('(/sig|/kind|/status/triage/|priority) \S+', '', str(x)))
Explanation: Background
Unlike Issue-Label Bot which predicts generic bug, feature-request and question labels, we are attempting to build the capability to predict repo-specific labels. One of the primary challenges of doing this is a dearth of labeled examples for a particular repo. Therefore, we attempt to generate features via transfer learning from a language model trained over a large corpus of GitHub issues. These features are then fed downstream to a classifier with the goal of enabling the classifier to predict personalized issue labels based upon existing hand-labeled issues present in a repository.
As an initial test, we will evaluate the ability to predict sig/ labels on the Kubernetes/Kubernetes repo.
In order to measure the efficacy of these embeddings, we will use DataRobot as a benchmark to see if adding embeddings from transfer learning improves model performance relative to TF-IDF n-gram featurization of the text.
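As a minimal sketch of the downstream idea (hypothetical variable names, and not the DataRobot workflow actually used below), the pooled language-model features can be fed to any off-the-shelf classifier, one binary model per label:
```python
from sklearn.linear_model import LogisticRegression

# feat_matrix: (n_issues, n_dims) embeddings from the language model
# y: binary indicator for a single sig/ label; train_idx / holdout_idx: hypothetical index arrays
clf = LogisticRegression(max_iter=1000)
clf.fit(feat_matrix[train_idx], y[train_idx])
holdout_probs = clf.predict_proba(feat_matrix[holdout_idx])[:, 1]
```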
SQL Query In BigQuery
```sql
#standardSQL
SELECT *
FROM (
SELECT
updated_at
, MAX(updated_at) OVER (PARTITION BY url) as last_time
, FORMAT("%T", ARRAY_CONCAT_AGG(labels)) as labels
, repo, url, title, body, len_labels
FROM(
SELECT
TIMESTAMP(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.updated_at'), "\"", "")) as updated_at
, REGEXP_EXTRACT(JSON_EXTRACT(payload, '$.issue.url'), r'https://api.github.com/repos/(.*)/issues') as repo
, JSON_EXTRACT(payload, '$.issue.url') as url
-- extract the title and body removing parentheses, brackets, and quotes
      , LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.title'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as title
, LOWER(TRIM(REGEXP_REPLACE(JSON_EXTRACT(payload, '$.issue.body'), r"\\n|\(|\)|\[|\]|#|\*||\"", ' '))) as body
, REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color') as labels
, ARRAY_LENGTH(REGEXP_EXTRACT_ALL(JSON_EXTRACT(payload, "$.issue.labels"), ',"name\":"(.+?)","color')) as len_labels
FROM githubarchive.month.20*
WHERE
_TABLE_SUFFIX BETWEEN '1601' and '1912'
and type="IssuesEvent"
)
WHERE
repo = 'kubernetes/kubernetes'
GROUP BY updated_at, repo, url, title, body, len_labels
)
WHERE last_time = updated_at and len_labels >= 1
```
The results of the above query can be downloaded as a csv file from this link:
https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv
End of explanation
def count_sig(l):
return(sum(['sig/' in x for x in l]))
from matplotlib.ticker import PercentFormatter
sig_counts = df.labels.apply(lambda x: count_sig(x))
plt.hist(sig_counts, weights=np.ones(len(sig_counts)) / len(sig_counts))
plt.gca().yaxis.set_major_formatter(PercentFormatter(1))
plt.title(f'Distribution of # of sig/ labels for kubernetes/kubernetes\n {len(sig_counts):,} issues pulled from GHArchive.')
plt.show()
Explanation: Explore The Data
Question from @cblecker
@Hamel Husain that's how often a PR/issue has two different sig labels on it?
End of explanation
from collections import Counter
c = Counter()
for row in df.labels:
c.update(row)
print(f'There are {len(c.keys())} unique labels in kubernetes/kubernetes')
nsig = sum(['sig/' in x for x in list(c.keys())])
print(f"number of sig labels: {nsig}")
Explanation: Count Labels
End of explanation
c.most_common(50)
len([(k, c[k]) for k in c if c[k] >= 100])
Explanation: Top 50 Labels
End of explanation
sig_labels = [x for x in list(c.keys()) if 'sig/' in x]
for l in sig_labels:
print(f'{l}: {c[l]}')
Explanation: Sig/ Labels
End of explanation
min_freq = 30
def contains_sig(l):
if not l:
return False
else:
# make sure there are at least 10 issues labeled with that value
return max(['sig/' in x and c[x] >=min_freq for x in l])
sig_df = df[df.labels.apply(lambda x: contains_sig(x))]
print(f'{sig_df.shape[0]:,} issues have sig/ labels')
sig_labels = [k for k in c.keys() if c[k] >= min_freq and 'sig/' in k]
print(f'{len(sig_labels)} sig labels that have at least {min_freq} issues')
# build an indicator matrix
indicator = []
for l in sig_df.labels.values:
zer = np.zeros(len(sig_labels))
mask = [sig_labels.index(x) for x in l if x in sig_labels]
zer[mask] = 1
indicator.append(zer[None, :])
indicator_matrix = pd.DataFrame(np.concatenate(indicator, axis=0), columns=sig_labels).astype(int)
corr_grid = indicator_matrix.T.dot(indicator_matrix)
for i, x in enumerate(corr_grid):
corr_grid.iloc[i][i:] = 0
import seaborn as sns
import matplotlib.pyplot as plt
#cmap = sns.diverging_palette(220, 10, as_cmap=True)
#normalize correlation grid
for label in corr_grid:
corr_grid.loc[label] = corr_grid.loc[label] / c[label]
plt.figure(figsize=(16, 14))
plt.title('Co-Occurence Matrix')
sns.heatmap(corr_grid, square=True, vmin=0, vmax=.4, mask=corr_grid<=0.05)
Explanation: See correlation among labels
End of explanation
def part_assign():
i = randint(1, 10)
if i <=5:
return i
else:
return 6
combined_sig_df = pd.concat([sig_df.reset_index(), indicator_matrix.reset_index()], axis=1)
combined_sig_df['part'] = combined_sig_df.repo.apply(lambda x: part_assign())
combined_sig_df.to_hdf('combined_sig_df.hdf', key='combined_sig_df')
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
#! pip install datarobot
import datarobot as dr
from datarobot import UserCV
from fastai.core import parallel
from datarobot import Blueprint
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = combined_sig_df[['title', 'body', 'part', label]]
proj = dr.Project.create(sourcedata=temp_df,
project_name=label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.MANUAL,
worker_count=9,
max_wait=600000)
bps = proj.get_blueprints()
bp = [b for b in bps if 'Nystroem' in str(b)][0]
proj.train(bp, sample_pct=49.8)
proj.unlock_holdout()
return proj
proj_list = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: {label}')
proj = create_dr_proj(label)
proj_list.append(proj)
except:
pass
predictions = []
for proj in proj_list:
print(f'getting predictions for holdout set for {str(proj)}')
label = proj.target.replace('_', '-')
temp_df = combined_sig_df[['title', 'body', 'part', label]]
temp_df = temp_df[temp_df.part == 6]
ds = proj.upload_dataset(temp_df)
m = proj.get_models()[0]
predict_job = m.request_predictions(ds.id)
yhat = predict_job.get_result_when_complete()
predictions.append({label: yhat['positive_probability']})
result = {}
for d in predictions:
result.update(d)
baseline_holdout_predictions_df = pd.DataFrame(result)
baseline_holdout_predictions_df.columns = ['p_'+x for x in baseline_holdout_predictions_df.columns]
assert len(baseline_holdout_predictions_df) == len(combined_sig_df[combined_sig_df.part == 6])
predictions_df = pd.concat([combined_sig_df[combined_sig_df.part == 6].reset_index(drop=True),
baseline_holdout_predictions_df.reset_index(drop=True)], axis=1)
predictions_df['version'] = 'baseline'
predictions_df.to_hdf('prediction_baseline_df.hdf', key='predictions_df')
Explanation: Obtain Baseline With Automated Machine Learning
End of explanation
import pandas as pd
from inference import InferenceWrapper, pass_through
import os
import torch
from torch.cuda import empty_cache
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
wrapper = InferenceWrapper(model_path='/ds/lang_model/models_uxgcl1e1/',
model_file_name='trained_model_uxgcl1e1.hdf')
empty_cache()
combined_sig_df = pd.read_hdf('combined_sig_df.hdf')
# text = wrapper.process_df(combined_sig_df)
# text.to_hdf('textlm_df.hdf')
text = pd.read_hdf('textlm_df.hdf')
assert text['text'].isna().sum() == 0
features = []
from tqdm.auto import tqdm
with torch.no_grad():
for t in tqdm(text['text'].values):
feat = wrapper.get_pooled_features(t).cpu()
features.append(feat)
empty_cache()
feat_matrix = torch.cat(features, dim=0).numpy()
feat_matrix = feat_matrix[:, :1600]
feat_df = pd.DataFrame(feat_matrix)
feat_df.columns = ['f_' + str(x) for x in feat_df.columns]
feat_df.to_csv('feat_df.csv', index=False)
feat_df = pd.read_csv('feat_df.csv')
lm_combined_df = pd.concat([combined_sig_df.reset_index(drop=True),
feat_df.reset_index(drop=True)], axis=1)
import datarobot as dr
from datarobot import UserCV
ucv = UserCV(user_partition_col='part', cv_holdout_level=6, seed=123)
dr.Client(token='something', endpoint='https://app.datarobot.com/api/v2')
def create_dr_proj(label):
temp_df = lm_combined_df[['title', 'body', 'part', label] + list(feat_df.columns)]
proj = dr.Project.create(sourcedata=temp_df,
project_name='lm_'+label,
)
proj.set_target(label,
positive_class=1,
partitioning_method=ucv,
target_type='Binary',
mode=dr.AUTOPILOT_MODE.QUICK,
worker_count=9,
max_wait=600000)
proj.unlock_holdout()
return proj
proj_list_lm = []
for i, label in enumerate(sig_labels):
try:
print(f'creating project {i}: lm_{label}')
proj = create_dr_proj(label)
proj_list_lm.append(proj)
except:
pass
Explanation: Get Embeddings and Repeat
End of explanation
import datarobot as dr
from datarobot import UserCV
dr.Client(token='something-something', endpoint='https://app.datarobot.com/api/v2')
def get_metrics(modelobj):
return modelobj.metrics['AUC']['holdout']
projects = [p for p in dr.Project.list() if p.project_name.startswith('lm_')]
'hamel'.replace('am', 'gg')
label = []
category = []
auc = []
for proj in projects:
print(f'getting metrics for {proj.project_name}')
models = [m for m in proj.get_models() if m.sample_pct > 45]
baseline_model = sorted([m for m in models if m.featurelist_name == 'text only'], key=get_metrics, reverse=True)[0]
deep_model = sorted([m for m in models if m.featurelist_name != 'text only'], key=get_metrics, reverse=True)[0]
baseline_auc = get_metrics(baseline_model)
deep_auc = get_metrics(deep_model)
label.extend([proj.project_name.replace('lm_', '')] * 2)
category.extend(['baseline', 'deep'])
auc.extend([baseline_auc, deep_auc])
import pandas as pd
compare_df = pd.DataFrame({'label': label,
'category': category,
'auc': auc})
pivot = compare_df.pivot(index='label', columns='category', values='auc')
pivot['winner'] = pivot.apply(lambda x: 'deep' if x.deep > x.baseline else 'baseline', axis=1)
pivot['abs diff'] = pivot.apply(lambda x: abs(x.deep - x.baseline), axis=1)
pivot['label count'] = [c[x] for x in pivot.index.values]
pivot.sort_values(by=['label count'], ascending=False)
wrapper
len(wrapper.learn.data.vocab.itos)
pivot.to_hdf('pivot_df.hdf', key='pivot')
import pandas as pd
score_df = pd.read_hdf('score_df.hdf')
score_df.set_index('label', inplace=True)
score_df.columns = ['deep2']
new_pivot = pivot.join(score_df, how='left')[['baseline', 'deep', 'deep2', 'label count']]
def winner(x):
if x.baseline > x.deep and x.baseline > x.deep2:
return 'baseline'
elif x.deep > x.deep2:
return 'deep'
elif x.deep2 > x.deep:
return 'deep2'
new_pivot.dropna(inplace=True)
new_pivot['winner'] = new_pivot.apply(lambda x: winner(x), axis=1)
new_pivot['baseline minus best deep'] = new_pivot.apply(lambda x: x.baseline - max(x.deep, x.deep2), axis=1)
new_pivot['abs diff'] = new_pivot.apply(lambda x: abs(x['baseline minus best deep']), axis=1)
new_pivot.sort_values('label count', ascending=False)
new_pivot.mean()
Explanation: Compare Transfer Learning vs. Regular Methods
End of explanation |
13,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note to Amazon EC2 users
Step1: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
Step2: Load Wikipedia data and extract TF-IDF features
Load Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
Step3: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
Step4: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
Step5: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
Step6: EM in high dimensions
EM for high-dimensional data requires some special treatment
Step7: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
Step8: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
Step9: Running EM
Now that we have initialized all of our parameters, run EM.
Step10: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be
Step11: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
Step12: Quiz Question
Step13: Quiz Question
Step14: Quiz Question | Python Code:
import graphlab
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
Explanation: Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix.
* A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences.
Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:
\begin{align}
\hat{\Sigma}_k &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T
\end{align}
Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by
\begin{align}
\hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw})
\end{align}
When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation.
\begin{align}
\hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\
&= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2
\end{align}
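For dense data the diagonal M-step above reduces to a few vectorized lines (a sketch for intuition only; the provided em_utilities.py implements a sparse-matrix version of the same update):
```python
import numpy as np

# X: (N, M) data matrix, resp: (N, K) responsibilities r_ik from the E step (hypothetical inputs)
Nk = resp.sum(axis=0)                                # soft counts N_k^soft, shape (K,)
mu = np.dot(resp.T, X) / Nk[:, None]                 # cluster means, shape (K, M)
var = np.dot(resp.T, X ** 2) / Nk[:, None] - mu**2   # diagonal variances, shape (K, M)
```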
In this section, we will use an EM implementation to fit a Gaussian mixture model with diagonal covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term.
We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model.
Import necessary packages
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.
End of explanation
from em_utilities import *
Explanation: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/').head(5000)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
Explanation: Load Wikipedia data and extract TF-IDF features
Load Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
End of explanation
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
Explanation: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
End of explanation
tf_idf = normalize(tf_idf)
Explanation: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
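Concretely, the normalize() call rescales each row as
\begin{align}
x_i \leftarrow \frac{x_i}{\|x_i\|_2}
\end{align}
so that every document vector has unit Euclidean length.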
End of explanation
for i in range(5):
doc = tf_idf[i]
print(np.linalg.norm(doc.todense()))
Explanation: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
End of explanation
from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
#kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1) # uncomment to use parallelism -- may break on your installation
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
Explanation: EM in high dimensions
EM for high-dimensional data requires some special treatment:
* E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.
* All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data.
* Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow.
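For reference, the reason the diagonal assumption keeps these score computations cheap is that each per-document log-density decomposes into M univariate terms:
\begin{align}
\log \mathcal{N}\big(x_i \mid \mu_k, \text{diag}(\sigma_k^2)\big) = -\frac{1}{2}\sum_{j=1}^{M}\left[\log\big(2\pi\sigma_{k,j}^2\big) + \frac{(x_{ij}-\mu_{k,j})^2}{\sigma_{k,j}^2}\right]
\end{align}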
We provide the complete implementation for you in the file em_utilities.py. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment.
You are expected to answer some quiz questions using the results of clustering.
Initializing mean parameters using k-means
Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.
End of explanation
num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = sum(cluster_assignment == i) # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
cluster_assignment
Explanation: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
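In symbols, the initial weight of cluster $k$ is simply
\begin{align}
\hat{\pi}_k = \frac{N_k}{N}
\end{align}
where $N_k$ is the number of documents k-means assigned to cluster $k$ and $N$ is the total number of documents.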
End of explanation
covs = []
for i in xrange(num_clusters):
member_rows = tf_idf[cluster_assignment==i]
cov = (member_rows.multiply(member_rows) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
Explanation: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
End of explanation
out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
Explanation: Running EM
Now that we have initialized all of our parameters, run EM.
End of explanation
# Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
        sorted_word_ids = np.argsort(means[c])[::-1]  # YOUR CODE HERE
for i in sorted_word_ids[:5]:
print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i],
means[c][i],
covs[c][i])
print '\n=========================================================='
'''By EM'''
visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)
Explanation: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be:
```
==========================================================
Cluster 0: Largest mean parameters in cluster
Word Mean Variance
football 1.08e-01 8.64e-03
season 5.80e-02 2.93e-03
club 4.48e-02 1.99e-03
league 3.94e-02 1.08e-03
played 3.83e-02 8.45e-04
...
```
End of explanation
np.random.seed(5) # See the note below to see why we set seed=5.
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = np.random.normal(loc=0, scale=1, size=num_words)
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = np.random.uniform(low = 1.0, high = 5.0, size = num_words)
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = 1.0 / num_clusters
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
Explanation: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
End of explanation
out_random_init = EM_for_high_dimension(tf_idf, random_means, random_covs, random_weights, cov_smoothing=1e-5)
print out_random_init['loglik']
Explanation: Quiz Question: Try fitting EM with the random initial parameters you created above. (Use cov_smoothing=1e-5.) Store the result to out_random_init. What is the final loglikelihood that the algorithm converges to?
End of explanation
print out_random_init['loglik'][-1] - out['loglik'][-1]
Explanation: Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?
End of explanation
# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
visualize_EM_clusters(tf_idf, out_random_init['means'], out_random_init['covs'],
map_index_to_word)
Explanation: Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?
End of explanation |
13,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p class="note">
ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code.
</p>
Today, as a small return for the ReproduceIt series,
I try to reproduce a simple but nice data analysis and webapp that braid.io did
called Most Beyonces are 14 years old and most Kanyes are about 11.
The article analyses the trend of the names of some music artists (Beyonce, Kanye and Madonna) in the US; it also has some nice possible explanations for the ups and downs over time, and it's a quick read. The data is based on Social Security Office records and can be downloaded from the SSN website
Step1: Beyonce
Now that the data is into a simple dataframe we can just filter by the name we want and make a Bar Chart. | Python Code:
%matplotlib inline
import pandas as pd
import os
data_dir = os.path.expanduser("~/data/names/names")
files = os.listdir(data_dir)
data = pd.DataFrame(columns=["year", "name", "sex", "occurrences"])
for fname in files:
if fname.endswith(".txt"):
fpath = os.path.join(data_dir, fname)
df = pd.read_csv(fpath, header=None, names=["name", "sex", "occurrences"])
df["year"] = int(fname[3:7])
data = data.append(df)
data.year = data.year.astype(int)
data.head()
data.shape
data.dtypes
Explanation: <p class="note">
ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code.
</p>
Today, as a small return for the ReproduceIt series,
I try to reproduce a simple but nice data analysis and webapp that braid.io did
called Most Beyonces are 14 years old and most Kanyes are about 11.
The article analyses the trend of the names of some music artists (Beyonce, Kanye and Madonna) in the US; it also has some nice possible explanations for the ups and downs over time, and it's a quick read. The data is based on Social Security Office records and can be downloaded from the SSN website: Beyond the Top 1000 Names
The data is very small, so loading it into pandas and plotting it with Bokeh was very easy.
End of explanation
beyonce = data[data["name"] == "Beyonce"][["year", "occurrences"]]
from bokeh.charts import ColumnDataSource, Bar, output_notebook, show
from bokeh.models import HoverTool
output_notebook()
p = Bar(data=beyonce, label="year", values="occurrences", title="No. Babies named Beyoncé",
color="#0277BD", ylabel='', tools="save,reset")
show(p)
Explanation: Beyonce
Now that the data is in a simple dataframe, we can just filter by the name we want and make a Bar Chart.
End of explanation |
13,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Polynomials
Step1: The poly1d class represents one-dimensional polynomials by their coefficients. Consider the polynomial
$$ p(x) = 6 x^2 + x - 2 $$
its NumPy representation is
Step2: We can evaluate the value of $p(x)$
Step3: We can get the order of the polynomial with
Step4: and its roots
Step5: What about a polynomial with complex roots, $p2(x) = 2 x^2 -3 x +7$
Step6: We can plot the polynomial
Step7: Curve fitting with polynomials
Preparing a dataset of points by adding noise to a set of known points
Step8: Given a set of x, y values, we can find the polynomial that best fits them
Step9: Plot the observations and the best-fit polynomial | Python Code:
import numpy as np
Explanation: Polynomials
End of explanation
p = np.poly1d([6., 1., -2.])
Explanation: The poly1d class represents one-dimensional polynomials by their coefficients. Consider the polynomial
$$ p(x) = 6 x^2 + x - 2 $$
its NumPy representation is
End of explanation
p(0), p(1), p(10)
Explanation: We can evaluate the value of $p(x)$
End of explanation
p.order
Explanation: We can get the order of the polynomial with
End of explanation
p.roots
# Checking the roots
p(p.roots)
Explanation: and its roots
End of explanation
np.poly1d([2.,-3.,7.]).roots
Explanation: What about a polynomial with complex roots, $p2(x) = 2 x^2 -3 x +7$
End of explanation
import matplotlib.pyplot as plt
xvalues = np.linspace(-10.,10.,100)
plt.plot(xvalues, p(xvalues), '-')
plt.show()
Explanation: We can plot the polynomial
End of explanation
y = p(xvalues) + np.random.randn(xvalues.size)*p(xvalues).std()/2
Explanation: Curve fitting with polynomials
Preparing a dataset of points by adding noise to a set of known points
End of explanation
# Try fitting a degree-3 polynomial
np.polyfit(xvalues,y,deg=3)
# Try fitting a degree-2 polynomial
p3 = np.poly1d( np.polyfit(xvalues,y,deg=2) )
p3
Explanation: Given a set of x, y values, we can find the polynomial that best fits them
End of explanation
plt.plot(xvalues, y, 'xr', xvalues, p3(xvalues), '-b')
plt.show()
Explanation: Plot the observations and the best-fit polynomial
End of explanation |
13,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
print(text)
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
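# Reference sketch -- one possible implementation of create_lookup_tables.
# `text` is assumed to already be a list of words (see the docstring above);
# the intended solution may differ in details such as word ordering.
def create_lookup_tables(text):
    vocab = set(text)
    vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab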
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
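# Reference sketch -- one way to build the punctuation-to-token dictionary.
# The exact token spellings are illustrative; any unambiguous non-word tokens work.
def token_lookup():
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||'
    }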
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
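# Reference sketch (TensorFlow 1.x), assuming tf is imported as above.
# "input" must keep its explicit name because it is later recovered with
# get_tensor_by_name('input:0'); the other names are just for readability.
def get_inputs():
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate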
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
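# Reference sketch (TensorFlow 1.x): a single-layer MultiRNNCell; more LSTM
# layers could be stacked by passing a longer list to MultiRNNCell.
def get_init_cell(batch_size, rnn_size):
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    initial_state = cell.zero_state(batch_size, tf.float32)
    initial_state = tf.identity(initial_state, name='initial_state')
    return cell, initial_state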
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
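# Reference sketch: a trainable embedding matrix plus a lookup of the input ids.
def get_embed(input_data, vocab_size, embed_dim):
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)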
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
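# Reference sketch: run the cell over the embedded sequence with dynamic_rnn and
# name the final state so it can be recovered from the saved graph later.
def build_rnn(cell, inputs):
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state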
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
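# Reference sketch: compose the pieces above, then project the RNN outputs to
# vocabulary-sized logits with a linear (no activation) fully connected layer.
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    embed = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state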
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
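# Reference sketch: trim the text so it divides evenly into batches, then
# reshape into (n_batches, 2, batch_size, seq_length). Targets are the inputs
# shifted left by one word; with np.roll the very last target wraps around to
# the first word of the text, which may differ slightly from the example above.
def get_batches(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))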
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
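# Reference sketch: look the tensors up by the names assigned when the graph was built.
def get_tensors(loaded_graph):
    input_tensor = loaded_graph.get_tensor_by_name('input:0')
    initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
    probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
    return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor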
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
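# Reference sketch: sample the next word from the predicted distribution rather
# than always taking the argmax, which helps keep the generated script from looping.
def pick_word(probabilities, int_to_vocab):
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]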
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder/Decoder Dialogue Management
Here we use a simple Encoder/Decoder GRU network to predict answers from the Cornell Movie-Dialog Corpus. We use PyTorch as a deep learning framework.
Most of the code in this notebook comes from the following tutorial on English-French translation.
https
Step1: Dataset
Load Cornell dataset
We start loading the corpus' dialogs as Episodes (class due.episode.Episode). We limit the number of episodes to load so we can test the code more easily.
Step2: (alternative) Load Star Wars Dataset
Step3: Text cleaning
Here we define functions for a simple text processing pipeline, where we just convert sentences to lowercase and tokenize them using SpaCy.
Step5: Dataset generation
Here we generate a dataset of utterances and their responses. The output of this section is
Step12: Vocabulary
Here we index all the words in the corpus so that we can associate each word with a numeric ID, and vice versa.
TODO
Step14: Embeddings
We could initialize the model's embedding layer with random weights, but we expect better results using pre-trained word embeddings instead. We chose GloVe 6B, 300d word vectors for this purpose.
To set these vectors as default embeddings for our network we need to prepare a matrix of (vocabulary_size, embedding_dim) elements where the i-th row is the embedding vector of the word of index i in our vocabulary.
Step15: 1-by-1 training
Here we define a simple model that can be trained one sentence pair at a time. To reduce training time and improve generalization capabilities, deep learning systems are usually trained in batches. Batch training is implemented later on in this Notebook.
Encoding
Here we define a function to encode a sentence into a Torch tensor of indices
Step16: Model
The model we used is copied straight from the one presented in the reference tutorial (https
Step17: Training
Here we define a function to process training for a single pair of sentences.
TODO
Step18: Model initialization
This instantiate a fresh model. You should run this cell once before running your training epochs.
Step19: Epoch
Here we run a training Epoch, that is, we run the whole dataset through the training procedure. This cell can be executed many times to run multiple Epochs (be careful not to re-initialize the model across Epochs
Step20: Evaluation
Step21: Testing
Step22: Batch training
Instead of feeding sentence pairs one by one, we want the training procedure to predict a number of samples before computing the loss and completing the optimization step. This is called batch training.
The code below is inspired by https
Step23: Model
We still compare the mode's output with the previous one
Step25: Batch iterator
We want a function that takes our lists X and y and return them one batch at the time
batches()
Step26: Batch to tensor
Once we have a batch (a list of sentences), we want to turn it into something that can be fed to the model.
Step28: pad_sequence()
Step30: batch_to_tensor()
Step31: Training
Step32: Model serialization
Step33: Save
Step34: Load
Step35: Resume training | Python Code:
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random
from datetime import datetime
from collections import defaultdict
from six import iteritems
import numpy as np
import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from tqdm import tqdm_notebook as tqdm
DEVICE
Explanation: Encoder/Decoder Dialogue Management
Here we use a simple Encoder/Decoder GRU network to predict answers from the Cornell Movie-Dialog Corpus. We use PyTorch as a deep learning framework.
Most of the code in this notebook comes from the following tutorial on English-French translation.
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
We apply the Machine Translation framework described in the tutorial to Dialogue Management by processing sentences in the corpus in pairs: we encode a sentence and decode its answer.
End of explanation
from due.corpora import cornell
import itertools
N_DIALOGS = 100
episodes = list(itertools.islice(cornell.episode_generator(), N_DIALOGS))
# episodes = cornell.load()
episodes[95].events
Explanation: Dataset
Load Cornell dataset
We start loading the corpus' dialogs as Episodes (class due.episode.Episode). We limit the number of episodes to load so we can test the code more easily.
End of explanation
import pickle
from due.episode import Episode
saved_episodes_filename = 'SW_EPISODES.pkl'
with open(saved_episodes_filename, 'rb') as f:
saved_episodes = pickle.load(f)
episodes = [Episode.load(e) for e in saved_episodes]
Explanation: (alternative) Load Star Wars Dataset
End of explanation
from due.nlp.preprocessing import normalize_sentence
s = "Can we make this quick? Roxanne Korrine and Andrew Barrett are having an incredibly horrendous public break- up on the quad. Again."
s_normalized = normalize_sentence(s, False)
print(s_normalized)
Explanation: Text cleaning
Here we define functions for a simple text processing pipeline, where we just convert sentences to lowercase and tokenize them using SpaCy.
End of explanation
from functools import lru_cache
from due.event import Event
# from due.episode import extract_utterance_pairs
def _is_utterance(event):
return event.type == Event.Type.Utterance
def extract_utterance_pairs(episode, preprocess_f=None):
Process Events in an Episode, extracting all the Utterance Event pairs that
can be interpreted as one dialogue turn (ie. an Agent's utterance, and a
different Agent's response).
In particular, Event pairs are extracted from the Episode so that:
* Both Events are Utterances (currently, non-utterances will raise an exception)
* The second Event immediately follows the first
* The two Events are acted by two different Agents
This means that if an utterance has more than one answer, only the first
one will be included in the result.
If a `preprocess_f` function is specified, resulting utterances will be run
through this function before being returned. An LRU cache is applied to
`preprocess_f`, as most sentences will be returned as both utterances and
answers.
Return two lists of the same length, so that each utterance `X_i` in the
first list has its response `y_i` in the second.
:param episode: an Episode
:type episode: :class:`due.episode.Episode`
:param preprocess_f: when given, sentences will be run through this function before being returned
:type preprocess_f: `func`
:return: a list of utterances and the list of their answers (one per utterance)
:rtype: (`list`, `list`)
preprocess_f = lru_cache(4)(preprocess_f) if preprocess_f else lambda x: x
result_X = []
result_y = []
for e1, e2 in zip(episode.events, episode.events[1:]):
if not _is_utterance(e1) or not _is_utterance(e2):
raise NotImplementedError("Non-utterance Events are not supported yet")
if e1.agent != e2.agent and e1.payload and e2.payload:
result_X.append(preprocess_f(e1.payload))
result_y.append(preprocess_f(e2.payload))
return result_X, result_y
extract_utterance_pairs(episodes[0])
from tqdm import tqdm_notebook as tqdm
X = []
y = []
for e in tqdm(episodes):
try:
episode_X, episode_y = extract_utterance_pairs(e)
except AttributeError:
print("Skipping episode with events: %s" % e.events)
X.extend(episode_X)
y.extend(episode_y)
Explanation: Dataset generation
Here we generate a dataset of utterances and their responses. The output of this section is:
A list of utterances (str) X
A list of responses (str) y, one per utterance in X.
Example:
X: ["hi", "hello how are you?", "i'm fine thanks", ...]
y: ["hello how are you?", "i'm fine thanks", "good to hear", ...]
Note that within an Episode i, y_i is just X_i[1:]. This is not true when X and y are obtained concatenating data from multiple episodes.
End of explanation
# from due.vocabulary import Vocabulary
from due import __version__
UNK = '<UNK>'
SOS = '<SOS>'
EOS = '<EOS>'
class Vocabulary():
def __init__(self):
self.word_to_index = {}
self.index_to_word = {}
self.index_to_count = defaultdict(int)
self.current_index = 0
self.add_word(UNK) # Unknown token
self.add_word(SOS) # Start of String
self.add_word(EOS) # End of String
def add_word(self, word):
Add a new word to the dictionary.
:param word: the word to add
:type word: `str`
if word in self.word_to_index:
index = self.word_to_index[word]
else:
index = self.current_index
self.current_index += 1
self.word_to_index[word] = index
self.index_to_word[index] = word
self.index_to_count[index] += 1
def index(self, word):
Retrieve a word's index in the Vocabulary. Return the index of the <UNK>
token if not present.
:param word: the word to look up
:type word: `str`
:return: the word's index if existing, *<UNK>*'s index otherwise
:rtype: `int`
if word in self.word_to_index:
return self.word_to_index[word]
return self.word_to_index[UNK]
def word(self, index):
Return the word corresponding to the given index
:param index: the index to look up
:type index: `int`
:return: the word corresponding to the given index
:rtype: `str`
return self.index_to_word[index]
def size(self):
Return the number of words in the Vocabulary
:return: number of words in the Vocabulary
:rtype: `int`
return len(self.word_to_index)
def save(self):
Return a serializable `dict` representing the Vocabulary.
:return: a serializable representation of self
:rtype: `dict`
return {
'_version': __version__,
'word_to_index': self.word_to_index,
'index_to_word': self.index_to_word,
'index_to_count': self.index_to_count,
'current_index': self.current_index,
}
@staticmethod
def load(data):
result = Vocabulary()
result.word_to_index = data['word_to_index']
result.index_to_word = data['index_to_word']
result.index_to_count = data['index_to_count']
result.current_index = data['current_index']
return result
vocabulary_full = Vocabulary()
for sentence in set(X + y):
for word in sentence.split():
vocabulary_full.add_word(word)
vocabulary_full.size()
def prune_vocabulary(vocabulary, min_occurrences):
Return a copy of the given vocabulary where words with less than
`min_occurrences` occurrences are removed.
:param vocabulary: a Vocabulary
:type vocabulary: :class:`due.nlp.vocabulary.Vocabulary`
:param min_occurrences: minimum number of occurrences for a word to be kept
:type min_occurrences: `int`
:return: a pruned copy of the given vocabulary
:rtype: :class:`due.nlp.vocabulary.Vocabulary`
result = Vocabulary()
for index, count in iteritems(vocabulary.index_to_count):
if count >= min_occurrences:
result.add_word(vocabulary.word(index))
return result
vocabulary = prune_vocabulary(vocabulary_full, min_occurrences=2)
vocabulary.size()
Explanation: Vocabulary
Here we index all the words in the corpus so that we can associate each word with a numeric ID, and vice versa.
TODO: consider using torchtext instead
End of explanation
from due import resource_manager
rm = resource_manager
def get_embedding_matrix(vocabulary, embeddings_stream, embedding_dim, stub=False):
Return an N x D matrix, where N is the number of words in the vocabulary,
and D is the given embeddings' dimensionality. The *i*-th row of the matrix
contains the embedding of the word with index *i* in the Vocabulary.
Sample usage:
.. code-block:: python
with rm.open_resource_file('embeddings.glove6B', 'glove.6B.300d.txt') as f:
embedding_matrix = get_embedding_matrix(vocabulary, f, 300)
:param vocabulary: a Vocabulary
:type vocabulary: :class:`due.nlp.vocabulary.Vocabulary`
:param embeddings_stream: stream to a resource containing word embeddings in the word2vec format
:type embeddings_stream: *file*
:param embedding_dim: dimensionality of the embeddings
:type embedding_dim: `int`
:param stub: if True, return a random N x D matrix without reading the embedding source
:type stub: bool
if stub:
return np.random.rand(vocabulary.size(), embedding_dim)
unk_index = vocabulary.index(UNK)
result = np.zeros((vocabulary.size(), embedding_dim))
for line in tqdm(embeddings_stream):
line_split = line.split()
word = line_split[0]
index = vocabulary.index(word)
if index != unk_index:
vector = [float(x) for x in line_split[1:]]
result[index, :] = vector
sos_index = vocabulary.index(SOS)
result[sos_index, :] = np.ones(embedding_dim)
return result
EMBEDDING_DIM = 300
with rm.open_resource_file('embeddings.glove6B', 'glove.6B.300d.txt') as f:
embedding_matrix = torch.FloatTensor(get_embedding_matrix(vocabulary, f, EMBEDDING_DIM), device=DEVICE)
# embedding_matrix = torch.FloatTensor(get_embedding_matrix(vocabulary, None, EMBEDDING_DIM, stub=True), device=DEVICE)
embedding_matrix.size()
Explanation: Embeddings
We could initialize the model's embedding layer with random weights, but we expect better results using pre-trained word embeddings instead. We chose GloVe 6B, 300d word vectors for this purpose.
To set these vectors as default embeddings for our network we need to prepare a matrix of (vocabulary_size, embedding_dim) elements where the i-th row is the embedding vector of the word of index i in our vocabulary.
End of explanation
def sentence_to_tensor(sentence):
sentence_indexes = [vocabulary.index(w) for w in sentence.split()]
sentence_indexes.append(vocabulary.index('<EOS>'))
return torch.tensor(sentence_indexes, dtype=torch.long, device=DEVICE).view(-1, 1)
sentence_to_tensor(X[0])
Explanation: 1-by-1 training
Here we define a simple model that can be trained one sentence pair at a time. To reduce training time and improve generalization capabilities, deep learning systems are usually trained in batches. Batch training is implemented later on in this Notebook.
Encoding
Here we define a function to encode a sentence into a Torch tensor of indices
End of explanation
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding_matrix):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
# self.embedding = nn.Embedding(vocabulary_size, embedding_size) # random init
self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
embedding_dim = self.embedding.embedding_dim
self.gru = nn.GRU(embedding_dim, hidden_size)
def forward(self, input_data, hidden):
embedded = self.embedding(input_data).view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def init_hidden(self):
return torch.zeros(1, 1, self.hidden_size, device=DEVICE)
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, embedding_matrix):
super(DecoderRNN, self).__init__()
self.hidden_size = hidden_size
# self.embedding = nn.Embedding(vocabulary_size, embedding_size)
self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
embedding_dim = self.embedding.embedding_dim
vocabulary_size = self.embedding.num_embeddings
self.gru = nn.GRU(embedding_dim, hidden_size)
self.out = nn.Linear(hidden_size, vocabulary_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input_data, hidden):
output = self.embedding(input_data).view(1, 1, -1)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.out(output[0])
output = self.softmax(output)
return output, hidden
def init_hidden(self):
return torch.zeros(1, 1, self.hidden_size, device=DEVICE)
Explanation: Model
The model we used is copied straight from the one presented in the reference tutorial (https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html).
Note that attention is not implemented yet.
End of explanation
import random
TEACHER_FORCING_RATIO = 0.5
MAX_LENGTH = 500 # Will raise an error if a longer sentence is encountered
def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
encoder_hidden = encoder.init_hidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=DEVICE)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0, 0]
decoder_input = torch.tensor([[vocabulary.index('<SOS>')]], device=DEVICE)
decoder_hidden = encoder_hidden
# use_teacher_forcing = True if random.random() < TEACHER_FORCING_RATIO else False
use_teacher_forcing = True
if use_teacher_forcing:
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
loss += criterion(decoder_output, target_tensor[di])
decoder_input = target_tensor[di]
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.item() / target_length
Explanation: Training
Here we define a function to process training for a single pair of sentences.
TODO: implement training with no teacher forcing
End of explanation
from datetime import datetime
LEARNING_RATE = 0.01
VOCABULARY_SIZE = vocabulary.size()
EMBEDDING_SIZE = 300
HIDDEN_SIZE = 512
encoder = EncoderRNN(HIDDEN_SIZE, embedding_matrix).to(DEVICE)
decoder = DecoderRNN(HIDDEN_SIZE, embedding_matrix).to(DEVICE)
encoder_optimizer = optim.SGD(encoder.parameters(), lr=LEARNING_RATE)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=LEARNING_RATE)
criterion = nn.NLLLoss()
epoch = 0
Explanation: Model initialization
This instantiate a fresh model. You should run this cell once before running your training epochs.
End of explanation
PRINT_EVERY = 50
i = 1
tick = datetime.now()
loss_sum = 0.0
for input_sentence, target_sentence in tqdm(zip(X, y)):
input_tensor = sentence_to_tensor(input_sentence)
target_tensor = sentence_to_tensor(target_sentence)
loss = train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
loss_sum += loss
if i%PRINT_EVERY == 0:
print(i, loss_sum/PRINT_EVERY)
loss_sum = 0.0
i += 1
tock = datetime.now()
epoch += 1
print(tock-tick)
print(i, loss_sum/PRINT_EVERY)
Explanation: Epoch
Here we run a training Epoch, that is, we run the whole dataset through the training procedure. This cell can be executed many times to run multiple Epochs (be careful not to re-initialize the model across Epochs: that would reset training to Epoch 1).
End of explanation
# TODO
Explanation: Evaluation
End of explanation
def predict_answer(input_sentence, vocabulary, encoder, decoder):
result = []
input_tensor = sentence_to_tensor(input_sentence)
input_length = input_tensor.size(0)
encoder_hidden = encoder.init_hidden()
for ei in range(input_length):
_, encoder_hidden = encoder(input_tensor[ei], encoder_hidden)
decoder_input = torch.tensor([[vocabulary.index('<SOS>')]], device=DEVICE)
decoder_hidden = encoder_hidden
for di in range(MAX_LENGTH):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
topv, topi = decoder_output.topk(1)
decoder_input = topi.squeeze().detach()
predicted_index = decoder_input.item()
if predicted_index == vocabulary.index('<EOS>'):
break
result.append(vocabulary.word(predicted_index))
return " ".join(result)
predict_answer("what's the meaning of life?'", vocabulary, encoder, decoder)
Explanation: Testing
End of explanation
# Fake embedding layer
embedding = nn.Embedding(5, 10).to(DEVICE)
# Single sentence tensor
sentence_indexes = [1, 2, 3]
sentence_tensor = torch.tensor(sentence_indexes, dtype=torch.long, device=DEVICE).view(-1, 1)
input_data = sentence_tensor[0]
input_data
BATCH_SIZE = 2
# Batch tensor
input_batch = torch.tensor([1, 4], device=DEVICE).view(-1, 1)
input_batch
embedding(input_data)
embedding(input_batch)
embedding(input_batch).view(1, BATCH_SIZE, -1)
Explanation: Batch training
Instead of feeding sentence pairs one by one, we want the training procedure to predict a number of samples before computing the loss and completing the optimization step. This is called batch training.
The code below is inspired by https://github.com/pengyuchen/PyTorch-Batch-Seq2seq/blob/master/seq2seq_translation_tutorial.py
Exploration
Here we compare our model's output in the single-sentence case vs. batch.
End of explanation
class EncoderRNNBatch(nn.Module):
def __init__(self, hidden_size, embedding_matrix):
super(EncoderRNNBatch, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
embedding_dim = self.embedding.embedding_dim
self.gru = nn.GRU(embedding_dim, hidden_size)
def forward(self, input_data, batch_size, hidden):
embedded = self.embedding(input_data).view(1, batch_size, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def init_hidden(self, batch_size):
return torch.zeros(1, batch_size, self.hidden_size, device=DEVICE)
encoder = EncoderRNN(32, embedding_matrix).to(DEVICE)
encoder_batch = EncoderRNNBatch(32, embedding_matrix).to(DEVICE)
# 1-by-1 model
encoder_hidden = encoder.init_hidden()
encoder(input_data, encoder_hidden)
# Batch model
encoder_hidden_batch = encoder_batch.init_hidden(BATCH_SIZE)
encoder_batch(input_batch, BATCH_SIZE, encoder_hidden_batch)
class DecoderRNNBatch(nn.Module):
def __init__(self, hidden_size, embedding_matrix):
super(DecoderRNNBatch, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
embedding_dim = self.embedding.embedding_dim
vocabulary_size = self.embedding.num_embeddings
self.gru = nn.GRU(embedding_dim, hidden_size)
self.out = nn.Linear(hidden_size, vocabulary_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input_data, batch_size, hidden):
output = self.embedding(input_data).view(1, batch_size, -1)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.out(output[0])
output = self.softmax(output)
return output, hidden
def init_hidden(self, batch_size):
return torch.zeros(1, BATCH_SIZE, self.hidden_size, device=DEVICE)
# vocabulary_size=10, embedding_dim=5
toy_embedding_matrix = torch.FloatTensor(np.random.rand(10, 5), device=DEVICE)
decoder = DecoderRNN(32, toy_embedding_matrix).to(DEVICE)
decoder_batch = DecoderRNNBatch(32, toy_embedding_matrix).to(DEVICE)
# 1-by-1 model
decoder_input = torch.tensor([[vocabulary.index('<SOS>')]], device=DEVICE)
decoder_hidden = encoder_hidden
decoder(decoder_input, decoder_hidden)
# Batch model
decoder_input_batch = torch.tensor([[vocabulary.index('<SOS>')]*BATCH_SIZE], device=DEVICE)
decoder_hidden_batch = encoder_hidden_batch
decoder_batch(decoder_input_batch, BATCH_SIZE, decoder_hidden_batch)
try:
del encoder
del decoder
del decoder_batch
del encoder_hidden
del encoder_hidden_batch
del decoder_input
del decoder_hidden
del decoder_input_batch
del decoder_hidden_batch
except NameError:
pass
Explanation: Model
We still compare the mode's output with the previous one
End of explanation
def batches(X, y, batch_size):
Generate two sequences of batches from the input lists `X` and `y`, so that
each batch contains `batch_size` elements.
>>> list(batches([0, 1, 2, 3, 4, 5, 6], ['a', 'b', 'c', 'd', 'e', 'f', 'g'], 3))
[([0, 1, 2], ['a', 'b', 'c']), ([3, 4, 5], ['d', 'e', 'f']), ([6], ['g'])]
:param X: a sequence of elements
:type X: `list`
:param y: a sequence of elements
:type y: `list`
:param batch_size: number of elements in each batch
:type batch_size: `int`
:return: a generator of the list of batches
:rtype: `list` of (`list`, `list`)
for i in range(int(np.ceil(len(X)/batch_size))):
start_index = i*batch_size
end_index = start_index + batch_size
yield X[start_index:end_index], y[start_index:end_index]
list(batches([0, 1, 2, 3, 4, 5, 6], ['a', 'b', 'c', 'd', 'e', 'f', 'g'], 3))
Explanation: Batch iterator
We want a function that takes our lists X and y and return them one batch at the time
batches()
End of explanation
sentence_to_tensor(X[0])[0] # Input of normal encoder
input_batch # What we want
Explanation: Batch to tensor
Once we have a batch (a list of sentences), we want to turn it into something that can be fed to the model.
End of explanation
def pad_sequence(sequence, pad_value, final_length):
Trim the sequence if longer than final_length, pad it with pad_value if shorter.
In any case at least one pad element will be left at the end of the sequence (this is
because we usually pad with the <EOS> token)
>>> pad_sequence([1, 2, 3], 0, 5)
[1, 2, 3, 0, 0]
>>> pad_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0, 5)
[1, 2, 3, 4, 0]
:param sequence: any sequence of elements
:type sequence: `list`-like
:param pad_value: a value to pad the sequence with
:type pad_value: *any*
:param final_length: length of the final sequence
:type final_length: `int`
:return: the padded (or shortened) sequence, with at least one trailing `pad_value`
:rtype: `list`
if len(sequence) >= final_length:
result = sequence[:final_length]
result[-1] = pad_value
return result
return sequence + [pad_value] * (final_length - len(sequence))
pad_sequence([1, 2, 3], 0, 5)
pad_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0, 5)
a = np.array([[1, 2, 3], [4, 5, 6]])
a = a.transpose()
np.expand_dims(a, 2)[0]
Explanation: pad_sequence()
End of explanation
def batch_to_tensor(batch, vocabulary, max_words=None, device=None):
Receive a list of sentences (strings), return a (*n_words* x *batch_size* x 1)
tensor `m`, so that `m[i]` contains an array `a` of *batch_size* rows and 1
column, so that `a[j]` contains the index of the `i`-th word in the `j`-th
sentence in the batch.
The **maximum number of words** in the sentences can be limited to
`max_words`. If `max_words` is not set, the limit will be set by the longest
sentence in the batch.
Sentences that are shorter than the maximum length in the resulting matrix
will be **padded** with EOS. At least one EOS token is appended to every
sentence in the resulting matrix.
:param batch: a list of sentence
:type batch: `list` of `str`
:param vocabulary: a Vocabulary to look up word indexes
:type vocabulary: :class:`due.nlp.vocabulary.Vocabulary`
:param max_words: sentences shorter than `max_words` will be trimmed
:type max_words: `int`
:param device: a Torch device to map the tensor to (eg. `torch.device("cuda")`)
:type device: :class:`torch.device`
:return: a Torch tensor that is equivalent to the output of :func:`batch_to_matrix`
:rtype: :class:`torch.tensor`
sentence_indexes = [[vocabulary.index(w) for w in sentence.split()] for sentence in batch]
max_length = max([len(x) for x in sentence_indexes])
if max_words:
max_length = min(max_length, max_words)
sentence_indexes = [pad_sequence(s, vocabulary.index(EOS), max_length+1) for s in sentence_indexes]
result = np.transpose(sentence_indexes)
result = np.expand_dims(result, axis=2)
return torch.tensor(result, dtype=torch.long, device=device)
batch = ['this is a sentence', 'this is another much longer sentence', 'short sentence']
batch_tensor = batch_to_tensor(batch, vocabulary, device=DEVICE)
n_words = batch_tensor.size(0)
batch_size = batch_tensor.size(1)
first_word = batch_tensor[0]
print(n_words)
print(batch_size)
print(first_word)
Explanation: batch_to_tensor()
End of explanation
torch.cuda.empty_cache()
TEACHER_FORCING_RATIO = 1.
MAX_LENGTH = 20
def train_batch(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
batch_size = input_tensor.size(1)
encoder_hidden = encoder.init_hidden(batch_size)
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
# encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=DEVICE)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei], batch_size, encoder_hidden)
# encoder_outputs[ei] = encoder_output[0, 0]
decoder_input = torch.tensor([[vocabulary.index('<SOS>')]*batch_size], device=DEVICE)
decoder_hidden = encoder_hidden
# use_teacher_forcing = True if random.random() < TEACHER_FORCING_RATIO else False
use_teacher_forcing = True
if use_teacher_forcing:
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, batch_size, decoder_hidden)
# print("decoder_output:", decoder_output, decoder_output.size())
# print("target_tensor[di]:", target_tensor[di], target_tensor[di].size())
loss += criterion(decoder_output, target_tensor[di].view(batch_size))
decoder_input = target_tensor[di]
else:
eos_tensor = torch.tensor([vocabulary.index('<EOS>')], device=DEVICE)
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, batch_size, decoder_hidden)
topv, topi = decoder_output.topk(1)
decoder_input = topi.squeeze().detach()
predicted_words = target_tensor[di].view(batch_size)
loss += criterion(decoder_output, predicted_words)
if (predicted_words == eos_tensor).all():
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.item() / target_length
LEARNING_RATE = 0.01
VOCABULARY_SIZE = vocabulary.size()
EMBEDDING_SIZE = 300
HIDDEN_SIZE = 512
encoder = EncoderRNNBatch(HIDDEN_SIZE, embedding_matrix).to(DEVICE)
decoder = DecoderRNNBatch(HIDDEN_SIZE, embedding_matrix).to(DEVICE)
encoder_optimizer = optim.SGD(encoder.parameters(), lr=LEARNING_RATE)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=LEARNING_RATE)
criterion = nn.NLLLoss()
epoch = 0
len(X)
def predict_answer_batch(input_sentence, vocabulary, encoder, decoder):
result = []
input_tensor = batch_to_tensor([input_sentence], vocabulary, device=DEVICE)
input_length = input_tensor.size(0)
batch_size = input_tensor.size(1)
encoder_hidden = encoder.init_hidden(batch_size)
for ei in range(input_length):
_, encoder_hidden = encoder(input_tensor[ei], batch_size, encoder_hidden)
decoder_input = torch.tensor([[vocabulary.index('<SOS>')] * batch_size], device=DEVICE)
decoder_hidden = encoder_hidden
for di in range(MAX_LENGTH):
decoder_output, decoder_hidden = decoder(decoder_input, batch_size, decoder_hidden)
topv, topi = decoder_output.topk(1)
decoder_input = topi.squeeze().detach()
# print(decoder_output)
predicted_index = decoder_input.item()
if predicted_index == vocabulary.index('<EOS>'):
break
result.append(vocabulary.word(predicted_index))
return " ".join(result)
BATCH_SIZE = 64
PRINT_EVERY = 500
EPOCHS = 1
for _ in range(EPOCHS):
i = 1
tick = datetime.now()
loss_sum = 0.0
loss_sum_partial = 0.0
for input_batch, target_batch in tqdm(batches(X, y, BATCH_SIZE)):
input_tensor = batch_to_tensor(input_batch, vocabulary, MAX_LENGTH, device=DEVICE)
target_tensor = batch_to_tensor(target_batch, vocabulary, MAX_LENGTH, device=DEVICE)
loss = train_batch(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
loss_sum += loss
loss_sum_partial += loss
if i%PRINT_EVERY == 0:
print(i, loss_sum_partial/PRINT_EVERY)
loss_sum_partial = 0.0
i += 1
tock = datetime.now()
epoch += 1
print(tock-tick)
print(i, loss_sum/i)
print(predict_answer_batch("hi", vocabulary, encoder, decoder))
print(predict_answer_batch("how are you?", vocabulary, encoder, decoder))
print(predict_answer_batch("what's your name?", vocabulary, encoder, decoder))
print(predict_answer_batch("My name is Anna", vocabulary, encoder, decoder))
print(predict_answer_batch("What's the meaning of life?", vocabulary, encoder, decoder))
print()
epoch
Explanation: Training
End of explanation
MODEL_NAME = "encdec-cornell-TEST"
model_filename = "%s_MODEL.pt" % MODEL_NAME
dataset_filename = "%s_DATASET.pt" % MODEL_NAME
Explanation: Model serialization
End of explanation
model = {
'encoder': encoder.state_dict(),
'decoder': decoder.state_dict(),
'epoch': epoch,
'embedding_matrix': embedding_matrix
}
torch.save(model, model_filename)
dataset = {
"X": X,
"y": y,
"vocabulary": vocabulary.save()
}
torch.save(dataset, dataset_filename)
Explanation: Save
End of explanation
from due.nlp.preprocessing import normalize_sentence
from due.nlp.vocabulary import Vocabulary, get_embedding_matrix
dataset_deserialized = torch.load(dataset_filename)
X_deserialized = dataset_deserialized["X"]
y_deserialized = dataset_deserialized["y"]
vocabulary_deserialized = Vocabulary.load(dataset_deserialized['vocabulary'])
model_deserialized = torch.load(model_filename)
embedding_matrix_deserialized = model_deserialized['embedding_matrix']
encoder_deserialized = EncoderRNNBatch(HIDDEN_SIZE, embedding_matrix_deserialized).to(DEVICE)
encoder_deserialized.load_state_dict(model_deserialized['encoder'])
decoder_deserialized = DecoderRNNBatch(HIDDEN_SIZE, embedding_matrix_deserialized).to(DEVICE)
decoder_deserialized.load_state_dict(model_deserialized['decoder'])
epoch_deserialized = model_deserialized['epoch']
Explanation: Load
End of explanation
from due.nlp.batches import batches, pad_sequence, batch_to_tensor
X = X_deserialized
y = y_deserialized
vocabulary = vocabulary_deserialized
encoder = encoder_deserialized
decoder = decoder_deserialized
epoch = epoch_deserialized
criterion = nn.NLLLoss()
from datetime import datetime
LEARNING_RATE = 0.01
VOCABULARY_SIZE = vocabulary.size()
EMBEDDING_SIZE = 300
HIDDEN_SIZE = 512
BATCH_SIZE = 64
embedding_matrix = embedding_matrix_deserialized
encoder_optimizer = optim.SGD(encoder.parameters(), lr=LEARNING_RATE)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=LEARNING_RATE)
criterion = nn.NLLLoss()
Explanation: Resume training
End of explanation |
13,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the two contig names you sent me it's simplest to do this
Step1: If you have a genuinely big file then I would do the following
Step2: Ya! There's two contigs. | Python Code:
desired_contigs = ['Contig' + str(x) for x in [1131, 3182, 39106, 110, 5958]]
desired_contigs
Explanation: Using the two contig names you sent me it's simplest to do this:
End of explanation
grab = [c for c in contigs if c.name in desired_contigs]
len(grab)
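# Added illustration (not from the original notebook): for a genuinely large FASTA you can
# stream records instead of holding full lists in memory. A minimal sketch assuming Biopython
# is installed and that 'data2/sequences.fa' is a placeholder path for the input FASTA.
from Bio import SeqIO
wanted = set(desired_contigs)
with open('data2/sequences_desired_streamed.fa', 'w') as out_handle:
    selected = (rec for rec in SeqIO.parse('data2/sequences.fa', 'fasta') if rec.id in wanted)
    SeqIO.write(selected, out_handle, 'fasta')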
Explanation: If you have a genuinely big file then I would do the following:
End of explanation
import os
print(os.getcwd())
write_contigs_to_file('data2/sequences_desired.fa', grab)
[c.name for c in grab[:100]]
import os
os.path.realpath('')
Explanation: Ya! There's two contigs.
End of explanation |
13,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1
Step1: 2
Step2: 3
Step3: 4
Step4: 5
Step5: 6
Step6: 7
Step7: 8 | Python Code:
cat = True
dog = False
print(type(cat))
Explanation: 1: Booleans
Instructions
Assign the value True to the variable cat and the value False to the variable dog. Then use the print() function and the type() function to display the type for cat.
Answer
End of explanation
from cities import cities
print(cities)
first_alb = cities[0] == 'Albuquerque'
second_alb = cities[1] == 'Albuquerque'
first_last = cities[0] == cities[-1]
print(first_alb, second_alb, first_last)
Explanation: 2: Boolean operators
Instructions
Use the Boolean operators to determine if the following pairs of values are equivalent:
first element of cities and the string "Albuquerque". Assign the resulting Boolean value to first_alb
second element of cities and the string "Albuquerque". Assign the resulting Boolean value to second_alb
first element of cities and the last element in cities. Assign the resulting Boolean value to first_last
End of explanation
crime_rates = [749, 371, 828, 503, 1379, 425, 408, 542, 1405, 835, 1288, 647, 974, 1383, 455, 658, 675, 615, 2122, 423, 362, 587, 543, 563, 168, 992, 1185, 617, 734, 1263, 784, 352, 397, 575, 481, 598, 1750, 399, 1172, 1294, 992, 522, 1216, 815, 639, 1154, 1993, 919, 594, 1160, 636, 752, 130, 517, 423, 443, 738, 503, 413, 704, 363, 401, 597, 1776, 722, 1548, 616, 1171, 724, 990, 169, 1177, 742]
print(crime_rates)
first = crime_rates[0]
first_500 = first > 500
first_749 = first >= 749
first_last = first >= crime_rates[-1]
print(first_500, first_749, first_last)
Explanation: 3: Booleans with greater than
Instructions
The variable crime_rates is a list of integers containing the crime rates from the dataset. Perform the following comparisons:
evaluate if the first element in crime_rates is larger than the integer 500, assign the Boolean result to first_500
evaluate if the first element in crime_rates is larger than or equal to 749, assign the Boolean result to first_749
evaluate if the first element in crime_rates is greater than or equal to the last element in crime_rates, assign the Boolean result to first_last
Answer
End of explanation
second = crime_rates[1]
second_500 = second < 500
second_371 = second <= 371
second_last = second <= crime_rates[-1]
print(second_500, second_371, second_last)
Explanation: 4: Booleans with less than
Instructions
The variable crime_rates is a list containing the crime rates from the dataset as integers. Perform the following comparisons:
* determine if the second element in crime_rates is smaller than the integer 500, assign the Boolean result to second_500
* determine if the second element in crime_rates is smaller than or equal to 371, assign the Boolean result to second_371
* determine if the second element in crime_rates is smaller than or equal to the last element in crime_rates, assign the Boolean result to second_last
Answer
End of explanation
result = 0
if cities[2] == u"Anchorage":
result = 1
assert result == 1
Explanation: 5: If statements
Instructions
Determine if the third element in cities is equivalent to the string "Anchorage". If it is equivalent, change the variable result to 1.
Answer
End of explanation
results = 0
if crime_rates[0] > 500:
    if crime_rates[1] > 300:
        results = 3
Explanation: 6: Nesting if statements
Instructions
Nest if statements in the following order:
first one checks if the first element in crime_rates is larger than 500
second one checks if the second element in crime_rates is larger than 300
if both statements evaluate to True, assign the value 3 to the variable results
Answer
End of explanation
five_hundred_list = []
for cr in crime_rates:
if cr > 500:
five_hundred_list.append(cr)
assert all([_>500 for _ in five_hundred_list])
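# Equivalent one-liner (added for illustration): the same filtering as a list comprehension.
five_hundred_list_comp = [cr for cr in crime_rates if cr > 500]
assert five_hundred_list_comp == five_hundred_list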
Explanation: 7: If statements and for loops
Instructions
Create a new list, five_hundred_list, that contains only the elements from crime_rates that are greater than 500. To accomplish this, you'll need a for loop and an if statement:
the for loop specifies which list we want to iterate over and the name of the iterator variable (we use cr in our answer)
the if statement determines if the current element (cr) is larger than 500
if the current element (cr) is larger than 500, use the append() method to add it to five_hundred_list
Answer
End of explanation
print(crime_rates)
highest = crime_rates[0]
for cr in crime_rates:
if cr > highest:
highest = cr
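# Sanity check (added, not part of the original exercise): the built-in max() gives the same value.
assert highest == max(crime_rates)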
Explanation: 8: Find the highest crime rate
Instructions
Now [...] we can find the highest crime rate. crime_rates is a list of integers where each integer is a crime rate.
One strategy is to:
assign the value at index 0 from crime_rates to a new integer variable called highest
use a for loop to compare each value in crime_rates to highest and assign that value to highest if it's larger
Find the largest integer in crime_rates using the strategy we just discussed and assign that value to the variable highest.
Answer
End of explanation |
13,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(Introduction to Tensorflow) * 10^6
In this notebook, we modify the tensor-fied intro to TensorFlow notebook to use placeholder tensors and feed in data from a data set of millions of points. This is a derivation of Jared Ostmeyer's Naked Tensor code.
Step1: Define placeholder tensors of length batch_size whose values will be filled in during graph execution
Step2: Define graph that incorporates placeholders
Step3: Sample from the full data set while running the session | Python Code:
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = np.linspace(0., 8., 8000000) # eight million points spaced evenly over the interval zero to eight
ys = 0.3*xs-0.8+np.random.normal(scale=0.25, size=len(xs)) # eight million labels given xs, m=0.3, b=-0.8, plus normally-distributed noise
fig, ax = plt.subplots()
data_subset = pd.DataFrame(list(zip(xs, ys)), columns=['x', 'y']).sample(n=1000)
_ = ax.scatter(data_subset.x, data_subset.y)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
batch_size = 8 # sample mini-batches of size eight for each step of gradient descent
Explanation: (Introduction to Tensorflow) * 10^6
In this notebook, we modify the tensor-fied intro to TensorFlow notebook to use placeholder tensors and feed in data from a data set of millions of points. This is a derivation of Jared Ostmeyer's Naked Tensor code.
End of explanation
xs_placeholder = tf.placeholder(tf.float32, [batch_size])
ys_placeholder = tf.placeholder(tf.float32, [batch_size])
Explanation: Define placeholder tensors of length batch_size whose values will be filled in during graph execution
End of explanation
ys_model = m*xs_placeholder+b
total_error = tf.reduce_sum((ys_placeholder-ys_model)**2)
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error) # demo 0.01, 0.0001
initializer_operation = tf.global_variables_initializer()
Explanation: Define graph that incorporates placeholders
End of explanation
with tf.Session() as session:
session.run(initializer_operation)
n_batches = 1000 # 10, then 1000
for iteration in range(n_batches):
random_indices = np.random.randint(len(xs), size=batch_size) # sample the batch by random selection
feed = { # feeds are dictionaries
xs_placeholder: xs[random_indices],
ys_placeholder: ys[random_indices]
}
session.run(optimizer_operation, feed_dict=feed) # minimize cost with the mini-batch
slope, intercept = session.run([m, b])
slope
intercept
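# The recovered slope and intercept should be close to the true values used to generate ys (m = 0.3, b = -0.8).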
Explanation: Sample from the full data set while running the session
End of explanation |
13,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 6
Step1: 2. a contourf map (of the first timestep) on a LambertConformal projection (with coastlines)
Step2: 3. a block plot (pcolormesh) map (of the first timestep) in its native projection (with coastlines)
Step3: 4. a line plot showing forecast_period vs air_temperature for the first latitude and longitude points (hint
Step4: 5. a scatter plot showing longitude vs air_temperature for the first time and latitude points | Python Code:
qplt.contour(cube[:, 0])
plt.show()
Explanation: Exercise 6: Use the a1b sample data ('A1B_north_america.nc'), with appropriate slicing, to produce the following:
1. a contour plot of longitude vs time
End of explanation
import cartopy.crs as ccrs
ax = plt.axes(projection=ccrs.LambertConformal())
ax.coastlines()
iplt.contourf(cube[0])
plt.show()
Explanation: 2. a contourf map (of the first timestep) on a LambertConformal projection (with coastlines)
End of explanation
iplt.pcolormesh(cube[0])
plt.gca().coastlines()
plt.show()
Explanation: 3. a block plot (pcolormesh) map (of the first timestep) in its native projection (with coastlines)
End of explanation
series = cube[:, 0, 0]
qplt.plot(series.coord('forecast_period'), series)
plt.show()
Explanation: 4. a line plot showing forecast_period vs air_temperature for the first latitude and longitude points (hint: plot accepts two arguments for the x and y axes)
End of explanation
lon_slice = cube[0, 20, :]
qplt.scatter(lon_slice.coord('longitude'), lon_slice)
plt.show()
Explanation: 5. a scatter plot showing longitude vs air_temperature for the first time and latitude points
End of explanation |
13,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Secchi Disk
Overview
More information about the Secchi disk can be found at
Step1: Step 1
Step2: Rotation of the image for an angle of t
Image Thresholding
Step3: Edge Detection
Edge detection using Canny edge detection algorithm
Step4: Adaptive Threshold
Step5: Black and white | Python Code:
import os, sys
from os.path import expanduser
os.path
home = expanduser("~")
sys.path.append('/usr/local/Cellar/opencv/3.3.1_1/lib/python3.6/site-packages/')
sys.path.append(home + '/.pyenv/versions/OPENCV/lib/python3.6/site-packages/')
import cv2
cv2.__version__
! pip install numpy > tmp.log
! pip install matplotlib >> tmp.log
%matplotlib inline
Explanation: Secchi Disk
Overview
More information about the Secchi disk can be found at:
https://en.wikipedia.org/wiki/Secchi_disk
Figure: Different kinds of Secchi disks. A marine style on the left and the freshwater version on the right [wikipedia]
Setup for OSX
First let's set up the environment for OSX
End of explanation
import cv2
import matplotlib.pyplot as plt
images = []
for i in range(1403, 1420):
images.append(cv2.imread('IMG_' + str(i) + '.jpg'))
n = len(images)
figures = []
fig = plt.figure(figsize=(18, 16))
for i in range(1,3*n+1):
figures.append(fig.add_subplot(n,3,i))
count = 0
for img in images:
figures[count].imshow(img)
color = ('b','g','r')
for i,col in enumerate(color):
histr = cv2.calcHist([img],[i],None,[256],[0,256])
figures[count+1].plot(histr,color = col)
figures[count+2].hist(img.ravel(),256,[0,256])
count += 3
print("Legend")
print("First column = image of Secchi disk")
print("Second column = histogram of colors in image")
print("Third column = histogram of all values")
plt.show()
Explanation: Step 1: Record the video
Record the video on the robot
Step 2: Analyse the images from the Video
For now we just selected 4 images from the video
End of explanation
# The following cells work with four individual frames (img1..img4), which are not defined
# in this excerpt, so take the first four images loaded above.
img1, img2, img3, img4 = images[0], images[1], images[2], images[3]

def threshold(img):
    ret, thresh = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)
    plt.subplot(1, 2, 1), plt.imshow(img, cmap='gray')
    plt.subplot(1, 2, 2), plt.imshow(thresh, cmap='gray')

threshold(img1)
threshold(img2)
threshold(img3)
threshold(img4)
Explanation: Rotation of the image for an angle of t
Image Thresholding
End of explanation
def find_edge(img):
edges = cv2.Canny(img,50,200)
plt.subplot(121),plt.imshow(img,cmap = 'gray')
plt.subplot(122),plt.imshow(edges,cmap = 'gray')
find_edge(img1)
find_edge(img2)
find_edge(img3)
find_edge(img4)
Explanation: Edge Detection
Edge detection using Canny edge detection algorithm
End of explanation
# adaptiveThreshold expects a single-channel 8-bit image and an odd block size
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
th = cv2.adaptiveThreshold(gray1, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 151, 5)
plt.subplot(121),plt.imshow(gray1,cmap = 'gray')
plt.subplot(122),plt.imshow(th,cmap = 'gray')
Explanation: Adaptive Threshold
End of explanation
bw1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
plt.imshow(bw1, cmap='gray')
Explanation: Black and white
End of explanation |
13,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
2. Mathematical Groundwork
Previous
Step1: Import section specific modules | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.11 Least-squares Minimization
Next: 2.13 Spherical Trigonometry
Import standard modules:
End of explanation
pass
Explanation: Import section specific modules:
End of explanation |
13,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Computing x**n as fast as possible
This is a classic job-interview exercise. It requires knowing what dichotomy (repeated halving) is and how a number is written in binary.
Step1: Problem statement
Since $n$ is an integer, the simplest way is to compute $x \times x \times ... \times x$, but is there anything faster than that?
Solution
The starting idea is to write $x^{2n}=(x^n)^2$. Extrapolating, we deduce that if $n=2^k$, then computing $x^n$ costs only $k$ iterations instead of $2^k$.
Step2: When $n$ is not a power of 2, it suffices to decompose it in binary form. If $n = \sum_k a_k 2^k$, with $a_k \in \{0,1\}$, then $x^n = \prod_k x^{a_k 2^k}$. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Computing x**n as fast as possible
This is a classic job-interview exercise. It requires knowing what dichotomy (repeated halving) is and how a number is written in binary.
End of explanation
def puissance2k(x,k):
while k > 0 :
x *= x
k -= 1
return x
for i in range(0,4) :
print ( "2^(2^{0})=2^{1}={2}".format( i, 2**i, puissance2k ( 2, i ) ) )
Explanation: Problem statement
Since $n$ is an integer, the simplest way is to compute $x \times x \times ... \times x$, but is there anything faster than that?
Solution
The starting idea is to write $x^{2n}=(x^n)^2$. Extrapolating, we deduce that if $n=2^k$, then computing $x^n$ costs only $k$ iterations instead of $2^k$.
End of explanation
def puissance(x,n):
r = 1
while n > 0 :
if n % 2 == 1 : r *= x
x *= x
n //= 2
return r
for i in range(0,9) :
print("2^{0}={1}".format(i, puissance( 2, i)))
Explanation: When $n$ is not a power of 2, it suffices to decompose it in binary form. If $n = \sum_k a_k 2^k$, with $a_k \in \{0,1\}$, then $x^n = \prod_k x^{a_k 2^k}$.
End of explanation |
13,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Fusion
Step1: Read in Data
Using GeoData we first have to get the data into the proper format. This example uses RISR data retrieved from Madrigal, which can be read in by using the readMad_hdf5 function. This will create the GeoData instance labeled as risr. The OMTI file is read in using the readOMTI function, which creates the GeoData instance named omti. Lastly, to get linearly scaled electron density we use the changedata method from GeoData.
Step2: Time Register Data
The next step once the data has been read into memory is registering the data in time and space. The method timeregister will find all of the instances where the data from risr instance overlap with data from omti. The data is then reduced to only the instances where there are common times between omti and risr. The other times are tossed out so when we interpolate we don't have to interpolate data we have no use for.
Step3: Interpolation
The reduced data is now interpolated into a common Cartesian frame. The method interpolate performs changes the coordinate systems, and interpolates the data and changes the instance of the class to the new coordinate system. For the omti instance we only interpolate in two dimensions as we only have azimuth and elevation info. We pick an altitude to interpolate over and file any gaps with NaNs. The risr data is in three dimensions as we have range information for it. Currently the fastest method of interpolation is the linear interpolation method.
Step4: Plotting
We plot the data number of different ways. First we use mayavi to create a 3-D reconstruction of the radar data with the OMTI data as a slice under it. Then we take a single altitude slice and plot it in the next subplot. The subplot on the bottom left shows the OMTI but with altitude slice of radar data over it as a contour plot. Lastly the bottom right hand plot shows the beam positions of the radar. Each of these plots have their own plotting function and can be used if you're using GeoData, assuming the proper coordinate systems are being used. For the most part it is required to first interpolate the data into a Cartisian frame as this is a common frame to help understand the physical processes taking place. | Python Code:
%matplotlib inline
import matplotlib
from __future__ import division,print_function
import logging
import pdb
import os
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import numpy as np
# Import Geodata modules
from GeoData import GeoData
from GeoData.plotting import slice2DGD,plot3Dslicempl, plotbeamposGD, insertinfo, contourGD
from GeoData import utilityfuncs
Explanation: Data Fusion: Radar and Optical Systems
In this example data from two sources will be combined. This example will use data from RISR that was retrieved through Madrigal and the Optical Mesosphere and Thermosphere Imager (OMTI) all-sky imager [Shiokawa et al., 1999].
End of explanation
def revpower(x1,x2):
return x2**x1
omtiName = 'OMTIdata.h5'
risrName = 'ran120219.004.hdf5'
isrparams=['nel']
#creating GeoData objects of the 2 files, given a specific parameter
omti = GeoData.GeoData(utilityfuncs.readOMTI,(omtiName, ['optical']))
risr = GeoData.GeoData(utilityfuncs.readMad_hdf5,(risrName, isrparams))
#converting logarthmic electron density (nel) array into electron density (ne) array
risr.changedata('nel','ne',revpower,[10.0])
Explanation: Read in Data
Using GeoData we first have to get the data into the proper format. This example uses RISR data retrieved from Madrigal, which can be read in by using the readMad_hdf5 function. This will create the GeoData instance labeled as risr. The OMTI file is read in using the readOMTI function, which creates the GeoData instance named omti. Lastly, to get linearly scaled electron density we use the changedata method from GeoData.
End of explanation
reglistlist = omti.timeregister(risr)
keepomti = [i for i in range(len(reglistlist)) if len(reglistlist[i])>0]
reglist = list(set(np.concatenate(reglistlist)))
# Reduce the size of the GeoData Instances.
risr_classred =risr.timeslice(reglist,'Array')
omti = omti.timeslice(keepomti,'Array')
# Re-register in time
reglistfinal = omti.timeregister(risr_classred)
Explanation: Time Register Data
The next step once the data has been read into memory is registering the data in time and space. The method timeregister will find all of the instances where the data from risr instance overlap with data from omti. The data is then reduced to only the instances where there are common times between omti and risr. The other times are tossed out so when we interpolate we don't have to interpolate data we have no use for.
End of explanation
xvec,yvec,zvec = [np.linspace(-100.0,500.0,25),np.linspace(0.0,600.0,25),np.linspace(100.0,500.0,25)]
x,y,z = np.meshgrid(xvec,yvec,zvec)
x2d,y2d = np.meshgrid(xvec,yvec)
new_coords =np.column_stack((x.ravel(),y.ravel(),z.ravel()))
new_coords2 = np.column_stack((x2d.ravel(),y2d.ravel(),140.0*np.ones(y2d.size)))
#%% interpolate risr data
risr_classred.interpolate(new_coords, newcoordname='Cartesian', method='linear', fill_value=np.nan)
#%% interpolate omti data
omti.interpolate(new_coords2, newcoordname='Cartesian', twodinterp = True,method='linear', fill_value=np.nan)
Explanation: Interpolation
The reduced data is now interpolated into a common Cartesian frame. The method interpolate performs changes the coordinate systems, and interpolates the data and changes the instance of the class to the new coordinate system. For the omti instance we only interpolate in two dimensions as we only have azimuth and elevation info. We pick an altitude to interpolate over and file any gaps with NaNs. The risr data is in three dimensions as we have range information for it. Currently the fastest method of interpolation is the linear interpolation method.
End of explanation
omtitime = 14
ilist = reglistfinal[omtitime]
risrtime = ilist[0]
omtislices = [[],[],[140]]
risrslices = [[100],[300],[]]
vbounds = [[200,800],[5e10,5e11]]
figmplf = plt.figure(figsize=(16,12), facecolor='w')
ax1 = figmplf.add_subplot(221, projection='3d')
ax2 = figmplf.add_subplot(222)
ax3 = figmplf.add_subplot(223)
ax4 = figmplf.add_subplot(224)
try:
slice_list = plot3Dslicempl(omti, omtislices, vbounds[0], time=omtitime, cmap='gray',
gkey='optical', fig=figmplf,ax=ax1)
slice_list2, cbar1 = plot3Dslicempl(risr_classred, risrslices, vbounds[1], time=risrtime, cmap='viridis',
gkey='ne', fig=figmplf, ax=ax1,units='m^{-3}', colorbar=True, view=[30,50])
titlestr1 = '$N_e$ and OMTI at $thm'
newtitle = insertinfo(titlestr1, '', risr_classred.times[risrtime, 0], risr_classred.times[risrtime, 1])
ax1.set_title(newtitle+ '\n\n')
cbar1.set_label('$N_e$ in $m^{-3}$')
except Exception as e:
logging.error('trouble with 3D plot {}'.format(e))
(slice2,cbar2) = slice2DGD(risr_classred,'z',300,vbounds[1],title='$N_e$ at $thm',
time = risrtime,gkey = 'ne',fig=figmplf,ax=ax2)
cbar2.set_label('$N_e$ in $m^{-3}$')
(slice3,cbar3) = slice2DGD(omti,'z',omtislices[-1][0],vbounds[0],title='OMTI at $thm',
time = omtitime,cmap='Greys',gkey = 'optical',fig=figmplf,ax=ax3,cbar=False)
plt.hold(True)
(slice4,cbar4) = contourGD(risr_classred,'z',300,vbounds[1],title='$N_e$ at $thm',
time = risrtime,gkey = 'ne',fig=figmplf,ax=ax3)
cbar4.set_label('$N_e$ in $m^{-3}$')
ax4 = plt.subplot(2,2,4,polar=True)
bmpos = plotbeamposGD(risr,fig=figmplf,ax=ax4)
plt.tight_layout()
Explanation: Plotting
We plot the data in a number of different ways. First we use mayavi to create a 3-D reconstruction of the radar data with the OMTI data as a slice under it. Then we take a single altitude slice and plot it in the next subplot. The subplot on the bottom left shows the OMTI data but with an altitude slice of radar data over it as a contour plot. Lastly, the bottom right hand plot shows the beam positions of the radar. Each of these plots has its own plotting function and can be used if you're using GeoData, assuming the proper coordinate systems are being used. For the most part it is required to first interpolate the data into a Cartesian frame as this is a common frame to help understand the physical processes taking place.
End of explanation |
13,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Robust Linear Regression
This example has been lifted from the PyMC Docs, and adapted to for Bambi by Tyler James Burch (\@tjburch on GitHub).
Many toy datasets circumvent problems that practitioners run into with real data. Specifically, the assumption of normality can be easily violated by outliers, which can cause havoc in traditional linear regression. One way to navigate this is through robust linear regression, outlined in this example.
First load modules and set the RNG for reproducibility.
Step1: Next, generate pseudodata. The bulk of the data will be linear with noise distributed normally, but additionally several outliers will be interjected.
Step2: Plot this data. The three data points in the top left are the interjected data.
Step3: To highlight the problem, first fit a standard normally-distributed linear regression.
Step4: Remember, the true intercept was 1, the true slope was 2. The recovered intercept is much higher, and the slope is much lower, so the influence of the outliers is apparent.
Visually, looking at the recovered regression line and posterior predictive HDI highlights the problem further.
Step5: The recovered regression line, as well as the $0.5\sigma$ and $1\sigma$ bands are shown.
Clearly there is skew in the fit. At lower $x$ values, the regression line is far higher than the true line. This is a result of the outliers, which cause the model to assume a higher value in that regime.
Additionally the uncertainty bands are too wide (remember, the $1\sigma$ band ought to cover 68% of the data, while here it covers most of the points). Due to the small probability mass in the tails of a normal distribution, the outliers have an large effect, causing the uncertainty bands to be oversized.
Clearly, assuming the data are distributed normally is inducing problems here. Bayesian robust linear regression forgoes the normality assumption by instead using a Student T distribution to describe the distribution of the data. The Student T distribution has thicker tails, and by allocating more probability mass to the tails, outliers have a less strong effect.
Comparing the two distributions,
Step6: As we can see, the tails of the Student T are much larger, which means values far from the mean are more likely when compared to the normal distribution.
The T distribution is specified by a number of degrees of freedom ($\nu$). In numpy.random.standard_t this is the parameter df, in the pymc T distribution, it's nu. It is constrained to real numbers greater than 0. As the degrees of freedom increase, the probability in the tails Student T distribution decrease. In the limit of $\nu \rightarrow + \infty$, the Student T distribution is a normal distribution. Below, the T distribution is plotted for various $\nu$.
Step7: In Bambi, the way to specify a regression with Student T distributed data is by passing "t" to the family parameter of a Model.
Step8: Note the new parameter in the model, y_nu. This is the aforementioned degrees of freedom. If this number were very high, we would expect it to be well described by a normal distribution. However, the HDI of this spans from 1.5 to 3.7, meaning that the tails are much heavier than a normal distribution. As a result of the heavier tails, y_sigma has also dropped precipitously from the normal model, meaning the oversized uncertainty bands from above have shrunk.
Comparing the extracted values of the two models,
Step9: Here we can see the mean recovered values of both the slope and intercept are far closer to the true values using the robust regression model compared to the normally distributed one.
Visually comparing the robust regression line,
Step10: This is much better. The true and recovered regression lines are much closer, and the uncertainty bands are appropriate sized. The effect of the outliers is not entirely gone, the recovered line still slightly differs from the true line, but the effect is far smaller, which is a result of the Student T likelihood function ascribing a higher probability to outliers than the normal distribution. Additionally, this inference is based on sampling methods, so it is expected to have small differences (especially given a relatively small number of samples).
Last, another way to evaluate the models is to compare based on Leave-one-out Cross-validation (LOO), which provides an estimate of accuracy on out-of-sample predictions.
Step11: Here it is quite obvious that the Student T model is much better, due to having a clearly larger value of LOO. | Python Code:
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
az.style.use("arviz-darkgrid")
np.random.seed(1111)
Explanation: Robust Linear Regression
This example has been lifted from the PyMC Docs, and adapted for Bambi by Tyler James Burch (\@tjburch on GitHub).
Many toy datasets circumvent problems that practitioners run into with real data. Specifically, the assumption of normality can be easily violated by outliers, which can cause havoc in traditional linear regression. One way to navigate this is through robust linear regression, outlined in this example.
First load modules and set the RNG for reproducibility.
End of explanation
size = 100
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=0.5, size=size)
# Add outliers
x_out = np.append(x, [0.1, 0.15, 0.2])
y_out = np.append(y, [8, 6, 9])
data = pd.DataFrame({
"x": x_out,
"y": y_out
})
Explanation: Next, generate pseudodata. The bulk of the data will be linear with noise distributed normally, but additionally several outliers will be interjected.
End of explanation
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel="x", ylabel="y", title="Generated data and underlying model")
ax.plot(x_out, y_out, "x", label="sampled data")
ax.plot(x, true_regression_line, label="true regression line", lw=2.0)
plt.legend(loc=0);
Explanation: Plot this data. The three data points in the top left are the interjected data.
End of explanation
# Note, "gaussian" is the default argument for family. Added to be explicit.
gauss_model = bmb.Model("y ~ x", data, family="gaussian")
gauss_fitted = gauss_model.fit(draws=2000)
gauss_model.predict(gauss_fitted, kind="pps", draws=1000)
az.summary(gauss_fitted)
Explanation: To highlight the problem, first fit a standard normally-distributed linear regression.
End of explanation
plt.figure(figsize=(7, 5))
# Plot Data
plt.plot(x_out, y_out, "x", label="data")
# Plot recovered linear regression
x_range = np.linspace(min(x_out), max(x_out), 2000)
y_pred = gauss_fitted.posterior.x.values.mean() * x_range + gauss_fitted.posterior.Intercept.values.mean()
plt.plot(x_range, y_pred,
color="black",linestyle="--",
label="Recovered regression line"
)
# Plot HDIs
for interval in [0.38, 0.68]:
az.plot_hdi(x_out, gauss_fitted.posterior_predictive.y,
hdi_prob=interval, color="firebrick")
# Plot true regression line
plt.plot(x, true_regression_line,
label="True regression line", lw=2.0, color="black")
plt.legend(loc=0);
Explanation: Remember, the true intercept was 1, the true slope was 2. The recovered intercept is much higher, and the slope is much lower, so the influence of the outliers is apparent.
Visually, looking at the recovered regression line and posterior predictive HDI highlights the problem further.
End of explanation
normal_data = np.random.normal(loc=0, scale=1, size=100_000)
t_data = np.random.standard_t(df=1, size=100_000)
bins = np.arange(-8,8,0.15)
plt.hist(normal_data,
bins=bins, density=True,
alpha=0.6,
label="Normal"
)
plt.hist(t_data,
bins=bins,density=True,
alpha=0.6,
label="Student T"
)
plt.xlabel("x")
plt.ylabel("Probability density")
plt.xlim(-8,8)
plt.legend();
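# Added illustration: the heavier tails can also be seen numerically, e.g. the fraction of
# draws with |x| > 3 under each distribution.
print("P(|x| > 3), normal:", np.mean(np.abs(normal_data) > 3))
print("P(|x| > 3), Student T (df=1):", np.mean(np.abs(t_data) > 3))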
Explanation: The recovered regression line, as well as the $0.5\sigma$ and $1\sigma$ bands are shown.
Clearly there is skew in the fit. At lower $x$ values, the regression line is far higher than the true line. This is a result of the outliers, which cause the model to assume a higher value in that regime.
Additionally the uncertainty bands are too wide (remember, the $1\sigma$ band ought to cover 68% of the data, while here it covers most of the points). Due to the small probability mass in the tails of a normal distribution, the outliers have an large effect, causing the uncertainty bands to be oversized.
Clearly, assuming the data are distributed normally is inducing problems here. Bayesian robust linear regression forgoes the normality assumption by instead using a Student T distribution to describe the distribution of the data. The Student T distribution has thicker tails, and by allocating more probability mass to the tails, outliers have a less strong effect.
Comparing the two distributions,
End of explanation
bins = np.arange(-8,8,0.15)
for ndof in [0.1, 1, 10]:
t_data = np.random.standard_t(df=ndof, size=100_000)
plt.hist(t_data,
bins=bins,density=True,
label=f"$\\nu = {ndof}$",
histtype="step"
)
plt.hist(normal_data,
bins=bins, density=True,
histtype="step",
label="Normal"
)
plt.xlabel("x")
plt.ylabel("Probability density")
plt.xlim(-6,6)
plt.legend();
Explanation: As we can see, the tails of the Student T are much larger, which means values far from the mean are more likely when compared to the normal distribution.
The T distribution is specified by a number of degrees of freedom ($\nu$). In numpy.random.standard_t this is the parameter df, in the pymc T distribution, it's nu. It is constrained to real numbers greater than 0. As the degrees of freedom increase, the probability in the tails Student T distribution decrease. In the limit of $\nu \rightarrow + \infty$, the Student T distribution is a normal distribution. Below, the T distribution is plotted for various $\nu$.
End of explanation
t_model = bmb.Model("y ~ x", data, family="t")
t_fitted = t_model.fit(draws=2000)
t_model.predict(t_fitted, kind="pps", draws=100)
az.summary(t_fitted)
Explanation: In Bambi, the way to specify a regression with Student T distributed data is by passing "t" to the family parameter of a Model.
End of explanation
def get_slope_intercept(mod):
return (
mod.posterior.x.values.mean(),
mod.posterior.Intercept.values.mean()
)
gauss_slope, gauss_int = get_slope_intercept(gauss_fitted)
t_slope, t_int = get_slope_intercept(t_fitted)
pd.DataFrame({
"Model":["True","Normal","T"],
"Slope":[2, gauss_slope, t_slope],
"Intercept": [1, gauss_int, t_int]
}).set_index("Model").T.round(decimals=2)
Explanation: Note the new parameter in the model, y_nu. This is the aforementioned degrees of freedom. If this number were very high, we would expect it to be well described by a normal distribution. However, the HDI of this spans from 1.5 to 3.7, meaning that the tails are much heavier than a normal distribution. As a result of the heavier tails, y_sigma has also dropped precipitously from the normal model, meaning the oversized uncertainty bands from above have shrunk.
Comparing the extracted values of the two models,
End of explanation
plt.figure(figsize=(7, 5))
# Plot Data
plt.plot(x_out, y_out, "x", label="data")
# Plot recovered robust linear regression
x_range = np.linspace(min(x_out), max(x_out), 2000)
y_pred = t_fitted.posterior.x.values.mean() * x_range + t_fitted.posterior.Intercept.values.mean()
plt.plot(x_range, y_pred,
color="black",linestyle="--",
label="Recovered regression line"
)
# Plot HDIs
for interval in [0.05, 0.38, 0.68]:
az.plot_hdi(x_out, t_fitted.posterior_predictive.y,
hdi_prob=interval, color="firebrick")
# Plot true regression line
plt.plot(x, true_regression_line,
label="true regression line", lw=2.0, color="black")
plt.legend(loc=0);
Explanation: Here we can see the mean recovered values of both the slope and intercept are far closer to the true values using the robust regression model compared to the normally distributed one.
Visually comparing the robust regression line,
End of explanation
models = {
"gaussian": gauss_fitted,
"Student T": t_fitted
}
df_compare = az.compare(models)
df_compare
az.plot_compare(df_compare, insample_dev=False);
Explanation: This is much better. The true and recovered regression lines are much closer, and the uncertainty bands are appropriate sized. The effect of the outliers is not entirely gone, the recovered line still slightly differs from the true line, but the effect is far smaller, which is a result of the Student T likelihood function ascribing a higher probability to outliers than the normal distribution. Additionally, this inference is based on sampling methods, so it is expected to have small differences (especially given a relatively small number of samples).
Last, another way to evaluate the models is to compare based on Leave-one-out Cross-validation (LOO), which provides an estimate of accuracy on out-of-sample predictions.
End of explanation
%load_ext watermark
%watermark -n -u -v -iv -w
Explanation: Here it is quite obvious that the Student T model is much better, due to having a clearly larger value of LOO.
End of explanation |
13,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We create two linguistic variables, temperature and humidity. Each of these variables will contain five fuzzy sets.
Step1: We create the output linguistic variable for the boiler, with its fuzzy sets.
Step2: Finally, we add a rule block that relates the inputs to the outputs. | Python Code:
# Temperature sensor (range 0 to 40)
temperature = lvars.InputLVar('temperature', (10, 40))
temperature['MB'] = mfs.LineDescMF(10, 15)
temperature['B'] = mfs.TriMF(10, 18, 20)
temperature['N'] = mfs.TriMF(18, 20, 25)
temperature['A'] = mfs.TriMF(20, 25, 30)
temperature['MA'] = mfs.LineAscMF(25, 30)
# Humidity sensor (range 0% to 100%)
humidity = lvars.InputLVar('humidity', (0, 100))
humidity['MB'] = mfs.LineDescMF(10, 20)
humidity['B'] = mfs.TriMF(10, 25, 40)
humidity['N'] = mfs.TriMF(30, 40, 50)
humidity['A'] = mfs.TriMF(40, 55, 70)
humidity['MA'] = mfs.LineAscMF(60, 70)
plot_lvar([temperature, humidity])
Explanation: We create two linguistic variables, temperature and humidity. Each of these variables will contain five fuzzy sets.
End of explanation
boiler = lvars.OutputLVar('boiler', domain=(-20, 20), defuzz=defuzz.CoG(resolution=5))
boiler['BG'] = mfs.TriMF(-15, -10, -7.5)
boiler['BN'] = mfs.TriMF(-10, -5, -2.5)
boiler['BP'] = mfs.TriMF(-7.5, -2.5, 0)
boiler['M'] = mfs.TriMF(-1, 0, 1)
boiler['SP'] = mfs.TriMF(0, 2.5, 7.5)
boiler['SN'] = mfs.TriMF(2.5, 5, 10)
boiler['SG'] = mfs.TriMF(7.5, 10, 15)
plot_lvar(boiler)
Explanation: We create the output linguistic variable for the boiler, with its fuzzy sets.
End of explanation
rb = rules.RuleBlock(
and_op=operators.Minimum(),
or_op=operators.Maximum(),
not_op=operators.Zadeh(),
agg_op=operators.Minimum(),
acc_op=operators.Maximum()
)
rb[1] = 'if temperature is MB and humidity is MB then boiler is SN'
rb[2] = 'if temperature is MB and humidity is B then boiler is SN'
rb[3] = 'if temperature is MB and humidity is N then boiler is SG'
rb[4] = 'if temperature is MB and humidity is A then boiler is SG'
rb[5] = 'if temperature is MB and humidity is MA then boiler is SG'
rb[6] = 'if temperature is B and humidity is MB then boiler is M'
rb[7] = 'if temperature is B and humidity is B then boiler is M'
rb[8] = 'if temperature is B and humidity is N then boiler is SP'
rb[9] = 'if temperature is B and humidity is A then boiler is SP'
rb[10] = 'if temperature is B and humidity is MA then boiler is SN'
rb[11] = 'if temperature is N and humidity is MB then boiler is M'
rb[12] = 'if temperature is N and humidity is B then boiler is M'
rb[13] = 'if temperature is N and humidity is N then boiler is M'
rb[14] = 'if temperature is N and humidity is A then boiler is M'
rb[15] = 'if temperature is N and humidity is MA then boiler is BP'
rb[16] = 'if temperature is A and humidity is MB then boiler is M'
rb[17] = 'if temperature is A and humidity is B then boiler is M'
rb[18] = 'if temperature is A and humidity is N then boiler is BP'
rb[19] = 'if temperature is A and humidity is A then boiler is BP'
rb[20] = 'if temperature is A and humidity is MA then boiler is BN'
rb[21] = 'if temperature is MA and humidity is MB then boiler is BP'
rb[22] = 'if temperature is MA and humidity is B then boiler is BN'
rb[23] = 'if temperature is MA and humidity is N then boiler is BN'
rb[24] = 'if temperature is MA and humidity is A then boiler is BG'
rb[25] = 'if temperature is MA and humidity is MA then boiler is BG'
fc = controller.FuzzyController([temperature, humidity], [boiler], rb)
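# Added illustration: a single crisp evaluation of the controller (input values chosen arbitrarily).
print(fc.eval({'temperature': 22.0, 'humidity': 45.0})['boiler'])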
from mpl_toolkits.mplot3d import Axes3D
points = []
for x in frange(*reversed(temperature.domain)):
for y in frange(*reversed(humidity.domain)):
points.append((x, y, fc.eval({
'temperature': x,
'humidity': y,
})['boiler']))
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.invert_yaxis()
ax.plot_trisurf(
[p[0] for p in points],
[p[1] for p in points],
[p[2] for p in points],
cmap=plt.cm.jet,
linewidth=0.1
)
Explanation: Finally, we add a rule block that relates the inputs to the outputs.
End of explanation |
13,181 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Is it possible in PyTorch to change the learning rate of the optimizer in the middle of training dynamically (I don't want to define a learning rate schedule beforehand)? | Problem:
import numpy as np
import pandas as pd
import torch
optim = load_data()
for param_group in optim.param_groups:
param_group['lr'] = 0.0005 |
13,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Initialization of setup
Step2: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the density $\rho$. Then perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect what it looks like.
Step3: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemental level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the shear modulus $\mu$. Then perform the global assembly of the stiffness matrix and display it to visually inspect what it looks like.
Step4: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation} | Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from ricker import ricker
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Spectral Element Method - 1D Elastic Wave Equation</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
using the spectral element method. This is done after a series of steps summarized as follows:
1) The wave equation is written into its Weak form
2) Apply the stress-free boundary condition after integration by parts
3) Approximate the wave field as a linear combination of some basis
\begin{equation}
u(x,t) \ \approx \ \overline{u}(x,t) \ = \ \sum_{i=1}^{n} u_i(t) \ \varphi_i(x)
\end{equation}
4) Use the same basis functions in $u(x, t)$ as test functions in the weak form, the so-called Galerkin principle.
6) The continuous weak form is written as a system of linear equations by considering the approximated displacement field.
\begin{equation}
\mathbf{M}^T\partial_t^2 \mathbf{u} + \mathbf{K}^T\mathbf{u} = \mathbf{f}
\end{equation}
7) Time extrapolation with centered finite differences scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
where $\mathbf{M}$ is known as the mass matrix, and $\mathbf{K}$ the stiffness matrix.
The above solution is exactly the same as the one presented for the classic finite-element method. Now we introduce appropriate basis functions and an integration scheme to efficiently solve the system of matrices.
Interpolation with Lagrange Polynomials
At the elemental level (see section 7.4), we introduce as interpolating functions the Lagrange polynomials and use $\xi$ as the space variable representing our elemental domain:
\begin{equation}
\varphi_i \ \rightarrow \ \ell_i^{(N)} (\xi) \ := \ \prod_{j \neq i}^{N+1} \frac{\xi - \xi_j}{\xi_i-\xi_j}, \qquad i,j = 1, 2, \dotsc , N + 1
\end{equation}
Numerical Integration
The integral of a continuous function $f(x)$ can be calculated after replacing $f(x)$ by a polynomial approximation that can be integrated analytically. As interpolating functions we use again the Lagrange polynomials and
obtain Gauss-Lobatto-Legendre quadrature. Here, the GLL points are used to perform the integral.
\begin{equation}
\int_{-1}^1 f(x) \ dx \approx \int_{-1}^{1} P_N(x) \ dx = \sum_{i=1}^{N+1} w_i f(x_i)
\end{equation}
End of explanation
# Initialization of setup
# ---------------------------------------------------------------
nt = 10000 # number of time steps
xmax = 10000. # Length of domain [m]
vs = 2500. # S velocity [m/s]
rho = 2000 # Density [kg/m^3]
mu = rho * vs**2 # Shear modulus mu
N = 3 # Order of Lagrange polynomials
ne = 250 # Number of elements
Tdom = .2 # Dominant period of Ricker source wavelet
iplot = 20 # Plotting each iplot snapshot
# variables for elemental matrices
Me = np.zeros(N+1, dtype = float)
Ke = np.zeros((N+1, N+1), dtype = float)
# ----------------------------------------------------------------
# Initialization of GLL points integration weights
[xi, w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points
# w Integration weights at GLL locations
# Space domain
le = xmax/ne # Length of elements
# Vector with GLL points
k = 0
xg = np.zeros((N*ne)+1)
xg[k] = 0
for i in range(1,ne+1):
for j in range(0,N):
k = k+1
xg[k] = (i-1)*le + .5*(xi[j+1]+1)*le
# ---------------------------------------------------------------
dxmin = min(np.diff(xg))
eps = 0.1 # Courant value
dt = eps*dxmin/vs # Global time step
# Mapping - Jacobian
J = le/2
Ji = 1/J # Inverse Jacobian
# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N) # Array with GLL as columns for each N+1 polynomial
Explanation: 1. Initialization of setup
End of explanation
# Elemental Mass matrix
# ---------------------------------------------------------------
for i in range(0, N+1):
Me[i] = rho * w[i] * J #stored as a vector since it's diagonal
# Global Mass matrix
# ---------------------------------------------------------------
k = -1
ng = (ne-1)*N + N + 1
M = np.zeros(2*ng)
for i in range(1, ne+1):
for j in range(0, N+1):
k = k + 1
if i>1:
if j==0:
k = k - 1
M[k] = M[k] + Me[j]
# Inverse matrix of M
# ---------------------------------------------------------------
Minv = np.identity(ng)
for i in range(0,ng):
Minv[i,i] = 1./M[i]
# ---------------------------------------------------------------
# Display inverse mass matrix inv(M)
# ---------------------------------------------------------------
plt.imshow(Minv)
plt.title('Mass Matrix $\mathbf{M}$')
plt.axis("off")
plt.tight_layout()
plt.show()
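# Optional consistency check (added, not in the original): the inverse times the assembled
# diagonal mass matrix should recover the identity.
assert np.allclose(Minv @ np.diag(M[:ng]), np.identity(ng))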
Explanation: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the density $\rho$. Then perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect what it looks like.
End of explanation
# Elemental Stiffness Matrix
# ---------------------------------------------------------------
for i in range(0, N+1):
for j in range(0, N+1):
for k in range(0, N+1):
Ke[i,j] = Ke[i,j] + mu*w[k]*Ji**2 *J*l1d[i,k]*l1d[j,k]
# Global Stiffness Matrix
# ---------------------------------------------------------------
K = np.zeros([ng, ng])
# Values except at element boundaries
for k in range(1,ne+1):
i0 = (k-1)*N + 1
j0 = i0
for i in range(-1,N):
for j in range(-1,N):
K[i0+i,j0+j] = Ke[i+1,j+1]
# Values at element boundaries
for k in range(2,ne+1):
i0 = (k - 1)*N
j0 = i0
K[i0,j0] = Ke[0,0] + Ke[N,N]
# ---------------------------------------------------------------
# Display stiffness matrix K
# ---------------------------------------------------------------
plt.figure()
plt.imshow(K)
plt.title('Stiffness Matrix $\mathbf{K}$')
plt.axis("off")
plt.tight_layout()
plt.show()
Explanation: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemental level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the shear modulus $\mu$. Then perform the global assembly of the stiffness matrix and display it to visually inspect what it looks like.
End of explanation
# SE Solution, Time extrapolation
# ---------------------------------------------------------------
# initialize source time function and force vector f
src = ricker(dt,Tdom)
isrc = int(np.floor(ng/2)) # Source location
# Initialization of solution vectors
u = np.zeros(ng)
uold = u
unew = u
f = u
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
lines = plt.plot(xg, u, lw=1.5)
plt.title('SEM 1D Animation', size=16)
plt.xlabel(' x (m)')
plt.ylabel(' Amplitude ')
plt.ion() # set interective mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
# Source initialization
f= np.zeros(ng)
if it < len(src):
f[isrc-1] = src[it-1]
# Time extrapolation
unew = dt**2 * Minv @ (f - K @ u) + 2 * u - uold
uold, u = u, unew
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in lines:
l.remove()
del l
# --------------------------------------
# Display lines
lines = plt.plot(xg, u, color="black", lw = 1.5)
plt.gcf().canvas.draw()
Explanation: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
End of explanation |
13,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aerospace Design via Quasiconvex Optimization
Consider a triangle, or a wedge, located within a hypersonic flow. A standard aerospace design optimization problem is to design the wedge to maximize the lift-to-drag ratio (L/D) (or conversely minimize the D/L ratio), subject to certain geometric constraints. In this example, the wedge is known to have a constant hypotenuse, and our job is to choose its width and height.
The drag-to-lift ratio is given by
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\mathrm{c_d}}{\mathrm{c_l}},
$$
where $\mathrm{c_d}$ and $\mathrm{c_l}$ are drag and lift coefficients, respectively, that are obtained by integrating the projection of the pressure coefficient in directions parallel to, and perpendicular to, the body.
It turns out that the drag-to-lift ratio is a quasilinear function, as we'll now show. We will assume the pressure coefficient is given by the Newtonian sine-squared law for whetted areas of the body,
$$
\mathrm{c_p} = 2(\hat{v}\cdot\hat{n})^2
$$
and elsewhere $\mathrm{c_p} = 0$. Here, $\hat{v}$ is the free stream direction, which for simplicity we will assume is parallel to the body so that, $\hat{v} = \langle 1, 0 \rangle$, and $\hat{n}$ is the local unit normal. For a wedge defined by width $\Delta x$, and height $\Delta y$,
$$
\hat{n} = \langle -\Delta y/s,-\Delta x/s \rangle
$$
where $s$ is the hypotenuse length. Therefore,
$$
\mathrm{c_p} = 2((1)(-\Delta y/s)+(0)(-\Delta x/s))^2 = \frac{2 \Delta y^2}{s^2}
$$
The lift and drag coefficients are given by
$$
\begin{align}
\mathrm{c_d} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_x \mathrm{d}s \
\mathrm{c_l} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_y \mathrm{d}s
\end{align}
$$
Where $c$ is the reference chord length of the body. Given that $\hat{n}$, and therefore $\mathrm{c_p}$ are constant over the whetted surface of the body,
$$
\begin{align}
\mathrm{c_d} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_x = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta y}{s} \
\mathrm{c_l} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_y = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta x}{s}
\end{align}
$$
Assuming $s=1$, so that $\Delta y = \sqrt{1-\Delta x^2}$, plugging in the above into the equation for $D/L$, we obtain
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\Delta y}{\Delta x} = \frac{\sqrt{1-\Delta x^2}}{\Delta x} = \sqrt{\frac{1}{\Delta x^2}-1}.
$$
This function is representable as a DQCP, quasilinear function. We plot it below, and then we write it using DQCP.
Step1: Minimizing this objective function subject to constraints representing payload requirements is a standard aerospace design problem. In this case we will consider the constraint that the wedge must be able to contain a rectangle of given length and width internally along its hypotenuse. This is representable as a convex constraint.
Step2: Once the solution has been found, we can create a plot to verify that the rectangle is inscribed within the wedge. | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
x = np.linspace(.25,1,num=201)
obj = []
for i in range(len(x)):
    obj.append(math.sqrt(1/x[i]**2-1))
plt.plot(x,obj)
import cvxpy as cp
x = cp.Variable(pos=True)
obj = cp.sqrt(cp.inv_pos(cp.square(x))-1)
print("This objective function is", obj.curvature)
Explanation: Aerospace Design via Quasiconvex Optimization
Consider a triangle, or a wedge, located within a hypersonic flow. A standard aerospace design optimization problem is to design the wedge to maximize the lift-to-drag ratio (L/D) (or conversely minimize the D/L ratio), subject to certain geometric constraints. In this example, the wedge is known to have a constant hypotenuse, and our job is to choose its width and height.
The drag-to-lift ratio is given by
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\mathrm{c_d}}{\mathrm{c_l}},
$$
where $\mathrm{c_d}$ and $\mathrm{c_l}$ are drag and lift coefficients, respectively, that are obtained by integrating the projection of the pressure coefficient in directions parallel to, and perpendicular to, the body.
It turns out that the drag-to-lift ratio is a quasilinear function, as we'll now show. We will assume the pressure coefficient is given by the Newtonian sine-squared law for wetted areas of the body,
$$
\mathrm{c_p} = 2(\hat{v}\cdot\hat{n})^2
$$
and elsewhere $\mathrm{c_p} = 0$. Here, $\hat{v}$ is the free stream direction, which for simplicity we will assume is parallel to the body so that $\hat{v} = \langle 1, 0 \rangle$, and $\hat{n}$ is the local unit normal. For a wedge defined by width $\Delta x$ and height $\Delta y$,
$$
\hat{n} = \langle -\Delta y/s,-\Delta x/s \rangle
$$
where $s$ is the hypotenuse length. Therefore,
$$
\mathrm{c_p} = 2((1)(-\Delta y/s)+(0)(-\Delta x/s))^2 = \frac{2 \Delta y^2}{s^2}
$$
The lift and drag coefficients are given by
$$
\begin{align}
\mathrm{c_d} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_x \mathrm{d}s \\
\mathrm{c_l} &= \frac{1}{c}\int_0^s -\mathrm{c_p}\hat{n}_y \mathrm{d}s
\end{align}
$$
where $c$ is the reference chord length of the body. Given that $\hat{n}$, and therefore $\mathrm{c_p}$, are constant over the wetted surface of the body,
$$
\begin{align}
\mathrm{c_d} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_x = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta y}{s} \\
\mathrm{c_l} &= -\frac{s}{c}\mathrm{c_p}\hat{n}_y = \frac{s}{c}\frac{2 \Delta y^2}{s^2}\frac{\Delta x}{s}
\end{align}
$$
Assuming $s=1$, so that $\Delta y = \sqrt{1-\Delta x^2}$, and substituting the above into the equation for $D/L$, we obtain
$$
\frac{\mathrm{D}}{\mathrm{L}} = \frac{\Delta y}{\Delta x} = \frac{\sqrt{1-\Delta x^2}}{\Delta x} = \sqrt{\frac{1}{\Delta x^2}-1}.
$$
This function is quasilinear, and so can be represented using DQCP. We plot it below, and then we write it using DQCP.
End of explanation
a = .05 # USER INPUT: height of rectangle, should be at most b
b = .65 # USER INPUT: width of rectangle
constraint = [a*cp.inv_pos(x)-(1-b)*cp.sqrt(1-cp.square(x))<=0]
print(constraint)
prob = cp.Problem(cp.Minimize(obj), constraint)
prob.solve(qcp=True, verbose=True)
print('Final L/D Ratio = ', 1/obj.value)
print('Final width of wedge = ', x.value)
print('Final height of wedge = ', math.sqrt(1-x.value**2))
Explanation: Minimizing this objective function subject to constraints representing payload requirements is a standard aerospace design problem. In this case we will consider the constraint that the wedge must be able to contain a rectangle of given length and width internally along its hypotenuse. This is representable as a convex constraint.
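As an optional, hedged sanity check (a sketch that is not part of the original notebook), the DQCP result can be compared against a brute-force sweep of the closed-form ratio $\sqrt{1/\Delta x^2 - 1}$ over widths that satisfy the same rectangle constraint:
import numpy as np
xs = np.linspace(1e-3, 1 - 1e-9, 20001)                  # candidate widths
feasible = a / xs - (1 - b) * np.sqrt(1 - xs**2) <= 0    # same constraint as above
best = np.sqrt(1 / xs[feasible]**2 - 1).min()            # smallest D/L on the grid
print('brute-force min D/L =', best, ' vs DQCP objective =', obj.value)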
End of explanation
y = math.sqrt(1-x.value**2)            # wedge height recovered from the optimal width (s = 1)
lambda1 = a*x.value/y                  # position along the hypotenuse at which the rectangle starts,
                                       # chosen so that corner pt4 touches the top edge (y = 0)
lambda2 = a*x.value**2/y+a*y           # computed but not used in the plot below
lambda3 = a*x.value-y*(a*x.value/y-b)  # computed but not used in the plot below
# Wedge edges in blue: top edge, hypotenuse, and vertical trailing edge
plt.plot([0,x.value],[0,0],'b.-')
plt.plot([0,x.value],[0,-y],'b.-')
plt.plot([x.value,x.value],[0,-y],'b.-')
# Rectangle corners: pt1 and pt2 lie on the hypotenuse; pt3 and pt4 are offset by a along the inward normal
pt1 = [lambda1*x.value,-lambda1*y]
pt2 = [(lambda1+b)*x.value,-(lambda1+b)*y]
pt3 = [(lambda1+b)*x.value+a*y,-(lambda1+b)*y+a*x.value]
pt4 = [lambda1*x.value+a*y,-lambda1*y+a*x.value]
# Rectangle edges in red
plt.plot([pt1[0],pt2[0]],[pt1[1],pt2[1]],'r.-')
plt.plot([pt2[0],pt3[0]],[pt2[1],pt3[1]],'r.-')
plt.plot([pt3[0],pt4[0]],[pt3[1],pt4[1]],'r.-')
plt.plot([pt4[0],pt1[0]],[pt4[1],pt1[1]],'r.-')
plt.axis('equal')
Explanation: Once the solution has been found, we can create a plot to verify that the rectangle is inscribed within the wedge.
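As a quick numerical complement to the visual check (an illustrative sketch, not part of the original notebook), one can verify that every rectangle corner satisfies the three half-plane inequalities defining the wedge with vertices $(0,0)$, $(\Delta x, 0)$ and $(\Delta x, -\Delta y)$:
tol = 1e-6    # absorbs solver round-off
corners_inside = all(
    py <= tol and px <= x.value + tol and py >= -(y / x.value) * px - tol
    for px, py in (pt1, pt2, pt3, pt4)
)
print('all rectangle corners lie inside the wedge:', corners_inside)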
End of explanation |
13,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
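A purely hypothetical illustration of the expected call (placeholder name and email, deliberately left commented out so that no fake metadata is recorded):
# e.g. DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical values only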
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
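For a BOOLEAN property such as this one, the value is passed unquoted; the choice shown below is purely illustrative:
# Hypothetical example for a BOOLEAN property -- the real answer depends on the model.
DOC.set_value(True)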
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
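An INTEGER property is filled with an unquoted number, for example (illustrative value only):
# Hypothetical example for an INTEGER property such as a distribution function order.
DOC.set_value(1)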
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
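A FLOAT property such as this one takes an unquoted numeric value. For instance, a model run with a fixed solar constant close to the commonly cited ~1361 W m-2 would document it roughly as follows (illustrative only):
# Hypothetical example for a FLOAT property.
DOC.set_value(1361.0)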
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
13,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Monte Carlo method works by using random points to estimate ratios and values. The most basic example, shown here, is to calculate the value of $\pi$ by comparing the number of points that fall inside a circle with the number inside the surrounding rectangle.
Obs.
Step1: We know the area of the circle is $\pi r^2$,if we sample a random point in the square, since they are all equally probable, we know the chance of it belonging inside the circle is the area of the circle divided by the area of the rectangle $P_{circle} = \dfrac{\pi r^2}{A_{rectangle}}$, we can then calculate $\pi = \dfrac{P_{circle} A_{rectangle}}{r^2}$ where $P_{circle}$ is the probability a point is inside the circle.
Step2: As we can see the approximation converges, if we take the mean, we can approximate the value of $\pi$.
Now, lets see if we can do the same using the circumference, it's length is $2r\pi$, we will try to use the same method, using the formula $\pi = \dfrac{P_{circ}A_{rectangle}}{2r}$.
Step3: Well, it's pretty hard for a point to fall exactly on the line, let's add a tolerance, to do this, we will consider a circular crown with a width of $w$, our new area will be then $\pi(a^2 - b^2)$ where $a = r+\frac{w}{2}$ and $b = r-\frac{w}{2}$, by doing some math (check down at the end of the notebook) we figure out the new formula $\pi = \frac{P_{crown}A_{rectangle}}{2rw}$, again, $P_{crown}$ is the probability of a point being within the circular crown. | Python Code:
#First, let's plot a circle, centered around 0 with a radius of 1
plot_circle()
Explanation: The Monte Carlo method works by using random points to estimate ratios and values. The most basic example, shown here, is to calculate the value of $\pi$ by comparing the number of points that fall inside a circle with the number inside the surrounding rectangle.
Obs.: Here we use a rectangle, but the principle is the same
End of explanation
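Note that plot_circle is not defined anywhere in the cells shown for this record; it presumably lives in a setup cell that is not included. A minimal sketch that would make the cells below runnable is given here -- the signature is inferred from the calls, the styling is an assumption -- together with the imports (random, math, numpy, matplotlib) the later cells rely on.
import math
import random
import numpy as np
import matplotlib.pyplot as plt

def plot_circle(points=None, color_func=None):
    #Assumed helper: draw the unit circle inside the [-1.5, 1.5] x [-1, 1] rectangle,
    #plus the optional sample points, coloured by color_func(x, y) when given.
    fig, ax = plt.subplots()
    theta = np.linspace(0, 2*np.pi, 200)
    ax.plot(np.cos(theta), np.sin(theta), color='black', linewidth=1)
    if points:
        colors = [color_func(x, y) if color_func else (0, 0, 1, 0.5) for x, y in points]
        #normalise to RGBA so mixed 3- and 4-tuples from color_func are accepted
        colors = [c if len(c) == 4 else c + (1.0,) for c in colors]
        xs, ys = zip(*points)
        ax.scatter(xs, ys, c=colors, s=5)
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-1, 1)
    ax.set_aspect('equal')
    plt.show()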
#Let's define a function for approximating pi with a given number of points
def monte_carlo_pi(points, return_points=False):
inside = 0
point_list = []
for i in range(points):
x,y = (random.uniform(-1.5,1.5), random.uniform(-1,1))
if (x**2+y**2)**(1/2) <= 1:
inside += 1
if return_points:
point_list.append((x,y))
#The area of our rectangle is 6, since it goes from -1.5 to 1.5 and -1 to 1
pi = (inside/points) * (3*2)
if not return_points:
return pi
return (pi, point_list)
pi, points_list = monte_carlo_pi(1000,return_points=True)
plot_circle(points_list)
print("Value of pi (approximation): %.4f" % pi)
#Now, lets see how many points we need for a good estimation
max_points = 1000
step = 10
point_number = range(1, max_points, step)
estimated = [monte_carlo_pi(number) for number in point_number]
total_mean = sum(estimated)/ len(estimated)
plt.figure(figsize=(8, 2))
plt.hlines(math.pi, 0, 100, color='green')
plt.hlines(total_mean, 0, 100, color='red')
plt.plot(estimated)
plt.legend(["Iteration", "Real", "Mean"], loc="best")
plt.xlabel("Number of points used in approximation (x10)")
plt.ylabel("Values of Pi")
plt.show()
print("The approximated value of pi is %.4f" % total_mean)
Explanation: We know the area of the circle is $\pi r^2$. If we sample a random point in the rectangle, since all points are equally probable, the chance of it falling inside the circle is the area of the circle divided by the area of the rectangle, $P_{circle} = \dfrac{\pi r^2}{A_{rectangle}}$. We can then calculate $\pi = \dfrac{P_{circle} A_{rectangle}}{r^2}$, where $P_{circle}$ is the probability that a point is inside the circle.
End of explanation
#Let's define a function for approximating pi with a given number of points using the circumference
def monte_carlo_pi_circ(points, return_points=False):
inside = 0
point_list = []
for i in range(points):
x,y = (random.uniform(-1.5,1.5), random.uniform(-1,1))
if (x**2+y**2)**(1/2) == 1:
inside += 1
if return_points:
point_list.append((x,y))
#The area of our rectangle is 6, since it goes from -1.5 to 1.5 and -1 to 1
#But we need to divide by 2 times the radius which is 1
pi = (inside/points) * 3
if not return_points:
return pi
return (pi, point_list)
pi, points_list = monte_carlo_pi_circ(1000,return_points=True)
plot_circle(points_list, color_func=lambda x,y:(1,0,0) if (x**2+y**2)**(1/2) ==1 else (0,0,1,0.5))
Explanation: As we can see the approximation converges, if we take the mean, we can approximate the value of $\pi$.
Now, let's see if we can do the same using the circumference; its length is $2r\pi$. We will try to use the same method, using the formula $\pi = \dfrac{P_{circ}A_{rectangle}}{2r}$.
End of explanation
#First, let's add a width parameter to our function
def monte_carlo_pi_circ(points, return_points=False, width = 0.1):
inside = 0
#We take half of the width in each side
epsilon = width/2
point_list = []
for i in range(points):
# We must add half the width for each side so we don't clip the poles of the annulus
x,y = (random.uniform(-1.5,1.5), random.uniform(-1-epsilon,1+epsilon))
#Check if it's within the annulus
if 1-epsilon <= (x**2+y**2)**(1/2) <= 1+epsilon:
inside += 1
if return_points:
point_list.append((x,y))
#The area of our rectangle is 6, since it goes from -1.5 to 1.5 and -1 to 1
#But we need to divide by 2 times the radius which is 1, and also, divide by the tolerance
#print("Probability a point is inside the annulus: %.4f" % (inside/points))
    pi = (inside/points) * (2+width)*3/(2*width)
if not return_points:
return pi
return (pi, point_list)
w = 0.1
pi, points_list = monte_carlo_pi_circ(3000,return_points=True, width=w)
plot_circle(points_list, color_func=lambda x,y:(1,0,0, 0.85) if 1-(w/2) <=(x**2+y**2)**(1/2) <= 1+(w/2) else (0,0,1,0.5))
print(pi)
max_points = 10000
step = 100
point_number = range(1, max_points,step)
estimated = [monte_carlo_pi_circ(number,width=w) for number in point_number]
total_mean = sum(estimated)/ len(estimated)
plt.figure(figsize=(8, 2))
plt.hlines(math.pi, 0, max_points/step, color='green')
plt.hlines(total_mean, 0, max_points/step, color='red')
plt.plot(estimated)
plt.legend(["Iteration", "Real", "Mean"], loc="best")
plt.xlabel("Number of points used in approximation")
plt.ylabel("Values of Pi")
plt.show()
print("The approximated value of pi is %.4f" % total_mean)
Explanation: Well, it's pretty hard for a point to fall exactly on the line, so let's add a tolerance. To do this, we will consider a circular crown (annulus) with a width of $w$; its area is $\pi(a^2 - b^2)$, where $a = r+\frac{w}{2}$ and $b = r-\frac{w}{2}$. By doing some math (check down at the end of the notebook) we arrive at the new formula $\pi = \frac{P_{crown}A_{rectangle}}{2rw}$, where, again, $P_{crown}$ is the probability of a point being within the circular crown.
End of explanation |
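The "some math" referred to above is not reproduced in this excerpt; the key identity behind the crown formula is short enough to state here:
$$ a^2 - b^2 = \left(r+\tfrac{w}{2}\right)^2 - \left(r-\tfrac{w}{2}\right)^2 = 2rw $$
$$ A_{crown} = \pi(a^2 - b^2) = 2\pi rw, \qquad P_{crown} = \frac{2\pi rw}{A_{rectangle}} \quad\Rightarrow\quad \pi = \frac{P_{crown}\,A_{rectangle}}{2rw} $$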
13,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Linear Algebra - Inverse
Key Equation
Step1: Column-wise / Vectors (2x2)
We can also solve this equation in a vector way, by thinking of this as linear combination of two vectors
$$ \begin{bmatrix}1 \ 2\end{bmatrix} x + \begin{bmatrix}3 \ -1\end{bmatrix} y = \begin{bmatrix}15 \ 2\end{bmatrix} $$
Now we need to draw these vectors and see the result
Step2: Now we know the answer to this is a linear combination of the two vectors. So we multiply the first vector by 3 and the second vector by 4 and add the two
Step3: Matrix Way (2 x 2) - Using Elimination
Now our 2 variable, 2 equation problem is
Step4: Matrix Way (2x2) - Using Inverse
Now our 2 variable, 2 equation problem is
Step5: 3 x 3 Equation
Let take a more involved example - 3 linear equations, with 3 unknown.
$$ x + y + z = 3 $$
$$ 3x + 8y + z = 12 $$
$$ 5x - 4y + 3z = 4 $$
Parallel
$$ 4x + 9y + 2z = 15 $$
Algebaric (3 x 3)
Now it is easy to solve the equation and get the answer
$$ x = 1, y = 1, z = 1 $$
Row-wise / Planes (3 x 3)
Step6: Column-wise / Vectors (3 x 3)
$$ \begin{bmatrix}1 \ 3 \ 5\end{bmatrix} x + \begin{bmatrix}1 \ 8 \ -4\end{bmatrix} y + \begin{bmatrix}1 \ 1 \ 3\end{bmatrix} z = \begin{bmatrix}3 \ 12 \ 4 \end{bmatrix} $$
Now we need to draw these vectors and see the result
Step7: Matrix Way (3 x 3)
Let us write it in the form
$$ Ax = b $$
$$ \begin{bmatrix}1 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \y \ z\end{bmatrix}= \begin{bmatrix}3 \ 12 \ 4 \end{bmatrix} $$
Let us find
Step8: Exercises on Matrices
$$ U = \begin{bmatrix}3 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}$$
$$ V = \begin{bmatrix}2 & -3 & -4 \ 3 & 5 & -6 \ -1 & -3 & 2\end{bmatrix}$$
$$ W = \begin{bmatrix}2 & 3 \ -1 & 2 \ -3 & 1\end{bmatrix}$$
$$ T = \begin{bmatrix}2 & 3 \ 4 & 6 \end{bmatrix}$$
$$ S = \begin{bmatrix}3 & 1 & 2 \ 1 & 4 & 5 \ 2 & 5 & 6 \end{bmatrix}$$
$$ Z = \begin{bmatrix}1 & - 1 & 0\end{bmatrix}$$
Write the matrices as np.matrix?
Step9: 1. Matrix Addition
$$ \begin{bmatrix}a & b\ c & d\end{bmatrix} + \begin{bmatrix}e & f\ g & h\end{bmatrix} = \begin{bmatrix}a + e & b + f \ c + g & d + h\end{bmatrix} $$
What is $ U + V$?
What is $ V + U $?
What is $ W + U$?
2. Scalar Multiplication
$$ \beta * \begin{bmatrix}a & b\ c & d\end{bmatrix} = \begin{bmatrix}\beta a & \beta b\ \beta c & \beta d \end{bmatrix} $$
What is $ 3 * U$ ?
What is $2.5 * W$?
3. Matrix Multiplication
$$ A_{m \times n} * B_{n \times p} = C_{m \times p} $$
Example 1
$$ A_{2 \times 2} * B_{2 \times 2} = C_{2 \times 2} $$
$$ \begin{bmatrix}a & b\ c & d\end{bmatrix} \begin{bmatrix}e & f\ g & h\end{bmatrix} = \begin{bmatrix}ae + bg & af + bh\ ce + dg & cf + dh\end{bmatrix} $$
Example 2
$$ A_{3 \times 3} * B_{3 \times 1} = C_{3 \times 1} $$
$$ \begin{bmatrix}a & b & c \ d & e & f \ g & e & f\end{bmatrix} \begin{bmatrix}x \ y \ z \end{bmatrix} = \begin{bmatrix}ax + by+ cz \ dx + ey + fz \ gx + ey + fz \end{bmatrix} $$
Here is a visual explanation for this - http://matrixmultiplication.xyz/
Step10: What is inverse of $W$ i.e. $W^{-1}$? Why does this not work? | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
x = np.arange(-10, 10, 1)
y1 = (15 - x)/3
y2 = (2 - 2*x)/-1
plt.plot(x, y1)
plt.text(x[-1], y1[-1], 'row1')
plt.plot(x, y2)
plt.text(x[-1], y2[-1], 'row2')
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.xlabel('x')
plt.ylabel('y')
Explanation: Intro to Linear Algebra - Inverse
Key Equation: $Ax =b ~~ \text{for} ~~ n \times n $
The starting linear algebra problem is solving - n linear equation, with n unknowns
2 x 2 Equation
Let us start with the most simple one - 2 linear equations, with 2 unknown.
$$ x + 3y = 15 $$
$$ 2x - y = 2 $$
Algebraic (2 x 2)
Now it is easy to solve the equation and get the answer.
$$ x + 3y = 15 ~~~~ \space (eq1)$$
$$ 2x - y = 2 ~~~~ \space (eq2)$$
E<sub>1</sub> elimination
We keep the first equation as it is and we eliminate x from the second equation.
To do that we multiply the first equation by 2 and subtract it from the second equation, i.e. $eq2 - 2* eq1$
$$ x + 3y = 15 ~~~~ \space (eq1) $$
$$ -7y = -28 ~~~~ \space (eq3)$$
E<sub>2</sub> elimination
Now we do back elimination and we eliminate y from the first equation
We divide the third equation by -7, i.e. $eq3 / -7$, and we multiply the third equation by 3/7 and add it to the first equation, i.e. $eq3 * 3 /7 + eq1$.
$$ x = 3 ~~~~ \space (eq4) $$
$$ y = 4 ~~~~ \space (eq5)$$
And there is our answer.
This is a simplified representation of the Gauss-Jordan Elimination
Row-wise / Lines (2 x 2)
Let us solve this the traditional way by thinking of them row-wise and solving it. We can plot each of these equations and see where they intersect.
$$ x + 3y = 15 ~~~~ \textit{(row 1) }$$
$$ 2x - y = 2 ~~~~ \textit{(row 2) }$$
End of explanation
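As a quick cross-check (this snippet is an addition, and the variable names below are chosen here rather than taken from the original cells), NumPy's built-in solver reproduces $x = 3$, $y = 4$ directly:
#Sanity check of the 2 x 2 system with numpy's solver
A_check = np.array([[1, 3],
                    [2, -1]])
b_check = np.array([15, 2])
print(np.linalg.solve(A_check, b_check))   # -> [3. 4.]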
# All the vectors start at 0, 0
vX1 = np.array([0,0,1,2])
vY1 = np.array([0,0,3,-1])
b = np.array([0,0,15,2])
vector1 = [vX1, vY1, b]
X,Y,U,V = zip(*vector1)
X,Y,U,V
def vector_plot (vector):
X,Y,U,V = zip(*vector)
C = [1,2,3]
plt.figure()
ax = plt.gca()
ax.quiver(X,Y,U,V,C, angles='xy',scale_units='xy',scale=1)
ax.set_xlim([-15,15])
ax.set_ylim([-9,9])
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.axes().set_aspect('equal')
vector_plot(vector1)
Explanation: Column-wise / Vectors (2x2)
We can also solve this equation in a vector way, by thinking of this as linear combination of two vectors
$$ \begin{bmatrix}1 \ 2\end{bmatrix} x + \begin{bmatrix}3 \ -1\end{bmatrix} y = \begin{bmatrix}15 \ 2\end{bmatrix} $$
Now we need to draw these vectors and see the result
End of explanation
# VX1 vectors start at (0, 0), while VY2 starts at the end of VX1
vX2 = np.array([0,0,3,6])
vY2 = np.array([3,6,12,-4])
b = np.array([0,0,15,2])
vector2 = [vX2, vY2, b]
vector_plot(vector2)
Explanation: Now we know the answer to this is a linear combination of the two vectors. So we multiply the first vector by 3 and the second vector by 4 and add the two
End of explanation
from fractions import Fraction
A = np.matrix([[1,3],
[2,-1]])
b = np.matrix([[15],
[2]])
E1 = np.matrix([[1,0],
[-2,1]])
E2 = np.matrix([[Fraction (1,1),Fraction(3, 7)],
[Fraction(0,1),Fraction(-1, 7)]])
A
E1
E1*A
E2*E1*A
E2*E1*b
Explanation: Matrix Way (2 x 2) - Using Elimination
Now our 2 variable, 2 equation problem is:
$$ x + 3y = 15 ~~~~ \space (eq1)$$
$$ 2x - y = 2 ~~~~ \space (eq2)$$
We can write this in a matrix way as:
$$ \begin{bmatrix}1x & 3y\ 2x & -1y\end{bmatrix} = \begin{bmatrix}15 \ 2\end{bmatrix} $$
However, to generalize it is better to write it in the form:
$$ Ax = b $$
$$ \begin{bmatrix}1 & 3\ 2 & -1\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}15 \ 2\end{bmatrix} $$
Now we can solve this using elimination as we did earlier in the algebraic formulation
E<sub>1</sub> elimination
To remove x from the second equation, we write our first elimination matrix
$$ E_{1} = \begin{bmatrix}1 & 0\ -2 & 1\end{bmatrix} $$
So, we multiply $E_{1}$ to both sides.
$$ E_{1}Ax = E_{1}b $$
So our new equation is now:
$$ \begin{bmatrix}1 & 0\ -2 & 1\end{bmatrix} \begin{bmatrix}1 & 3\ 2 & -1\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}1 & 0\ -2 & 1\end{bmatrix} \begin{bmatrix}15 \ 2\end{bmatrix} $$
This can be reduced by matrix-multiplication to:
$$ \begin{bmatrix}1 & 3\ 0 & -7\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}15 \ -28\end{bmatrix} $$
which is now same as what we got after E1 elimination in algebraic formulation
$$ x + 3y = 15 ~~~~ \space (eq1) $$
$$ -7y = -28 ~~~~ \space (eq3)$$
E<sub>2</sub> elimination
To remove y from the first equation, we write our second elimination matrix
$$ E_{2} = \begin{bmatrix}1 & 3/7\ 0 & -1/7\end{bmatrix} $$
So, we multiply $E_{2}$ to both sides.
$$ E_{2}E_{1}Ax = E_{2}E_{1}b $$
So our new equation is now:
$$ \begin{bmatrix}1 & 3/7\ 0 & -1/7\end{bmatrix} \begin{bmatrix}1 & 3\ 0 & -7\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}1 & 3/7\ 0 & -1/7\end{bmatrix} \begin{bmatrix}15 \ -28\end{bmatrix} $$
This can be reduced by matrix-multiplication to:
$$ \begin{bmatrix}1 & 0\ 0 & 1\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}3 \ 4\end{bmatrix} $$
Which is our answer
$$ x = 3 ~~~~ \space (eq4) $$
$$ y = 4 ~~~~ \space (eq5)$$
End of explanation
E2*E1
Ainv = np.linalg.inv(A)
Ainv
Ainv * b
Explanation: Matrix Way (2x2) - Using Inverse
Now our 2 variable, 2 equation problem is:
$$ x + 3y = 15 ~~~~ \space (eq1)$$
$$ 2x - y = 2 ~~~~ \space (eq2)$$
Now we know from the previous working that:
$$ E_{2}E_{1}Ax = E_{2}E_{1}b $$
and that I can change the brackets in my multiplication:
$$ E_{2}(E_{1}A) = (E_{2}E_{1})A $$
So I can first compute $E_{2}*E_{1}$ :
$$ E_{2}E_{1} =\begin{bmatrix}1 & 3/7\ 0 & -1/7\end{bmatrix} \begin{bmatrix}1 & 0\ -2 & 1\end{bmatrix} = \begin{bmatrix}1/7 & 3/7\ 2/7 & -1/7\end{bmatrix} $$
And I know that
$$ (E_{2}E_{1})A = I $$
$$ \begin{bmatrix}1/7 & 3/7\ 2/7 & -1/7\end{bmatrix} \begin{bmatrix}1 & 3\ 2 & -1\end{bmatrix} = \begin{bmatrix}1 & 0\ 0 & 1\end{bmatrix} $$
So now instead of calculating Elimination matrix, we need to just calculate $A^{-1}$ which when multiplied by $A$ gives the identity matrix - $I$
$$ A^{-1}A = I $$
$$ A^{-1} = \begin{bmatrix}1/7 & 3/7\ 2/7 & -1/7\end{bmatrix} $$
Hence it follows that
$$ A^{-1}Ax = A^{-1}b $$
$$ Ix = A^{-1}b $$
$$ x = A^{-1}b $$
So we can calcuate $x$ easily once we know $A^{-1}$
$$ \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}1/7 & 3/7\ 2/7 & -1/7\end{bmatrix} \begin{bmatrix}15\ 2\end{bmatrix} = \begin{bmatrix} 3 \ 4 \end{bmatrix}$$
End of explanation
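One extra check worth making here (added for illustration): the computed inverse really does satisfy $A^{-1}A = I$ up to floating point error:
#Verify the inverse numerically
print(Ainv * A)                          # (numerically) the 2x2 identity
print(np.allclose(Ainv * A, np.eye(2)))  # -> True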
from mpl_toolkits.mplot3d import Axes3D
xrange = np.arange(-10, 10, 1)
yrange = np.arange(-10, 10, 1)
x, y = np.meshgrid(xrange, yrange)
z1 = 3 - x - y
z2 = 12 - 3*x - 8*y
z3 = (15 - 4*x -9 *y)/(2)
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(x,y,z1, color='blue', alpha = 0.4)
plt3d.plot_surface(x,y,z2, color='red', alpha = 0.4)
plt3d.plot_surface(x,y,z3, color='green', alpha = 0.4)
plt3d.set_xlabel('x')
plt3d.set_ylabel('y')
plt3d.set_zlabel('z')
Explanation: 3 x 3 Equation
Let's take a more involved example - 3 linear equations with 3 unknowns.
$$ x + y + z = 3 $$
$$ 3x + 8y + z = 12 $$
$$ 5x - 4y + 3z = 4 $$
Parallel
$$ 4x + 9y + 2z = 15 $$
Algebraic (3 x 3)
Now it is easy to solve the equation and get the answer
$$ x = 1, y = 1, z = 1 $$
Row-wise / Planes (3 x 3)
End of explanation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
plt.figure()
ax = plt.gca(projection='3d')
vX = Arrow3D([0,1],[0,3],[0,5], mutation_scale=20, lw=3, arrowstyle="-|>", color="r")
vY = Arrow3D([1,2],[3,11],[5,1],mutation_scale=20, lw=3, arrowstyle="-|>", color="c")
vZ = Arrow3D([2,3],[11,12],[1,4], mutation_scale=20, lw=3, arrowstyle="-|>", color="g")
b = Arrow3D([0,3],[0,12],[0,4],mutation_scale=20, lw=3, arrowstyle="-|>", color="k")
ax.add_artist(vX)
ax.add_artist(vY)
ax.add_artist(vZ)
ax.add_artist(b)
ax.set_xlim([0,12])
ax.set_ylim([0,12])
ax.set_zlim([0,12])
plt.draw()
Explanation: Column-wise / Vectors (3 x 3)
$$ \begin{bmatrix}1 \ 3 \ 5\end{bmatrix} x + \begin{bmatrix}1 \ 8 \ -4\end{bmatrix} y + \begin{bmatrix}1 \ 1 \ 3\end{bmatrix} z = \begin{bmatrix}3 \ 12 \ 4 \end{bmatrix} $$
Now we need to draw these vectors and see the result
End of explanation
A1 = np.matrix([[1,1,1],
[3,8,1],
[5,-4,3]])
b1 = np.matrix([[3],
[12],
[4]])
A1
b1
A1inv = np.linalg.inv(A1)
A1inv
A1inv*b1
Explanation: Matrix Way (3 x 3)
Let us write it in the form
$$ Ax = b $$
$$ \begin{bmatrix}1 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \y \ z\end{bmatrix}= \begin{bmatrix}3 \ 12 \ 4 \end{bmatrix} $$
Let us find:
$$ x = A^{-1}b $$
End of explanation
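As before, a cross-check added for illustration: np.linalg.solve gives the same solution without forming the inverse explicitly, which is also the numerically preferred route for larger systems.
#Solve A1 x = b1 directly
print(np.linalg.solve(A1, b1))   # -> x = y = z = 1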
S = np.matrix([[3, 1, 2],
[1 , 4, 5],
[2 , 5 , 6]])
U = np.matrix([[3, 1, 1],
[3, 8, 1],
[5, -4, 3]])
V = np.matrix([[2, -3, -4],
[3, 5, -6],
[-1, -3, 2]])
T = np.matrix([[2 ,3],
[4 ,6]])
Z = np.matrix([[1, -1, 0]])
W = np.matrix([[2 ,3],
[-1 ,2],
[-3, 1]])
Explanation: Exercises on Matrices
$$ U = \begin{bmatrix}3 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}$$
$$ V = \begin{bmatrix}2 & -3 & -4 \ 3 & 5 & -6 \ -1 & -3 & 2\end{bmatrix}$$
$$ W = \begin{bmatrix}2 & 3 \ -1 & 2 \ -3 & 1\end{bmatrix}$$
$$ T = \begin{bmatrix}2 & 3 \ 4 & 6 \end{bmatrix}$$
$$ S = \begin{bmatrix}3 & 1 & 2 \ 1 & 4 & 5 \ 2 & 5 & 6 \end{bmatrix}$$
$$ Z = \begin{bmatrix}1 & - 1 & 0\end{bmatrix}$$
Write the matrices as np.matrix?
End of explanation
T
Explanation: 1. Matrix Addition
$$ \begin{bmatrix}a & b\ c & d\end{bmatrix} + \begin{bmatrix}e & f\ g & h\end{bmatrix} = \begin{bmatrix}a + e & b + f \ c + g & d + h\end{bmatrix} $$
What is $ U + V$?
What is $ V + U $?
What is $ W + U$?
2. Scalar Multiplication
$$ \beta * \begin{bmatrix}a & b\ c & d\end{bmatrix} = \begin{bmatrix}\beta a & \beta b\ \beta c & \beta d \end{bmatrix} $$
What is $ 3 * U$ ?
What is $2.5 * W$?
3. Matrix Multiplication
$$ A_{m \times n} * B_{n \times p} = C_{m \times p} $$
Example 1
$$ A_{2 \times 2} * B_{2 \times 2} = C_{2 \times 2} $$
$$ \begin{bmatrix}a & b\ c & d\end{bmatrix} \begin{bmatrix}e & f\ g & h\end{bmatrix} = \begin{bmatrix}ae + bg & af + bh\ ce + dg & cf + dh\end{bmatrix} $$
Example 2
$$ A_{3 \times 3} * B_{3 \times 1} = C_{3 \times 1} $$
$$ \begin{bmatrix}a & b & c \ d & e & f \ g & h & i\end{bmatrix} \begin{bmatrix}x \ y \ z \end{bmatrix} = \begin{bmatrix}ax + by + cz \ dx + ey + fz \ gx + hy + iz \end{bmatrix} $$
Here is a visual explanation for this - http://matrixmultiplication.xyz/
What is $ U * V$?
What is $V * U$?
What is $ U * W$?
What is $ W * U$? Why does this not work?
What is $ Z * U$?
4. Matrix Inverse
What is inverse of $U$ i.e. $U^{-1}$?
What is inverse of $T$ i.e. $T^{-1}$? Why does this not work?
End of explanation
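# A sketch of how the exercises above can be checked with NumPy,
# using the matrices U, V, W, T, Z defined earlier in this notebook.
print(U + V)               # matrix addition (element-wise)
print(3 * U)               # scalar multiplication
print(U * V)               # matrix multiplication (np.matrix overloads *)
print(Z * U)               # (1x3) times (3x3) gives a 1x3 row vector
print(np.linalg.inv(U))    # U is square with det(U) = 28, so the inverse exists
# W * U raises an error: W is 3x2 and U is 3x3, so the inner dimensions do not match.
# np.linalg.inv(T) raises an error: det(T) = 2*6 - 3*4 = 0, so T is singular.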
W
Explanation: What is inverse of $W$ i.e. $W^{-1}$? Why does this not work?
End of explanation |
13,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
Step1: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
Step2: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards
Step3: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation
Step4: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
Step5: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent
Step6: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
Step7: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
Step8: Visualizing training
Below I'll plot the total rewards for each episode. I took a rolling average too, in blue.
Step9: Testing
Let's checkout how our trained agent plays the game. | Python Code:
import gym
import tensorflow as tf
import numpy as np
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
print(rewards[-20:])
Explanation: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards:
End of explanation
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This lext line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # expotentional decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcememt learning are the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
End of explanation
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I took a rolling average too, in blue.
End of explanation
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
Explanation: Testing
Let's check out how our trained agent plays the game.
End of explanation |
13,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experimenting BQML and AutoML with Vertex AI
Overview
This notebook demonstrates how to use Vertex AI Pipelines to rapid prototype a model using both AutoML and BQML, do an evaluation comparison, for a baseline, before progressing to a custom model.
Learning objectives
In this notebook, you learn how to use Vertex AI Predictions for rapid prototyping a model.
This notebook uses the following Google Cloud ML services
Step1: Please ignore any incompatibility warnings and errors.
Step2: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
Step3: Otherwise, set your project ID here.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step5: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this notebook, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. It is suggested that you choose a region where Vertex AI services are
available.
Step6: Only if your bucket doesn't already exist
Step7: Finally, validate access to your Cloud Storage bucket by examining its contents
Step8: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step9: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step10: Required imports
Step11: Determine some project and pipeline variables
Instructions
Step12: Downloading the data
The cell below will download the dataset into a CSV file and save it in GCS
Step13: Pipeline Components
Import to BQ
This component takes the csv file and imports it to a table in BigQuery. If the dataset does not exist, it will be created. If a table with the same name already exists, it will be deleted and recreated
Step16: Split Datasets
Splits the dataset in 3 slices
Step19: Train BQML Model
For this demo, you will use a simple linear regression model on BQML. However, you can be creative with other model architectures, such as Deep Neural Networks, XGboost, Logistic Regression, etc.
For a full list of models supported by BQML, look here
Step20: Interpret BQML Model Evaluation
When you do Hyperparameter tuning on the model creation query, the output of the pre-built component BigqueryEvaluateModelJobOp will be a table with the metrics obtained by BQML when training the model. In your BigQuery console, they look like the image below. You need to access them programmatically so you can compare them to the AutoML model.
<img src="https
Step22: Interpret AutoML Model Evaluation
Similar to BQML, AutoML also generates metrics during its model creation. These can be accessed in the UI, as seen below
Step23: Model Selection
Now that you have evaluated the models independently, you are going to move forward with only one of them. This election will be done based on the model evaluation metrics gathered in the previous steps.
Bear in mind that BQML and AutoML use different evaluation metric names, hence you had to do a mapping of these different nomenclatures.
Step24: Validate Infrastructure
Once the best model has been deployed, you will validate the endpoint by making a simple prediction to it.
Step25: The Pipeline
Step26: Running the Pipeline
Step27: Wait for the pipeline to complete
Currently, your pipeline is running asynchronous by using the submit() method. To have run it synchronously, you would have invoked the run() method.
In this last step, you block on the asynchronously executed waiting for completion using the wait() method.
Step28: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the notebook.
Otherwise, you can delete the individual resources you created in this notebook | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install Python package dependencies.
print("Installing libraries")
! pip3 install {USER_FLAG} --quiet google-cloud-pipeline-components kfp
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform google-cloud-bigquery
Explanation: Experimenting BQML and AutoML with Vertex AI
Overview
This notebook demonstrates how to use Vertex AI Pipelines to rapid prototype a model using both AutoML and BQML, do an evaluation comparison, for a baseline, before progressing to a custom model.
Learning objectives
In this notebook, you learn how to use Vertex AI Predictions for rapid prototyping a model.
This notebook uses the following Google Cloud ML services:
Vertex AI Pipelines
Vertex AI AutoML
Vertex AI BigQuery ML
Google Cloud Pipeline Components
The steps performed include:
Create a BigQuery and Vertex AI training dataset.
Train a BigQuery ML and AutoML model.
Extract evaluation metrics from the BigQueryML and AutoML models.
Select and deploy the best trained model.
Test the deployed model infrastructure.
Run the pipeline.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl_and_bqml.png" />
Dataset
The Abalone Dataset
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/dataset.png" />
<p>Dataset Credits</p>
<p>Dua, D. and Graff, C. (2019). UCI Machine Learning Repository <a href="http://archive.ics.uci.edu/ml">http://archive.ics.uci.edu/ml</a>. Irvine, CA: University of California, School of Information and Computer Science.</p>
<p><a href="https://archive.ics.uci.edu/ml/datasets/abalone">Direct link</a></p>
Attribute Information:
<p>Given is the attribute name, attribute type, the measurement unit and, a brief description. The number of rings is the value to predict: either as a continuous value or as a classification problem.</p>
<body>
<table>
<tr>
<th>Name</th>
<th>Data Type</th>
<th>Measurement Unit</th>
<th>Description</th>
</tr>
<tr>
<td>Sex</td>
<td>nominal</td>
<td>--</td>
<td>M, F, and I (infant)</td>
</tr>
<tr>
<td>Length</td>
<td>continuous</td>
<td>mm</td>
<td>Longest shell measurement</td>
</tr>
<tr>
<td>Diameter</td>
<td>continuous</td>
<td>mm</td>
<td>perpendicular to length</td>
</tr>
<tr>
<td>Height</td>
<td>continuous</td>
<td>mm</td>
<td>with meat in shell</td>
</tr>
<tr>
<td>Whole weight</td>
<td>continuous</td>
<td>grams</td>
<td>whole abalone</td>
</tr>
<tr>
<td>Shucked weight</td>
<td>continuous</td>
<td>grams</td>
<td>weight of meat</td>
</tr>
<tr>
<td>Viscera weight</td>
<td>continuous</td>
<td>grams</td>
<td>gut weight (after bleeding)</td>
</tr>
<tr>
<td>Shell weight</td>
<td>continuous</td>
<td>grams</td>
<td>after being dried</td>
</tr>
<tr>
<td>Rings</td>
<td>integer</td>
<td>--</td>
<td>+1.5 gives the age in years</td>
</tr>
</table>
</body>
Install additional packages
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Please ignore any incompatibility warnings and errors.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = f"gs://{BUCKET_NAME}"
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this notebook, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. It is suggested that you choose a region where Vertex AI services are
available.
End of explanation
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import sys
from typing import NamedTuple
from google.cloud import aiplatform as vertex
from google.cloud import bigquery
from google_cloud_pipeline_components import \
aiplatform as vertex_pipeline_components
from google_cloud_pipeline_components.experimental import \
bigquery as bq_components
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import Artifact, Input, Metrics, Output, component
Explanation: Required imports
End of explanation
PIPELINE_JSON_PKG_PATH = "rapid_prototyping.json"
PIPELINE_ROOT = f"gs://{BUCKET_NAME}/pipeline_root"
DATA_FOLDER = f"{BUCKET_NAME}/data"
RAW_INPUT_DATA = f"gs://{DATA_FOLDER}/abalone.csv"
BQ_DATASET = "j90wipxexhrgq3cquanc5" # @param {type:"string"}
BQ_LOCATION = "US" # @param {type:"string"}
BQ_LOCATION = BQ_LOCATION.upper()
BQML_EXPORT_LOCATION = f"gs://{BUCKET_NAME}/artifacts/bqml"
DISPLAY_NAME = "rapid-prototyping"
ENDPOINT_DISPLAY_NAME = f"{DISPLAY_NAME}_endpoint"
image_prefix = REGION.split("-")[0]
BQML_SERVING_CONTAINER_IMAGE_URI = (
f"{image_prefix}-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
)
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
Explanation: Determine some project and pipeline variables
Instructions:
- Make sure the GCS bucket and the BigQuery Dataset do not exist. This script may delete any existing content.
- Your bucket must be on the same region as your Vertex AI resources.
- BQ region can be US or EU;
- Make sure your preferred Vertex AI region is supported [link].
End of explanation
! gsutil cp gs://cloud-samples-data/vertex-ai/community-content/datasets/abalone/abalone.data {RAW_INPUT_DATA}
Explanation: Downloading the data
The cell below will download the dataset into a CSV file and save it in GCS
End of explanation
@component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery"])
def import_data_to_bigquery(
project: str,
bq_location: str,
bq_dataset: str,
gcs_data_uri: str,
raw_dataset: Output[Artifact],
table_name_prefix: str = "abalone",
):
from google.cloud import bigquery
# TODO 1: Construct a BigQuery client object.
client = bigquery.Client(project=project, location=bq_location)
def load_dataset(gcs_uri, table_id):
job_config = bigquery.LoadJobConfig(
schema=[
bigquery.SchemaField("Sex", "STRING"),
bigquery.SchemaField("Length", "NUMERIC"),
bigquery.SchemaField("Diameter", "NUMERIC"),
bigquery.SchemaField("Height", "NUMERIC"),
bigquery.SchemaField("Whole_weight", "NUMERIC"),
bigquery.SchemaField("Shucked_weight", "NUMERIC"),
bigquery.SchemaField("Viscera_weight", "NUMERIC"),
bigquery.SchemaField("Shell_weight", "NUMERIC"),
bigquery.SchemaField("Rings", "NUMERIC"),
],
skip_leading_rows=1,
# The source format defaults to CSV, so the line below is optional.
source_format=bigquery.SourceFormat.CSV,
)
print(f"Loading {gcs_uri} into {table_id}")
load_job = client.load_table_from_uri(
gcs_uri, table_id, job_config=job_config
) # Make an API request.
load_job.result() # Waits for the job to complete.
destination_table = client.get_table(table_id) # Make an API request.
print("Loaded {} rows.".format(destination_table.num_rows))
def create_dataset_if_not_exist(bq_dataset_id, bq_location):
print(
"Checking for existence of bq dataset. If it does not exist, it creates one"
)
dataset = bigquery.Dataset(bq_dataset_id)
dataset.location = bq_location
dataset = client.create_dataset(dataset, exists_ok=True, timeout=300)
print(f"Created dataset {dataset.full_dataset_id} @ {dataset.location}")
bq_dataset_id = f"{project}.{bq_dataset}"
create_dataset_if_not_exist(bq_dataset_id, bq_location)
raw_table_name = f"{table_name_prefix}_raw"
table_id = f"{project}.{bq_dataset}.{raw_table_name}"
print("Deleting any tables that might have the same name on the dataset")
client.delete_table(table_id, not_found_ok=True)
print("will load data to table")
load_dataset(gcs_data_uri, table_id)
raw_dataset_uri = f"bq://{table_id}"
raw_dataset.uri = raw_dataset_uri
Explanation: Pipeline Components
Import to BQ
This component takes the csv file and imports it to a table in BigQuery. If the dataset does not exist, it will be created. If a table with the same name already exists, it will be deleted and recreated
End of explanation
@component(
base_image="python:3.9",
packages_to_install=["google-cloud-bigquery"],
) # pandas, pyarrow and fsspec required to export bq data to csv
def split_datasets(
raw_dataset: Input[Artifact],
bq_location: str,
) -> NamedTuple(
"bqml_split",
[
("dataset_uri", str),
("dataset_bq_uri", str),
("test_dataset_uri", str),
],
):
from collections import namedtuple
from google.cloud import bigquery
raw_dataset_uri = raw_dataset.uri
table_name = raw_dataset_uri.split("bq://")[-1]
print(table_name)
raw_dataset_uri = table_name.split(".")
print(raw_dataset_uri)
project = raw_dataset_uri[0]
bq_dataset = raw_dataset_uri[1]
bq_raw_table = raw_dataset_uri[2]
client = bigquery.Client(project=project, location=bq_location)
def split_dataset(table_name_dataset):
training_dataset_table_name = f"{project}.{bq_dataset}.{table_name_dataset}"
split_query = f
CREATE OR REPLACE TABLE
`{training_dataset_table_name}`
AS
SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings,
CASE(ABS(MOD(FARM_FINGERPRINT(TO_JSON_STRING(f)), 10)))
WHEN 9 THEN 'TEST'
WHEN 8 THEN 'VALIDATE'
ELSE 'TRAIN' END AS split_col
FROM
`{project}.{bq_dataset}.abalone_raw` f
dataset_uri = f"{project}.{bq_dataset}.{bq_raw_table}"
print("Splitting the dataset")
query_job = client.query(split_query) # Make an API request.
query_job.result()
print(dataset_uri)
print(split_query.replace("\n", " "))
return training_dataset_table_name
def create_test_view(training_dataset_table_name, test_view_name="dataset_test"):
view_uri = f"{project}.{bq_dataset}.{test_view_name}"
query = f
CREATE OR REPLACE VIEW `{view_uri}` AS SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings
FROM `{training_dataset_table_name}` f
WHERE
f.split_col = 'TEST'
print(f"Creating view for --> {test_view_name}")
print(query.replace("\n", " "))
query_job = client.query(query) # Make an API request.
query_job.result()
return view_uri
table_name_dataset = "dataset"
dataset_uri = split_dataset(table_name_dataset)
test_dataset_uri = create_test_view(dataset_uri)
dataset_bq_uri = "bq://" + dataset_uri
print(f"dataset: {dataset_uri}")
result_tuple = namedtuple(
"bqml_split",
["dataset_uri", "dataset_bq_uri", "test_dataset_uri"],
)
return result_tuple(
dataset_uri=str(dataset_uri),
dataset_bq_uri=str(dataset_bq_uri),
test_dataset_uri=str(test_dataset_uri),
)
Explanation: Split Datasets
Splits the dataset in 3 slices:
- TRAIN
- EVALUATE
- TEST
AutoML and BigQuery ML use different nomenclatures for data splits:
BQML
How BQML splits the data: link
AutoML
How AutoML splits the data: link
<ul>
<li>Model trials
<p>The training set is used to train models with different preprocessing, architecture, and hyperparameter option combinations. These models are evaluated on the validation set for quality, which guides the exploration of additional option combinations. The best parameters and architectures determined in the parallel tuning phase are used to train two ensemble models as described below.</p></li>
<li>Model evaluation
<p>
Vertex AI trains an evaluation model, using the training and validation sets as training data. Vertex AI generates the final model evaluation metrics on this model, using the test set. This is the first time in the process that the test set is used. This approach ensures that the final evaluation metrics are an unbiased reflection of how well the final trained model will perform in production.</p></li>
<li>Serving model
<p>A model is trained with the training, validation, and test sets, to maximize the amount of training data. This model is the one that you use to request predictions.</p></li>
End of explanation
def _query_create_model(
project_id: str,
bq_dataset: str,
training_data_uri: str,
model_name: str = "linear_regression_model_prototyping",
):
model_uri = f"{project_id}.{bq_dataset}.{model_name}"
model_options = OPTIONS
( MODEL_TYPE='LINEAR_REG',
input_label_cols=['Rings'],
DATA_SPLIT_METHOD='CUSTOM',
DATA_SPLIT_COL='split_col'
)
query = f
CREATE OR REPLACE MODEL
`{model_uri}`
{model_options}
AS
SELECT
Sex,
Length,
Diameter,
Height,
Whole_weight,
Shucked_weight,
Viscera_weight,
Shell_weight,
Rings,
CASE(split_col)
WHEN 'TEST' THEN TRUE
ELSE
FALSE
END
AS split_col
FROM
`{training_data_uri}`;
print(query.replace("\n", " "))
return query
Explanation: Train BQML Model
For this demo, you will use a simple linear regression model on BQML. However, you can be creative with other model architectures, such as Deep Neural Networks, XGboost, Logistic Regression, etc.
For a full list of models supported by BQML, look here: End-to-end user journey for each model.
As pointed out before, BQML and AutoML use different split terminologies, so you do an adaptation of the <i>split_col</i> column directly on the SELECT portion of the CREATE model query:
When the value of DATA_SPLIT_METHOD is 'CUSTOM', the corresponding column should be of type BOOL. The rows with TRUE or NULL values are used as evaluation data. Rows with FALSE values are used as training data.
End of explanation
@component(base_image="python:3.9")
def interpret_bqml_evaluation_metrics(
bqml_evaluation_metrics: Input[Artifact], metrics: Output[Metrics]
) -> dict:
import math
metadata = bqml_evaluation_metrics.metadata
for r in metadata["rows"]:
rows = r["f"]
schema = metadata["schema"]["fields"]
output = {}
for metric, value in zip(schema, rows):
metric_name = metric["name"]
val = float(value["v"])
output[metric_name] = val
metrics.log_metric(metric_name, val)
if metric_name == "mean_squared_error":
rmse = math.sqrt(val)
metrics.log_metric("root_mean_squared_error", rmse)
metrics.log_metric("framework", "BQML")
print(output)
Explanation: Interpret BQML Model Evaluation
When you do Hyperparameter tuning on the model creation query, the output of the pre-built component BigqueryEvaluateModelJobOp will be a table with the metrics obtained by BQML when training the model. In your BigQuery console, they look like the image below. You need to access them programmatically so you can compare them to the AutoML model.
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/bqml-evaluate.png?">
The cell below shows you an example of how this can be done. BQML does not give you a root mean squared error to the list of metrics, so we're manually adding it to the metrics dictionary. For more information about the output, please check BQML's documentation.
End of explanation
# Inspired by Andrew Ferlitsch's work on https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_automl_pipeline_components.ipynb
@component(
base_image="python:3.9",
packages_to_install=[
"google-cloud-aiplatform",
],
)
def interpret_automl_evaluation_metrics(
region: str, model: Input[Artifact], metrics: Output[Metrics]
):
'
For a list of available regression metrics, go here: gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml.
More information on available metrics for different types of models: https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl
import google.cloud.aiplatform.gapic as gapic
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{region}-aiplatform.googleapis.com"}
model_service_client = gapic.ModelServiceClient(client_options=client_options)
model_resource_name = model.metadata["resourceName"]
model_evaluations = model_service_client.list_model_evaluations(
parent=model_resource_name
)
# TODO 2: List the model evaluations.
model_evaluation = list(model_evaluations)[0]
available_metrics = [
"meanAbsoluteError",
"meanAbsolutePercentageError",
"rSquared",
"rootMeanSquaredError",
"rootMeanSquaredLogError",
]
output = dict()
for x in available_metrics:
val = model_evaluation.metrics.get(x)
output[x] = val
metrics.log_metric(str(x), float(val))
metrics.log_metric("framework", "AutoML")
print(output)
Explanation: Interpret AutoML Model Evaluation
Similar to BQML, AutoML also generates metrics during its model creation. These can be accessed in the UI, as seen below:
<img src="https://storage.googleapis.com/rafacarv-public-bucket-do-not-delete/abalone/automl-evaluate.png" />
Since you don't have a pre-built-component to access these metrics programmatically, you can use the Vertex AI GAPIC (Google API Compiler), which auto-generates low-level gRPC interfaces to the service.
End of explanation
@component(base_image="python:3.9")
def select_best_model(
metrics_bqml: Input[Metrics],
metrics_automl: Input[Metrics],
thresholds_dict_str: str,
best_metrics: Output[Metrics],
reference_metric_name: str = "rmse",
) -> NamedTuple(
"Outputs",
[
("deploy_decision", str),
("best_model", str),
("metric", float),
("metric_name", str),
],
):
import json
from collections import namedtuple
best_metric = float("inf")
best_model = None
# BQML and AutoML use different metric names.
metric_possible_names = []
if reference_metric_name == "mae":
metric_possible_names = ["meanAbsoluteError", "mean_absolute_error"]
elif reference_metric_name == "rmse":
metric_possible_names = ["rootMeanSquaredError", "root_mean_squared_error"]
metric_bqml = float("inf")
metric_automl = float("inf")
print(metrics_bqml.metadata)
print(metrics_automl.metadata)
for x in metric_possible_names:
try:
metric_bqml = metrics_bqml.metadata[x]
print(f"Metric bqml: {metric_bqml}")
except:
print(f"{x} does not exist int the BQML dictionary")
try:
metric_automl = metrics_automl.metadata[x]
print(f"Metric automl: {metric_automl}")
except:
print(f"{x} does not exist on the AutoML dictionary")
# Change the condition if higher is better.
print(f"Comparing BQML ({metric_bqml}) vs AutoML ({metric_automl})")
if metric_bqml <= metric_automl:
best_model = "bqml"
best_metric = metric_bqml
best_metrics.metadata = metrics_bqml.metadata
else:
best_model = "automl"
best_metric = metric_automl
best_metrics.metadata = metrics_automl.metadata
thresholds_dict = json.loads(thresholds_dict_str)
deploy = False
# TODO 3: Change the condition if higher is better.
if best_metric < thresholds_dict[reference_metric_name]:
deploy = True
if deploy:
deploy_decision = "true"
else:
deploy_decision = "false"
print(f"Which model is best? {best_model}")
print(f"What metric is being used? {reference_metric_name}")
print(f"What is the best metric? {best_metric}")
print(f"What is the threshold to deploy? {thresholds_dict_str}")
print(f"Deploy decision: {deploy_decision}")
Outputs = namedtuple(
"Outputs", ["deploy_decision", "best_model", "metric", "metric_name"]
)
return Outputs(
deploy_decision=deploy_decision,
best_model=best_model,
metric=best_metric,
metric_name=reference_metric_name,
)
Explanation: Model Selection
Now that you have evaluated the models independently, you are going to move forward with only one of them. This election will be done based on the model evaluation metrics gathered in the previous steps.
Bear in mind that BQML and AutoML use different evaluation metric names, hence you had to do a mapping of these different nomenclatures.
End of explanation
@component(base_image="python:3.9", packages_to_install=["google-cloud-aiplatform"])
def validate_infrastructure(
endpoint: Input[Artifact],
) -> NamedTuple(
"validate_infrastructure_output", [("instance", str), ("prediction", float)]
):
import json
from collections import namedtuple
from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
def treat_uri(uri):
return uri[uri.find("projects/") :]
def request_prediction(endp, instance):
instance = json_format.ParseDict(instance, Value())
instances = [instance]
parameters_dict = {}
parameters = json_format.ParseDict(parameters_dict, Value())
response = endp.predict(instances=instances, parameters=parameters)
print("deployed_model_id:", response.deployed_model_id)
print("predictions: ", response.predictions)
# The predictions are a google.protobuf.Value representation of the model's predictions.
predictions = response.predictions
for pred in predictions:
if type(pred) is dict and "value" in pred.keys():
# AutoML predictions
prediction = pred["value"]
elif type(pred) is list:
# BQML Predictions return different format
prediction = pred[0]
return prediction
endpoint_uri = endpoint.uri
treated_uri = treat_uri(endpoint_uri)
instance = {
"Sex": "M",
"Length": 0.33,
"Diameter": 0.255,
"Height": 0.08,
"Whole_weight": 0.205,
"Shucked_weight": 0.0895,
"Viscera_weight": 0.0395,
"Shell_weight": 0.055,
}
instance_json = json.dumps(instance)
print("Will use the following instance: " + instance_json)
endpoint = aiplatform.Endpoint(treated_uri)
# TODO 4: Make a simple prediction
prediction = request_prediction(endpoint, instance)
result_tuple = namedtuple(
"validate_infrastructure_output", ["instance", "prediction"]
)
return result_tuple(instance=str(instance_json), prediction=float(prediction))
Explanation: Validate Infrastructure
Once the best model has been deployed, you will validate the endpoint by making a simple prediction to it.
End of explanation
pipeline_params = {
"project": PROJECT_ID,
"region": REGION,
"gcs_input_file_uri": RAW_INPUT_DATA,
"bq_dataset": BQ_DATASET,
"bq_location": BQ_LOCATION,
"bqml_model_export_location": BQML_EXPORT_LOCATION,
"bqml_serving_container_image_uri": BQML_SERVING_CONTAINER_IMAGE_URI,
"endpoint_display_name": ENDPOINT_DISPLAY_NAME,
"thresholds_dict_str": '{"rmse": 2.5}',
}
@dsl.pipeline(name=DISPLAY_NAME, description="Rapid Prototyping")
def train_pipeline(
project: str,
gcs_input_file_uri: str,
region: str,
bq_dataset: str,
bq_location: str,
bqml_model_export_location: str,
bqml_serving_container_image_uri: str,
endpoint_display_name: str,
thresholds_dict_str: str,
):
# Imports data to BigQuery using a custom component.
import_data_to_bigquery_op = import_data_to_bigquery(
project, bq_location, bq_dataset, gcs_input_file_uri
)
raw_dataset = import_data_to_bigquery_op.outputs["raw_dataset"]
# Splits the BQ dataset using a custom component.
split_datasets_op = split_datasets(raw_dataset, bq_location=bq_location)
# Generates the query to create a BQML using a static function.
create_model_query = _query_create_model(
project, bq_dataset, split_datasets_op.outputs["dataset_uri"]
)
# Builds BQML model using pre-built-component.
bqml_create_op = bq_components.BigqueryCreateModelJobOp(
project=project, location=bq_location, query=create_model_query
)
bqml_model = bqml_create_op.outputs["model"]
# Gather BQML evaluation metrics using a pre-built-component.
bqml_evaluate_op = bq_components.BigqueryEvaluateModelJobOp(
project=project, location=bq_location, model=bqml_model
)
bqml_eval_metrics_raw = bqml_evaluate_op.outputs["evaluation_metrics"]
# Analyzes evaluation BQML metrics using a custom component.
interpret_bqml_evaluation_metrics_op = interpret_bqml_evaluation_metrics(
bqml_evaluation_metrics=bqml_eval_metrics_raw
)
bqml_eval_metrics = interpret_bqml_evaluation_metrics_op.outputs["metrics"]
# Exports the BQML model to a GCS bucket using a pre-built-component.
bqml_export_op = bq_components.BigqueryExportModelJobOp(
project=project,
location=bq_location,
model=bqml_model,
model_destination_path=bqml_model_export_location,
).after(bqml_evaluate_op)
bqml_exported_gcs_path = bqml_export_op.outputs["exported_model_path"]
# Uploads the recently exported BQML model from GCS into Vertex AI using a pre-built-component.
bqml_model_upload_op = vertex_pipeline_components.ModelUploadOp(
project=project,
location=region,
display_name=DISPLAY_NAME + "_bqml",
artifact_uri=bqml_exported_gcs_path,
serving_container_image_uri=bqml_serving_container_image_uri,
)
bqml_vertex_model = bqml_model_upload_op.outputs["model"]
# Creates a Vertex AI Tabular dataset using a pre-built-component.
dataset_create_op = vertex_pipeline_components.TabularDatasetCreateOp(
project=project,
location=region,
display_name=DISPLAY_NAME,
bq_source=split_datasets_op.outputs["dataset_bq_uri"],
)
# Trains an AutoML Tables model using a pre-built-component.
automl_training_op = vertex_pipeline_components.AutoMLTabularTrainingJobRunOp(
project=project,
location=region,
display_name=f"{DISPLAY_NAME}_automl",
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
predefined_split_column_name="split_col",
dataset=dataset_create_op.outputs["dataset"],
target_column="Rings",
column_transformations=[
{"categorical": {"column_name": "Sex"}},
{"numeric": {"column_name": "Length"}},
{"numeric": {"column_name": "Diameter"}},
{"numeric": {"column_name": "Height"}},
{"numeric": {"column_name": "Whole_weight"}},
{"numeric": {"column_name": "Shucked_weight"}},
{"numeric": {"column_name": "Viscera_weight"}},
{"numeric": {"column_name": "Shell_weight"}},
{"numeric": {"column_name": "Rings"}},
],
)
automl_model = automl_training_op.outputs["model"]
# Analyzes evaluation AutoML metrics using a custom component.
automl_eval_op = interpret_automl_evaluation_metrics(
region=region, model=automl_model
)
automl_eval_metrics = automl_eval_op.outputs["metrics"]
# 1) Decides which model is best (AutoML vs BQML);
# 2) Determines if the best model meets the deployment condition.
best_model_task = select_best_model(
metrics_bqml=bqml_eval_metrics,
metrics_automl=automl_eval_metrics,
thresholds_dict_str=thresholds_dict_str,
)
# If the deploy condition is True, then deploy the best model.
with dsl.Condition(
best_model_task.outputs["deploy_decision"] == "true",
name="deploy_decision",
):
# Creates a Vertex AI endpoint using a pre-built-component.
endpoint_create_op = vertex_pipeline_components.EndpointCreateOp(
project=project,
location=region,
display_name=endpoint_display_name,
)
endpoint_create_op.after(best_model_task)
# In case the BQML model is the best...
with dsl.Condition(
best_model_task.outputs["best_model"] == "bqml",
name="deploy_bqml",
):
# Deploys the BQML model (now on Vertex AI) to the recently created endpoint using a pre-built component.
model_deploy_bqml_op = (
vertex_pipeline_components.ModelDeployOp( # noqa: F841
endpoint=endpoint_create_op.outputs["endpoint"],
model=bqml_vertex_model,
deployed_model_display_name=DISPLAY_NAME + "_best_bqml",
dedicated_resources_machine_type="n1-standard-2",
dedicated_resources_min_replica_count=2,
dedicated_resources_max_replica_count=2,
traffic_split={
"0": 100
}, # newly deployed model gets 100% of the traffic
).set_caching_options(False)
)
# Sends an online prediction request to the recently deployed model using a custom component.
validate_infrastructure(
endpoint=endpoint_create_op.outputs["endpoint"]
).set_caching_options(False).after(model_deploy_bqml_op)
# In case the AutoML model is the best...
with dsl.Condition(
best_model_task.outputs["best_model"] == "automl",
name="deploy_automl",
):
# Deploys the AutoML model to the recently created endpoint using a pre-built component.
model_deploy_automl_op = (
vertex_pipeline_components.ModelDeployOp( # noqa: F841
endpoint=endpoint_create_op.outputs["endpoint"],
model=automl_model,
deployed_model_display_name=DISPLAY_NAME + "_best_automl",
dedicated_resources_machine_type="n1-standard-2",
dedicated_resources_min_replica_count=2,
dedicated_resources_max_replica_count=2,
traffic_split={
"0": 100
}, # newly deployed model gets 100% of the traffic
).set_caching_options(False)
)
# Sends an online prediction request to the recently deployed model using a custom component.
validate_infrastructure(
endpoint=endpoint_create_op.outputs["endpoint"]
).set_caching_options(False).after(model_deploy_automl_op)
Explanation: The Pipeline
End of explanation
compiler.Compiler().compile(
pipeline_func=train_pipeline,
package_path=PIPELINE_JSON_PKG_PATH,
)
vertex.init(project=PROJECT_ID, location=REGION)
# TODO 5: Run the pipeline job
pipeline_job = vertex.PipelineJob(
display_name=DISPLAY_NAME,
template_path=PIPELINE_JSON_PKG_PATH,
pipeline_root=PIPELINE_ROOT,
parameter_values=pipeline_params,
enable_caching=False,
)
response = pipeline_job.submit()
Explanation: Running the Pipeline
End of explanation
pipeline_job.wait()
Explanation: Wait for the pipeline to complete
Currently, your pipeline is running asynchronous by using the submit() method. To have run it synchronously, you would have invoked the run() method.
In this last step, you block on the asynchronously executed waiting for completion using the wait() method.
End of explanation
vertex.init(project=PROJECT_ID, location=REGION)
delete_bucket = False
print("Will delete endpoint")
endpoints = vertex.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
vertex.Endpoint.delete(endpoint)
print("Deleted endpoint:", endpoint)
print("Will delete models")
suffix_list = ["bqml", "automl", "best"]
for suffix in suffix_list:
try:
model_display_name = f"{DISPLAY_NAME}_{suffix}"
print("Will delete model with name " + model_display_name)
models = vertex.Model.list(
filter=f"display_name={model_display_name}", order_by="create_time"
)
model = models[0]
vertex.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
print("Will delete Vertex dataset")
datasets = vertex.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
vertex.TabularDataset.delete(dataset)
print("Deleted Vertex dataset:", dataset)
pipelines = vertex.PipelineJob.list(
filter=f"pipeline_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
vertex.PipelineJob.delete(pipeline)
print("Deleted pipeline:", pipeline)
# Construct a BigQuery client object.
bq_client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)
# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
dataset_id = f"{PROJECT_ID}.{BQ_DATASET}"
print(f"Will delete BQ dataset '{dataset_id}' from location {BQ_LOCATION}.")
# Use the delete_contents parameter to delete a dataset and its contents.
# Use the not_found_ok parameter to not receive an error if the dataset has already been deleted.
bq_client.delete_dataset(
dataset_id, delete_contents=True, not_found_ok=True
) # Make an API request.
print(f"Deleted BQ dataset '{dataset_id}' from location {BQ_LOCATION}.")
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the notebook.
Otherwise, you can delete the individual resources you created in this notebook:
End of explanation |
13,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Previous
Step1: Its three main tables are vis.Session, vis.Condition, and vis.Trial. Furthermore, vis.Condition has a number of dependent tables below it that specify the parameters specific to each type of stimulus condition.
Step2: Each vis.Session comprises multiple trials, each of which has exactly one condition. Each trial carries timing information and refers to the general stimulus condition.
The type of condition is determined by the dependent tables of vis.Condition (e.g. vis.Monet, vis.Trippy, vis.MovieClipCond) that describe details that are specific to each type of visual stimulus. Some of these tables have lookup tables with cached data-intensive stimuli such as noise movies. | Python Code:
%pylab inline
import datajoint as dj
from pipeline.vis import *
matplotlib.rcParams['figure.figsize'] = (10.0, 9.0)
Explanation: Previous: pipeline_experiment | Next: pipeline_preprocess
Schema vis
Schema pipeline.vis is upstream from preprocess and it contains information about the visual stimulus entered by the stimulus software.
Note that stimulus information used to be contained in the psy module. It was moved to vis to isolate information relevant to the MICrONS project.
End of explanation
(dj.ERD(Condition)+1+Session+Trial).draw()
Explanation: Its three main tables are vis.Session, vis.Condition, and vis.Trial. Furthermore, vis.Condition has a number of dependent tables below it that specify the parameters specific to each type of stimulus condition.
End of explanation
(dj.ERD(MovieClipCond)+Monet-1).draw()
Explanation: Each vis.Session comprises multiple trials, each of which has exactly one condition. Each trial carries timing information and refers to the general stimulus condition.
The type of condition is determined by the dependent tables of vis.Condition (e.g. vis.Monet, vis.Trippy, vis.MovieClipCond) that describe details that are specific to each type of visual stimulus. Some of these tables have lookup tables with cached data-intensive stimuli such as noise movies.
End of explanation |
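As a rough sketch of how these tables can be combined (an added example; the restriction key below is hypothetical and depends on the actual primary-key attributes), DataJoint's relational operators join trials with their conditions and a specific condition type:
# join trials with their conditions and restrict to one (hypothetical) session key
session_key = {'animal_id': 123, 'session': 1}
movie_trials = Trial * Condition & MovieClipCond & session_key
print(len(movie_trials))                     # number of trials that showed a movie clip
trial_info = movie_trials.fetch(as_dict=True)  # pull the joined tuples into Python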
13,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Now that I've streamlined the MCMC process, I am going to submit multiple chains simultaneously. This notebook will make multiple, similar config files, for broad comparison.
This may be rolled into pearce as a helper function, I haven't decided.
For rmin 0, 0.5, 1.0
Step2: Vpeak SHAM | Python Code:
import yaml
import copy
from os import path
import numpy as np
orig_cfg_fname = '/home/users/swmclau2/Git/pearce/bin/mcmc/nh_gg_sham_hsab_mcmc_config.yaml'
with open(orig_cfg_fname, 'r') as yamlfile:
orig_cfg = yaml.load(yamlfile)
orig_cfg
orig_sbatch_fname = '/home/users/swmclau2/Git/pearce/bin/mcmc/nh_gg_sham_hsab_mcmc_config.sbatch'
with open(orig_sbatch_fname, 'r') as f:
lines = []
for line in f:
lines.append(line)
orig_sbatch = ''.join(lines)
#this will enable easier string formatting
sbatch_template = """#!/bin/bash
#SBATCH --job-name={jobname}
#SBATCH --time=8:00:00
#SBATCH -p kipac
#SBATCH -o /home/users/swmclau2/Git/pearce/bin/mcmc/config/{jobname}.out
#SBATCH --ntasks=16
###SBATCH --exclusive
module load python/2.7.13
module load py-scipystack
module load hdf5/1.10.0p1
python /home/users/swmclau2/Git/pearce/pearce/inference/initialize_mcmc.py {jobname}.yaml
python /home/users/swmclau2/Git/pearce/pearce/inference/run_mcmc.py {jobname}.yaml
"""
#emu fnames
# only sending out the HOD one cuz the others aren't yet finished
emu_fnames = ['/scratch/users/swmclau2/xi_gg_zheng07/PearceXiggCosmo.hdf5',\
'/scratch/users/swmclau2/xi_gg_corrab_zheng07/PearceXiggCosmoCorrAB.hdf5',
'/scratch/users/swmclau2/xi_gg_hsab_zheng07/PearceXiggHSABCosmo.hdf5']
emu_cov_fnames = ['/home/users/swmclau2/Git/pearce/bin/covmat/xi_gg_nh_emu_cov_v4.npy',
'/home/users/swmclau2/Git/pearce/bin/covmat/xi_gg_nh_emu_corrab_cov_v4.npy',
'/home/users/swmclau2/Git/pearce/bin/covmat/xi_gg_nh_emu_hsab_cov_v4.npy']
emu_names = ['HOD', 'CorrAB', 'HSAB']
#meas_cov = np.load('/home/users/swmclau2/Git/pearce/notebooks/meas_cov_testboxes_gg.npy')
meas_cov = np.load('/home/users/swmclau2/Git/pearce/bin/covmat/xi_gg_darksky_cov.npy')
# prep full covs
full_covs = []
for emu_name, emu_fname, cov_fname in zip(emu_names, emu_fnames, emu_cov_fnames):
cov = np.load(cov_fname)
full_cov = cov+meas_cov
fname = '/home/users/swmclau2/Git/pearce/notebooks/%s_full_cov.npy'%emu_name
np.savetxt(fname, full_cov)
full_covs.append(fname)
cpv = np.array([0.02214, 0.1175, -1, 0.9676, 3.0819, 0.6881*100, 3.04]) #darksky
cpn = ['ombh2', 'omch2', 'w0', 'ns', 'ln10As', 'H0', 'Neff']
cat_val_dict = dict(zip(cpn, cpv))
Explanation: Now that I've streamlined the MCMC process, I am going to submit multiple chains simultaneously. This notebook will make multiple, similar config files, for broad comparison.
This may be rolled into pearce as a helper function, I haven't decided.
For rmin 0, 0.5, 1.0:
For no ab, HSAB and CorrAB emu:
Vpeak sham
Mpeak sham
HOD
HSAB HOD
End of explanation
tmp_cfg = copy.deepcopy(orig_cfg)
directory = "/home/users/swmclau2/Git/pearce/bin/mcmc/config/"
output_dir = "/home/users/swmclau2/scratch/PearceMCMC/"
jobname_template = "VpeakSHAM_xi_gg_rmin_{rmin}_{emu_name}_fixed_cosmo"
for rmin in [None, 0.5, 1.0]:
for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, full_covs):
if rmin is not None:
tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}
tmp_cfg['chain']['fixed_params'].update(cat_val_dict)
tmp_cfg['emu']['training_file'] = emu_fname
tmp_cfg['data']['true_data_fname']= ['/home/users/swmclau2/Git/pearce/bin/shams/ds14b_sub_xi_gg.npy']
tmp_cfg['data']['true_cov_fname'] = [emu_cov]
tmp_cfg['chain']['nsteps'] = 20000
jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)
tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')
with open(path.join(directory, jobname +'.yaml'), 'w') as f:
yaml.dump(tmp_cfg, f)
with open(path.join(directory, jobname + '.sbatch'), 'w') as f:
f.write(sbatch_template.format(jobname=jobname))
Explanation: Vpeak SHAM
End of explanation |
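A small follow-up sketch (not in the original notebook) for submitting every generated job script to Slurm once the configs are written:
from glob import glob
from subprocess import call
# assumes `directory` from the cell above; sbatch is the standard Slurm submission command
for sbatch_fname in sorted(glob(path.join(directory, 'VpeakSHAM_xi_gg_rmin_*.sbatch'))):
    print('Submitting', sbatch_fname)
    call(['sbatch', sbatch_fname])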
13,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMA ROC Portfolio
1. The Security is above its 200-day moving average
2. The Security closes with sma_roc > 0, buy.
3. If the Security closes with sma_roc < 0, sell your long position.
(For a Portfolio of securities.)
Step1: Yahoo finance cryptocurrencies
Step2: Define Optimizations
Step3: Run Strategy
Step4: Summarize results
Step5: Bar graphs
Step6: Run Benchmark
Step7: Equity curve | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: SMA ROC Portfolio
1. The Security must be above its 200-day moving average.
2. If the Security closes with sma_roc > 0, buy.
3. If the Security closes with sma_roc < 0, sell your long position.
(For a Portfolio of securities.)
End of explanation
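The actual rules live in the accompanying strategy module (not shown here); the toy pandas sketch below only illustrates the signal logic described above, with assumed column names and lookback lengths:
import pandas as pd
def sma_roc_long_signal(close, sma_timeperiod=20, roc_timeperiod=252, regime_period=200):
    # smoothed rate-of-change: percent ROC averaged over `sma_timeperiod` bars
    roc = close.pct_change(roc_timeperiod) * 100
    sma_roc = roc.rolling(sma_timeperiod).mean()
    # regime filter: price above its long moving average
    regime = close > close.rolling(regime_period).mean()
    # long when the smoothed ROC is positive and the regime filter passes; flat otherwise
    return ((sma_roc > 0) & regime).astype(int)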
# Symbol Lists
BitCoin = ['BTC-USD']
CryptoCurrencies_2016 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD',
'XEM-USD', 'DASH-USD', 'MAID-USD', 'LSK-USD', 'DOGE-USD']
# 'DAO-USD' is a dead coin, so missing from above
CryptoCurrencies_2017 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'ETC-USD',
'XEM-USD', 'MIOTA-USD', 'DASH-USD', 'BTS-USD']
# 'STRAT-USD' last trade date is 2020-11-18, so removed
CryptoCurrencies_2018 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'BCH-USD', 'EOS-USD',
'LTC-USD', 'XLM-USD', 'ADA-USD', 'TRX-USD', 'MIOTA-USD']
CryptoCurrencies_2019 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'BCH-USD',
'EOS-USD', 'BNB-USD', 'USDT-USD', 'BSV-USD', 'CRO-USD']
Stocks_Bonds_Gold_Crypto = ['SPY', 'QQQ', 'TLT', 'GLD', 'BTC-USD']
# Set 'continuous_timeseries' : False (for mixed asset classes)
start_1900 = datetime.datetime(1900, 1, 1)
start_2016 = datetime.datetime(2016, 6, 26)
start_2017 = datetime.datetime(2017, 6, 25)
start_2018 = datetime.datetime(2018, 6, 24)
start_2019 = datetime.datetime(2019, 6, 30)
# Pick one of the above symbols and start pairs
symbols = CryptoCurrencies_2016
start = start_2016
capital = 10000
end = datetime.datetime.now()
# NOTE: Cryptocurrencies have 7 days a week timeseries. You can test them with
# their entire timeseries by setting stock_market_calendar=False. Alternatively,
# to trade with stock market calendar by setting stock_market_calendar=True.
# For mixed asset classes that include stocks or ETFs, you must set
# stock_market_calendar=True.
options = {
'use_adj' : False,
'use_cache' : True,
'use_continuous_calendar' : False,
'force_stock_market_calendar' : True,
'stop_loss_pct' : 1.0,
'margin' : 1,
'lookback' : 1,
'sma_timeperiod': 20,
'sma_pct_band': 0,
'use_regime_filter' : False,
'use_vola_weight' : True
}
Explanation: Yahoo finance cryptocurrencies:
https://finance.yahoo.com/cryptocurrencies/
10 largest Crypto currencies from 5 years ago:
https://coinmarketcap.com/historical/20160626/
10 largest Crypto currencies from 4 years ago:
https://coinmarketcap.com/historical/20170625/
10 largest Crypto currencies from 3 years ago:
https://coinmarketcap.com/historical/20180624/
10 largest Crypto currencies from 2 years ago:
https://coinmarketcap.com/historical/20190630/
Some global data
End of explanation
# pick one
optimize_sma_timeperiod = False
optimize_sma_pct_band = True
# define SMAs ranges
if optimize_sma_timeperiod:
Xs = range(5, 40, 5)
Xs = [str(X) for X in Xs]
# define band ranges
elif optimize_sma_pct_band:
Xs = range(0, 11, 1)
Xs = [str(X) for X in Xs]
Explanation: Define Optimizations
End of explanation
strategies = pd.Series(dtype=object)
for X in Xs:
print(X, end=" ")
if optimize_sma_timeperiod:
options['sma_timeperiod'] = int(X)
elif optimize_sma_pct_band:
options['sma_pct_band'] = int(X)
strategies[X] = strategy.Strategy(symbols, capital, start, end, options)
strategies[X].run()
Explanation: Run Strategy
End of explanation
metrics = ('annual_return_rate',
'max_closed_out_drawdown',
'annualized_return_over_max_drawdown',
'best_month',
'worst_month',
'sharpe_ratio',
'sortino_ratio',
'monthly_std',
'pct_time_in_market',
'total_num_trades',
'pct_profitable_trades',
'avg_points')
df = pf.optimizer_summary(strategies, metrics)
df
Explanation: Summarize results
End of explanation
pf.optimizer_plot_bar_graph(df, 'annual_return_rate')
pf.optimizer_plot_bar_graph(df, 'sharpe_ratio')
pf.optimizer_plot_bar_graph(df, 'max_closed_out_drawdown')
Explanation: Bar graphs
End of explanation
s = strategies[Xs[0]]
benchmark = pf.Benchmark('BTC-USD', capital, s.start, s.end, use_adj=True)
benchmark.run()
Explanation: Run Benchmark
End of explanation
if optimize_sma_timeperiod: Y = '20'
elif optimize_sma_pct_band: Y = '3'
pf.plot_equity_curve(strategies[Y].dbal, benchmark=benchmark.dbal)
labels = []
for strategy in strategies:
if optimize_sma_timeperiod:
label = strategy.options['sma_timeperiod']
elif optimize_sma_pct_band:
label = strategy.options['sma_pct_band']
labels.append(label)
pf.plot_equity_curves(strategies, labels)
Explanation: Equity curve
End of explanation |
13,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clean UK schools data
This notebook cleans several UK schools data sets and joins them with other data sources, such as deprivation data.
Main datasets
Step1: Utility functions
Some utility functions to cast raw values to the proper data types.
Step2: Read and clean SPINE data
Load and clean SPINE data. For each school the following fields are loaded
Step3: Read and clean census data
Load and clean some schools census data, including
Step4: Read and clean workforce data
Load and clean school workforce data, such as
Step5: Read and clean spending data
Load school spending data
Step6: Read and clean deprivation data
Load UK deprivation data
Step7: Read and clean key stage 2 final data
Load final key stage 2 data. Including
Step8: Merge datasets
This function loads all the different datasets, merges them together, and optionally saves the result to disk.
import pandas as pd
import numpy as np
Explanation: Clean UK schools data
This notebook cleans several UK schools data sets and joins them with other data sources, such as deprivation data.
Main datasets:
* gov.uk Comparing School Website
* English indices of deprivation 2015
End of explanation
def is_int(value):
try:
int(value)
return True
except ValueError:
return False
def is_float(value):
try:
float(value)
return True
except ValueError:
return False
to_float = lambda x: float(x if is_float(x) else np.nan)
to_int = lambda x: int(x) if is_int(x) else np.nan
Explanation: Utility functions
Some utility functions to cast raw values to the proper data types.
End of explanation
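A quick demonstration of the casting helpers (added for illustration):
print(to_int('42'), to_int('NE'), to_float('3.5'), to_float('SUPP'))
# 42 nan 3.5 nan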
def clean_spine(file_path):
def la_codes(file_path):
la_codes = pd.read_csv(file_path, usecols=['LEA', 'LA Name'])
la_codes.columns = ['la', 'la name']
return la_codes
la_codes = la_codes('/project/uk-schools-clustering/data/meta/la_and_region_codes_meta.csv')
spine = pd.read_csv(
file_path,
usecols=['URN', 'LA', 'SCHNAME', 'LOCALITY', 'TOWN', 'POSTCODE'],
dtype={
'URN': 'object'
}
)
spine['POSTCODE'] = spine['POSTCODE'].str.replace(' ', '')
spine.columns = ['urn', 'la', 'name', 'locality', 'town', 'postcode']
return pd.merge(spine, la_codes, on=['la']).drop_duplicates('urn')
clean_spine('/project/uk-schools-clustering/data/raw/2016-2017_england_spine.csv').sample(5)
Explanation: Read and clean SPINE data
Load and clean SPINE data. For each school the following fields are loaded:
* school URN
* la
* school name
* locality
* town
* postcode
End of explanation
def clean_census(file_path):
census = pd.read_csv(
file_path, usecols=['URN', 'NOR', 'NORG', 'NORB', 'NUMEAL', 'NUMENGFL', 'PNUMFSM'],
converters={
'NOR': to_int,
'NORG': to_int,
'NORB': to_int,
'NUMEAL': to_int,
'NUMENGFL': to_int,
'PNUMFSM': to_float
}
)
census['on free meal'] = (census['NORG']*census['PNUMFSM']) / 100
census['on free meal'] = round(census['on free meal'])
census.drop(inplace=True, columns=['PNUMFSM'])
census.columns = ['urn', 'total pupils on roll',
'girls on roll', 'boys on roll',
'english not first language', 'english first language', 'on free meal']
return census.drop_duplicates('urn')
clean_census('/project/uk-schools-clustering/data/raw/2016-2017_england_census.csv')
Explanation: Read and clean census data
Load and clean some schools census data, including:
* total number of pupils on roll
* number of girls on roll
* number of boys on roll
* number of pupils which english is not first language
* number of pupils which english is first language
* number of pupils on free meals
End of explanation
def clean_workforce(file_path):
clean_salary = lambda x : x.replace('£', '').replace(',','.')
workforce = pd.read_csv(
file_path,
usecols=['URN', 'Total Number of Teachers (Headcount)', 'Mean Gross FTE Salary of All Teachers'],
dtype={'URN': object},
converters={
'Total Number of Teachers (Headcount)': to_int,
'Mean Gross FTE Salary of All Teachers': lambda x: to_float(clean_salary(x))
}
)
workforce.columns = ['urn', 'teacher headcount', 'mean salary fte']
return workforce
clean_workforce('/project/uk-schools-clustering/data/raw/2016-2017_england_swf.csv')
Explanation: Read and clean workforce data
Load and clean school workforce data, such as:
* Total number of teachers (headcount)
* Mean gross fulltime teacher salary
End of explanation
def clean_spending(file_path):
clean_value = lambda x : x.replace(',','.')
to_float = lambda x: float(clean_value(x) if is_float(clean_value(x)) else np.nan)
spending = pd.read_csv(
file_path,
usecols=['URN', 'TOTALINCOME', 'TOTALEXPENDITURE'],
dtype={
'URN': 'object'
},
converters={
'TOTALINCOME': lambda x : to_float(x),
'TOTALEXPENDITURE': lambda x : to_float(x)
}
)
spending.columns = ['urn', 'total income pp', 'total expenditure pp']
return spending
clean_spending('/project/uk-schools-clustering/data/raw/2016-2017_england_cfr.csv')
Explanation: Read and clean spending data
Load school spending data:
* Total school income per pupil
* Total school expenditure per pupil
End of explanation
def clean_deprivation(file_path):
deprivation = pd.read_csv(
file_path,
usecols=['Postcode', 'Income Score', 'Employment Score', 'IDACI Score'],
converters={
'Postcode' : lambda s : s.replace(' ', ''),
'Income Score': lambda x : to_float(x),
'Employment Score': lambda x : to_float(x),
'IDACI Score': lambda x : to_float(x)
}
)
deprivation.columns = ['postcode', 'income score', 'empl score', 'idaci score']
return deprivation
clean_deprivation('/project/uk-schools-clustering/data/raw/deprivation-by-postcode-2015.csv')
Explanation: Read and clean deprivation data
Load UK deprivation data:
* Income score
* Employment score
* IDACI score
End of explanation
def clean_k2final(file_path):
def clean_percent(percent_str):
percent_candidate = percent_str.replace('%', '')
return to_float(percent_candidate) / 100
k2final = pd.read_csv(
file_path,
usecols=['URN', 'PTREAD_EXP',
'PTMAT_EXP', 'PTGPS_EXP', 'PTWRITTA_EXP',
'READ_AVERAGE', 'GPS_AVERAGE', 'MAT_AVERAGE'
],
converters={
'PTREAD_EXP' : clean_percent,
'PTMAT_EXP' : clean_percent,
'PTGPS_EXP' : clean_percent,
'PTWRITTA_EXP' : clean_percent,
'READ_AVERAGE' : to_int,
'GPS_AVERAGE' : to_int,
'MAT_AVERAGE' : to_int
}
)
k2final.rename(columns={
'URN':'urn',
'PTREAD_EXP': 'perc pupils meeting reading standard',
'PTMAT_EXP': 'perc pupils meeting math standard',
'PTGPS_EXP': 'perc pupils meeting grammar standard',
'PTWRITTA_EXP': 'perc pupils meeting writing standard',
'READ_AVERAGE': 'avg reading scaled score',
'GPS_AVERAGE': 'avg grammar scaled score',
'MAT_AVERAGE': 'avg math scaled score'
}, inplace=True)
return k2final
clean_k2final('/project/uk-schools-clustering/data/raw/2016-2017_england_ks2final.csv')
Explanation: Read and clean key stage 2 final data
Load final key stage 2 data. Including:
* math, writing, reading and grammar average scaled scores
* percentage of pupils meeting math, writing, reading and grammar standards
End of explanation
def get_data(save_to = None, columns = None):
spine = clean_spine('/project/uk-schools-clustering/data/raw/2016-2017_england_spine.csv')
census = clean_census('/project/uk-schools-clustering/data/raw/2016-2017_england_census.csv')
workforce = clean_workforce('/project/uk-schools-clustering/data/raw/2016-2017_england_swf.csv')
spending = clean_spending('/project/uk-schools-clustering/data/raw/2016-2017_england_cfr.csv')
deprivation = clean_deprivation('/project/uk-schools-clustering/data/raw/deprivation-by-postcode-2015.csv')
k2final = clean_k2final('/project/uk-schools-clustering/data/raw/2016-2017_england_ks2final.csv')
result = pd.merge(spine, census, on=['urn'])
result = pd.merge(result, deprivation, on=['postcode'])
result = pd.merge(result, workforce, on=['urn'])
result = pd.merge(result, spending, on=['urn'])
result = pd.merge(result, k2final, on=['urn'])
result.dropna(axis=0, subset=[
'total income pp',
'idaci score',
'mean salary fte',
'perc pupils meeting reading standard',
'perc pupils meeting grammar standard',
'perc pupils meeting math standard',
'avg reading scaled score',
'avg grammar scaled score',
'avg math scaled score'
], how='any', inplace=True)
# result.dropna(axis=0, how='all', inplace=True)
if columns is None:
columns_to_select = result.columns
else:
columns_to_select = columns
if save_to is not None:
result[columns_to_select].to_csv(save_to, index=False)
return result[columns_to_select]
get_data(
'/project/uk-schools-clustering/data/derived/2016-2017_england.csv',
columns=['urn', 'name', 'english first language', 'girls on roll',
'english not first language','total income pp', 'total pupils on roll', 'on free meal',
'idaci score', 'teacher headcount','boys on roll', 'mean salary fte', 'total expenditure pp',
'income score', 'empl score', 'perc pupils meeting reading standard',
'perc pupils meeting math standard', 'perc pupils meeting grammar standard', 'perc pupils meeting writing standard',
'avg reading scaled score','avg grammar scaled score','avg math scaled score']
)
Explanation: Merge datasets
This function loads all the different datasets, merges them together, and optionally saves the result to disk.
End of explanation |
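As a quick sanity check (added, not in the original notebook), the saved file can be read back and inspected:
check = pd.read_csv('/project/uk-schools-clustering/data/derived/2016-2017_england.csv')
print(check.shape)
print(check[['idaci score', 'mean salary fte', 'avg math scaled score']].describe())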
13,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='inputs_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
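As a tiny standalone illustration of the leaky ReLU formula above (added; it uses NumPy only and is separate from the TensorFlow graph):
x = np.linspace(-3, 3, 7)
print(np.maximum(0.01 * x, x))  # negatives are scaled by alpha=0.01, positives pass through unchanged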
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
# CHANGE - had smoothing before
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \
labels=tf.ones_like(d_logits_fake))) # had smoothing before
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
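A one-line illustration of the label smoothing mentioned above (added): with smooth = 0.1 the "real" targets become 0.9 instead of 1.0:
print(np.ones(4) * (1 - 0.1))  # [0.9 0.9 0.9 0.9]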
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [g_var for g_var in t_vars if g_var.name.startswith('generator')]
d_vars = [d_var for d_var in t_vars if d_var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
# CHANGE
!mkdir checkpoints
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
13,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring data
Step1: Salary
Replacing NA values with empty strings in the salary column
Step2: Extracting equity
Step3: Extracting currency and high - low salary
Need to extract currency, salary_low and salary_high from salary field and copy it to their own columns.
Using regex here to capture parts of the salary field into three columns
Step4: Location
We need better location information, so we can do analysis by countries and cities. For this we need to extract country, state and city out of location column. But first let's remove the na values from location column.
Then use a lambda to split the location into individual fields.
Step5: Fixing US locations
US locations seems to be special. They are in the form of city, state, we need this to be in form of city, state, country, so let's fix this first.
If we have a US state in location_1 column then put US in location_2.
Step6: Filling the state and country columns
If in a row location_2 is null then location_1 contains the country of that location, if location_2 is not empty thne location_2 is going to be the country and location_1 will contain the state.
Step7: Now we can see what countries are posting the most jobs. It seems that the US, Deutschland, Germany and the UK are the top countries. But wait. Aren't Germany and Deutschland are the same country? Let's fix this and some other countries with native names.
Step8: Top cities | Python Code:
jobs.columns
jobs.dtypes
jobs.describe()
jobs.head()
jobs.tail()
Explanation: Exploring data
End of explanation
jobs.salary = jobs.salary.fillna('')
Explanation: Salary
Replacing NA values with empty strings in the salary column
End of explanation
jobs['equity'] = jobs['salary'].str.contains('Provides Equity')
Explanation: Extracting equity
End of explanation
# salary = jobs.salary
salary = jobs.salary.map(lambda x: x.replace("Provides Equity","").replace("/","").strip())
sal = salary.str.extract('(?P<currency>[^\d]*)(?P<number_low>[\d,]+) - (?P<number_high>[\d,]+$)')
sal.number_low = sal.number_low.fillna(0)
sal.number_high = sal.number_high.fillna(0)
sal.currency = sal.currency.fillna('')
# mapping the new columns back
jobs['currency'] = sal.currency
jobs['salary_low'] = sal.number_low
jobs['salary_high'] = sal.number_high
Explanation: Extracting currency and high - low salary
Need to extract currency, salary_low and salary_high from salary field and copy it to their own columns.
Using regex here to capture parts of the salary field into three columns:
- currency will capture zero or more characters that are non digits
- number_low captures one or more characters that are digits and spearators (currently only comma is used)
- number high will capture all the numbers plus separators from the dash until the end of the string
End of explanation
jobs.location = jobs.location.fillna('') # sometimes we have nothing in the location field.
location_split = lambda x: pd.Series([i for i in x.split(',')])
locations = jobs['location'].apply(location_split)
locations.rename(columns={0:'city', 1: 'location_1', 2: 'location_2'},inplace=True)
Explanation: Location
We need better location information, so we can do analysis by countries and cities. For this we need to extract country, state and city out of location column. But first let's remove the na values from location column.
Then use a lambda to split the location into individual fields.
End of explanation
# Fixing US States
us_states = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL", "GA",
"HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
"MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
"NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
"SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"]
locations['location_1'] = locations['location_1'].str.strip()
locations.loc[locations['location_1'].isin(us_states),'location_2'] = "US"
Explanation: Fixing US locations
US locations seems to be special. They are in the form of city, state, we need this to be in form of city, state, country, so let's fix this first.
If we have a US state in location_1 column then put US in location_2.
End of explanation
# if location_2 is null then location_1 column has the country
# if location_2 is not null then location_2 has the country and location_1 contains the state
jobs['country'] = np.where(locations['location_2'].isnull(), locations['location_1'], locations['location_2'])
jobs['state'] = np.where(locations['location_2'].notnull(), locations['location_1'], '')
jobs['city'] = locations['city']
# filling na for country
jobs.country = jobs.country.fillna('')
# stripping spaces from new columns
jobs['city'] = jobs['city'].str.strip()
jobs['country'] = jobs['country'].str.strip()
Explanation: Filling the state and country columns
If in a row location_2 is null then location_1 contains the country of that location, if location_2 is not empty thne location_2 is going to be the country and location_1 will contain the state.
End of explanation
# replacing some of the country names with their english version
jobs.loc[jobs['country'].str.contains('Deutschland'),'country'] = 'Germany' # Deutschland -> Germany
jobs.loc[jobs['country'].str.contains('Österreich'),'country'] = 'Austria' # Österreich -> Austria
jobs.loc[jobs['country'].str.contains('Suisse'), 'country'] = 'Switzerland' # Suisse -> Switzerland
jobs.loc[jobs['country'].str.contains('Schweiz'), 'country'] = 'Switzerland' # Schweiz -> Switzerland
jobs.loc[jobs['country'].str.contains('Espagne'), 'country'] = 'Spain' # Espagne -> Spain
jobs.loc[jobs['country'].str.contains('République tchèque'), 'country'] = 'Czech Republic' # République tchèque -> Czech Republic
jobs.loc[jobs['country'].str.contains('Niederlande'), 'country'] = 'Netherlands' # Niederlande -> Netherlands
jobs['country'].value_counts().head()
top_cities = jobs['city'].value_counts().nlargest(100)
Explanation: Now we can see what countries are posting the most jobs. It seems that the US, Deutschland, Germany and the UK are the top countries. But wait. Aren't Germany and Deutschland are the same country? Let's fix this and some other countries with native names.
End of explanation
top_cities.head(20)
# saving the result to csv
top_cities.to_csv('../data/top_cities.csv')
Explanation: Top cities
End of explanation |
13,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 1
Imports
Step3: Word counting
Write a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic
Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step7: Write a function sort_word_counts that return a list of sorted word counts
Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research... | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
Explanation: Algorithms Exercise 1
Imports
End of explanation
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
"""Split a string into a list of words, removing punctuation and stop words."""
s = s.lower()
s = s.splitlines()
s = " ".join(s)
# make stop_words into a list
if type(stop_words) == str:
stop_words = stop_words.split(" ")
#remove punctuation
for element in punctuation:
s = list(filter(lambda a: a!=element, list(s)))
s = "".join(s)
# at this point the puncutation and \n have been removed
# make it a list of words
s = s.split(" ")
#filter out the empty spaces
s = list(filter(lambda x: x!= '', s))
s = " ".join(s)
final = s.split(" ")
if stop_words != None:
for j in range(len(stop_words)):
final = list(filter(lambda x: x!= stop_words[j], final))
return final
tokenize("hello my name\nis natasha proctor!\nAnd I want to be done with this.", stop_words=["hello", "with", "this"])
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
Explanation: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurrences of the words in the list.
If stop_words is a space-delimited string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
def count_words(data):
"""Return a word count dictionary from the list of words in data."""
r = {data[i]: data.count(data[i]) for i in range(len(data))}
return r
count_words(tokenize('this and the this from and a a a'))
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
def sort_word_counts(wc):
"""Return a list of 2-tuples of (word, count), sorted by count descending."""
zipped = list(zip(wc.keys(), wc.values()))
answer = sorted(zipped, key= lambda a: a[1], reverse=True)
return answer
sort_word_counts({'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2})
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
Explanation: Write a function sort_word_counts that returns a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the highest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
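An equivalent one-liner (shown only as a sketch) sorts the dictionary items directly:
```python
def sort_word_counts_alt(wc):
    # sort (word, count) pairs by count, largest first
    return sorted(wc.items(), key=lambda kv: kv[1], reverse=True)
```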
mobydick = open("mobydick_chapter1.txt", "r")
swc = sort_word_counts(count_words(tokenize(mobydick.read(), stop_words="the of and a to in is it that as")))
mobydick.close()
print(len(swc))
assert swc[0]==('i',43)
assert len(swc)==843
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, then sort and save the result in a variable named swc.
End of explanation
X = [x[1] for x in swc[0:50]]
Y = np.linspace(0,50, 50)
label = [x[0] for x in swc[0:50]]
plt.figure(figsize=(10,10))
plt.plot(X,Y, 'bo')
plt.ylim(-1, 51)
plt.xlim(0,45)
ax = plt.gca()
ax.invert_yaxis()
plt.xlabel("Amount Used")
plt.title("50 Most Used Words in Moby Dick")
plt.yticks(Y, label)
plt.show()
assert True # use this for grading the dotplot
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation |
13,196 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Right now, I have my data in a 3D numpy array. If I was to use MinMaxScaler fit_transform on each matrix of the array, it will normalize it column by column, whereas I wish to normalize entire matrices. Is there anyway to do that? | Problem:
import numpy as np
from sklearn.preprocessing import MinMaxScaler
a = np.array([[[1, 0.5, -2], [-0.5,1, 6], [1,1,1]], [[-2, -3, 1], [-0.5, 10, 6], [1,1,1]]])
scaler = MinMaxScaler()
result = np.zeros_like(a)
for i, arr in enumerate(a):
a_one_column = arr.reshape(-1, 1)
result_one_column = scaler.fit_transform(a_one_column)
result[i, :, :] = result_one_column.reshape(arr.shape) |
13,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Введение в численные методы оптимизации (Ю. Е. Нестеров Введение в выпуклую оптимизацию, гл. 1 $\S$ 1.1)
Обзор материала весеннего семестра
Постановка задачи
Общая схема решения
Сравнение методов оптимизации
Обзор материала весеннего семестра
Также на странице курса.
Методы решения задач безусловной оптимизации
Градиентный спуск и способы его ускорения
Метод Ньютона
Квазиньютоновские методы
Метод сопряжённых градиентов
Решение задачи наименьших квадратов
Безградиентные методы
Стохастические методы
Методы решения задач условной оптимизации
Методы проекции градиента и условного градиента
Проксимальные методы
Методы штрафных и барьерных функций
Метод модифицированой функции Лагранжа
Постановка задачи
\begin{equation}
\begin{split}
& \min_{x \in S} f_0(x)\
\text{s.t. } & f_j(x) = 0, \; j = 1,\ldots,m\
& g_k(x) \leq 0, \; k = 1,\ldots,p
\end{split}
\end{equation}
где $S \subseteq \mathbb{R}^n$, $f_j
Step1: Значение теорем сходимости (Б.Т. Поляк Введение в оптимизацию, гл. 1, $\S$ 6)
Что дают теоремы сходимости
класс задач, для которых можно рассчитывать на применимость метода (важно не завышать условия!)
выпуклость
гладкость
качественное поведение метода
существенно ли начальное приближение
по какому функционалу есть сходимость
оценку скорости сходимости
теоретическая оценка поведения метода без проведения экспериментов
определение факторов, которые влияют на сходимость (обусловленность, размерность, etc)
иногда заранее можно выбрать число итераций для достижения заданной точности
Что НЕ дают теоремы сходимости
сходимость метода ничего не говорит о целесообразности его применения
оценки сходимости зависят от неизвестных констант - неконструктивный характер
учёт ошибок округления и точности решения вспомогательных задач
Мораль
Step2: $f(x) = x\log x$
Step3: Backtracking
```python
def SelectStepSize(x, f, h, rho, alpha0, beta1, beta2)
Step4: Выбор шага
Реализации различных способов выбора шага приведены тут
Зависимость от обусловленности матрицы $f''(x)$
Рассмотрим задачу
$$
\min f(x),
$$
где
$$ f(x) = x^{\top}Ax, \; A = \begin{bmatrix} 1 & 0\ 0 & \gamma \end{bmatrix} $$
$$
f'(x) = 2Ax
$$
Step5: При неудачном начальном приближении сходимость для плохо обусловенной задачи очень медленная
При случайном начальном приближении сходимость может быть гораздо быстрее теоретических оценок
Эксперимент на многомерной задаче
Пусть $A \in \mathbb{R}^{m \times n}$. Рассмотрим систему линейных неравенств
Step6: Решение с помощью градиентного спуска
Step7: Подробнее про jax, его возможности и особенности можно посмотреть например тут | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
USE_COLAB = False
if not USE_COLAB:
plt.rc("text", usetex=True)
import numpy as np
C = 10
alpha = -0.5
q = 0.9
num_iter = 10
sublinear = np.array([C * k**alpha for k in range(1, num_iter + 1)])
linear = np.array([C * q**k for k in range(1, num_iter + 1)])
superlinear = np.array([C * q**(k**2) for k in range(1, num_iter + 1)])
quadratic = np.array([C * q**(2**k) for k in range(1, num_iter + 1)])
plt.figure(figsize=(12,8))
plt.semilogy(np.arange(1, num_iter+1), sublinear,
label=r"Sublinear, $\alpha = -0.5$", linewidth=5)
plt.semilogy(np.arange(1, num_iter+1), superlinear, linewidth=5,
label=r"Superlinear, $q = 0.5, p=2$")
plt.semilogy(np.arange(1, num_iter+1), linear,
label=r"Linear, $q = 0.5$", linewidth=5)
plt.semilogy(np.arange(1, num_iter+1), quadratic,
label=r"Quadratic, $q = 0.5$", linewidth=5)
plt.xlabel("Number of iterations, $k$", fontsize=28)
plt.ylabel("Error rate upper bound", fontsize=28)
plt.legend(loc="best", fontsize=26)
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28)
Explanation: Introduction to numerical optimization methods (Yu. E. Nesterov, Introduction to Convex Optimization, ch. 1 $\S$ 1.1)
Overview of the spring semester material
Problem statement
General solution scheme
Comparison of optimization methods
Overview of the spring semester material
Also available on the course page.
Methods for unconstrained optimization problems
Gradient descent and ways to accelerate it
Newton's method
Quasi-Newton methods
Conjugate gradient method
Solving the least squares problem
Derivative-free methods
Stochastic methods
Methods for constrained optimization problems
Gradient projection and conditional gradient methods
Proximal methods
Penalty and barrier function methods
Augmented (modified) Lagrangian method
Problem statement
\begin{equation}
\begin{split}
& \min_{x \in S} f_0(x)\\
\text{s.t. } & f_j(x) = 0, \; j = 1,\ldots,m\\
& g_k(x) \leq 0, \; k = 1,\ldots,p
\end{split}
\end{equation}
where $S \subseteq \mathbb{R}^n$, $f_j: S \rightarrow \mathbb{R}, \; j = 0,\ldots,m$, $g_k: S \rightarrow \mathbb{R}, \; k=1,\ldots,p$.
All functions are at least continuous.
Important fact: nonlinear optimization problems in their most general form are numerically unsolvable!
Analytical results
First-order necessary condition:
if $x^*$ is a local minimum of a differentiable function $f(x)$, then
$$
f'(x^*) = 0
$$
Second-order necessary condition:
if $x^*$ is a local minimum of a twice differentiable function $f(x)$, then
$$
f'(x^*) = 0 \quad \text{and} \quad f''(x^*) \succeq 0
$$
Sufficient condition:
let $f(x)$ be a twice differentiable function, and let the point $x^*$ satisfy
$$
f'(x^*) = 0, \quad f''(x^*) \succ 0,
$$
then $x^*$ is a strict local minimum of $f(x)$.
Remark: make sure you understand how to prove these results!
Specifics of the numerical solution
Solving the problem exactly is impossible in principle because of floating-point round-off errors
A criterion for detecting a solution has to be specified
One has to decide what information about the problem to use
General iterative scheme
Given: an initial guess $x$ and a required accuracy $\varepsilon$.
```python
def GeneralScheme(x, epsilon):
    while StopCriterion(x) > epsilon:
        OracleResponse = RequestOracle(x)
        UpdateInformation(I, x, OracleResponse)
        x = NextPoint(I, x)
    return x
```
Questions
What stopping criteria can be used?
What is an oracle and why is it needed?
What is the information model?
How is the next point computed?
Stopping criteria
Convergence in the argument:
$$
\| x_k - x^* \|_2 < \varepsilon
$$
Convergence in the function value:
$$
\| f_k - f^* \|_2 < \varepsilon
$$
Satisfaction of the necessary condition
$$
\| f'(x_k) \|_2 < \varepsilon
$$
But $x^*$ is unknown!
Then
\begin{align*}
& \|x_{k+1} - x_k \| = \|x_{k+1} - x_k + x^* - x^* \| \leq \\
& \|x_{k+1} - x^* \| + \| x_k - x^* \| \leq 2\varepsilon
\end{align*}
Similarly for convergence in the function value; note, however, that $f^*$ can sometimes be estimated!
Remark: it is better to use relative changes of these quantities!
For example $\dfrac{\|x_{k+1} - x_k \|_2}{\| x_k \|_2}$
What is an oracle?
Definition: an oracle is an abstract device that answers the successive questions posed by the method
An OOP analogy:
the oracle is a virtual method of the base class
each problem is a derived class
the oracle is defined for each problem separately, according to the general definition in the base class
The black box concept
1. The only information obtained while an iterative method runs is the oracle's answers
2. The oracle's answers are local
Information about the problem
Each oracle answer gives local information about the behaviour of the function at a point
By aggregating all the oracle answers received, we update the information about the global shape of the objective function:
curvature
descent direction
etc
Computing the next point
$$
x_{k+1} = x_{k} + \alpha_k h_k
$$
Line search: the direction $h_k$ is fixed and an "optimal" value of $\alpha_k$ is searched for along this direction
Trust region method: an admissible region size $\| \cdot \| \leq \alpha$ in some norm is fixed, together with a model of the objective function that approximates it well inside the chosen region.
Then a direction $h_k$ is sought that minimizes the model of the objective and keeps the point $x_k + h_k$ inside the trust region
Questions
How to choose $\alpha_k$?
How to choose $h_k$?
How to choose the model?
How to choose the region?
How to choose the size of the region?
In this course only line search is considered!
However, the trust region concept will be used a few times.
How to compare optimization methods?
For a given class of problems the following quantities are compared:
1. Complexity
- analytical: the number of oracle calls needed to solve the problem to accuracy $\varepsilon$
- arithmetic: the total number of computations needed to solve the problem to accuracy $\varepsilon$
2. Convergence rate
3. Experiments
Convergence rates
1. Sublinear
$$
\| x_{k+1} - x^* \|_2 \leq C k^{\alpha},
$$
where $\alpha < 0$ and $ 0 < C < \infty$
2. Linear (geometric progression)
$$
\| x_{k+1} - x^* \|_2 \leq Cq^k,
$$
where $q \in (0, 1)$ and $ 0 < C < \infty$
3. Superlinear
$$
\| x_{k+1} - x^* \|_2 \leq Cq^{k^p},
$$
where $q \in (0, 1)$, $ 0 < C < \infty$ and $p > 1$
4. Quadratic
$$
\| x_{k+1} - x^* \|_2 \leq C\| x_k - x^* \|^2_2, \qquad \text{or} \qquad \| x_{k+1} - x^* \|_2 \leq C q^{2^k}
$$
where $q \in (0, 1)$ and $ 0 < C < \infty$
End of explanation
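As a small illustration of the relative stopping criteria recommended above (an added sketch, not part of the original notebook; the helper name is hypothetical):
```python
import numpy as np

def relative_stop(x_new, x_old, f_new, f_old, eps=1e-6):
    # relative change in the iterate and in the objective value
    dx = np.linalg.norm(x_new - x_old) / max(1.0, np.linalg.norm(x_old))
    df = abs(f_new - f_old) / max(1.0, abs(f_old))
    return dx < eps and df < eps
```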
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
import ipywidgets as ipywidg
import numpy as np
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
from tqdm import tqdm
f = lambda x: np.power(x, 2)
gradf = lambda x: 2 * x
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
def update(x0, step):
gd = methods.fo.GradientDescent(f, gradf, ss.ConstantStepSize(step))
_ = gd.solve(np.array([x0]), max_iter=10)
x_hist = gd.get_convergence()
x = np.linspace(-5, 5)
ax.clear()
ax.plot(x, f(x), color="r", label="$f(x) = x^2$")
y_hist = np.array([f(x) for x in x_hist])
x_hist = np.array(x_hist)
plt.quiver(x_hist[:-1], y_hist[:-1], x_hist[1:]-x_hist[:-1], y_hist[1:]-y_hist[:-1],
scale_units='xy', angles='xy', scale=1, width=0.005, color="green", label="Descent path")
ax.legend()
fig.canvas.draw()
step_slider = ipywidg.FloatSlider(value=0.8, min=0, max=1.2, step=0.1, description="Step")
x0_slider = ipywidg.FloatSlider(value=1.5, min=-4, max=4, step=0.1, description="Initial point")
_ = ipywidg.interact(update, x0=x0_slider, step=step_slider)
def plot_alpha(f, grad, x, h, alphas, beta1, beta2):
df = np.zeros_like(alphas)
for i, alpha in enumerate(alphas):
df[i] = f(x + alpha * h)
upper_bound = f(x) + beta1 * alphas * grad(x) * h
lower_bound = f(x) + beta2 * alphas * grad(x) * h
plt.plot(alphas, df, label=r"$f(x + \alpha h)$")
plt.plot(alphas, upper_bound, label="Upper bound")
plt.plot(alphas, lower_bound, label="Lower bound")
plt.xlabel(r"$\alpha$", fontsize=18)
plt.legend(loc="best", fontsize=18)
f = lambda x: x**2
grad = lambda x: 2 * x
beta1 = 0.1
beta2 = 0.9
x0 = 0.5
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 1.01, 10), beta1, beta2)
Explanation: The value of convergence theorems (B. T. Polyak, Introduction to Optimization, ch. 1, $\S$ 6)
What convergence theorems give us
the class of problems for which the method can be expected to work (it is important not to overstate the assumptions!)
convexity
smoothness
the qualitative behaviour of the method
whether the initial guess matters
in which functional the convergence takes place
an estimate of the convergence rate
a theoretical estimate of the method's behaviour without running experiments
identification of the factors that affect convergence (conditioning, dimension, etc.)
sometimes the number of iterations needed for a given accuracy can be chosen in advance
What convergence theorems do NOT give us
convergence of a method says nothing about whether it is reasonable to apply it
convergence estimates depend on unknown constants - they are non-constructive
they do not account for round-off errors or for the accuracy of solving auxiliary problems
Moral: reasonable caution and common sense are required!
Classification of problems
Unconstrained optimization
the objective function is Lipschitz
the gradient of the objective function is Lipschitz
Constrained optimization
a polyhedron
a set with simple structure
a set of general form
Classification of methods
How much history has to be stored for the update?
One-step methods
$$
x_{k+1} = \Phi(x_k)
$$
Multi-step methods
$$
x_{k+1} = \Phi(x_k, x_{k-1}, ...)
$$
What order of derivatives has to be computed?
Zeroth-order methods: the oracle returns only the function value $f(x)$
First-order methods: the oracle returns the function value $f(x)$ and its gradient $f'(x)$
Second-order methods: the oracle returns the function value $f(x)$, its gradient $f'(x)$ and the Hessian $f''(x)$.
Q: do methods of even higher order exist?
A: Implementable tensor methods in unconstrained convex optimization by Y. Nesterov, 2019
Descent methods. Gradient descent
What are descent methods?
The sequence $x_k$ is generated by the rule
$$
x_{k+1} = x_k + \alpha_k h_k
$$
so that
$$
f(x_{k+1}) < f(x_k)
$$
The direction $h_k$ is called a descent direction.
Remark: there exist methods that do not require a monotone decrease of the function from iteration to iteration.
```python
def DescentMethod(f, x0, epsilon, **kwargs):
    x = x0
    while StopCriterion(x, f, **kwargs) > epsilon:
        h = ComputeDescentDirection(x, f, **kwargs)
        alpha = SelectStepSize(x, h, f, **kwargs)
        x = x + alpha * h
    return x
```
Way 1: a descent direction
Consider the linear approximation of a differentiable function $f$ along some descent direction $h, \|h\|_2 = 1$:
$$
f(x + \alpha h) = f(x) + \alpha \langle f'(x), h \rangle + o(\alpha)
$$
From the descent condition
$$
f(x) + \alpha \langle f'(x), h \rangle + o(\alpha) < f(x)
$$
and passing to the limit as $\alpha \rightarrow 0$:
$$
\langle f'(x), h \rangle \leq 0
$$
Also, by the Cauchy-Bunyakovsky-Schwarz inequality
$$
\langle f'(x), h \rangle \geq -\| f'(x) \|_2 \| h \|_2 = -\| f'(x) \|_2
$$
Therefore the anti-gradient direction
$$
h = -\dfrac{f'(x)}{\|f'(x)\|_2}
$$
gives the direction of steepest local decrease of the function$~f$.
As a result, the method takes the form
$$
x_{k+1} = x_k - \alpha f'(x_k)
$$
Way 2: the Euler scheme for solving an ODE
Consider an ordinary differential equation of the form:
$$
\frac{dx}{dt} = -f'(x(t))
$$
and discretize it on a uniform grid with step $\alpha$:
$$
\frac{x_{k+1} - x_k}{\alpha} = -f'(x_k),
$$
where $x_k \equiv x(t_k)$ and $\alpha = t_{k+1} - t_k$ is the grid step.
This gives the expression for $x_{k+1}$
$$
x_{k+1} = x_k - \alpha f'(x_k),
$$
which coincides exactly with the gradient descent update.
This scheme is called the explicit or forward Euler scheme.
Q: which scheme is called implicit or backward?
Way 3: minimizing a quadratic upper bound
(A. V. Gasnikov, "Universal gradient descent method", https://arxiv.org/abs/1711.00394)
A global upper bound on the function $f$ at the point $x_k$:
$$
f(y) \leq f(x_k) + \langle f'(x_k), y - x_k \rangle + \frac{L}{2} \|y - x_k \|_2^2 = g(y),
$$
where $\lambda_{\max}(f''(x)) \leq L$ for all admissible $x$.
The right-hand side is a quadratic form whose minimizer has a closed-form expression:
\begin{align*}
& g'(y^*) = 0 \\
& f'(x_k) + L (y^* - x_k) = 0 \\
& y^* = x_k - \frac{1}{L}f'(x_k) = x_{k+1}
\end{align*}
This way also gives an estimate of the step size, $\frac{1}{L}$. However, the constant $L$ is often unknown.
Summary: the gradient descent method is cheap and cheerful
```python
def GradientDescentMethod(f, x0, epsilon, **kwargs):
    x = x0
    while StopCriterion(x, f, **kwargs) > epsilon:
        h = ComputeGradient(x, f, **kwargs)
        alpha = SelectStepSize(x, h, f, **kwargs)
        x = x - alpha * h
    return x
```
How to choose the step $\alpha_k$? (J. Nocedal, S. Wright, Numerical Optimization, $\S$ 3.1.)
List of approaches:
- Constant step
$$
\alpha_k = \overline{\alpha}
$$
An a priori given sequence, for example
$$
\alpha_k = \dfrac{\overline{\alpha}}{\sqrt{k+1}}
$$
Steepest descent
$$
\alpha_k = \arg\min_{\alpha \geq 0} f(x_k - \alpha f'(x_k))
$$
The sufficient decrease requirement, the significant decrease requirement and the curvature condition: for some $\beta_1, \beta_2$ such that $0 < \beta_1 < \beta_2 < 1$, find $x_{k+1}$ such that
Sufficient decrease: $f(x_{k+1}) \leq f(x_k) + \beta_1 \alpha_k \langle f'(x_k), h_k \rangle$ or
$ f(x_k) - f(x_{k+1}) \geq \beta_1 \alpha_k \langle f'(x_k), h_k \rangle
$
Significant decrease: $f(x_{k+1}) \geq f(x_k) + \beta_2 \alpha_k \langle f'(x_k), h_k \rangle$ or
$
f(x_k) - f(x_{k+1}) \leq \beta_2 \alpha_k \langle f'(x_k), h_k \rangle
$
Curvature condition: $\langle f'(x_{k+1}), h_k \rangle \geq \beta_2 \langle f'(x_k), h_k \rangle$
The coefficients are usually chosen as $\beta_1 \in (0, 0.3)$ and $\beta_2 \in (0.9, 1)$
Analysis and motivation of the step-size selection approaches
Constant step: the simplest and least efficient choice
A priori given sequence: only slightly better than a constant step
Steepest descent: the best choice, but applicable only if the auxiliary problem can be solved analytically or veeeery fast.
That is, it is almost never applicable :)
The sufficient decrease requirement, the significant decrease requirement and the curvature condition:
the sufficient decrease requirement guarantees that the function value at $x_{k+1}$ does not exceed the linear approximation with slope coefficient $\beta_1$
the significant decrease requirement guarantees that the function value at $x_{k+1}$ decreases by no less than the linear approximation with slope coefficient $\beta_2$
the curvature condition guarantees that the slope of the tangent at $x_{k+1}$ is no less than the slope of the tangent at $x_k$,
multiplied by $\beta_2$
The significant decrease requirement and the curvature condition both ensure decrease of the function along the chosen direction $h_k$. Usually one of them is chosen.
Alternative names
Sufficient decrease requirement $\equiv$ Armijo rule
Sufficient decrease requirement + curvature condition $\equiv$ Wolfe rule
Sufficient decrease requirement + significant decrease requirement $\equiv$ Goldstein rule
Why is the significant decrease condition needed?
End of explanation
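To connect the step-size rules above to the experiments that follow, here is a comparison sketch. It reuses only the liboptpy constructors that appear elsewhere in this notebook (ConstantStepSize, Backtracking("Armijo", ...), ExactLineSearch4Quad); treat their exact signatures as assumptions copied from those cells.
```python
import numpy as np
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss

A = np.array([[1., 0.], [0., 10.]])
f = lambda x: 0.5 * x.dot(A.dot(x))
gradf = lambda x: A.dot(x)
x0 = np.array([10., 1.])

step_rules = {
    "constant": ss.ConstantStepSize(0.1),
    "Armijo backtracking": ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.),
    "exact (quadratic)": ss.ExactLineSearch4Quad(A),
}
for name, rule in step_rules.items():
    gd = methods.fo.GradientDescent(f, gradf, rule)
    gd.solve(x0, max_iter=100, tol=1e-6)
    print(name, "iterations:", len(gd.get_convergence()))
```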
x_range = np.linspace(1e-10, 4)
plt.plot(x_range, x_range * np.log(x_range))
x0 = 1
f = lambda x: x * np.log(x)
grad = lambda x: np.log(x) + 1
beta1 = 0.3
beta2 = 0.7
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 0.9, 10), beta1, beta2)
Explanation: $f(x) = x\log x$
End of explanation
def GradientDescent(f, gradf, x0, epsilon, num_iter, line_search,
disp=False, callback=None, **kwargs):
x = x0.copy()
iteration = 0
opt_arg = {"f": f, "grad_f": gradf}
for key in kwargs:
opt_arg[key] = kwargs[key]
while True:
gradient = gradf(x)
alpha = line_search(x, -gradient, **opt_arg)
x = x - alpha * gradient
if callback is not None:
callback(x)
iteration += 1
if disp:
print("Current function val =", f(x))
print("Current gradient norm = ", np.linalg.norm(gradf(x)))
if np.linalg.norm(gradf(x)) < epsilon:
break
if iteration >= num_iter:
break
res = {"x": x, "num_iter": iteration, "tol": np.linalg.norm(gradf(x))}
return res
Explanation: Backtracking
```python
def SelectStepSize(x, f, grad_f, h, rho, alpha0, beta1, beta2):
    # 0 < rho < 1
    # alpha0 - initial guess of the step size
    # beta1 and beta2 - constants from the conditions
    alpha = alpha0
    # Shrink alpha while the sufficient decrease and curvature conditions are violated
    while (f(x - alpha * h) >= f(x) + beta1 * alpha * grad_f(x).dot(h)) and \
          (grad_f(x - alpha * h).dot(h) <= beta2 * grad_f(x).dot(h)):
        alpha *= rho
    return alpha
```
Convergence theorems (B. T. Polyak, Introduction to Optimization, ch. 1, $\S$ 4; ch. 3, $\S$ 1; Yu. E. Nesterov, Introduction to Convex Optimization, $\S$ 2.2)
From the general to the particular:
Theorem 1.
Let
$f(x)$ be differentiable on $\mathbb{R}^n$,
the gradient of $f(x)$ satisfy the Lipschitz condition with constant $L$
$f(x)$ be bounded below
$\alpha = const$ and $0 < \alpha < \frac{2}{L}$
Then for the gradient method
$$
\lim\limits_{k \to \infty} f'(x_k) = 0,
$$
and the function decreases monotonically, $f(x_{k+1}) < f(x_k)$.
Theorem 2. Let
- $f(x)$ be differentiable on $\mathbb{R}^n$
- $f(x)$ be convex
- $f'(x)$ satisfy the Lipschitz condition with constant $L$
- $\alpha = \dfrac{1}{L}$
Then
$$
f(x_k) - f^* \leq \dfrac{2L \| x_0 - x^*\|^2_2}{k+4}
$$
Theorem 3.
Let
- $f(x)$ be twice differentiable and $\mu\mathbf{I} \preceq f''(x) \preceq L\mathbf{I}$ for all $x$
- $\alpha = const$ and $0 < \alpha < \frac{2}{L}$
Then
$$
\| x_k - x^*\|_2 \leq \|x_0 - x^*\|_2 q^k, \qquad q = \max(|1 - \alpha \mu|, |1 - \alpha L|) < 1
$$
and the minimal value $q^* = \dfrac{L - \mu}{L + \mu}$ is attained at $\alpha^* = \dfrac{2}{L + \mu}$
What does $q^*$ depend on and how can this be used?
From Theorem 3 we have
$$
q^* = \dfrac{L - \mu}{L + \mu} = \dfrac{L/\mu - 1}{L/\mu + 1} = \dfrac{M - 1}{M + 1},
$$
where $M$ is an estimate of the condition number of $f''(x)$.
Question: what is the condition number of a matrix?
For $M \gg 1$, $q^* \to 1 \Rightarrow$ veeery slow convergence of the gradient method. For example, for $M = 100$: $q^* \approx 0.98 $
For $M \simeq 1$, $q^* \to 0 \Rightarrow$ faster convergence of the gradient method. For example, for $M = 4$: $q^* = 0.6 $
Question: what is the geometry behind this requirement?
Moral: the estimate of $M$ has to be made as close to 1 as possible!
How to do this is something you will be asked to think about in the homework :)
Computational aspects and experiments
Each step of the method only needs to store the current point and the gradient vector: $O(n)$ memory
Finding $\alpha_k$:
given a priori
found from the analytical solution of the steepest descent subproblem
terminates in a finite number of steps
Each step of the method needs to compute a linear combination of vectors: $O(n)$ operations + high-performance implementations
Implementation of gradient descent
End of explanation
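A quick numerical illustration of the $q^* = (M-1)/(M+1)$ bound from Theorem 3, tied to the conditioning experiment below (an added sketch, not from the original notebook):
```python
import numpy as np

for gamma in [2, 10, 100]:
    mu, L = 1.0, float(gamma)        # eigenvalues of the Hessian, up to a common constant factor
    q_star = (L - mu) / (L + mu)     # optimal linear rate from Theorem 3
    print(f"condition number {L / mu:6.1f}: q* = {q_star:.4f}")
```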
def my_f(x, A):
return 0.5 * x.dot(A.dot(x))
def my_gradf(x, A):
return A.dot(x)
plt.rc("text", usetex=True)
gammas = [0.1, 0.5, 1, 2, 3, 4, 5, 10, 20, 50, 100, 1000, 5000, 10000]
# gammas = [1]
num_iter_converg = []
for g in gammas:
A = np.array([[1, 0],
[0, g]], dtype=np.float64)
f = lambda x: my_f(x, A)
gradf = lambda x: my_gradf(x, A)
# x0 = np.random.rand(A.shape[0])
# x0 = np.sort(x0)
# x0 = x0[::-1]
x0 = np.array([g, 1], dtype=np.float64)
# print x0[1] / x0[0]
gd = methods.fo.GradientDescent(f, gradf, ss.ExactLineSearch4Quad(A))
x = gd.solve(x0, tol=1e-7, max_iter=100)
num_iter_converg.append(len(gd.get_convergence()))
plt.figure(figsize=(8, 6))
plt.loglog(gammas, num_iter_converg)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.xlabel(r"$\gamma$", fontsize=20)
plt.ylabel(r"Number of iterations with $\varepsilon = 10^{-7}$", fontsize=20)
Explanation: Choosing the step size
Implementations of the various step-size selection rules are available here
Dependence on the conditioning of the matrix $f''(x)$
Consider the problem
$$
\min f(x),
$$
where
$$ f(x) = x^{\top}Ax, \; A = \begin{bmatrix} 1 & 0\\ 0 & \gamma \end{bmatrix} $$
$$
f'(x) = 2Ax
$$
End of explanation
import numpy as np
import cvxpy as cvx
n = 1000
m = 2000
A = np.random.rand(n, m)
x = cvx.Variable(n)
obj = cvx.Minimize(cvx.sum(-cvx.log(1 - A.T * x)) -
cvx.sum(cvx.log(1 - cvx.square(x))))
prob = cvx.Problem(obj)
prob.solve(solver="SCS", verbose=True)
x = x.value
print("Optimal value =", prob.value)
Explanation: With an unlucky initial guess, convergence on the ill-conditioned problem is very slow
With a random initial guess, convergence can be much faster than the theoretical estimates
Experiment on a multidimensional problem
Let $A \in \mathbb{R}^{m \times n}$. Consider the system of linear inequalities: $Ax \leq 1$ subject to $|x_i| \leq 1$ for all $i$.
Definition. The analytic center of the system of inequalities $Ax \leq 1$ subject to $|x_i| \leq 1$ is the solution of the problem
$$
f(x) = - \sum_{i=1}^m \log(1 - a_i^{\top}x) - \sum_{i = 1}^n \log (1 - x^2_i) \to \min_x
$$
$$
f'(x) - ?
$$
Exact solution with CVXPy
End of explanation
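For completeness, the gradient asked for above works out to the following expression; this is what the hand-coded grad_f lambda in the later cells implements.
$$
f'(x) = \sum_{i=1}^m \frac{a_i}{1 - a_i^{\top}x} + \left(\frac{2x_j}{1 - x_j^2}\right)_{j=1}^n
$$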
import cvxpy as cvx
print(cvx.installed_solvers())
# !pip install jax
# !pip install jaxlib
import jax.numpy as jnp
import jax
# from jax.config import config
# config.update("jax_enable_x64", True)
A = jnp.array(A)
print(A.dtype)
x0 = jnp.zeros(n)
f = lambda x: -jnp.sum(jnp.log(1 - A.T@x)) - jnp.sum(jnp.log(1 - x*x))
grad_f = lambda x: jnp.sum(A @ (jnp.diagflat(1 / (1 - A.T @ x))), \
axis=1) + 2 * x / (1 - jnp.power(x, 2))
grad_f_jax = jax.grad(f)
print(jnp.linalg.norm(grad_f(x0) - grad_f_jax(x0)))
Explanation: Solution via gradient descent
End of explanation
gd = methods.fo.GradientDescent(f, grad_f_jax, ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.))
x = gd.solve(x0, tol=1e-5, max_iter=100, disp=True)
x_conv = gd.get_convergence()
grad_conv = [jnp.linalg.norm(grad_f_jax(x)) for x in x_conv]
plt.figure(figsize=(8,6))
plt.semilogy(grad_conv, label=r"$\| f'(x_k) \|_2$")
plt.semilogy([np.linalg.norm(x - np.array(x_k)) for x_k in x_conv], label=r"$\|x_k - x^*\|_2$")
plt.semilogy([np.linalg.norm(prob.value - f(np.array(x_k))) for x_k in x_conv], label=r"$\|f(x_k) - f^*\|_2$")
plt.semilogy([np.linalg.norm(np.array(x_conv[i]) - np.array(x_conv[i+1])) for i in range(len(x_conv) - 1)], label=r"$\|x_k - x_{k+1}\|_2$")
plt.semilogy([np.linalg.norm(f(np.array(x_conv[i])) - f(np.array(x_conv[i+1]))) for i in range(len(x_conv) - 1)], label=r"$\|f(x_k) - f(x_{k+1})\|_2$")
plt.xlabel(r"Number of iteration, $k$", fontsize=20)
plt.ylabel(r"Convergence rate", fontsize=20)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.legend(loc="best", fontsize=20)
plt.tight_layout()
Explanation: More about jax, its capabilities and peculiarities can be found, for example, here
End of explanation |
13,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Active Subspaces Example Function
Step1: First we draw M samples randomly from the input space.
Step2: Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$.
Step3: Compute gradients to approximate the matrix on which the active subspace is based.
Step4: Now we use our data to compute the active subspace.
Step5: We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values). | Python Code:
import active_subspaces as ac
import numpy as np
%matplotlib inline
# The otlcircuit_functions.py file contains two functions: the circuit function (circuit(xx))
# and its gradient (circuit_grad(xx)). Each takes an Mx6 matrix (M is the number of data
# points) with rows being normalized inputs; circuit returns a column vector of function
# values at each row of the input and circuit_grad returns a matrix whose ith row is the
# gradient of circuit at the ith row of xx with respect to the normalized inputs
from otlcircuit_functions import *
Explanation: Active Subspaces Example Function: Circuit Voltage
Ryan Howard, CO School of Mines, ryhoward@mines.edu
Paul Constantine, CO School of Mines, pconstan@mines.edu
<br>
In this tutorial, we'll be applying active subspaces to the function
$$
V_m = \frac{(V_{b1}+0.74)\beta(R_{c2}+9)}{\beta(R_{c2}+9)+R_f}+\frac{11.35R_f}{\beta(R_{c2}+9)+R_f}+\frac{0.74R_f\beta(R_{c2}+9)}{R_{c1}(\beta(R_{c2}+9)+R_f)},
$$
where $V_{b1} = 12R_{b2}/(R_{b1}+R_{b2})$, as seen on http://www.sfu.ca/~ssurjano/otlcircuit.html. This function models the midpoint voltage of a transformerless push-pull circuit, and its inputs and their distributions are described in the table below.
Variable|Symbol|Distribution (U(min, max))
:-----|:-----:|:-----
resistance b1|$R_{b1}$|U(50, 150)
resistance b2|$R_{b2}$|U(25, 70)
resistance f|$R_f$|U(.5, 3)
resistance c1|$R_{c1}$|U(1.2, 2.5)
resistance c2|$R_{c2}$|U(.25, 1.2)
current gain|$\beta$|U(50, 300)
End of explanation
M = 1000 #This is the number of data points to use
#Sample the input space according to the distributions in the table above
Rb1 = np.random.uniform(50, 150, (M, 1))
Rb2 = np.random.uniform(25, 70, (M, 1))
Rf = np.random.uniform(.5, 3, (M, 1))
Rc1 = np.random.uniform(1.2, 2.5, (M, 1))
Rc2 = np.random.uniform(.25, 1.2, (M, 1))
beta = np.random.uniform(50, 300, (M, 1))
#the input matrix
x = np.hstack((Rb1, Rb2, Rf, Rc1, Rc2, beta))
Explanation: First we draw M samples randomly from the input space.
End of explanation
#Upper and lower limits for inputs
xl = np.array([50, 25, .5, 1.2, .25, 50])
xu = np.array([150, 70, 3, 2.5, 1.2, 300])
#XX = normalized input matrix
XX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x)
Explanation: Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$.
End of explanation
#output values (f) and gradients (df)
f = circuit(XX)
df = circuit_grad(XX)
Explanation: Compute gradients to approximate the matrix on which the active subspace is based.
End of explanation
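For intuition, the matrix that the subspace computation below estimates from these gradient samples is the covariance-of-gradients matrix $C \approx \frac{1}{M}\sum_i \nabla f_i \nabla f_i^{\top}$. The sketch below is illustrative only; the library call performs this internally.
```python
# Illustrative only: the same matrix that ss.compute(df=df) estimates internally
C_hat = df.T.dot(df) / df.shape[0]
evals, evecs = np.linalg.eigh(C_hat)
```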
#Set up our subspace using the gradient samples
ss = ac.subspaces.Subspaces()
ss.compute(df=df, nboot=500)
Explanation: Now we use our data to compute the active subspace.
End of explanation
#Component labels
in_labels = ['Rb1', 'Rb2', 'Rf', 'Rc1', 'Rc2', 'beta']
#plot eigenvalues, subspace errors
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
ac.utils.plotters.subspace_errors(ss.sub_br)
#manually make the subspace 2D for the eigenvector and 2D summary plots
ss.partition(2)
#Compute the active variable values
y = XX.dot(ss.W1)
#Plot eigenvectors, sufficient summaries
ac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels)
ac.utils.plotters.sufficient_summary(y, f)
Explanation: We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values).
End of explanation |
13,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In-Class Coding Lab
Step1: Part 1
Step2: The request
As you learned in class and your assigned readings, the HTTP protocol has verbs which constitute the type of request you will send to the remote resource, or url. Based on the url and request type, you will get a response.
The following line of code makes a get request (that's the HTTP verb) to Google's Geocoding API service. This service attempts to convert the address (in this case Syracuse University) into a set of coordinates global coordinates (Latitude and Longitude), so that location can be plotted on a map.
Step3: The response
The get() method returns a Response object variable. I called it response in this example but it could be called anything.
The HTTP response consists of a status code and body. The status code lets you know if the request worked, while the body of the response contains the actual data.
Step4: De-Serializing JSON Text into Python object variables
In the case of web site url's the response body is HTML. This should be rendered in a web browser. But we're dealing with Web Service API's so...
In the case of web API url's the response body could be in a variety of formats from plain text, to XML or JSON. In this course we will only focus on JSON format because as we've seen these translate easily into Python object variables.
Let's convert the response to a Python object variable.
Step5: With our Python object, we can now walk the list of dictionary to retrieve the latitude and longitude
Step6: In the code above we "walked" the Python list of dictionary to get to the location
geodata is a list
geodata[0] is the first item in that list, a dictionary
geodata[0]['lat'] is a dictionary key which represents the latitude
geodata[0]['lon'] is a dictionary key which represents the longitude
It should be noted that this process will vary for each API you call, so its important to get accustomed to performing this task. You'll be doing it quite often.
One final thing to address. What is the type of lat and lon?
Step7: Bummer they are strings. we want them to be floats so we will need to parse the strings with the float() function
Step8: What did we just do?
At this stage, the process for calling a WebAPI in JSON format using Python is the same, regardless of the API.
Use requests.get(url) to make an HTTP GET request to the url.
Assuming the response.ok we can response.json() to de-serialize the JSON into a Python object.
We then extract the information we need using the typical Python list and dict methods.
1.1 You Code
This url calls the GovTrack API, and retrieves information regarding the current President of the United States.
https
Step9: Part 2
Step10: Looking up any address
RECALL
Step11: This is so useful, it should be a function!
One thing you'll come to realize quickly is that your API calls should be wrapped in functions. This promotes readability and code re-use. For example
Step12: 1.2 You Code
Step13: Other request methods
Not every API we call uses the get() method. Some use post() because the amount of data you provide it too large to place on the url. The HTTP POST method sends input data within the body of the request. It does NOT appear on the URL.
An example of an API that uses this method is the Text-Processing.com sentiment analysis service. http
Step14: In the examples provided we used the post() method instead of the get() method. the post() method has a named argument data which takes a dictionary of data, in HTTP parlance this is referred to as the payload. The payload is a dictionary and for text-processing.com it required a key text which holds the text you would like to process for sentiment.
Here's an example of processing the sentiment of a Tweet
Step15: Applications
Sentiment analysis is a useful tool for getting a sense of the mood of text. Any text can be analyzed and common applications are analyzing social media, blog comments, product reviews, and open-ended sections of surveys.
1.3 You Code
Use the above example to write a program which will input any text and print the sentiment using this API!
Step16: Troubleshooting
When you write code that depends on other people's code from around the Internet, there's a lot that can go wrong. Therefore we perscribe the following advice
Step17: This means the response back we get from "http
Step18: We no longer have a JSONDecodeError We now see the REAL error here an HTTPError response 503.
According to the HTTP Protocol spec, error 5xx means it's the server's problem. No amount of code will fix that. We need a different url.
Let's try this instead
Step19: This no longer has an HTTPError, but now we are back to the JSONDecodeError. The response from the URL cannot be de-serialized from JSON text.
NOW you should check - if the output of the response isn't JSON, what is it?
There are two ways you can do this
Step20: As You can see, the response is
Step21: Now that works!
The first is the raw response, and the second is the Python object.
To demonstrate its a python object, let's extract the IP Address from the origin key.
The intermediate print() statements have been removed since the code now works.
Step22: Guidelines for Rewriting as a function
To make your code clear and easier to read, its a good idea to re-factor your working API call into a function. Here are the guidelines
Step23: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- | Python Code:
# Run this to make sure you have the pre-requisites!
!pip install -q requests
Explanation: In-Class Coding Lab: Understanding The Foundations of Web APIs
Overview
This lab covers the foundations of what is necessary to properly consume HTTP web service APIs with Python. Here's what we will cover.
Understanding requests and responses
Proper error handling
Parameter handling
Refactoring as a function
End of explanation
# start by importing the modules we will need
import requests
import json
Explanation: Part 1: Understanding Requests and responses
In this part we learn about the Python requests module. http://docs.python-requests.org/en/master/user/quickstart/
This module makes it easy to write code to send HTTP requests over the internet and handle the responses. It will be the cornerstone of our API consumption in this course. While there are other modules which accomplish the same thing, requests is the most straightforward and easiest to use.
We'll begin by importing the modules we will need. We do this here so we won't need to include these lines in the other code we write in this lab.
End of explanation
url = 'https://nominatim.openstreetmap.org/search?q=Hinds+Hall+Syracuse+University&format=json'
response = requests.get(url)
Explanation: The request
As you learned in class and your assigned readings, the HTTP protocol has verbs which constitute the type of request you will send to the remote resource, or url. Based on the url and request type, you will get a response.
The following line of code makes a get request (that's the HTTP verb) to the OpenStreetMap Nominatim geocoding API service. This service attempts to convert the address (in this case Syracuse University) into a set of global coordinates (Latitude and Longitude), so that location can be plotted on a map.
End of explanation
response.ok # did the request work?
response.text # what's in the body of the response, as a raw string
Explanation: The response
The get() method returns a Response object variable. I called it response in this example but it could be called anything.
The HTTP response consists of a status code and body. The status code lets you know if the request worked, while the body of the response contains the actual data.
End of explanation
geodata = response.json() # try to decode the response from JSON format
geodata # this is now a Python object variable
Explanation: De-Serializing JSON Text into Python object variables
In the case of web site url's the response body is HTML. This should be rendered in a web browser. But we're dealing with Web Service API's so...
In the case of web API url's the response body could be in a variety of formats from plain text, to XML or JSON. In this course we will only focus on JSON format because as we've seen these translate easily into Python object variables.
Let's convert the response to a Python object variable.
End of explanation
lat = geodata[0]['lat']
lon =geodata[0]['lon']
print(lat, lon)
Explanation: With our Python object, we can now walk the list of dictionary to retrieve the latitude and longitude
End of explanation
type(lat), type(lon)
Explanation: In the code above we "walked" the Python list of dictionary to get to the location
geodata is a list
geodata[0] is the first item in that list, a dictionary
geodata[0]['lat'] is a dictionary key which represents the latitude
geodata[0]['lon'] is a dictionary key which represents the longitude
It should be noted that this process will vary for each API you call, so its important to get accustomed to performing this task. You'll be doing it quite often.
One final thing to address. What is the type of lat and lon?
End of explanation
lat = float(geodata[0]['lat'])
lon = float(geodata[0]['lon'])
print("Latitude: %f, Longitude: %f" % (lat, lon))
Explanation: Bummer they are strings. we want them to be floats so we will need to parse the strings with the float() function:
End of explanation
# TODO Write code here
url = 'https://www.govtrack.us/api/v2/role?current=true&role_type=president'
Explanation: What did we just do?
At this stage, the process for calling a WebAPI in JSON format using Python is the same, regardless of the API.
Use requests.get(url) to make an HTTP GET request to the url.
Assuming the response.ok, we can call response.json() to de-serialize the JSON into a Python object.
We then extract the information we need using the typical Python list and dict methods.
1.1 You Code
This url calls the GovTrack API, and retrieves information regarding the current President of the United States.
https://www.govtrack.us/api/v2/role?current=true&role_type=president
Use requests.get() to retrieve the contents of the API at this url.
Use response.json() to de-serialize the response JSON text to a Python object.
Find and print the name of the current president by locating it within the Python object.
HINT: to figure that out, click on the URL and view the content in your browser.
End of explanation
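One possible solution sketch for 1.1 (the keys objects, person, and name are the ones referenced for this API in exercise 1.2 below):
```python
url = 'https://www.govtrack.us/api/v2/role?current=true&role_type=president'
response = requests.get(url)
response.raise_for_status()
data = response.json()
print(data['objects'][0]['person']['name'])
```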
url = 'https://nominatim.openstreetmap.org/search' # base URL without paramters after the "?"
search = 'Hinds Hall Syracuse University'
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options) # This builds the url
print(f"Requested URL: {response.url}") # print the built url
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
print("Search for:", search)
print("Coordinates:", coords)
print(f"{search} is located at ({coords['lat']},{coords['lng']})")
Explanation: Part 2: Parameter Handling
In the example above we hard-coded current=true and role_type=president into the request:
url = 'https://www.govtrack.us/api/v2/role?current=true&role_type=president'
Likewise in the open stret map example we hard coded in the Hinds Hall Syracuse University part:
url = 'https://nominatim.openstreetmap.org/search?q=Hinds+Hall+Syracuse+University&format=json'
A better way to write this code is to allow for the input of any location and supply that to the service. To make this work we need to send parameters into the request as a dictionary. Parameters end up being built into a Query String on the url which serve as the inputs into the API Request.
This way we can geolocate any address!
You'll notice that on the url, we are passing key-value pairs the key is q and the value is Hinds+Hall+Syracuse+University. The other key is format and the value is json. Hey, Python dictionaries are also key-value pairs so:
End of explanation
url = 'https://nominatim.openstreetmap.org/search' # base URL without paramters after the "?"
search = input("Enter a loacation to Geocode: ")
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options)
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
print("Search for:", search)
print("Coordinates:", coords)
print(f"{search} is located at ({coords['lat']},{coords['lng']})")
Explanation: Looking up any address
RECALL: For requests.get(url, params = options) the part that says params = options is called a named argument, which is Python's way of specifying an optional function argument.
With our parameter now outside the url, we can easily re-write this code to work for any location! Go ahead and execute the code and input Queens, NY. This will retrieve the coordinates (40.728224,-73.794852)
End of explanation
def get_coordinates(search):
url = 'https://nominatim.openstreetmap.org/search' # base URL without paramters after the "?"
options = { 'q' : search, 'format' : 'json'}
response = requests.get(url, params = options)
geodata = response.json()
coords = { 'lat' : float(geodata[0]['lat']), 'lng' : float(geodata[0]['lon']) }
return coords
# main program here:
location = input("Enter a location: ")
coords = get_coordinates(location)
print(f"{search} is located at ({coords['lat']},{coords['lng']})")
Explanation: This is so useful, it should be a function!
One thing you'll come to realize quickly is that your API calls should be wrapped in functions. This promotes readability and code re-use. For example:
End of explanation
from ipywidgets import interact
roles = ['senator', 'representative', 'president', 'vicepresident' ]
@interact(role_type=roles)
def main(role_type):
url = 'https://www.govtrack.us/api/v2/role'
params = { 'current' : 'true', 'role_type' : "?" }
response = requests.get(url, params = params)
print(f"Requested URL: {response.url}")
data = response.json
for item in data['objects']:
print(f"- persons name")
Explanation: 1.2 You Code: Debug
Get this code working!
The GovTrack API allows you to retrieve information about people in Government with 4 different role types: senator, representative, president, vicepresident. For example, when you add role_type=president to the request URL you get the US president, and when you add role_type=senator you get back US senators.
This code should present a drop down of roles. Upon selection, the API is called for that role and then for each object in that role we print the ['person']['name'] as before.
HINT: If you are getting errors, click on the response URL to see the API output.
End of explanation
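For reference, one corrected version of the cell above (the bugs are the unused role_type placeholder in params, the missing parentheses on response.json, and the literal f-string that never uses item):
```python
from ipywidgets import interact
import requests

roles = ['senator', 'representative', 'president', 'vicepresident']

@interact(role_type=roles)
def main(role_type):
    url = 'https://www.govtrack.us/api/v2/role'
    params = {'current': 'true', 'role_type': role_type}   # use the selected role
    response = requests.get(url, params=params)
    print(f"Requested URL: {response.url}")
    data = response.json()                                  # call the method
    for item in data['objects']:
        print(f"- {item['person']['name']}")                # print each person's name
```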
# 'you suck' == 'negative'
url = 'http://text-processing.com/api/sentiment/'
payload = { 'text' : 'you suck'}
response = requests.post(url, data = payload)
sentiment = response.json()
sentiment
# 'I love cheese' == 'positive'
url = 'http://text-processing.com/api/sentiment/'
payload = { 'text' : 'I love cheese'}
response = requests.post(url, data = payload)
sentiment = response.json()
sentiment
Explanation: Other request methods
Not every API we call uses the get() method. Some use post() because the amount of data you provide is too large to place on the url. The HTTP POST method sends input data within the body of the request. It does NOT appear on the URL.
An example of an API that uses this method is the Text-Processing.com sentiment analysis service. http://text-processing.com/docs/sentiment.html This service will detect the sentiment or mood of text. You give the service some text, and it tells you whether that text is positive, negative or neutral. The JSON response has a key called label which provides the sentiment.
Examples:
End of explanation
tweet = "Arnold Schwarzenegger isn't voluntarily leaving the Apprentice, he was fired by his bad (pathetic) ratings, not by me. Sad end to a great show"
url = 'http://text-processing.com/api/sentiment/'
payload = { 'text' : tweet }
response = requests.post(url, data = payload)
sentiment = response.json()
print("TWEET:", tweet)
print("SENTIMENT", sentiment['label'])
Explanation: In the examples provided we used the post() method instead of the get() method. the post() method has a named argument data which takes a dictionary of data, in HTTP parlance this is referred to as the payload. The payload is a dictionary and for text-processing.com it required a key text which holds the text you would like to process for sentiment.
Here's an example of processing the sentiment of a Tweet:
End of explanation
#TODO write code here
Explanation: Applications
Sentiment analysis is a useful tool for getting a sense of the mood of text. Any text can be analyzed and common applications are analyzing social media, blog comments, product reviews, and open-ended sections of surveys.
1.3 You Code
Use the above example to write a program which will input any text and print the sentiment using this API!
End of explanation
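A minimal solution sketch for 1.3, following the tweet example above:
```python
text = input("Enter some text to analyze: ")
url = 'http://text-processing.com/api/sentiment/'
payload = {'text': text}
response = requests.post(url, data=payload)
response.raise_for_status()
sentiment = response.json()
print("TEXT:", text)
print("SENTIMENT:", sentiment['label'])
```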
url = "http://myip.ist256.com"
response = requests.get(url)
data = response.json()
print(data)
Explanation: Troubleshooting
When you write code that depends on other people's code from around the Internet, there's a lot that can go wrong. Therefore we prescribe the following advice:
Assume anything that CAN go wrong WILL go wrong
Let's put this to the test with the following example where we call an API to get the IP Address of the computer making the call.
First Things First: Know Your Errors!
Above all, the #1 thing you should understand are the errors you get from Python and what they mean.
Case in point: This first example, which produces a JSONDecodeError on line 3.
End of explanation
url = "http://myip.ist256.com"
response = requests.get(url)
print(f"Generated Url: {response.url}")
response.raise_for_status()
data = response.json()
print(data)
Explanation: This means the response back we get from "http://myip.ist256.com" cannot be decoded from JSON to a Python object.
You might start looking there but you're making a HUGE assumption that the service "http://myip.ist256.com" is "working".
NEVER make this assumption!
KNOW whether or not its working!
There are a couple ways you can do this:
print the response.url then click on it to see what happens.
make reqests throw an error on unsuccessful HTTP response codes.
Let's do both:
we add print(response.url) to see the actual URL we are sending to the API.
we add response.raise_for_status() which throws an exception if the response is not 200/OK.
End of explanation
url = "https://whatismyipaddress.com/"
response = requests.get(url)
print(f"Generated Url: {response.url}")
response.raise_for_status()
data = response.json()
print(data)
Explanation: We no longer have a JSONDecodeError We now see the REAL error here an HTTPError response 503.
According to the HTTP Protocol spec, error 5xx means it's the server's problem. No amount of code will fix that. We need a different url.
Let's try this instead: https://whatismyipaddress.com/
End of explanation
url = "https://whatismyipaddress.com/"
response = requests.get(url)
print(f"Generated Url: {response.url}")
response.raise_for_status()
print(f"RAW RESPONSE: {response.text}")
data = response.json()
print(data)
Explanation: This no longer has an HTTPError, but now we are back to the JSONDecodeError. The response from the URL cannot be de-serialized from JSON text.
NOW you should check - if the output of the response isn't JSON, what is it?
There are two ways you can do this:
Print the response.url and click on it to see if the output is JSON.
print response.text which is the raw output from the response.
We already have the first, let's add the second.
End of explanation
url = "https://httpbin.org/ip"
response = requests.get(url)
print(f"Generated Url: {response.url}")
response.raise_for_status()
print(f"RAW RESPONSE: {response.text}")
data = response.json()
print(data)
Explanation: As You can see, the response is:
Access Denied (BUA77). Contact [email protected]
which is not at all what we expected. Again no amount of Python code will fix this, we need to call the right API, or change the URL of this API.
As a final step, let's try this service: http://httpbin.org/ip
End of explanation
url = "https://httpbin.org/ip"
response = requests.get(url)
response.raise_for_status()
data = response.json()
print(f"MY IP ADDRESS: {data['origin']}")
Explanation: Now that works!
The first is the raw response, and the second is the Python object.
To demonstrate it's a Python object, let's extract the IP Address from the origin key.
The intermediate print() statements have been removed since the code now works.
End of explanation
# TODO Your Code Here
Explanation: Guidelines for Rewriting as a function
To make your code clear and easier to read, it's a good idea to re-factor your working API call into a function. Here are the guidelines:
DO NOT write the function until you get the code working. ALWAYS re-factor (rewrite) the WORKING code as a function.
One API call per function. Don't do too much!
Inputs into the API call such as query string parameters or POST body text should be function input parameters.
The function should NOT return the entire response unless its required. Only return what is needed.
use response.raise_for_status() to throw HTTPError exceptions. This way you will not be misled when there is a problem with the API and not your code.
DO NOT handle errors in your function or account for contingencies. Error handling is the responsibility of the function's caller.
1.4 You Code
Refactor the code in the cell above into a function iplookup(). Call the function to demonstrate it works.
End of explanation
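One way to refactor the working call above into a function, per the guidelines (a sketch):
```python
def iplookup():
    url = "https://httpbin.org/ip"
    response = requests.get(url)
    response.raise_for_status()        # let the caller handle HTTP errors
    return response.json()['origin']   # return only what is needed

print(f"MY IP ADDRESS: {iplookup()}")
```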
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==--
End of explanation |