Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
10,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of the BioSCRAPE package
Biocircuit Stochastic Simulation of Single Cell Reactions and Parameter Estimation
The purpose of this Python notebook is twofold.
The first is to serve as a quick start guide where you should be able to get started with the package by simply looking at the examples here and copying them to your liking.
The second is as a unit testing replacement. It is hard to unit test stochastic algorithms as the output may not (and should not) be the same thing every time. Therefore, instead, if all the examples included below work well, then you can assume that the package installed correctly and is working fine.
Before getting started, we do some basic plotting configuration and import the numpy library. Advanced users can modify this to their liking.
Step1: Bioscrape Models
Chemical Reactions
Bioscrape models consist of a set of species and a set of reactions (delays will be discussed later). These models can be simulated either stochastically via SSA or deterministically as an ODE. Each reaction is of the form
${INPUTS} \xrightarrow[]{\rho(.)} {OUTPUTS}$
Here, INPUTS represents a multiset of input species and OUTPUTS represents a multiset of output species. The function $\rho(.)$ is either a deterministic rate function or a stochastic propensity. Propensities are identified by their name and require a parameter dictionary with the appropriate parameters. The following propensity types are supported
Step2: Adding Delays
In stochastic simulations, bioscrape also supports delay. In a delay reaction, delay inputs/outputs are consumed/produced after some amount of delay. Reactions may have a mix of delay and non-delay inputs and outputs. Bioscrape innately supports a number of delay-types
Step3: Adding Rules
In deterministic and stochastic simulations, bioscrape also supports rules, which can be used to set species or parameter values during the simulation. Rules are updated at every simulation timepoint - and therefore the model may be sensitive to the timepoint spacing.
In the following example, two rules will be added to the above model (without delay).
$I = I_0 H(t-T)$ where $H$ is the step function. This represents the addition of the inducer I at concentration $I_0$ at some time T. Prior to t=T, I is not present.
$S = M \frac{X}{1+aX}$ represents a saturating signal detected from the species X via some sort of sensor.
Rules can also be used for quasi-steady-state or quasi-equilibrium approximations, computing parameters during the simulation, and much more!
There are two main types of rules
Step4: Saving and Loading Bioscrape Models via Bioscrape XML
Models can be saved and loaded as Bioscrape XML. Here we will save and load the transcription translation model and display the bioscrape XML underneath. Once a model has been loaded, it can be accessed and modified via the API.
Step5: SBML Support
Step6: More on SBML Compatibility
The next cell imports a model from an SBML file and then simulates it using a deterministic simulation. There are limitations to SBML compatibility.
Delays and events are not supported when reading in SBML files. Events will be ignored and a warning will be printed out.
SBML reaction rates must be in a format such that when the reaction rates are converted to a string formula, sympy must be able to parse the formula. This will work fine for usual PEMDAS rates. This will fail for complex function definitions and things like that.
Species will be initialized to their initialAmount field when it is nonzero. If the initialAmount is zero, then the initialConcentration will be used instead.
Multiple compartments or anything related to having compartments will not be supported. No warnings will be provided for this.
Assignment rules are supported, but any other type of rule will be ignored and an associated warning will be printed out.
Parameter names must start with a letter and be alphanumeric, same for species names. Furthermore, log, exp, abs, heaviside, and other associated keywords for functions are not allowed to be variable names. When in doubt, just pick something else
Step7: Deterministic and Stochastic Simulation of the Repressilator
We plot out the repressilator model found <a href="http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
#%config InlineBackend.figure_formats=['svg']
color_list = ['r', 'k', 'b','g','y','m','c']
mpl.rc('axes', prop_cycle=(mpl.cycler('color', color_list) ))
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
import numpy as np
Explanation: Examples of the BioSCRAPE package
Biocircuit Stochastic Simulation of Single Cell Reactions and Parameter Estimation
The purpose of this Python notebook is twofold.
The first is to serve as a quick start guide where you should be able to get started with the package by simply looking at the examples here and copying them to your liking.
The second is as a unit testing replacement. It is hard to unit test stochastic algorithms as the output may not (and should not) be the same thing every time. Therefore, instead, if all the examples included below work well, then you can assume that the package installed correctly and is working fine.
Before getting started, we do some basic plotting configuration and import the numpy library. Advanced users can modify this to their liking.
End of explanation
from bioscrape.simulator import py_simulate_model
from bioscrape.types import Model
#Create a list of species names (strings)
species = ["G", "T", "X", "I"]
#create a list of parameters in the form (param_name[string], param_val[number])
params = [("ktx", 1.5), ("ktl", 10.0), ("KI", 10), ("n", 2.0), ("KR", 20), ("delta", .1)]
#create reaction tuples in the form:
#(Inputs[string list], Outputs[string list], propensity_type[string], propensity_dict {propensity_param:model_param})
rxn1 = (["G"], ["G", "T"], "proportionalhillpositive", {"d":"G", "s1":"I", "k":"ktx", "K":"KI", "n":"n"})
rxn2 = (["T"], ["T", "X"], "hillpositive", {"s1":"T", "k":"ktl", "K":"KR", "n":1}) #Notice that parameters can also take numerical values instead of being named directly
rxn3 = (["T"], [], "massaction", {"k":"delta"})
rxn4 = (["X"], [], "massaction", {"k":"delta"})
#Create a list of all reactions
rxns = [rxn1, rxn2, rxn3, rxn4]
#create an initial condition dictionary; species not included in the dictionary will default to 0
x0 = {"G":1, "I":10}
#Instantiate the Model object
M = Model(species = species, parameters = params, reactions = rxns, initial_condition_dict = x0)
#Simulate the Model deterministically
timepoints = np.arange(0, 150, .1)
results_det = py_simulate_model(timepoints, Model = M) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 4))
plt.subplot(121)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(122)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
Explanation: Bioscrape Models
Chemical Reactions
Bioscrape models consist of a set of species and a set of reactions (delays will be discussed later). These models can be simulated either stochastically via SSA or deterministically as an ODE. Each reaction is of the form
${INPUTS} \xrightarrow[]{\rho(.)} {OUTPUTS}$
Here, INPUTS represents a multiset of input species and OUTPUTS represents a multiset of output species. The function $\rho(.)$ is either a deterministic rate function or a stochastic propensity. Propensities are identified by their name and require a parameter dictionary with the appropriate parameters. The following propensity types are supported:
"massaction: $\rho(S) = k \Pi_{s} s^{I_s}$. Required parameters: "k" the rate constant. Note: for stochastic simulations mass action propensities are $\rho(S) = \frac{1}{V} k \Pi_{s} s!/(s - I_s)!$ where $V$ is the volume.
"positivehill": $\rho(s) = k \frac{s^n}{(K^n+s^n)}$. Requried parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s".
"negativehill": $\rho(s) = k \frac{1}{(K^n+s^n)}$. Requried parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s".
"proportionalpositivehill": $\rho(s) = k d \frac{s^n}{(K^n+s^n)}$. Requried parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s", propritional species "d".
"proportionalnegativehill": $\rho(s) = k d \frac{1}{(K^n+s^n)}$. Requried parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s", propritional species "d".
"general": $\rho(s) = f(s)$ where $f$ can be any algebraic function typed as a string. Required parameters: "rate" an algebraic expression including species and model parameters written as a string.
More details on all these propensity types can be found in the <a href="https://github.com/ananswam/bioscrape/wiki/Propensities">wiki documentation</a>
Transcription Translation Example
First, the following model of transcription and translation will be created programmatically. There are three chemical species: $G$ is a gene, $T$ is a transcript, and $X$ is a protein.
$G \xrightarrow[]{\rho_{tx}(G, I)} G+T$; $\rho_{tx}(G, I) = G k_{tx}\frac{I^{n}}{K_{I}^{n}+I^{n}}$, $I$ is an inducer.
$T \xrightarrow[]{\rho_{tl}(T)} T+X$; $\rho_{tl}(T) = k_{tl} \frac{T}{K_{R} + T}$, $k_{tl}$ and $K_R$ model effects due to ribosome saturation.
$T \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.
$X \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.
The first reaction uses a proportional positive hill function as its rate function to represent induction. The second reaction uses a positive hill function to represent ribosome saturation. The third and fourth reactions represent degradation via dilution. No delays will be included in this model. This model is constructed below and simulated both stochastically and deterministically.
End of explanation
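#Illustrative sketch (not part of the model above): the "general" propensity type takes a single
#"rate" parameter, an algebraic expression string over species and model parameters. The reaction
#and the expression below are hypothetical examples only.
rxn_general = (["X"], [], "general", {"rate": "delta*X/(1 + X/KR)"})
#It could be appended to the reaction list (e.g. rxns + [rxn_general]) before constructing a Model.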
from bioscrape.simulator import py_simulate_model
from bioscrape.types import Model
#Reaction tuples with delays require additional elements. They are of the form:
#(Inputs[string list], Outputs[string list], propensity_type[string], propensity_dict {propensity_param:model_param},
# delay_type[string], DelayInputs [string list], DelayOutputs [string list], delay_param_dictionary {delay_param:model_param}).
rxn1d = (["G"], ["G"], "proportionalhillpositive", {"d":"G", "s1":"I", "k":"ktx", "K":"KI", "n":"n"},
"gaussian", [], ["T"], {"mean":10.0, "std":1.0})
rxn2d = (["T"], ["T"], "hillpositive", {"s1":"T", "k":"ktl", "K":"KR", "n":1},
"gamma", [], ["X"], {"k":10.0, "theta":3.0})
#Reactions 3 and 4 remain unchanged
rxns_delay = [rxn1d, rxn2d, rxn3, rxn4]
#Instantiate the Model object, species, params, and x0 remain unchanged from the previous example
M_delay = Model(species = species, parameters = params, reactions = rxns_delay, initial_condition_dict = x0)
#Simulate the Model with delay
results_delay = py_simulate_model(timepoints, Model = M_delay, stochastic = True, delay = True)
#Plot the results
plt.figure(figsize = (12, 4))
plt.subplot(121)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic (no delay)")
plt.plot(timepoints, results_stoch["T"], label = "stochastic (no delay)")
plt.plot(timepoints, results_delay["T"], label = "stochastic (with delay)")
plt.legend()
plt.xlabel("Time")
plt.subplot(122)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic (no delay)")
plt.plot(timepoints, results_stoch["X"], label = "stochastic (no delay)")
plt.plot(timepoints, results_delay["X"], label = "stochastic (with delay)")
plt.legend()
plt.xlabel("Time")
Explanation: Adding Delays
In stochastic simulations, bioscrape also supports delay. In a delay reaction, delay inputs/outputs are consumed/produced after some amount of delay. Reactions may have a mix of delay and non-delay inputs and outputs. Bioscrape innately supports a number of delay-types:
fixed: constant delay with parameter "delay".
Gaussian: gaussian distributed delay with parameters "mean" and "std".
Gamma: gamma distributed delay with shape parameter "k" and scale parameter "theta".
In the following example, delays are added to the transcription and translation reactions described above, and the model is then simulated stochastically. Note that delays and delay inputs/outputs will be ignored if a model with delays is simulated deterministically.
End of explanation
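#Illustrative sketch (not part of the model above): the "fixed" delay type takes a single "delay"
#parameter. A fixed-delay variant of the transcription reaction might look like this; the 10.0
#time-unit delay is an arbitrary example value.
rxn1d_fixed = (["G"], ["G"], "proportionalhillpositive", {"d":"G", "s1":"I", "k":"ktx", "K":"KI", "n":"n"},
               "fixed", [], ["T"], {"delay": 10.0})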
#Add new species "S" and "I" to the model. Note: by making S a species, its output will be returned as a time-course.
M = Model(species = species + ["S", "I"], parameters = params, reactions = rxns, initial_condition_dict = x0)
#Create new parameters for rule 1. Model is now being modified
M.create_parameter("I0", 10) #Inducer concentration
M.create_parameter("T_I0", 25) #Initial time inducer is added
#Create rule 1:
#NOTE Rules can also be passed into the Model constructor as a list of tuples [("rule_type", {"equation":"eq string"})]
M.create_rule("assignment", {"equation":"I = _I0*Heaviside(t-_T_I0)"}) #"_" must be placed before param names, but not species.
#Rule 2 will use constants in equations instead of new parameters.
M.create_rule("assignment", {"equation":"S = 50*X/(1+.2*X)"})
#reset the initial concentration of the inducer to 0
M.set_species({"I":0})
print(M.get_species_list())
print(M.get_params())
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
#M.write_bioscrape_xml('models/txtl_bioscrape1.xml')
f = open('models/txtl_bioscrape1.xml')
print("Bioscrape Model:\n", f.read())
Explanation: Adding Rules
In deterministic and stochastic simulations, bioscrape also supports rules, which can be used to set species or parameter values during the simulation. Rules are updated at every simulation timepoint - and therefore the model may be sensitive to the timepoint spacing.
In the following example, two rules will be added to the above model (without delay).
$I = I_0 H(t-T)$ where $H$ is the step function. This represents the addition of the inducer I at concentration $I_0$ at some time T. Prior to t=T, I is not present.
$S = M \frac{X}{1+aX}$ represents a saturating signal detected from the species X via some sort of sensor.
Rules can also be used for quasi-steady-state or quasi-equilibrium approximations, computing parameters during the simulation, and much more!
There are two main types of rules:
1. "additive": used for calculating the total of many species. Rule 'equation' must be in the form $s_0 = s_1 + s_2 ...$ where $s_i$ each represents a species string.
2. "assignment": a general rule type of with 'equation' form $v = f(s, p)$ where $v$ can be either a species or parameter which is assigned the value $f(s, p)$ where $s$ are all the species and $p$ are all the parameters in the model and $f$ is written as an string.
End of explanation
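#Illustrative sketch (not part of the code above): only "assignment" rules are used in this notebook.
#An "additive" rule sums species; the species "Total" below is hypothetical and must be declared
#when the Model is built.
M_total = Model(species = species + ["Total"], parameters = params, reactions = rxns,
                initial_condition_dict = x0)
M_total.create_rule("additive", {"equation": "Total = T + X"})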
M.write_bioscrape_xml('models/txtl_model.xml')
# f = open('models/txtl_model.xml')
# print("Bioscrape Model XML:\n", f.read())
M_loaded = Model('models/txtl_model.xml')
print(M_loaded.get_species_list())
print(M_loaded.get_params())
#Change the induction time
#NOTE That changing a model loaded from xml will not change the underlying XML.
M_loaded.set_parameter("T_I0", 50)
M_loaded.write_bioscrape_xml('models/txtl_model_bioscrape.xml')
# f = open('models/txtl_model_bs.xml')
# print("Bioscrape Model XML:\n", f.read())
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M_loaded) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M_loaded, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
Explanation: Saving and Loading Bioscrape Models via Bioscrape XML
Models can be saved and loaded as Bioscrape XML. Here we will save and load the transcription translation model and display the bioscrape XML underneath. Once a model has been loaded, it can be accessed and modified via the API.
End of explanation
M.write_sbml_model('models/txtl_model_sbml.xml')
# Print out the SBML model
f = open('models/txtl_model_sbml.xml')
print("Bioscrape Model converted to SBML:\n", f.read())
from bioscrape.sbmlutil import import_sbml
M_loaded_sbml = import_sbml('models/txtl_model_sbml.xml')
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M_loaded_sbml) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M_loaded_sbml, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
Explanation: SBML Support: Saving and Loading Bioscrape Models via SBML
Models can be saved and loaded as SBML. Here we will save and load the transcription translation model to an SBML file. Delays, compartments, function definitions, and other non-standard SBML features are not supported.
Once a model has been loaded, it can be accessed and modified via the API.
End of explanation
from bioscrape.sbmlutil import import_sbml
M_sbml = import_sbml('models/sbml_test.xml')
timepoints = np.linspace(0,100,1000)
result = py_simulate_model(timepoints, Model = M_sbml)
plt.figure()
for s in M_sbml.get_species_list():
plt.plot(timepoints, result[s], label = s)
plt.legend()
Explanation: More on SBML Compatibility
The next cell imports a model from an SBML file and then simulates it using a deterministic simulation. There are limitations to SBML compatibility.
Delays and events are not supported when reading in SBML files. Events will be ignored and a warning will be printed out.
SBML reaction rates must be in a format such that when the reaction rates are converted to a string formula, sympy must be able to parse the formula. This will work fine for usual PEMDAS rates. This will fail for complex function definitions and things like that.
Species will be initialized to their initialAmount field when it is nonzero. If the initialAmount is zero, then the initialConcentration will be used instead.
Multiple compartments or anything related to having compartments will not be supported. No warnings will be provided for this.
Assignment rules are supported, but any other type of rule will be ignored and an associated warning will be printed out.
Parameter names must start with a letter and be alphanumeric, same for species names. Furthermore, log, exp, abs, heaviside, and other associated keywords for functions are not allowed to be variable names. When in doubt, just pick something else :)
Below, we first plot out the simulation results for an SBML model where a species X0 goes to a final species X1 through an enzymatic process.
End of explanation
# Repressilator deterministic example
from bioscrape.sbmlutil import import_sbml
M_represillator = import_sbml('models/repressilator_sbml.xml')
#Simulate Deterministically and Stochastically
timepoints = np.linspace(0,700,10000)
result_det = py_simulate_model(timepoints, Model = M_represillator)
result_stoch = py_simulate_model(timepoints, Model = M_represillator, stochastic = True)
#Plot Results
plt.figure(figsize = (12, 8))
for i in range(len(M_represillator.get_species_list())):
s = M_represillator.get_species_list()[i]
plt.plot(timepoints, result_det[s], color = color_list[i], label = "Deterministic "+s)
plt.plot(timepoints, result_stoch[s], ":", color = color_list[i], label = "Stochastic "+s)
plt.title('Repressilator Model')
plt.xlabel('Time')
plt.ylabel('Amount')
plt.legend();
Explanation: Deterministic and Stochastic Simulation of the Repressilator
We plot out the repressilator model found <a href="http://www.ebi.ac.uk/biomodels-main/BIOMD0000000012">here</a>. This model generates oscillations as expected. Highlighting the utility of this package, we then switch to a stochastic simulation with a single line of code and note that the amplitude of each burst becomes noisy.
End of explanation |
10,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basics of Algorithms & Coding Tests
This notebook shows some essentials and practical Python code to help in coding tests like HackerRank or Codility
Two most important things
- remove all duplicates before any iterative processing
- in a loop, when using if-else, set a condition that allows quick elimination without iterating over the entire array
Prep
- open empty jupyter notebook to test
- have your cheatsheet by your side
- remember all the useful functions in python
- prepare to use regex
During the Test
- After building your function, try it with your own test scenarios as input arguments
- Hackerrank should be fine as it gives a number of scenarios, but codility sometimes only gives 1
- hence the need to test a few more to check for bugs
Psychology
- do not give up on a question and switch to & fro; that only wastes more time
- prepare for a long extensive coding for each question
- keep calm & analyse step by step
Next Step
- learn about the various algorithms, of course!
- dynamic programming, greedy algorithm, etc.
- Codility gives a good guide
Big-O Notation
This can be applied to both space & time complexity, and is considered a measure of CPU-bound performance.
O(1)
Step1: Data Structure Operations
Step2: Array Sorting
Step3: Example 1
* time complexity = O(n^2)
* space complexity = O(1)
Step4: Example 2
* time complexity = O(n^3)
* space complexity = O(n)
Step5: cProfile
how much time was spent in various levels of your application
Step6: Remove Duplicates
Step7: Sort
Step8: Reverse Sort
Step9: Basic List
Step10: Max & Min
Step11: Absolute
Step12: Filling
Step13: Splitting
Step14: Permutations
Step15: If Else
Step16: Loops
Break, Continue, Pass
break exits the loop immediately
continue skips the rest of the current iteration and moves on to the next one
pass does nothing; it is only a placeholder, so the code after the condition still runs
Step17: While Loop | Python Code:
from IPython.display import Image
Image("../img/big_o1.png", width=600)
Explanation: Basics of Algorithms & Coding Tests
This notebook shows some essentials and practical Python code to help in coding tests like HackerRank or Codility
Two most important things
- remove all duplicates before any iterative processing
- in a loop, when using if-else, set a condition that allows quick elimination without iterating over the entire array
Prep
- open empty jupyter notebook to test
- have your cheatsheet by your side
- remember all the useful functions in python
- prepare to use regex
During the Test
- After building your function, try it with your own test scenarios as input arguments
- Hackerrank should be fine as it gives a number of scenarios, but codility sometimes only gives 1
- hence the need to test a few more to check for bugs
Psychology
- do not give up on a question and switch to & fro; that only wastes more time
- prepare for a long extensive coding for each question
- keep calm & analyse step by step
Next Step
- learn about the various algorithms, of course!
- dynamic programming, greedy algorithm, etc.
- Codility gives a good guide
Big-O Notation
This can be applied to both space & time complexity, and is considered a measure of CPU-bound performance.
O(1): Constant Time
O(log n): Logarithmic
O(n): Linear Time
O(n log n): Loglinear
O(n^2): Quadratic
Big-O Complexity
End of explanation
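#Illustrative example (not part of the original notebook): comparing O(n) and O(log n) lookups on a
#sorted list. A linear scan may touch every element, while binary search halves the search space at
#each step (about 20 comparisons for one million elements).
import bisect
sorted_data = list(range(1000000))
def linear_contains(arr, target):
    for x in arr:   # O(n): worst case checks every element
        if x == target:
            return True
    return False
def binary_contains(arr, target):
    i = bisect.bisect_left(arr, target)   # O(log n)
    return i < len(arr) and arr[i] == target
print(linear_contains(sorted_data, 999999))
print(binary_contains(sorted_data, 999999))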
Image("../img/big_o2.png", width=800)
Explanation: Data Structure Operations
End of explanation
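#Illustrative example (not part of the original notebook): membership tests follow the table above,
#O(n) for a list (element-by-element scan) versus average O(1) for a set or dict (hash lookup).
data_list = list(range(100000))
data_set = set(data_list)
print(99999 in data_list)   # scans the list
print(99999 in data_set)    # single hash lookup on average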
Image("../img/big_o3.png", width=500)
Explanation: Array Sorting
End of explanation
query = list(range(100))  # example input; `query` was undefined in the original snippet
counter = 0
for item in query:        # double-nested loop gives the stated O(n^2) time
    for item2 in query:
        counter += 1
Explanation: Example 1
* time complexity = O(n^2)
* space complexity = O(1)
End of explanation
query = list(range(50))   # example input; `query` was undefined in the original snippet
counter = 0
list1 = []
for item in query:        # triple-nested loop gives the stated O(n^3) time, list1 gives O(n) space
    list1.append(item)
    for item2 in query:
        for item3 in query:
            counter += 1
Explanation: Example 2
* time complexity = O(n^3)
* space complexity = O(n)
End of explanation
import cProfile
cProfile.run('print(10)')
Explanation: cProfile
how much time was spent in various levels of your application
End of explanation
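#Illustrative example (not part of the original notebook): profiling a small function shows which
#calls dominate the runtime; the function and input size below are arbitrary.
import cProfile
def build_and_sort(n=100000):
    data = [(i * 7919) % n for i in range(n)]   # O(n) list build
    return sorted(data)                         # O(n log n) sort
cProfile.run('build_and_sort()')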
unique = set([1, 1, 2, 2, 4, 5, 6])  # avoid naming the variable `set`, which would shadow the builtin
unique
# convert to list
list(unique)
Explanation: Remove Duplicates
End of explanation
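#Illustrative example (not part of the original notebook): set() does not promise to keep the original
#order. If order matters, dict.fromkeys (Python 3.7+) drops duplicates while preserving first-seen order.
unique_ordered = list(dict.fromkeys([4, 1, 1, 2, 2, 4, 5, 6]))
print(unique_ordered)   # [4, 1, 2, 5, 6]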
sort = sorted([4,-1,23,5,6,7,1,4,5])
sort
print([4,-1,23,5,6,7,1,4,5].sort())  # list.sort() sorts in place and returns None
Explanation: Sort
End of explanation
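#Illustrative example (not part of the original notebook): sorted() also accepts a key function,
#which is handy in coding tests.
pairs = [("b", 3), ("a", 2), ("c", 1)]
print(sorted(pairs, key=lambda p: p[1]))    # sort tuples by their second element
print(sorted([4, -1, 23, 5, -6], key=abs))  # sort numbers by absolute value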
# reverse
sort = sorted([4,1,23,5,6,7,1,4,5], reverse=True)
print(sort)
# OR: slice an ascending sort with [::-1]
print(sorted([4,1,23,5,6,7,1,4,5])[::-1])
Explanation: Reverse Sort
End of explanation
list1 = [1,2,3,4,5]
# last number
list1[-1]
# get every 2nd feature
list1[::2]
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(array[0])
print(array[-1])
print(array[0:2])
print(array[-3:-1])
# filling an empty array
empty = []
for i in range(10):
empty.append(i)
empty
# remove item
empty.remove(1)
empty
# sum
sum(empty)
Explanation: Basic List
End of explanation
import math
list1 = [1,2,3,4,5]
print('max: ',max(list1))
print('min: ',min(list1))
Explanation: Max & Min
End of explanation
abs(-10.1)
Explanation: Absolute
End of explanation
'-'.join('abcdef')
Explanation: Filling
End of explanation
# individual split
[i for i in 'ABCDEFG']
import textwrap
textwrap.wrap('ABCDEFG',2)
import re
re.findall('.{1,2}', 'ABCDEFG')
Explanation: Splitting
End of explanation
from itertools import permutations
# permutations: order matters, so (1, 2) and (2, 1) both appear
list(permutations(['1','2','3'],2))
# combinations: order does not matter, so only one of (1, 2)/(2, 1) appears
from itertools import combinations
list(combinations([1,2,3], 2))
Explanation: Permutations
End of explanation
test = 'a'
if test.isupper():
print('Upper')
elif test.islower():
print('Lower')
Explanation: If Else
End of explanation
for i in range(5):
if i==2:
break
print(i)
for i in range(5):
if i==2:
continue
print(i)
for i in range(5):
if i==2:
pass
print(i)
Explanation: Loops
Break, Continue, Pass
break exits the loop immediately
continue skips the rest of the current iteration and moves on to the next one
pass does nothing; it is only a placeholder, so the code after the condition still runs
End of explanation
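#Illustrative example (not part of the original notebook): tying break to the earlier advice about
#eliminating candidates quickly, stop scanning as soon as the answer is known. The optional else
#clause of a for loop runs only if the loop finished without hitting break.
def first_negative(nums):
    for i, x in enumerate(nums):
        if x < 0:
            print("found", x, "at index", i)
            break
    else:
        print("no negative number found")
first_negative([3, 7, -2, 9])
first_negative([3, 7, 9])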
i = 1
while i < 6:
print(i)
if i == 3:
break
i += 1
Explanation: While Loop
End of explanation |
10,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Segmentation with tf.keras
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="http
Step1: Get all the files
Since this tutorial will be using a dataset from Kaggle, it requires creating an API Token for your Kaggle account, and uploading it.
Step2: Only import kaggle after adding the credentials.
Step3: We'll download the data from Kaggle
Caution, large download ahead - downloading all files will require 14GB of diskspace.
Step4: You must accept the competition rules before downloading the data.
Step5: Here's what the paths look like
Step6: Visualize
Let's take a look at some of the examples of different images in our dataset.
Step7: Set up
Let’s begin by setting up some parameters. We’ll standardize and resize all the shapes of the images. We’ll also set up some training parameters
Step8: Using these exact same parameters may be too computationally intensive for your hardware, so tweak the parameters accordingly. Also, it is important to note that due to the architecture of our UNet version, the size of the image must be evenly divisible by a factor of 32, as we down sample the spatial resolution by a factor of 2 with each MaxPooling2Dlayer.
If your machine can support it, you will achieve better performance using a higher resolution input image (e.g. 512 by 512) as this will allow more precise localization and less loss of information during encoding. In addition, you can also make the model deeper.
Alternatively, if your machine cannot support it, lower the image resolution and/or batch size. Note that lowering the image resolution will decrease performance and lowering batch size will increase training time.
Build our input pipeline with tf.data
Since we begin with filenames, we will need to build a robust and scalable data pipeline that will play nicely with our model. If you are unfamiliar with tf.data you should check out my other tutorial introducing the concept!
Our input pipeline will consist of the following steps
Step10: Shifting the image
Step11: Flipping the image randomly
Step12: Assembling our transformations into our augment function
Step13: Set up train and validation datasets
Note that we apply image augmentation to our training dataset but not our validation dataset.
Step14: Let's see if our image augmentor data pipeline is producing expected results
Step15: Build the model
We'll build the U-Net model. U-Net is especially good with segmentation tasks because it can localize well to provide high resolution segmentation masks. In addition, it works well with small datasets and is relatively robust against overfitting as the training data is in terms of the number of patches within an image, which is much larger than the number of training images itself. Unlike the original model, we will add batch normalization to each of our blocks.
The Unet is built with an encoder portion and a decoder portion. The encoder portion is composed of a linear stack of Conv, BatchNorm, and Relu operations followed by a MaxPool. Each MaxPool will reduce the spatial resolution of our feature map by a factor of 2. We keep track of the outputs of each block as we feed these high resolution feature maps with the decoder portion. The Decoder portion is comprised of UpSampling2D, Conv, BatchNorm, and Relus. Note that we concatenate the feature map of the same size on the decoder side. Finally, we add a final Conv operation that performs a convolution along the channels for each individual pixel (kernel size of (1, 1)) that outputs our final segmentation mask in grayscale.
The Keras Functional API
The Keras functional API is used when you have multi-input/output models, shared layers, etc. It's a powerful API that allows you to manipulate tensors and build complex graphs with intertwined datastreams easily. In addition it makes layers and models both callable on tensors.
* To see more examples check out the get started guide.
We'll build these helper functions that will allow us to ensemble our model block operations easily and simply.
Step16: Define your model
Using functional API, you must define your model by specifying the inputs and outputs associated with the model.
Step17: Defining custom metrics and loss functions
Defining loss and metric functions is simple with Keras. Simply define a function that takes both the True labels for a given example and the Predicted labels for the same given example.
Dice loss is a metric that measures overlap. More info on optimizing for Dice coefficient (our dice loss) can be found in the paper, where it was introduced.
We use dice loss here because it performs better at class imbalanced problems by design. In addition, maximizing the dice coefficient and IoU metrics are the actual objectives and goals of our segmentation task. Using cross entropy is more of a proxy which is easier to maximize. Instead, we maximize our objective directly.
Step18: Here, we'll use a specialized loss function that combines binary cross entropy and our dice loss. This is based on individuals who competed within this competition obtaining better results empirically. Try out your own custom losses to measure performance (e.g. bce + log(dice_loss), only bce, etc.)!
Step19: Compile your model
We use our custom loss function to minimize. In addition, we specify what metrics we want to keep track of as we train. Note that metrics are not actually used during the training process to tune the parameters, but are instead used to measure performance of the training process.
Step20: Train your model
Training your model with tf.data involves simply providing the model's fit function with your training/validation dataset, the number of steps, and epochs.
We also include a Model callback, ModelCheckpoint, that will save the model to disk after each epoch. We configure it such that it only saves our highest performing model. Note that saving the model captures more than just the weights of the model
Step21: Don't forget to specify our model callback in the fit function call.
Step22: Visualize training process
Step23: Even with only 5 epochs, we see strong performance.
Visualize actual performance
We'll visualize our performance on the validation set.
Note that in an actual setting (competition, deployment, etc.) we'd evaluate on the test set with the full image resolution.
To load our model we have two options | Python Code:
!pip install kaggle
import os
import glob
import zipfile
import functools
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12,12)
from sklearn.model_selection import train_test_split
import matplotlib.image as mpimg
import pandas as pd
from PIL import Image
import tensorflow as tf
import tensorflow.contrib as tfcontrib
from tensorflow.python.keras import layers
from tensorflow.python.keras import losses
from tensorflow.python.keras import models
from tensorflow.python.keras import backend as K
Explanation: Image Segmentation with tf.keras
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
In this tutorial we will learn how to segment images. Segmentation is the process of generating pixel-wise segmentations giving the class of the object visible at each pixel. For example, we could be identifying the location and boundaries of people within an image or identifying cell nuclei from an image. Formally, image segmentation refers to the process of partitioning an image into a set of pixels that we desire to identify (our target) and the background.
Specifically, in this tutorial we will be using the Kaggle Carvana Image Masking Challenge Dataset.
This dataset contains a large number of car images, with each car taken from different angles. In addition, for each car image, we have an associated manually cutout mask; our task will be to automatically create these cutout masks for unseen data.
Specific concepts that will be covered:
In the process, we will build practical experience and develop intuition around the following concepts:
* Functional API - we will be implementing UNet, a convolutional network model classically used for biomedical image segmentation with the Functional API.
* This model has layers that require multiple input/outputs. This requires the use of the functional API
* Check out the original paper,
U-Net: Convolutional Networks for Biomedical Image Segmentation by Olaf Ronneberger!
* Custom Loss Functions and Metrics - We'll implement a custom loss function using binary cross entropy and dice loss. We'll also implement dice coefficient (which is used for our loss) and mean intersection over union, that will help us monitor our training process and judge how well we are performing.
* Saving and loading keras models - We'll save our best model to disk. When we want to perform inference/evaluate our model, we'll load in the model from disk.
We will follow the general workflow:
Visualize data/perform some exploratory data analysis
Set up data pipeline and preprocessing
Build model
Train model
Evaluate model
Repeat
Audience: This post is geared towards intermediate users who are comfortable with basic machine learning concepts.
Note that if you wish to run this notebook, it is highly recommended that you do so with a GPU.
Time Estimated: 60 min
By: Raymond Yuan, Software Engineering Intern
End of explanation
import os
# Upload the API token.
def get_kaggle_credentials():
token_dir = os.path.join(os.path.expanduser("~"),".kaggle")
token_file = os.path.join(token_dir, "kaggle.json")
if not os.path.isdir(token_dir):
os.mkdir(token_dir)
try:
with open(token_file,'r') as f:
pass
except IOError as no_file:
try:
from google.colab import files
except ImportError:
raise no_file
uploaded = files.upload()
if "kaggle.json" not in uploaded:
raise ValueError("You need an API key! see: "
"https://github.com/Kaggle/kaggle-api#api-credentials")
with open(token_file, "wb") as f:
f.write(uploaded["kaggle.json"])
os.chmod(token_file, 0o600)  # octal permissions: read/write for the owner only
get_kaggle_credentials()
Explanation: Get all the files
Since this tutorial will be using a dataset from Kaggle, it requires creating an API Token for your Kaggle account, and uploading it.
End of explanation
import kaggle
Explanation: Only import kaggle after adding the credentials.
End of explanation
competition_name = 'carvana-image-masking-challenge'
# Download data from Kaggle and unzip the files of interest.
def load_data_from_zip(competition, file):
with zipfile.ZipFile(os.path.join(competition, file), "r") as zip_ref:
unzipped_file = zip_ref.namelist()[0]
zip_ref.extractall(competition)
def get_data(competition):
kaggle.api.competition_download_files(competition, competition)
load_data_from_zip(competition, 'train.zip')
load_data_from_zip(competition, 'train_masks.zip')
load_data_from_zip(competition, 'train_masks.csv.zip')
Explanation: We'll download the data from Kaggle
Caution, large download ahead - downloading all files will require 14GB of diskspace.
End of explanation
get_data(competition_name)
img_dir = os.path.join(competition_name, "train")
label_dir = os.path.join(competition_name, "train_masks")
df_train = pd.read_csv(os.path.join(competition_name, 'train_masks.csv'))
ids_train = df_train['img'].map(lambda s: s.split('.')[0])
x_train_filenames = []
y_train_filenames = []
for img_id in ids_train:
x_train_filenames.append(os.path.join(img_dir, "{}.jpg".format(img_id)))
y_train_filenames.append(os.path.join(label_dir, "{}_mask.gif".format(img_id)))
x_train_filenames, x_val_filenames, y_train_filenames, y_val_filenames = \
train_test_split(x_train_filenames, y_train_filenames, test_size=0.2, random_state=42)
num_train_examples = len(x_train_filenames)
num_val_examples = len(x_val_filenames)
print("Number of training examples: {}".format(num_train_examples))
print("Number of validation examples: {}".format(num_val_examples))
Explanation: You must accept the competition rules before downloading the data.
End of explanation
x_train_filenames[:10]
y_train_filenames[:10]
Explanation: Here's what the paths look like
End of explanation
display_num = 5
r_choices = np.random.choice(num_train_examples, display_num)
plt.figure(figsize=(10, 15))
for i in range(0, display_num * 2, 2):
img_num = r_choices[i // 2]
x_pathname = x_train_filenames[img_num]
y_pathname = y_train_filenames[img_num]
plt.subplot(display_num, 2, i + 1)
plt.imshow(mpimg.imread(x_pathname))
plt.title("Original Image")
example_labels = Image.open(y_pathname)
label_vals = np.unique(example_labels)
plt.subplot(display_num, 2, i + 2)
plt.imshow(example_labels)
plt.title("Masked Image")
plt.suptitle("Examples of Images and their Masks")
plt.show()
Explanation: Visualize
Let's take a look at some of the examples of different images in our dataset.
End of explanation
img_shape = (256, 256, 3)
batch_size = 3
epochs = 5
Explanation: Set up
Let’s begin by setting up some parameters. We’ll standardize and resize all the shapes of the images. We’ll also set up some training parameters:
End of explanation
def _process_pathnames(fname, label_path):
# We map this function onto each pathname pair
img_str = tf.read_file(fname)
img = tf.image.decode_jpeg(img_str, channels=3)
label_img_str = tf.read_file(label_path)
# These are gif images so they return as (num_frames, h, w, c)
label_img = tf.image.decode_gif(label_img_str)[0]
# The label image should only have values of 1 or 0, indicating pixel wise
# object (car) or not (background). We take the first channel only.
label_img = label_img[:, :, 0]
label_img = tf.expand_dims(label_img, axis=-1)
return img, label_img
Explanation: Using these exact same parameters may be too computationally intensive for your hardware, so tweak the parameters accordingly. Also, it is important to note that due to the architecture of our UNet version, the size of the image must be evenly divisible by a factor of 32, as we down sample the spatial resolution by a factor of 2 with each MaxPooling2Dlayer.
If your machine can support it, you will achieve better performance using a higher resolution input image (e.g. 512 by 512) as this will allow more precise localization and less loss of information during encoding. In addition, you can also make the model deeper.
Alternatively, if your machine cannot support it, lower the image resolution and/or batch size. Note that lowering the image resolution will decrease performance and lowering batch size will increase training time.
Build our input pipeline with tf.data
Since we begin with filenames, we will need to build a robust and scalable data pipeline that will play nicely with our model. If you are unfamiliar with tf.data you should check out my other tutorial introducing the concept!
Our input pipeline will consist of the following steps:
Read the bytes of the file in from the filename - for both the image and the label. Recall that our labels are actually images with each pixel annotated as car or background (1, 0).
Decode the bytes into an image format
Apply image transformations: (optional, according to input parameters)
resize - Resize our images to a standard size (as determined by eda or computation/memory restrictions)
The reason why this is optional is that U-Net is a fully convolutional network (i.e. with no fully connected units) and is thus not dependent on the input size. However, if you choose to not resize the images, you must use a batch size of 1, since you cannot batch images of variable size together
Alternatively, you could also bucket your images together and resize them per mini-batch to avoid resizing images as much, as resizing may affect your performance through interpolation, etc.
hue_delta - Adjusts the hue of an RGB image by a random factor. This is only applied to the actual image (not our label image). The hue_delta must be in the interval [0, 0.5]
horizontal_flip - flip the image horizontally along the central axis with a 0.5 probability. This transformation must be applied to both the label and the actual image.
width_shift_range and height_shift_range are ranges (as a fraction of total width or height) within which to randomly translate the image either horizontally or vertically. This transformation must be applied to both the label and the actual image.
rescale - rescale the image by a certain factor, e.g. 1/ 255.
Shuffle the data, repeat the data (so we can iterate over it multiple times across epochs), batch the data, then prefetch a batch (for efficiency).
It is important to note that these transformations that occur in your data pipeline must be symbolic transformations.
Why do we do these image transformations?
This is known as data augmentation. Data augmentation "increases" the amount of training data by augmenting them via a number of random transformations. During training time, our model would never see twice the exact same picture. This helps prevent overfitting and helps the model generalize better to unseen data.
Processing each pathname
End of explanation
def shift_img(output_img, label_img, width_shift_range, height_shift_range):
    """This fn will perform the horizontal or vertical shift"""
if width_shift_range or height_shift_range:
if width_shift_range:
width_shift_range = tf.random_uniform([],
-width_shift_range * img_shape[1],
width_shift_range * img_shape[1])
if height_shift_range:
height_shift_range = tf.random_uniform([],
-height_shift_range * img_shape[0],
height_shift_range * img_shape[0])
# Translate both
output_img = tfcontrib.image.translate(output_img,
[width_shift_range, height_shift_range])
label_img = tfcontrib.image.translate(label_img,
[width_shift_range, height_shift_range])
return output_img, label_img
Explanation: Shifting the image
End of explanation
def flip_img(horizontal_flip, tr_img, label_img):
if horizontal_flip:
flip_prob = tf.random_uniform([], 0.0, 1.0)
tr_img, label_img = tf.cond(tf.less(flip_prob, 0.5),
lambda: (tf.image.flip_left_right(tr_img), tf.image.flip_left_right(label_img)),
lambda: (tr_img, label_img))
return tr_img, label_img
Explanation: Flipping the image randomly
End of explanation
def _augment(img,
label_img,
resize=None, # Resize the image to some size e.g. [256, 256]
scale=1, # Scale image e.g. 1 / 255.
hue_delta=0, # Adjust the hue of an RGB image by random factor
horizontal_flip=False, # Random left right flip,
width_shift_range=0, # Randomly translate the image horizontally
height_shift_range=0): # Randomly translate the image vertically
if resize is not None:
# Resize both images
label_img = tf.image.resize_images(label_img, resize)
img = tf.image.resize_images(img, resize)
if hue_delta:
img = tf.image.random_hue(img, hue_delta)
img, label_img = flip_img(horizontal_flip, img, label_img)
img, label_img = shift_img(img, label_img, width_shift_range, height_shift_range)
label_img = tf.to_float(label_img) * scale
img = tf.to_float(img) * scale
return img, label_img
def get_baseline_dataset(filenames,
labels,
preproc_fn=functools.partial(_augment),
threads=5,
batch_size=batch_size,
shuffle=True):
num_x = len(filenames)
# Create a dataset from the filenames and labels
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
# Map our preprocessing function to every element in our dataset, taking
# advantage of multithreading
dataset = dataset.map(_process_pathnames, num_parallel_calls=threads)
if preproc_fn.keywords is not None and 'resize' not in preproc_fn.keywords:
assert batch_size == 1, "Batching images must be of the same size"
dataset = dataset.map(preproc_fn, num_parallel_calls=threads)
if shuffle:
dataset = dataset.shuffle(num_x)
# It's necessary to repeat our data for all epochs
dataset = dataset.repeat().batch(batch_size)
return dataset
Explanation: Assembling our transformations into our augment function
End of explanation
tr_cfg = {
'resize': [img_shape[0], img_shape[1]],
'scale': 1 / 255.,
'hue_delta': 0.1,
'horizontal_flip': True,
'width_shift_range': 0.1,
'height_shift_range': 0.1
}
tr_preprocessing_fn = functools.partial(_augment, **tr_cfg)
val_cfg = {
'resize': [img_shape[0], img_shape[1]],
'scale': 1 / 255.,
}
val_preprocessing_fn = functools.partial(_augment, **val_cfg)
train_ds = get_baseline_dataset(x_train_filenames,
y_train_filenames,
preproc_fn=tr_preprocessing_fn,
batch_size=batch_size)
val_ds = get_baseline_dataset(x_val_filenames,
y_val_filenames,
preproc_fn=val_preprocessing_fn,
batch_size=batch_size)
Explanation: Set up train and validation datasets
Note that we apply image augmentation to our training dataset but not our validation dataset.
End of explanation
temp_ds = get_baseline_dataset(x_train_filenames,
y_train_filenames,
preproc_fn=tr_preprocessing_fn,
batch_size=1,
shuffle=False)
# Let's examine some of these augmented images
data_aug_iter = temp_ds.make_one_shot_iterator()
next_element = data_aug_iter.get_next()
with tf.Session() as sess:
batch_of_imgs, label = sess.run(next_element)
# Running next element in our graph will produce a batch of images
plt.figure(figsize=(10, 10))
img = batch_of_imgs[0]
plt.subplot(1, 2, 1)
plt.imshow(img)
plt.subplot(1, 2, 2)
plt.imshow(label[0, :, :, 0])
plt.show()
Explanation: Let's see if our image augmentor data pipeline is producing expected results
End of explanation
def conv_block(input_tensor, num_filters):
encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor)
encoder = layers.BatchNormalization()(encoder)
encoder = layers.Activation('relu')(encoder)
encoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder)
encoder = layers.BatchNormalization()(encoder)
encoder = layers.Activation('relu')(encoder)
return encoder
def encoder_block(input_tensor, num_filters):
encoder = conv_block(input_tensor, num_filters)
encoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder)
return encoder_pool, encoder
def decoder_block(input_tensor, concat_tensor, num_filters):
decoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor)
decoder = layers.concatenate([concat_tensor, decoder], axis=-1)
decoder = layers.BatchNormalization()(decoder)
decoder = layers.Activation('relu')(decoder)
decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)
decoder = layers.BatchNormalization()(decoder)
decoder = layers.Activation('relu')(decoder)
decoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)
decoder = layers.BatchNormalization()(decoder)
decoder = layers.Activation('relu')(decoder)
return decoder
inputs = layers.Input(shape=img_shape)
# 256
encoder0_pool, encoder0 = encoder_block(inputs, 32)
# 128
encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64)
# 64
encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128)
# 32
encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256)
# 16
encoder4_pool, encoder4 = encoder_block(encoder3_pool, 512)
# 8
center = conv_block(encoder4_pool, 1024)
# center
decoder4 = decoder_block(center, encoder4, 512)
# 16
decoder3 = decoder_block(decoder4, encoder3, 256)
# 32
decoder2 = decoder_block(decoder3, encoder2, 128)
# 64
decoder1 = decoder_block(decoder2, encoder1, 64)
# 128
decoder0 = decoder_block(decoder1, encoder0, 32)
# 256
outputs = layers.Conv2D(1, (1, 1), activation='sigmoid')(decoder0)
Explanation: Build the model
We'll build the U-Net model. U-Net is especially good with segmentation tasks because it can localize well to provide high resolution segmentation masks. In addition, it works well with small datasets and is relatively robust against overfitting as the training data is in terms of the number of patches within an image, which is much larger than the number of training images itself. Unlike the original model, we will add batch normalization to each of our blocks.
The Unet is built with an encoder portion and a decoder portion. The encoder portion is composed of a linear stack of Conv, BatchNorm, and Relu operations followed by a MaxPool. Each MaxPool will reduce the spatial resolution of our feature map by a factor of 2. We keep track of the outputs of each block as we feed these high resolution feature maps with the decoder portion. The Decoder portion is comprised of UpSampling2D, Conv, BatchNorm, and Relus. Note that we concatenate the feature map of the same size on the decoder side. Finally, we add a final Conv operation that performs a convolution along the channels for each individual pixel (kernel size of (1, 1)) that outputs our final segmentation mask in grayscale.
The Keras Functional API
The Keras functional API is used when you have multi-input/output models, shared layers, etc. It's a powerful API that allows you to manipulate tensors and build complex graphs with intertwined datastreams easily. In addition it makes layers and models both callable on tensors.
* To see more examples check out the get started guide.
We'll build these helper functions that will allow us to ensemble our model block operations easily and simply.
End of explanation
model = models.Model(inputs=[inputs], outputs=[outputs])
Explanation: Define your model
Using functional API, you must define your model by specifying the inputs and outputs associated with the model.
End of explanation
def dice_coeff(y_true, y_pred):
smooth = 1.
# Flatten
y_true_f = tf.reshape(y_true, [-1])
y_pred_f = tf.reshape(y_pred, [-1])
intersection = tf.reduce_sum(y_true_f * y_pred_f)
score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
return score
def dice_loss(y_true, y_pred):
loss = 1 - dice_coeff(y_true, y_pred)
return loss
Explanation: Defining custom metrics and loss functions
Defining loss and metric functions is simple with Keras. Simply define a function that takes both the True labels for a given example and the Predicted labels for the same given example.
Dice loss is a metric that measures overlap. More info on optimizing for Dice coefficient (our dice loss) can be found in the paper, where it was introduced.
We use dice loss here because it performs better at class imbalanced problems by design. In addition, maximizing the dice coefficient and IoU metrics are the actual objectives and goals of our segmentation task. Using cross entropy is more of a proxy which is easier to maximize. Instead, we maximize our objective directly.
End of explanation
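#Illustrative sketch (not part of the original code): the introduction also mentions tracking mean
#intersection over union (IoU). A minimal version in the same TF 1.x style, assuming a 0.5 threshold
#and a smoothing constant, could look like this.
def mean_iou(y_true, y_pred, smooth=1.):
    y_pred_mask = tf.to_float(y_pred > 0.5)              # binarize the predicted mask
    intersection = tf.reduce_sum(y_true * y_pred_mask)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred_mask) - intersection
    return (intersection + smooth) / (union + smooth)
#It could be passed to model.compile alongside dice_loss, e.g. metrics=[dice_loss, mean_iou].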
def bce_dice_loss(y_true, y_pred):
loss = losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
return loss
Explanation: Here, we'll use a specialized loss function that combines binary cross entropy and our dice loss. This is based on individuals who competed within this competition obtaining better results empirically. Try out your own custom losses to measure performance (e.g. bce + log(dice_loss), only bce, etc.)!
End of explanation
model.compile(optimizer='adam', loss=bce_dice_loss, metrics=[dice_loss])
model.summary()
Explanation: Compile your model
We use our custom loss function to minimize. In addition, we specify what metrics we want to keep track of as we train. Note that metrics are not actually used during the training process to tune the parameters, but are instead used to measure performance of the training process.
End of explanation
save_model_path = '/tmp/weights.hdf5'
cp = tf.keras.callbacks.ModelCheckpoint(filepath=save_model_path, monitor='val_dice_loss', save_best_only=True, verbose=1)
Explanation: Train your model
Training your model with tf.data involves simply providing the model's fit function with your training/validation dataset, the number of steps, and epochs.
We also include a model callback, ModelCheckpoint, that will save the model to disk after each epoch. We configure it so that it only saves our highest performing model. Note that saving the model captures more than just the weights of the model: by default, it saves the model architecture and weights, as well as information about the training process such as the state of the optimizer.
End of explanation
history = model.fit(train_ds,
steps_per_epoch=int(np.ceil(num_train_examples / float(batch_size))),
epochs=epochs,
validation_data=val_ds,
validation_steps=int(np.ceil(num_val_examples / float(batch_size))),
callbacks=[cp])
Explanation: Don't forget to specify our model callback in the fit function call.
End of explanation
dice = history.history['dice_loss']
val_dice = history.history['val_dice_loss']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, dice, label='Training Dice Loss')
plt.plot(epochs_range, val_dice, label='Validation Dice Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Dice Loss')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
Explanation: Visualize training process
End of explanation
# Alternatively, load the weights directly: model.load_weights(save_model_path)
model = models.load_model(save_model_path, custom_objects={'bce_dice_loss': bce_dice_loss,
'dice_loss': dice_loss})
# Let's visualize some of the outputs
data_aug_iter = val_ds.make_one_shot_iterator()
next_element = data_aug_iter.get_next()
# Running next element in our graph will produce a batch of images
plt.figure(figsize=(10, 20))
for i in range(5):
batch_of_imgs, label = tf.keras.backend.get_session().run(next_element)
img = batch_of_imgs[0]
predicted_label = model.predict(batch_of_imgs)[0]
plt.subplot(5, 3, 3 * i + 1)
plt.imshow(img)
plt.title("Input image")
plt.subplot(5, 3, 3 * i + 2)
plt.imshow(label[0, :, :, 0])
plt.title("Actual Mask")
plt.subplot(5, 3, 3 * i + 3)
plt.imshow(predicted_label[:, :, 0])
plt.title("Predicted Mask")
plt.suptitle("Examples of Input Image, Label, and Prediction")
plt.show()
Explanation: Even with only 5 epochs, we see strong performance.
Visualize actual performance
We'll visualize our performance on the validation set.
Note that in an actual setting (competition, deployment, etc.) we'd evaluate on the test set with the full image resolution.
To load our model we have two options:
1. Since our model architecture is already in memory, we can simply call load_weights(save_model_path)
2. If you wanted to load the model from scratch (in a different setting without already having the model architecture in memory) we simply call
model = models.load_model(save_model_path, custom_objects={'bce_dice_loss': bce_dice_loss, 'dice_loss': dice_loss}), specifying the necessary custom objects (the loss and metrics) that we used to train our model.
If you want to see more examples, check out the Keras guide!
End of explanation |
10,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HTML and w3 Schools
Step1: Supporting Technologies
jQuery
Examples
Draggable Elements
https
Step2: Using the %%javascript cell magic and jQuery, we can modify the DOM node's display attributes using their id's!!
Step4: One can also use the IPython.display.Javascript class to run code!! | Python Code:
from IPython.display import HTML, Javascript
HTML("Hello World")
Explanation: HTML and w3 Schools
End of explanation
!gvim draggable_1.html
HTML(filename='./draggable_1.html')  # render the drag-and-drop demo page from disk
Explanation: Supporting Technologies
jQuery
Examples
Draggable Elements
https://www.w3schools.com/tags/att_global_draggable.asp
https://www.w3schools.com/tags/tryit.asp?filename=tryhtml5_global_draggable
https://www.w3schools.com/html/html5_draganddrop.asp
End of explanation
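The contents of draggable_1.html are not shown in this notebook, so here is a hypothetical inline stand-in built from the same HTML5 drag-and-drop attributes documented at the links above (the element ids and styling are made up for the example).
HTML("""
<div id="dropzone1" style="width:220px;height:90px;border:1px dashed #888;"
     ondragover="event.preventDefault();"
     ondrop="event.preventDefault();
             event.target.appendChild(document.getElementById(event.dataTransfer.getData('text')));">
</div>
<p id="drag1" draggable="true"
   ondragstart="event.dataTransfer.setData('text', event.target.id);">Drag me into the box</p>
<p id="drag2" draggable="true"
   ondragstart="event.dataTransfer.setData('text', event.target.id);">Drag me too</p>
""")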
%%javascript
$("p#drag1").css("border", "1px double red")
Explanation: Using the %%javascript cell magic and jQuery, we can modify the DOM node's display attributes using their id's!!
End of explanation
# The jQuery snippet must be passed as a string so Javascript() can execute it in the browser
Javascript('$("p#drag2").css("border", "2px double green")')
Explanation: One can also use the IPython.display.Javascript class to run code!!
End of explanation |
10,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
Step1: Transform and sanitize the pings into arrays.
Step2: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs. | Python Code:
def dedupe_pings(rdd):
return rdd.filter(lambda p: p["meta/clientId"] is not None)\
.map(lambda p: (p["meta/documentId"], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
End of explanation
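For readers less familiar with Spark, reduceByKey(lambda x, y: x) simply keeps the first ping seen for each documentId; a rough plain-Python equivalent of the same dedup logic (toy dictionaries, not real pings) looks like this:
# Toy illustration only -- the real job runs on an RDD, not a list.
toy_pings = [
    {"meta/clientId": "a", "meta/documentId": "doc1"},
    {"meta/clientId": "a", "meta/documentId": "doc1"},   # duplicate documentId, dropped
    {"meta/clientId": None, "meta/documentId": "doc2"},  # no clientId, dropped
]
seen = {}
for p in toy_pings:
    if p["meta/clientId"] is None:
        continue
    seen.setdefault(p["meta/documentId"], p)  # keep the first ping per documentId
print(len(seen))  # 1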
def transform(ping):
# Should not be None since we filter those out.
clientId = ping["meta/clientId"]
# Added via the ingestion process so should not be None.
submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d")
geoCountry = ping["meta/geoCountry"]
profileDate = None
profileDaynum = ping["profileDate"]
if profileDaynum is not None:
try:
# Bad data could push profileDaynum > 32767 (size of a C int) and throw exception
profileDate = dt.datetime(1970, 1, 1) + dt.timedelta(int(profileDaynum))
except:
profileDate = None
# Create date should already be in ISO format
creationDate = ping["creationDate"]
if creationDate is not None:
# This is only accurate because we know the creation date is always in 'Z' (zulu) time.
creationDate = dt.datetime.strptime(ping["creationDate"], "%Y-%m-%dT%H:%M:%S.%fZ")
appVersion = ping["meta/appVersion"]
buildId = ping["meta/appBuildId"]
locale = ping["locale"]
os = ping["os"]
osVersion = ping["osversion"]
device = ping["device"]
arch = ping["arch"]
defaultSearch = ping["defaultSearch"]
distributionId = ping["distributionId"]
experiments = ping["experiments"]
if experiments is None:
experiments = []
return [clientId, submissionDate, creationDate, profileDate, geoCountry, locale, os, osVersion, buildId, appVersion, device, arch, defaultSearch, distributionId, json.dumps(experiments)]
Explanation: Transform and sanitize the pings into arrays.
End of explanation
channels = ["nightly", "aurora", "beta", "release"]
start = dt.datetime.now() - dt.timedelta(1)
end = dt.datetime.now() - dt.timedelta(1)
day = start
while day <= end:
for channel in channels:
print "\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d")
kwargs = dict(
doc_type="core",
submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")),
channel=channel,
app="Fennec",
fraction=1
)
# Grab all available source_version pings
pings = get_pings(sc, source_version="*", **kwargs)
subset = get_pings_properties(pings, ["meta/clientId",
"meta/documentId",
"meta/submissionDate",
"meta/appVersion",
"meta/appBuildId",
"meta/geoCountry",
"locale",
"os",
"osversion",
"device",
"arch",
"profileDate",
"creationDate",
"defaultSearch",
"distributionId",
"experiments"])
subset = dedupe_pings(subset)
print "\nDe-duped pings:" + str(subset.count())
print subset.first()
transformed = subset.map(transform)
print "\nTransformed pings:" + str(transformed.count())
print transformed.first()
s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/mobile_clients"
s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d")
schema = StructType([
StructField("clientid", StringType(), False),
StructField("submissiondate", TimestampType(), False),
StructField("creationdate", TimestampType(), True),
StructField("profiledate", TimestampType(), True),
StructField("geocountry", StringType(), True),
StructField("locale", StringType(), True),
StructField("os", StringType(), True),
StructField("osversion", StringType(), True),
StructField("buildid", StringType(), True),
StructField("appversion", StringType(), True),
StructField("device", StringType(), True),
StructField("arch", StringType(), True),
StructField("defaultsearch", StringType(), True),
StructField("distributionid", StringType(), True),
StructField("experiments", StringType(), True)
])
# Make parquet parition file size large, but not too large for s3 to handle
coalesce = 1
if channel == "release":
coalesce = 4
grouped = sqlContext.createDataFrame(transformed, schema)
grouped.coalesce(coalesce).write.parquet(s3_output)
day += dt.timedelta(1)
Explanation: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs.
End of explanation |
10,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 class="title">Example-Dependent Cost-Sensitive Fraud Detection using CostCla</h1>
<center>
<h2>Alejandro Correa Bahnsen, PhD</h2>
<p>
<h2>Data Scientist</h2>
<p>
<div>
<img img class="logo" src="https
Step1: Data file
Step2: Class Label
Step3: Features
Step4: Features
Step5: Aggregated Features
Step6: Fraud Detection as a classification problem
Split in training and testing
Step7: Fraud Detection as a classification problem
Fit models
Step8: Models performance
Evaluate metrics and plot results
Step9: Models performance
Step10: Models performance
Step11: Models performance
None of these measures takes into account the business and economic realities that take place in fraud detection.
Losses due to fraud or customer satisfaction costs are not considered in the evaluation of the different models.
<h1 class="bigtitle">Financial Evaluation of a Fraud Detection Model</h1>
Motivation
Typically, a fraud model is evaluated using standard cost-insensitive measures.
However, in practice, the cost associated with approving a fraudulent transaction (False Negative) is quite different from the cost associated with declining a legitimate transaction (False Positive).
Furthermore, the costs are not constant among transactions.
Cost Matrix
| | Actual Positive ($y_i=1$) | Actual Negative ($y_i=0$)|
|--- |:-: |:-: |
| Pred. Positive ($c_i=1$) | $C_{TP_i}=C_a$ | $C_{FP_i}=C_a$ |
| Pred. Negative ($c_i=0$) | $C_{FN_i}=Amt_i$ | $C_{TN_i}=0$ |
Step12: Financial savings
The financial cost of using a classifier $f$ on $\mathcal{S}$ is calculated by
$$ Cost(f(\mathcal{S})) = \sum_{i=1}^N y_i(1-c_i)C_{FN_i} + (1-y_i)c_i C_{FP_i}.$$
Then the financial savings are defined as the cost of the algorithm versus the cost of using no algorithm at all.
$$ Savings(f(\mathcal{S})) = \frac{ Cost_l(\mathcal{S}) - Cost(f(\mathcal{S}))} {Cost_l(\mathcal{S})},$$
where $Cost_l(\mathcal{S})$ is the cost of the costless class
Models Savings
costcla.metrics.savings_score(y_true, y_pred, cost_mat)
Step13: Models Savings
Step14: Threshold Optimization
Make a classifier cost-sensitive by selecting a proper threshold
from training instances according to the savings
$$ t \quad = \quad argmax_t \: Savings(c(t), y) $$
Step15: Threshold Optimization
Step16: Models Savings
There are significant differences in the results when evaluating a model using traditional cost-insensitive measures
Train models that take into account the different financial costs
<h1 class="bigtitle">Example-Dependent Cost-Sensitive Classification</h1>
*Why "Example-Dependent"
Cost-sensitive classification usually refers to class-dependent costs, where the cost depends on the class but is assumed constant across examples.
In fraud detection, different transactions have different amounts, which implies that the costs are not constant
Bayes Minimum Risk (BMR)
The BMR classifier is a decision model based on quantifying tradeoffs between various decisions using probabilities and the costs that accompany such decisions.
In particular
Step17: BMR Results
Step18: BMR Results
Why is it so important to focus on the Recall?
Average cost of a False Negative
Step19: Average cost of a False Positive
Step20: BMR Results
Bayes Minimum Risk increases the savings by using a cost-insensitive method and then introducing the costs
Why not introduce the costs during the estimation of the methods?
Cost-Sensitive Decision Trees (CSDT)
A new cost-based impurity measure taking into account the costs when all the examples in a leaf
costcla.models.CostSensitiveDecisionTreeClassifier(criterion='direct_cost', criterion_weight=False, pruned=True)
Cost-Sensitive Random Forest (CSRF)
Ensemble of CSDT
costcla.models.CostSensitiveRandomForestClassifier(n_estimators=10, max_samples=0.5, max_features=0.5, combination='majority_voting')
CSDT & CSRF Code
Step21: CSDT & CSRF Results
Step23: Lessons Learned (so far ...)
Selecting models based on traditional statistics does not give the best results in terms of cost
Models should be evaluated taking into account real financial costs of the application
Algorithms should be developed to incorporate those financial costs
<center>
<img src="https | Python Code:
import pandas as pd
import numpy as np
from costcla import datasets
from costcla.datasets.base import Bunch
def load_fraud(cost_mat_parameters=dict(Ca=10)):
# data_ = pd.read_pickle("trx_fraud_data.pk")
data_ = pd.read_pickle("/home/al/DriveAl/EasySol/Projects/DetectTA/Tests/trx_fraud_data_v3_agg.pk")
target = data_['fraud'].values
data = data_.drop('fraud', 1)
n_samples = data.shape[0]
cost_mat = np.zeros((n_samples, 4))
cost_mat[:, 0] = cost_mat_parameters['Ca']
cost_mat[:, 1] = data['amount']
cost_mat[:, 2] = cost_mat_parameters['Ca']
cost_mat[:, 3] = 0.0
return Bunch(data=data.values, target=target, cost_mat=cost_mat,
target_names=['Legitimate Trx', 'Fraudulent Trx'], DESCR='',
feature_names=data.columns.values, name='FraudDetection')
datasets.load_fraud = load_fraud
data = datasets.load_fraud()
Explanation: <h1 class="title">Example-Dependent Cost-Sensitive Fraud Detection using CostCla</h1>
<center>
<h2>Alejandro Correa Bahnsen, PhD</h2>
<p>
<h2>Data Scientist</h2>
<p>
<div>
<img img class="logo" src="https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/doc/tutorials/files/logo_easysol.jpg" style="width: 400px;">
</div>
<h3>PyCaribbean, Santo Domingo, Dominican Republic, Feb 2016</h3>
</center>
<h1 class="bigtitle">About Me</h1>
%%html
<style>
table,td,tr,th {border:none!important}
</style>
### A brief bio:
* PhD in **Machine Learning** at Luxembourg University
* Data Scientist at Easy Solutions
* Worked for +8 years as a data scientist at GE Money, Scotiabank and SIX Financial Services
* Bachelor in Industrial Engineering and Master in Financial Engineering
* Organizer of Big Data & Data Science Bogota Meetup
* Sport addict, love to swim, play tennis, squash, and volleyball, among others.
<p>
<table style="border-collapse: collapse; border-top-color: rgb(255, 255, 255); border-right-color: rgb(255, 255, 255); border-bottom-color: rgb(255, 255, 255); border-left-color: rgb(255, 255, 255); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; " border="0" bordercolor="#888" cellspacing="0" align="left">
<tr>
<td>
<a href="mailto: [email protected]"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.224580688,30 C0.224580688,13.4314567 13.454941,0 29.7754193,0 C46.0958976,0 59.3262579,13.4314567 59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C13.454941,60 0.224580688,46.5685433 0.224580688,30 Z M0.224580688,30" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M35.0384324,31.6384006 L47.2131148,40.5764264 L47.2131148,20 L35.0384324,31.6384006 Z M13.7704918,20 L13.7704918,40.5764264 L25.9449129,31.6371491 L13.7704918,20 Z M30.4918033,35.9844891 L27.5851037,33.2065217 L13.7704918,42 L47.2131148,42 L33.3981762,33.2065217 L30.4918033,35.9844891 Z M46.2098361,20 L14.7737705,20 L30.4918033,32.4549304 L46.2098361,20 Z M46.2098361,20" id="Shape" fill="#333333" sketch:type="MSShapeGroup"></path>
<path d="M59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C23.7225405,60 18.0947051,58.1525134 13.4093244,54.9827754 L47.2695458,5.81941103 C54.5814438,11.2806503 59.3262579,20.0777973 59.3262579,30 Z M59.3262579,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td>
<td>
<a href="mailto: [email protected]" target="_blank">[email protected]</a>
</td> </tr><tr> <td>
<a href="http://github.com/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.336871032,30 C0.336871032,13.4314567 13.5672313,0 29.8877097,0 C46.208188,0 59.4385483,13.4314567 59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C13.5672313,60 0.336871032,46.5685433 0.336871032,30 Z M0.336871032,30" id="Github" fill="#333333" sketch:type="MSShapeGroup"></path>
<path d="M18.2184245,31.9355566 C19.6068506,34.4507902 22.2845295,36.0156764 26.8007287,36.4485173 C26.1561023,36.9365335 25.3817877,37.8630984 25.2749857,38.9342607 C24.4644348,39.4574749 22.8347506,39.62966 21.5674303,39.2310659 C19.7918469,38.6717023 19.1119377,35.1642642 16.4533306,35.6636959 C15.8773626,35.772144 15.9917933,36.1507609 16.489567,36.4722998 C17.3001179,36.9955141 18.0629894,37.6500075 18.6513541,39.04366 C19.1033554,40.113871 20.0531304,42.0259813 23.0569369,42.0259813 C24.2489236,42.0259813 25.0842679,41.8832865 25.0842679,41.8832865 C25.0842679,41.8832865 25.107154,44.6144649 25.107154,45.6761142 C25.107154,46.9004355 23.4507693,47.2457569 23.4507693,47.8346108 C23.4507693,48.067679 23.9990832,48.0895588 24.4396415,48.0895588 C25.3102685,48.0895588 27.1220883,47.3646693 27.1220883,46.0918317 C27.1220883,45.0806012 27.1382993,41.6806599 27.1382993,41.0860982 C27.1382993,39.785673 27.8372803,39.3737607 27.8372803,39.3737607 C27.8372803,39.3737607 27.924057,46.3153869 27.6704022,47.2457569 C27.3728823,48.3397504 26.8360115,48.1846887 26.8360115,48.6727049 C26.8360115,49.3985458 29.0168704,48.8505978 29.7396911,47.2571725 C30.2984945,46.0166791 30.0543756,39.2072834 30.0543756,39.2072834 L30.650369,39.1949165 C30.650369,39.1949165 30.6837446,42.3123222 30.6637192,43.7373675 C30.6427402,45.2128317 30.5426134,47.0792797 31.4208692,47.9592309 C31.9977907,48.5376205 33.868733,49.5526562 33.868733,48.62514 C33.868733,48.0857536 32.8436245,47.6424485 32.8436245,46.1831564 L32.8436245,39.4688905 C33.6618042,39.4688905 33.5387911,41.6768547 33.5387911,41.6768547 L33.5988673,45.7788544 C33.5988673,45.7788544 33.4186389,47.2733446 35.2190156,47.8992991 C35.8541061,48.1209517 37.2139245,48.1808835 37.277815,47.8089257 C37.3417055,47.4360167 35.6405021,46.8814096 35.6252446,45.7236791 C35.6157088,45.0178155 35.6567131,44.6059032 35.6567131,41.5379651 C35.6567131,38.470027 35.2438089,37.336079 33.8048426,36.4323453 C38.2457082,35.9766732 40.9939527,34.880682 42.3337458,31.9450695 C42.4383619,31.9484966 42.8791491,30.5737742 42.8219835,30.5742482 C43.1223642,29.4659853 43.2844744,28.1550957 43.3168964,26.6025764 C43.3092677,22.3930799 41.2895654,20.9042975 40.9014546,20.205093 C41.4736082,17.0182425 40.8060956,15.5675121 40.4961791,15.0699829 C39.3518719,14.6637784 36.5149435,16.1145088 34.9653608,17.1371548 C32.438349,16.3998984 27.0982486,16.4712458 25.0957109,17.3274146 C21.4005522,14.6875608 19.445694,15.0918628 19.445694,15.0918628 C19.445694,15.0918628 18.1821881,17.351197 19.1119377,20.6569598 C17.8961113,22.2028201 16.9902014,23.2968136 16.9902014,26.1963718 C16.9902014,27.8297516 17.1828264,29.2918976 17.6176632,30.5685404 C17.5643577,30.5684093 18.2008493,31.9359777 18.2184245,31.9355566 Z M18.2184245,31.9355566" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C23.8348308,60 18.2069954,58.1525134 13.5216148,54.9827754 L47.3818361,5.81941103 C54.6937341,11.2806503 59.4385483,20.0777973 59.4385483,30 Z M59.4385483,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td><td>
<a href="http://github.com/albahnsen" target="_blank">http://github.com/albahnsen</a>
</td> </tr><tr> <td>
<a href="http://linkedin.com/in/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.449161376,30 C0.449161376,13.4314567 13.6795217,0 30,0 C46.3204783,0 59.5508386,13.4314567 59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C13.6795217,60 0.449161376,46.5685433 0.449161376,30 Z M0.449161376,30" fill="#007BB6" sketch:type="MSShapeGroup"></path>
<path d="M22.4680392,23.7098144 L15.7808366,23.7098144 L15.7808366,44.1369537 L22.4680392,44.1369537 L22.4680392,23.7098144 Z M22.4680392,23.7098144" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M22.9084753,17.3908761 C22.8650727,15.3880081 21.4562917,13.862504 19.1686418,13.862504 C16.8809918,13.862504 15.3854057,15.3880081 15.3854057,17.3908761 C15.3854057,19.3522579 16.836788,20.9216886 19.0818366,20.9216886 L19.1245714,20.9216886 C21.4562917,20.9216886 22.9084753,19.3522579 22.9084753,17.3908761 Z M22.9084753,17.3908761" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M46.5846502,32.4246563 C46.5846502,26.1503226 43.2856534,23.2301456 38.8851658,23.2301456 C35.3347011,23.2301456 33.7450983,25.2128128 32.8575489,26.6036896 L32.8575489,23.7103567 L26.1695449,23.7103567 C26.2576856,25.6271338 26.1695449,44.137496 26.1695449,44.137496 L32.8575489,44.137496 L32.8575489,32.7292961 C32.8575489,32.1187963 32.9009514,31.5097877 33.0777669,31.0726898 C33.5610713,29.8530458 34.6614937,28.5902885 36.5089747,28.5902885 C38.9297703,28.5902885 39.8974476,30.4634101 39.8974476,33.2084226 L39.8974476,44.1369537 L46.5843832,44.1369537 L46.5846502,32.4246563 Z M46.5846502,32.4246563" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C23.9471212,60 18.3192858,58.1525134 13.6339051,54.9827754 L47.4941264,5.81941103 C54.8060245,11.2806503 59.5508386,20.0777973 59.5508386,30 Z M59.5508386,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td> <td>
<a href="http://linkedin.com/in/albahnsen" target="_blank">http://linkedin.com/in/albahnsen</a>
</td> </tr><tr> <td>
<a href="http://twitter.com/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0,30 C0,13.4314567 13.4508663,0 30.0433526,0 C46.6358389,0 60.0867052,13.4314567 60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C13.4508663,60 0,46.5685433 0,30 Z M0,30" fill="#4099FF" sketch:type="MSShapeGroup"></path>
<path d="M29.2997675,23.8879776 L29.3627206,24.9260453 L28.3135016,24.798935 C24.4943445,24.3116787 21.1578281,22.6592444 18.3249368,19.8840023 L16.9399677,18.5069737 L16.5832333,19.5238563 C15.8277956,21.7906572 16.3104363,24.1845684 17.8842648,25.7946325 C18.72364,26.6844048 18.5347806,26.8115152 17.0868584,26.2818888 C16.5832333,26.1124083 16.1425613,25.985298 16.1005925,26.0488532 C15.9537019,26.1971486 16.457327,28.1249885 16.8560302,28.8876505 C17.4016241,29.9469033 18.5137962,30.9849709 19.7308902,31.5993375 L20.7591248,32.0865938 L19.5420308,32.1077788 C18.3669055,32.1077788 18.3249368,32.1289639 18.4508431,32.57385 C18.8705307,33.9508786 20.5282967,35.4126474 22.3749221,36.048199 L23.6759536,36.4930852 L22.5427971,37.1710069 C20.8640467,38.1455194 18.891515,38.6963309 16.9189833,38.738701 C15.9746862,38.759886 15.1982642,38.8446262 15.1982642,38.9081814 C15.1982642,39.1200319 17.7583585,40.306395 19.2482495,40.7724662 C23.7179224,42.1494948 29.0269705,41.5563132 33.0140027,39.2047722 C35.846894,37.5311528 38.6797853,34.2050993 40.0018012,30.9849709 C40.7152701,29.2689815 41.428739,26.1335934 41.428739,24.6294545 C41.428739,23.654942 41.4916922,23.5278317 42.6668174,22.3626537 C43.359302,21.6847319 44.0098178,20.943255 44.135724,20.7314044 C44.3455678,20.3288884 44.3245835,20.3288884 43.2543801,20.6890343 C41.4707078,21.324586 41.2188952,21.2398458 42.1002392,20.2865183 C42.750755,19.6085965 43.527177,18.3798634 43.527177,18.0197174 C43.527177,17.9561623 43.2124113,18.0620876 42.8556769,18.252753 C42.477958,18.4646036 41.6385828,18.7823794 41.0090514,18.9730449 L39.8758949,19.3331908 L38.8476603,18.634084 C38.281082,18.252753 37.4836756,17.829052 37.063988,17.7019416 C35.9937846,17.4053509 34.357003,17.447721 33.3917215,17.7866818 C30.768674,18.7400093 29.110908,21.1974757 29.2997675,23.8879776 Z M29.2997675,23.8879776" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C23.8895925,60 18.1679598,58.1525134 13.4044895,54.9827754 L47.8290478,5.81941103 C55.2628108,11.2806503 60.0867052,20.0777973 60.0867052,30 Z M60.0867052,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td> <td>
<a href="http://twitter.com/albahnsen" target="_blank">@albahnsen</a>
</td> </tr>
</table>
# Agenda
* Quick Intro to Fraud Detection
* Financial Evaluation of a Fraud Detection Model
* Example-Dependent Classification
* CostCla Library
* Conclusion and Future Work
# Fraud Detection
Estimate the **probability** of a transaction being **fraud** based on analyzing customer patterns and recent fraudulent behavior
<center>
<div>
<img img class="logo" src="https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/doc/tutorials/files/trx_flow.png" style="width: 800px;">
</div>
</center>
# Fraud Detection
Issues when constructing a fraud detection system:
* Skewness of the data
* **Cost-sensitivity**
* Short time response of the system
* Dimensionality of the search space
* Feature preprocessing
* Model selection
Different machine learning methods are used in practice, and in the
literature: logistic regression, neural networks, discriminant
analysis, genetic programing, decision trees, random forests among others
# Fraud Detection
Formally, a fraud detection is a statistical model that allows the estimation of the probability of transaction $i$ being a fraud ($y_i=1$)
$$\hat p_i=P(y_i=1|\mathbf{x}_i)$$
<h1 class="bigtitle">Data!</h1>
<center>
<img img class="logo" src="http://www.sei-security.com/wp-content/uploads/2015/12/shutterstock_144683186.jpg" style="width: 400px;">
</center>
# Load dataset from CostCla package
End of explanation
print(data.keys())
print('Number of examples ', data.target.shape[0])
Explanation: Data file
End of explanation
target = pd.DataFrame(pd.Series(data.target).value_counts(), columns=('Frequency',))
target['Percentage'] = (target['Frequency'] / target['Frequency'].sum()) * 100
target.index = ['Negative (Legitimate Trx)', 'Positive (Fraud Trx)']
target.loc['Total Trx'] = [data.target.shape[0], 1.]
print(target)
Explanation: Class Label
End of explanation
pd.DataFrame(data.feature_names[:4], columns=('Features',))
Explanation: Features
End of explanation
df = pd.DataFrame(data.data[:, :4], columns=data.feature_names[:4])
df.head(10)
Explanation: Features
End of explanation
df = pd.DataFrame(data.data[:, 4:], columns=data.feature_names[4:])
df.head(10)
Explanation: Aggregated Features
End of explanation
from sklearn.cross_validation import train_test_split
X = data.data[:, [2, 3] + list(range(4, data.data.shape[1]))].astype(np.float)
X_train, X_test, y_train, y_test, cost_mat_train, cost_mat_test = \
train_test_split(X, data.target, data.cost_mat, test_size=0.33, random_state=10)
Explanation: Fraud Detection as a classification problem
Split in training and testing
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
classifiers = {"RF": {"f": RandomForestClassifier()},
"DT": {"f": DecisionTreeClassifier()}}
ci_models = ['DT', 'RF']
# Fit the classifiers using the training dataset
for model in classifiers.keys():
classifiers[model]["f"].fit(X_train, y_train)
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test)
classifiers[model]["p"] = classifiers[model]["f"].predict_proba(X_test)
classifiers[model]["p_train"] = classifiers[model]["f"].predict_proba(X_train)
Explanation: Fraud Detection as a classification problem
Fit models
End of explanation
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
import seaborn as sns
colors = sns.color_palette()
figsize(12, 8)
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
measures = {"F1Score": f1_score, "Precision": precision_score,
"Recall": recall_score, "Accuracy": accuracy_score}
results = pd.DataFrame(columns=measures.keys())
for model in ci_models:
results.loc[model] = [measures[measure](y_test, classifiers[model]["c"]) for measure in measures.keys()]
Explanation: Models performance
Evaluate metrics and plot results
End of explanation
def fig_acc():
plt.bar(np.arange(results.shape[0])-0.3, results['Accuracy'], 0.6, label='Accuracy', color=colors[0])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22); plt.title('Accuracy', size=30)
plt.show()
fig_acc()
Explanation: Models performance
End of explanation
def fig_f1():
plt.bar(np.arange(results.shape[0])-0.3, results['Precision'], 0.2, label='Precision', color=colors[0])
plt.bar(np.arange(results.shape[0])-0.3+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])
plt.bar(np.arange(results.shape[0])-0.3+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22)
plt.ylim([0, 1])
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)
plt.show()
fig_f1()
Explanation: Models performance
End of explanation
# The cost matrix is already calculated for the dataset
# cost_mat[C_FP,C_FN,C_TP,C_TN]
print(data.cost_mat[[10, 17, 50]])
Explanation: Models performance
None of these measures takes into account the business and economic realities that take place in fraud detection.
Losses due to fraud or customer satisfaction costs are not considered in the evaluation of the different models.
<h1 class="bigtitle">Financial Evaluation of a Fraud Detection Model</h1>
Motivation
Typically, a fraud model is evaluated using standard cost-insensitive measures.
However, in practice, the cost associated with approving a fraudulent transaction (False Negative) is quite different from the cost associated with declining a legitimate transaction (False Positive).
Furthermore, the costs are not constant among transactions.
Cost Matrix
| | Actual Positive ($y_i=1$) | Actual Negative ($y_i=0$)|
|--- |:-: |:-: |
| Pred. Positive ($c_i=1$) | $C_{TP_i}=C_a$ | $C_{FP_i}=C_a$ |
| Pred. Negative ($c_i=0$) | $C_{FN_i}=Amt_i$ | $C_{TN_i}=0$ |
Where:
$C_{FN_i}$ = Amount of the transaction $i$
$C_a$ is the administrative cost of dealing with an alert
For more info see <a href="http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf" target="_blank">[Correa Bahnsen et al., 2014]</a>
End of explanation
# Calculation of the cost and savings
from costcla.metrics import savings_score, cost_loss
# Evaluate the savings for each model
results["Savings"] = np.zeros(results.shape[0])
for model in ci_models:
results["Savings"].loc[model] = savings_score(y_test, classifiers[model]["c"], cost_mat_test)
# Plot the results
def fig_sav():
plt.bar(np.arange(results.shape[0])-0.4, results['Precision'], 0.2, label='Precision', color=colors[0])
plt.bar(np.arange(results.shape[0])-0.4+0.2, results['Recall'], 0.2, label='Recall', color=colors[1])
plt.bar(np.arange(results.shape[0])-0.4+0.4, results['F1Score'], 0.2, label='F1Score', color=colors[2])
plt.bar(np.arange(results.shape[0])-0.4+0.6, results['Savings'], 0.2, label='Savings', color=colors[3])
plt.xticks(range(results.shape[0]), results.index)
plt.tick_params(labelsize=22)
plt.ylim([0, 1])
plt.xlim([-0.5, results.shape[0] -1 + .5])
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),fontsize=22)
plt.show()
Explanation: Financial savings
The financial cost of using a classifier $f$ on $\mathcal{S}$ is calculated by
$$ Cost(f(\mathcal{S})) = \sum_{i=1}^N y_i(1-c_i)C_{FN_i} + (1-y_i)c_i C_{FP_i}.$$
Then the financial savings are defined as the cost of the algorithm versus the cost of using no algorithm at all.
$$ Savings(f(\mathcal{S})) = \frac{ Cost_l(\mathcal{S}) - Cost(f(\mathcal{S}))} {Cost_l(\mathcal{S})},$$
where $Cost_l(\mathcal{S})$ is the cost of the costless class
Models Savings
costcla.metrics.savings_score(y_true, y_pred, cost_mat)
End of explanation
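To make the formula concrete, the cost and savings can also be computed by hand from the cost matrix columns [C_FP, C_FN, C_TP, C_TN]; this sketch follows the two-term cost formula above, so the numbers can differ slightly from costcla's cost_loss, which also accounts for C_TP and C_TN.
# Hand-rolled cost and savings, for intuition only.
def cost_of(y_true, y_pred, cost_mat):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sum(y_true * (1 - y_pred) * cost_mat[:, 1] +   # missed frauds pay C_FN (the amount)
                  (1 - y_true) * y_pred * cost_mat[:, 0])    # false alerts pay C_FP (admin cost)
def savings_of(y_true, y_pred, cost_mat):
    y_true = np.asarray(y_true, dtype=float)
    cost_none = min(cost_of(y_true, np.zeros_like(y_true), cost_mat),
                    cost_of(y_true, np.ones_like(y_true), cost_mat))  # cost of the costless class
    return (cost_none - cost_of(y_true, y_pred, cost_mat)) / cost_none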
fig_sav()
Explanation: Models Savings
End of explanation
from costcla.models import ThresholdingOptimization
for model in ci_models:
classifiers[model+"-TO"] = {"f": ThresholdingOptimization()}
# Fit
classifiers[model+"-TO"]["f"].fit(classifiers[model]["p_train"], cost_mat_train, y_train)
# Predict
classifiers[model+"-TO"]["c"] = classifiers[model+"-TO"]["f"].predict(classifiers[model]["p"])
print('New thresholds')
for model in ci_models:
print(model + '-TO - ' + str(classifiers[model+'-TO']['f'].threshold_))
for model in ci_models:
# Evaluate
results.loc[model+"-TO"] = 0
results.loc[model+"-TO", measures.keys()] = \
[measures[measure](y_test, classifiers[model+"-TO"]["c"]) for measure in measures.keys()]
results["Savings"].loc[model+"-TO"] = savings_score(y_test, classifiers[model+"-TO"]["c"], cost_mat_test)
Explanation: Threshold Optimization
Make a classifier cost-sensitive by selecting a proper threshold
from training instances according to the savings
$$ t \quad = \quad argmax_t \: Savings(c(t), y) $$
Threshold Optimization - Code
costcla.models.ThresholdingOptimization(calibration=True)
fit(y_prob_train=None, cost_mat, y_true_train)
- Parameters
- y_prob_train : Predicted probabilities of the training set
- cost_mat : Cost matrix of the classification problem.
- y_true_cal : True class
predict(y_prob)
- Parameters
- y_prob : Predicted probabilities
Returns
y_pred : Predicted class
Threshold Optimization
End of explanation
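The same search can be brute-forced in a few lines, which is a handy way to see what ThresholdingOptimization is doing; this sketch skips the calibration step and uses a simple grid, so it is an approximation of the library's behaviour, not a replacement.
# Rough brute-force threshold search on the training probabilities.
def best_threshold(y_prob_train, cost_mat, y_true_train, grid=np.linspace(0.0, 1.0, 101)):
    scores = [savings_score(y_true_train,
                            (y_prob_train[:, 1] >= t).astype(int),
                            cost_mat)
              for t in grid]
    return grid[int(np.argmax(scores))]
print('Brute-force threshold for RF: %.2f'
      % best_threshold(classifiers['RF']['p_train'], cost_mat_train, y_train))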
fig_sav()
Explanation: Threshold Optimization
End of explanation
from costcla.models import BayesMinimumRiskClassifier
for model in ci_models:
classifiers[model+"-BMR"] = {"f": BayesMinimumRiskClassifier()}
# Fit
classifiers[model+"-BMR"]["f"].fit(y_test, classifiers[model]["p"])
# Calibration must be made in a validation set
# Predict
classifiers[model+"-BMR"]["c"] = classifiers[model+"-BMR"]["f"].predict(classifiers[model]["p"], cost_mat_test)
for model in ci_models:
# Evaluate
results.loc[model+"-BMR"] = 0
results.loc[model+"-BMR", measures.keys()] = \
[measures[measure](y_test, classifiers[model+"-BMR"]["c"]) for measure in measures.keys()]
results["Savings"].loc[model+"-BMR"] = savings_score(y_test, classifiers[model+"-BMR"]["c"], cost_mat_test)
Explanation: Models Savings
There are significant differences in the results when evaluating a model using traditional cost-insensitive measures
Train models that take into account the different financial costs
<h1 class="bigtitle">Example-Dependent Cost-Sensitive Classification</h1>
*Why "Example-Dependent"
Cost-sensitive classification usually refers to class-dependent costs, where the cost depends on the class but is assumed constant across examples.
In fraud detection, different transactions have different amounts, which implies that the costs are not constant
Bayes Minimum Risk (BMR)
The BMR classifier is a decision model based on quantifying tradeoffs between various decisions using probabilities and the costs that accompany such decisions.
In particular:
$$ R(c_i=0|\mathbf{x}_i)=C_{TN_i}(1-\hat p_i)+C_{FN_i} \cdot \hat p_i, $$
and
$$ R(c_i=1|\mathbf{x}_i)=C_{TP_i} \cdot \hat p_i + C_{FP_i}(1- \hat p_i), $$
BMR Code
costcla.models.BayesMinimumRiskClassifier(calibration=True)
fit(y_true_cal=None, y_prob_cal=None)
- Parameters
- y_true_cal : True class
- y_prob_cal : Predicted probabilities
predict(y_prob,cost_mat)
- Parameters
- y_prob : Predicted probabilities
- cost_mat : Cost matrix of the classification problem.
Returns
y_pred : Predicted class
BMR Code
End of explanation
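Written out in NumPy, the decision rule behind BayesMinimumRiskClassifier is just a comparison of the two risks above; this sketch ignores the probability calibration the library also applies.
# Minimal Bayes minimum risk rule, using the cost_mat column order [C_FP, C_FN, C_TP, C_TN].
def bmr_predict(y_prob, cost_mat):
    p = y_prob[:, 1]                                           # estimated probability of fraud
    risk_neg = cost_mat[:, 3] * (1 - p) + cost_mat[:, 1] * p   # R(c=0|x)
    risk_pos = cost_mat[:, 2] * p + cost_mat[:, 0] * (1 - p)   # R(c=1|x)
    return (risk_pos <= risk_neg).astype(int)
c_manual_bmr = bmr_predict(classifiers['RF']['p'], cost_mat_test)
print('Savings (manual BMR on RF): %.3f' % savings_score(y_test, c_manual_bmr, cost_mat_test))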
fig_sav()
Explanation: BMR Results
End of explanation
print(data.data[data.target == 1, 2].mean())
Explanation: BMR Results
Why is it so important to focus on the Recall?
Average cost of a False Negative
End of explanation
print(data.cost_mat[:,0].mean())
Explanation: Average cost of a False Positive
End of explanation
from costcla.models import CostSensitiveDecisionTreeClassifier
from costcla.models import CostSensitiveRandomForestClassifier
classifiers = {"CSDT": {"f": CostSensitiveDecisionTreeClassifier()},
"CSRF": {"f": CostSensitiveRandomForestClassifier(combination='majority_bmr')}}
# Fit the classifiers using the training dataset
for model in classifiers.keys():
classifiers[model]["f"].fit(X_train, y_train, cost_mat_train)
if model == "CSRF":
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test, cost_mat_test)
else:
classifiers[model]["c"] = classifiers[model]["f"].predict(X_test)
for model in ['CSDT', 'CSRF']:
# Evaluate
results.loc[model] = 0
results.loc[model, measures.keys()] = \
[measures[measure](y_test, classifiers[model]["c"]) for measure in measures.keys()]
results["Savings"].loc[model] = savings_score(y_test, classifiers[model]["c"], cost_mat_test)
Explanation: BMR Results
Bayes Minimum Risk increases the savings by using a cost-insensitive method and then introducing the costs
Why not introduce the costs during the estimation of the methods?
Cost-Sensitive Decision Trees (CSDT)
A new cost-based impurity measure taking into account the costs when all the examples in a leaf
costcla.models.CostSensitiveDecisionTreeClassifier(criterion='direct_cost', criterion_weight=False, pruned=True)
Cost-Sensitive Random Forest (CSRF)
Ensemble of CSDT
costcla.models.CostSensitiveRandomForestClassifier(n_estimators=10, max_samples=0.5, max_features=0.5, combination='majority_voting')
CSDT & CSRF Code
End of explanation
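A simplified way to picture the cost-based impurity: for the examples that fall in a leaf, compare the cost of labelling them all negative with the cost of labelling them all positive and keep the cheaper option. This is a sketch of the idea, not the library's exact criterion.
# Simplified leaf impurity, assuming the usual [C_FP, C_FN, C_TP, C_TN] column order.
def leaf_cost_impurity(y_leaf, cost_mat_leaf):
    cost_all_negative = np.sum(y_leaf * cost_mat_leaf[:, 1])        # frauds slip through, pay C_FN
    cost_all_positive = np.sum(y_leaf * cost_mat_leaf[:, 2] +       # caught frauds pay C_TP
                               (1 - y_leaf) * cost_mat_leaf[:, 0])  # flagged legitimate trx pay C_FP
    return min(cost_all_negative, cost_all_positive)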
fig_sav()
Explanation: CSDT & CSRF Results
End of explanation
#Format from https://github.com/ellisonbg/talk-2013-scipy
from IPython.display import display, HTML
s = """
<style>
.rendered_html {
font-family: "proxima-nova", helvetica;
font-size: 100%;
line-height: 1.3;
}
.rendered_html h1 {
margin: 0.25em 0em 0.5em;
color: #015C9C;
text-align: center;
line-height: 1.2;
page-break-before: always;
}
.rendered_html h2 {
margin: 1.1em 0em 0.5em;
color: #26465D;
line-height: 1.2;
}
.rendered_html h3 {
margin: 1.1em 0em 0.5em;
color: #002845;
line-height: 1.2;
}
.rendered_html li {
line-height: 1.5;
}
.prompt {
font-size: 120%;
}
.CodeMirror-lines {
font-size: 120%;
}
.output_area {
font-size: 120%;
}
#notebook {
background-image: url('files/images/witewall_3.png');
}
h1.bigtitle {
margin: 4cm 1cm 4cm 1cm;
font-size: 300%;
}
h3.point {
font-size: 200%;
text-align: center;
margin: 2em 0em 2em 0em;
#26465D
}
.logo {
margin: 20px 0 20px 0;
}
a.anchor-link {
display: none;
}
h1.title {
font-size: 250%;
}
</style>
"""
display(HTML(s))
Explanation: Lessons Learned (so far ...)
Selecting models based on traditional statistics does not give the best results in terms of cost
Models should be evaluated taking into account real financial costs of the application
Algorithms should be developed to incorporate those financial costs
<center>
<img src="https://raw.githubusercontent.com/albahnsen/CostSensitiveClassification/master/logo.png" style="width: 600px;" align="middle">
</center>
CostCla Library
CostCla is a Python open source cost-sensitive classification library built on top of Scikit-learn, Pandas and Numpy.
Source code, binaries and documentation are distributed under 3-Clause BSD license in the website http://albahnsen.com/CostSensitiveClassification/
CostCla Algorithms
Cost-proportionate over-sampling <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.514" target="_blank">[Elkan, 2001]</a>
SMOTE <a href="http://arxiv.org/abs/1106.1813" target="_blank">[Chawla et al., 2002]</a>
Cost-proportionate rejection-sampling <a href="http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1250950" target="_blank">[Zadrozny et al., 2003]</a>
Thresholding optimization <a href="http://www.aaai.org/Papers/AAAI/2006/AAAI06-076.pdf" target="_blank">[Sheng and Ling, 2006]</a>
Bayes minimum risk <a href="http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf" target="_blank">[Correa Bahnsen et al., 2014a]</a>
Cost-sensitive logistic regression <a href="http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf" target="_blank">[Correa Bahnsen et al., 2014b]</a>
Cost-sensitive decision trees <a href="http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Decision%20Trees.pdf" target="_blank">[Correa Bahnsen et al., 2015a]</a>
Cost-sensitive ensemble methods: cost-sensitive bagging, cost-sensitive pasting, cost-sensitive random forest and cost-sensitive random patches <a href="http://arxiv.org/abs/1505.04637" target="_blank">[Correa Bahnsen et al., 2015c]</a>
CostCla Databases
Credit Scoring1 - Kaggle credit competition <a href="https://www.kaggle.com/c/GiveMeSomeCredit" target="_blank">[Data]</a>, cost matrix: <a href="http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf" target="_blank">[Correa Bahnsen et al., 2014]</a>
Credit Scoring 2 - PAKDD2009 Credit <a href="http://sede.neurotech.com.br/PAKDD2009/" target="_blank">[Data]</a>, cost matrix: <a href="http://albahnsen.com/files/Example-Dependent%20Cost-Sensitive%20Logistic%20Regression%20for%20Credit%20Scoring_publish.pdf" target="_blank">[Correa Bahnsen et al., 2014a]</a>
Direct Marketing - PAKDD2009 Credit <a href="https://archive.ics.uci.edu/ml/datasets/Bank+Marketing" target="_blank">[Data]</a>, cost matrix: <a href="http://albahnsen.com/files/%20Improving%20Credit%20Card%20Fraud%20Detection%20by%20using%20Calibrated%20Probabilities%20-%20Publish.pdf" target="_blank">[Correa Bahnsen et al., 2014b]</a>
Churn Modeling, soon
Fraud Detection, soon
Future Work
CSDT in Cython
Cost-sensitive class-dependent algorithms
Sampling algorithms
Probability calibration (Only ROCCH)
Other algorithms
More databases
You find the presentation and the IPython Notebook here:
<a href="http://nbviewer.ipython.org/format/slides/github/albahnsen/CostSensitiveClassification/blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb#/" target="_blank">http://nbviewer.ipython.org/format/slides/github/
albahnsen/CostSensitiveClassification/blob/
master/doc/tutorials/slides_edcs_fraud_detection.ipynb#/</a>
<a href="https://github.com/albahnsen/CostSensitiveClassification/blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb" target="_blank">https://github.com/albahnsen/CostSensitiveClassification/ blob/master/doc/tutorials/slides_edcs_fraud_detection.ipynb</a>
<h1 class="bigtitle">Thanks!</h1>
<center>
<table style="border-collapse: collapse; border-top-color: rgb(255, 255, 255); border-right-color: rgb(255, 255, 255); border-bottom-color: rgb(255, 255, 255); border-left-color: rgb(255, 255, 255); border-top-width: 1px; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; " border="0" bordercolor="#888" cellspacing="0" align="left">
<tr>
<td>
<a href="mailto: [email protected]"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.224580688,30 C0.224580688,13.4314567 13.454941,0 29.7754193,0 C46.0958976,0 59.3262579,13.4314567 59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C13.454941,60 0.224580688,46.5685433 0.224580688,30 Z M0.224580688,30" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M35.0384324,31.6384006 L47.2131148,40.5764264 L47.2131148,20 L35.0384324,31.6384006 Z M13.7704918,20 L13.7704918,40.5764264 L25.9449129,31.6371491 L13.7704918,20 Z M30.4918033,35.9844891 L27.5851037,33.2065217 L13.7704918,42 L47.2131148,42 L33.3981762,33.2065217 L30.4918033,35.9844891 Z M46.2098361,20 L14.7737705,20 L30.4918033,32.4549304 L46.2098361,20 Z M46.2098361,20" id="Shape" fill="#333333" sketch:type="MSShapeGroup"></path>
<path d="M59.3262579,30 C59.3262579,46.5685433 46.0958976,60 29.7754193,60 C23.7225405,60 18.0947051,58.1525134 13.4093244,54.9827754 L47.2695458,5.81941103 C54.5814438,11.2806503 59.3262579,20.0777973 59.3262579,30 Z M59.3262579,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td>
<td>
<a href="mailto: [email protected]" target="_blank">[email protected]</a>
</td> </tr><tr> <td>
<a href="http://github.com/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.336871032,30 C0.336871032,13.4314567 13.5672313,0 29.8877097,0 C46.208188,0 59.4385483,13.4314567 59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C13.5672313,60 0.336871032,46.5685433 0.336871032,30 Z M0.336871032,30" id="Github" fill="#333333" sketch:type="MSShapeGroup"></path>
<path d="M18.2184245,31.9355566 C19.6068506,34.4507902 22.2845295,36.0156764 26.8007287,36.4485173 C26.1561023,36.9365335 25.3817877,37.8630984 25.2749857,38.9342607 C24.4644348,39.4574749 22.8347506,39.62966 21.5674303,39.2310659 C19.7918469,38.6717023 19.1119377,35.1642642 16.4533306,35.6636959 C15.8773626,35.772144 15.9917933,36.1507609 16.489567,36.4722998 C17.3001179,36.9955141 18.0629894,37.6500075 18.6513541,39.04366 C19.1033554,40.113871 20.0531304,42.0259813 23.0569369,42.0259813 C24.2489236,42.0259813 25.0842679,41.8832865 25.0842679,41.8832865 C25.0842679,41.8832865 25.107154,44.6144649 25.107154,45.6761142 C25.107154,46.9004355 23.4507693,47.2457569 23.4507693,47.8346108 C23.4507693,48.067679 23.9990832,48.0895588 24.4396415,48.0895588 C25.3102685,48.0895588 27.1220883,47.3646693 27.1220883,46.0918317 C27.1220883,45.0806012 27.1382993,41.6806599 27.1382993,41.0860982 C27.1382993,39.785673 27.8372803,39.3737607 27.8372803,39.3737607 C27.8372803,39.3737607 27.924057,46.3153869 27.6704022,47.2457569 C27.3728823,48.3397504 26.8360115,48.1846887 26.8360115,48.6727049 C26.8360115,49.3985458 29.0168704,48.8505978 29.7396911,47.2571725 C30.2984945,46.0166791 30.0543756,39.2072834 30.0543756,39.2072834 L30.650369,39.1949165 C30.650369,39.1949165 30.6837446,42.3123222 30.6637192,43.7373675 C30.6427402,45.2128317 30.5426134,47.0792797 31.4208692,47.9592309 C31.9977907,48.5376205 33.868733,49.5526562 33.868733,48.62514 C33.868733,48.0857536 32.8436245,47.6424485 32.8436245,46.1831564 L32.8436245,39.4688905 C33.6618042,39.4688905 33.5387911,41.6768547 33.5387911,41.6768547 L33.5988673,45.7788544 C33.5988673,45.7788544 33.4186389,47.2733446 35.2190156,47.8992991 C35.8541061,48.1209517 37.2139245,48.1808835 37.277815,47.8089257 C37.3417055,47.4360167 35.6405021,46.8814096 35.6252446,45.7236791 C35.6157088,45.0178155 35.6567131,44.6059032 35.6567131,41.5379651 C35.6567131,38.470027 35.2438089,37.336079 33.8048426,36.4323453 C38.2457082,35.9766732 40.9939527,34.880682 42.3337458,31.9450695 C42.4383619,31.9484966 42.8791491,30.5737742 42.8219835,30.5742482 C43.1223642,29.4659853 43.2844744,28.1550957 43.3168964,26.6025764 C43.3092677,22.3930799 41.2895654,20.9042975 40.9014546,20.205093 C41.4736082,17.0182425 40.8060956,15.5675121 40.4961791,15.0699829 C39.3518719,14.6637784 36.5149435,16.1145088 34.9653608,17.1371548 C32.438349,16.3998984 27.0982486,16.4712458 25.0957109,17.3274146 C21.4005522,14.6875608 19.445694,15.0918628 19.445694,15.0918628 C19.445694,15.0918628 18.1821881,17.351197 19.1119377,20.6569598 C17.8961113,22.2028201 16.9902014,23.2968136 16.9902014,26.1963718 C16.9902014,27.8297516 17.1828264,29.2918976 17.6176632,30.5685404 C17.5643577,30.5684093 18.2008493,31.9359777 18.2184245,31.9355566 Z M18.2184245,31.9355566" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M59.4385483,30 C59.4385483,46.5685433 46.208188,60 29.8877097,60 C23.8348308,60 18.2069954,58.1525134 13.5216148,54.9827754 L47.3818361,5.81941103 C54.6937341,11.2806503 59.4385483,20.0777973 59.4385483,30 Z M59.4385483,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td><td>
<a href="http://github.com/albahnsen" target="_blank">http://github.com/albahnsen</a>
</td> </tr><tr> <td>
<a href="http://linkedin.com/in/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0.449161376,30 C0.449161376,13.4314567 13.6795217,0 30,0 C46.3204783,0 59.5508386,13.4314567 59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C13.6795217,60 0.449161376,46.5685433 0.449161376,30 Z M0.449161376,30" fill="#007BB6" sketch:type="MSShapeGroup"></path>
<path d="M22.4680392,23.7098144 L15.7808366,23.7098144 L15.7808366,44.1369537 L22.4680392,44.1369537 L22.4680392,23.7098144 Z M22.4680392,23.7098144" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M22.9084753,17.3908761 C22.8650727,15.3880081 21.4562917,13.862504 19.1686418,13.862504 C16.8809918,13.862504 15.3854057,15.3880081 15.3854057,17.3908761 C15.3854057,19.3522579 16.836788,20.9216886 19.0818366,20.9216886 L19.1245714,20.9216886 C21.4562917,20.9216886 22.9084753,19.3522579 22.9084753,17.3908761 Z M22.9084753,17.3908761" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M46.5846502,32.4246563 C46.5846502,26.1503226 43.2856534,23.2301456 38.8851658,23.2301456 C35.3347011,23.2301456 33.7450983,25.2128128 32.8575489,26.6036896 L32.8575489,23.7103567 L26.1695449,23.7103567 C26.2576856,25.6271338 26.1695449,44.137496 26.1695449,44.137496 L32.8575489,44.137496 L32.8575489,32.7292961 C32.8575489,32.1187963 32.9009514,31.5097877 33.0777669,31.0726898 C33.5610713,29.8530458 34.6614937,28.5902885 36.5089747,28.5902885 C38.9297703,28.5902885 39.8974476,30.4634101 39.8974476,33.2084226 L39.8974476,44.1369537 L46.5843832,44.1369537 L46.5846502,32.4246563 Z M46.5846502,32.4246563" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M59.5508386,30 C59.5508386,46.5685433 46.3204783,60 30,60 C23.9471212,60 18.3192858,58.1525134 13.6339051,54.9827754 L47.4941264,5.81941103 C54.8060245,11.2806503 59.5508386,20.0777973 59.5508386,30 Z M59.5508386,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td> <td>
<a href="http://linkedin.com/in/albahnsen" target="_blank">http://linkedin.com/in/albahnsen</a>
</td> </tr><tr> <td>
<a href="http://twitter.com/albahnsen"><svg width="40px" height="40px" viewBox="0 0 60 60" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns">
<path d="M0,30 C0,13.4314567 13.4508663,0 30.0433526,0 C46.6358389,0 60.0867052,13.4314567 60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C13.4508663,60 0,46.5685433 0,30 Z M0,30" fill="#4099FF" sketch:type="MSShapeGroup"></path>
<path d="M29.2997675,23.8879776 L29.3627206,24.9260453 L28.3135016,24.798935 C24.4943445,24.3116787 21.1578281,22.6592444 18.3249368,19.8840023 L16.9399677,18.5069737 L16.5832333,19.5238563 C15.8277956,21.7906572 16.3104363,24.1845684 17.8842648,25.7946325 C18.72364,26.6844048 18.5347806,26.8115152 17.0868584,26.2818888 C16.5832333,26.1124083 16.1425613,25.985298 16.1005925,26.0488532 C15.9537019,26.1971486 16.457327,28.1249885 16.8560302,28.8876505 C17.4016241,29.9469033 18.5137962,30.9849709 19.7308902,31.5993375 L20.7591248,32.0865938 L19.5420308,32.1077788 C18.3669055,32.1077788 18.3249368,32.1289639 18.4508431,32.57385 C18.8705307,33.9508786 20.5282967,35.4126474 22.3749221,36.048199 L23.6759536,36.4930852 L22.5427971,37.1710069 C20.8640467,38.1455194 18.891515,38.6963309 16.9189833,38.738701 C15.9746862,38.759886 15.1982642,38.8446262 15.1982642,38.9081814 C15.1982642,39.1200319 17.7583585,40.306395 19.2482495,40.7724662 C23.7179224,42.1494948 29.0269705,41.5563132 33.0140027,39.2047722 C35.846894,37.5311528 38.6797853,34.2050993 40.0018012,30.9849709 C40.7152701,29.2689815 41.428739,26.1335934 41.428739,24.6294545 C41.428739,23.654942 41.4916922,23.5278317 42.6668174,22.3626537 C43.359302,21.6847319 44.0098178,20.943255 44.135724,20.7314044 C44.3455678,20.3288884 44.3245835,20.3288884 43.2543801,20.6890343 C41.4707078,21.324586 41.2188952,21.2398458 42.1002392,20.2865183 C42.750755,19.6085965 43.527177,18.3798634 43.527177,18.0197174 C43.527177,17.9561623 43.2124113,18.0620876 42.8556769,18.252753 C42.477958,18.4646036 41.6385828,18.7823794 41.0090514,18.9730449 L39.8758949,19.3331908 L38.8476603,18.634084 C38.281082,18.252753 37.4836756,17.829052 37.063988,17.7019416 C35.9937846,17.4053509 34.357003,17.447721 33.3917215,17.7866818 C30.768674,18.7400093 29.110908,21.1974757 29.2997675,23.8879776 Z M29.2997675,23.8879776" id="Path" fill="#FFFFFF" sketch:type="MSShapeGroup"></path>
<path d="M60.0867052,30 C60.0867052,46.5685433 46.6358389,60 30.0433526,60 C23.8895925,60 18.1679598,58.1525134 13.4044895,54.9827754 L47.8290478,5.81941103 C55.2628108,11.2806503 60.0867052,20.0777973 60.0867052,30 Z M60.0867052,30" id="reflec" fill-opacity="0.08" fill="#000000" sketch:type="MSShapeGroup"></path>
</svg></a>
</td> <td>
<a href="http://twitter.com/albahnsen" target="_blank">@albahnsen</a>
</td> </tr>
</table>
</center>
End of explanation |
10,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Model without Spots
Step3: Adding Spots
Let's add a spot to the primary component in our binary.
The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
Step4: Comparing Light Curves | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.add_dataset('lc', times=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none', model='no_spot')
Explanation: Model without Spots
End of explanation
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=30, colat=45, long=90)
b.run_compute(irrad_method='none', model='with_spot')
Explanation: Adding Spots
Let's add a spot to the primary component in our binary.
The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
End of explanation
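Before plotting, one quick way to quantify the spot's effect is to difference the two synthetic light curves; the dataset tag 'lc01' below is PHOEBE's default auto-generated name and is an assumption here.
# Rough check of how much flux the spot removes (assumes the default 'lc01' dataset tag).
fluxes_no_spot = b.get_value(qualifier='fluxes', dataset='lc01', model='no_spot', context='model')
fluxes_with_spot = b.get_value(qualifier='fluxes', dataset='lc01', model='with_spot', context='model')
print("max fractional flux change from the spot:",
      np.max(np.abs(fluxes_with_spot - fluxes_no_spot) / fluxes_no_spot))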
afig, mplfig = b.plot(show=True, legend=True)
Explanation: Comparing Light Curves
End of explanation |
10,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
USA UFO sightings (Python 3 version)
This notebook is based on the first chapter sample from Machine Learning for Hackers, with some added features. I did this to present Jupyter Notebook with Python 3 for Tech Days at my job.
The original link is offline, so you need to download the file from the author's repository and place it inside ../data in the R notebook directory.
I will assume the following questions need to be answered:
- What is the best place in the USA for UFO sightings?
- What is the best month for UFO sightings in the USA?
Loading the data
This first section will handle loading the main data file using Pandas.
Step1: Here we are loading the dataset with pandas with a minimal set of options.
- sep
Step2: With the data loaded in the ufo dataframe, let's check its composition and first few rows.
Step3: The dataframe describe() shows us how many items (excluding NaN) each column has, how many are unique, which value is the most frequent, and how often that value appears. head() simply shows us the first 5 rows (the first index is 0 in Python).
Dealing with metadata and column names
We need to handle the column names; to do so it is necessary to look at the data documentation. The table below shows the field details obtained from the metadata
Step4: Now we have a good-looking dataframe with named columns.
Step5: Data Wrangling
Now we start to transform our data into something to analyse.
Keeping only necessary data
To decide about this, let's get back to the questions to be answered.
The first one is about the best place in the USA for UFO sightings; for this we will need the Location column, and at some point we will filter on it. The second question is about the best month for UFO sightings, which leads to the DateOccurred column.
Based on this, the Shape and LongDescription columns can be stripped right now (their lack of relevance is fairly obvious). But there are 2 other columns which may or may not be removed: DateReported and Duration.
I always keep in mind to maintain, at least until a second pass, columns with some useful information for further data wrangling or to get some statistical sense from them. These two columns hold a date (in YYYYMMDD format) and a raw string, respectively, which could store some useful information if treated and converted to a numeric format. For the purpose of this demo, I am removing them because DateReported will not be used further (what matters is when the sighting occurred, not when it was reported) and Duration is really messy; for an example to show on a Tech Day the effort to decompose it is not worth it.
The drop() command below has the following parameters
Step6: Converting data
Now we are ready to start the data transformation: the date columns must be converted to Python datetime objects to allow manipulation of their time series.
The first problem happens when trying to run this code using pandas.to_datetime() to convert the strings
Step7: The column is now a datetime object and has 60814 elements against the original 61069, which shows the bad dates are gone. The following code shows us how many elements were removed.
Step8: It is no surprise that 60814 + 255 = 61069; we need to deal with these values too.
So we have a field DateOccurred with some NaN values. At this point we need to make an important decision: get rid of the rows with NaN dates or fill them with something.
There is no universal guide for this; we could fill them with the mean of the column or copy the content of the DateReported column. But in this case the missing dates are less than 0.5% of the total, so for simplicity's sake we will simply drop all NaN values.
Step9: With the dates cleaned, let's create another 2 columns to handle years and months separately. This will make some analyses easier (like discovering which is the best month of the year to look for UFO sightings).
Step10: A funny thing about the years is that the oldest sighting is from 1762! This dataset includes historical sightings.
How significant is this? Well, to figure it out it's time to plot some charts. Humans are visual beings, and a picture is really worth much more than a bunch of numbers and words.
To do so we will use the matplotlib library to build our graphs.
Analysing the years
Before starting, let's count the sightings by year.
The commands below are equivalent to the following SQL code
Step11: We can see the number of sightings is more representative after around 1900, so we will filter the dataframe for all years above this threshold.
Step12: Now let's see how the graph behaves
Step13: Handling location
Here we will take two steps: the first is splitting all locations into city and state, for the USA only; the second is loading a dataset with the latitude and longitude of each USA city for a future merge. | Python Code:
import pandas as pd
import numpy as np
Explanation: USA UFO sightings (Python 3 version)
This notebook is based on the first chapter sample from Machine Learning for Hackers, with some added features. I did this to present Jupyter Notebook with Python 3 for Tech Days at my job.
The original link is offline, so you need to download the file from the author's repository and place it inside ../data in the R notebook directory.
I will assume the following questions need to be answered:
- What is the best place in the USA for UFO sightings?
- What is the best month for UFO sightings in the USA?
Loading the data
This first section will handle loading the main data file using Pandas.
End of explanation
ufo = pd.read_csv(
'../data/ufo_awesome.tsv',
sep = "\t",
header = None,
dtype = object,
na_values = ['', 'NaN'],
error_bad_lines = False,
warn_bad_lines = False
)
Explanation: Here we are loading the dataset with pandas with a minimal set of options.
- sep: since the file is in TSV format, the separator is the <TAB> special character;
- na_values: the file has empty strings for NaN values;
- header: do not treat any row as a header, since the file lacks one;
- dtype: load the dataframe as objects, avoiding interpreting the data types¹;
- error_bad_lines: skip lines with more fields than the number of columns;
- warn_bad_lines: is set to False to avoid ugly warnings on the screen; activate this if you want to analyse the bad rows.
¹ Before making assumptions about the data I prefer to load it as objects and then convert it after making sense of it. Also, the data can be corrupted, which would make the cast impossible.
End of explanation
ufo.describe()
ufo.head()
Explanation: With the data loaded in the ufo dataframe, let's check its composition and first few rows.
End of explanation
ufo.columns = [
'DateOccurred',
'DateReported',
'Location',
'Shape',
'Duration',
'LongDescription'
]
Explanation: The dataframe describe() shows us how many items (excluding NaN) each column has, how many are unique, which value is the most frequent, and how often that value appears. head() simply shows us the first 5 rows (the first index is 0 in Python).
Dealing with metadata and column names
We need to handle the column names; to do so it is necessary to look at the data documentation. The table below shows the field details obtained from the metadata:
| Short name | Type | Description |
| ---------- | ---- | ----------- |
| sighted_at | Long | Date the event occurred (yyyymmdd) |
| reported_at | Long | Date the event was reported |
| location | String | City and State where event occurred |
| shape | String | One word string description of the UFO shape |
| duration | String | Event duration (raw text field) |
| description | String | A long, ~20-30 line, raw text description |
To keep in sync with the R example, we will set the columns names to the following values:
- DateOccurred
- DateReported
- Location
- Shape
- Duration
- LongDescription
End of explanation
ufo.head()
Explanation: Now we have a good-looking dataframe with named columns.
End of explanation
ufo.drop(
labels = ['DateReported', 'Duration', 'Shape', 'LongDescription'],
axis = 1,
inplace = True
)
ufo.head()
Explanation: Data Wrangling
Now we start to transform our data into something to analyse.
Keeping only necessary data
To decide about this, let's get back to the questions to be answered.
The first one is about the best place in the USA for UFO sightings; for this we will need the Location column, and at some point we will filter on it. The second question is about the best month for UFO sightings, which leads to the DateOccurred column.
Based on this, the Shape and LongDescription columns can be stripped right now (their lack of relevance is fairly obvious). But there are 2 other columns which may or may not be removed: DateReported and Duration.
I always keep in mind to maintain, at least until a second pass, columns with some useful information for further data wrangling or to get some statistical sense from them. These two columns hold a date (in YYYYMMDD format) and a raw string, respectively, which could store some useful information if treated and converted to a numeric format. For the purpose of this demo, I am removing them because DateReported will not be used further (what matters is when the sighting occurred, not when it was reported) and Duration is really messy; for an example to show on a Tech Day the effort to decompose it is not worth it.
The drop() command below has the following parameters:
- labels: columns to remove;
- axis: set to 1 to remove columns;
- inplace: set to True to modify the dataframe itself and return None.
End of explanation
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d',
errors='coerce'
) for date in ufo['DateOccurred']
])
ufo.describe()
Explanation: Converting data
Now we are ready to start the data transformation: the date columns must be converted to Python datetime objects to allow manipulation of their time series.
The first problem happens when trying to run this code using pandas.to_datetime() to convert the strings:
python
ufo['DateOccurred'] = pd.Series([
pd.to_datetime(
date,
format = '%Y%m%d'
) for date in ufo['DateOccurred']
])
This will raise a series of errors (stack trace), which is caused by this:
ValueError: time data '0000' does not match format '%Y%m%d' (match)
What happens here is bad data (welcome to the data science world: most data will come corrupted, missing, wrong, or with some other problem). Before proceeding we need to deal with the dates in the wrong format.
So what can we do? Well, we can make the to_datetime() method ignore the errors, putting NaT values in the field. Let's convert it and then see how the DateOccurred column looks.
End of explanation
ufo['DateOccurred'].isnull().sum()
Explanation: The column is now a datetime object and has 60814 elements against the original 61069, which shows the bad dates are gone. The following code shows us how many elements were removed.
End of explanation
ufo.isnull().sum()
ufo.dropna(
axis = 0,
inplace = True
)
ufo.isnull().sum()
ufo.describe()
Explanation: It is no surprise that 60814 + 255 = 61069; we need to deal with these values too.
So we have a field DateOccurred with some NaN values. At this point we need to make an important decision: get rid of the rows with NaN dates or fill them with something.
There is no universal guide for this; we could fill them with the mean of the column or copy the content of the DateReported column. But in this case the missing dates are less than 0.5% of the total, so for simplicity's sake we will simply drop all NaN values.
End of explanation
ufo['Year'] = pd.DatetimeIndex(ufo['DateOccurred']).year
ufo['Month'] = pd.DatetimeIndex(ufo['DateOccurred']).month
ufo.head()
ufo['Month'].describe()
ufo['Year'].describe()
Explanation: With the dates cleaned, let's create another 2 columns to handle years and months separately. This will make some analyses easier (like discovering which is the best month of the year to look for UFO sightings).
End of explanation
sightings_by_year = ufo.groupby('Year').size().reset_index()
sightings_by_year.columns = ['Year', 'Sightings']
sightings_by_year.describe()
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
plt.style.use('seaborn-white')
%matplotlib inline
plt.xticks(rotation = 90)
sns.barplot(
data = sightings_by_year,
x = 'Year',
y = 'Sightings',
color= 'blue'
)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
Explanation: A funny thing about the years is that the oldest sighting is from 1762! This dataset includes historical sightings.
How significant is this? Well, to figure it out it's time to plot some charts. Humans are visual beings, and a picture is really worth much more than a bunch of numbers and words.
To do so we will use the matplotlib library to build our graphs.
Analysing the years
Before starting, let's count the sightings by year.
The commands below are equivalent to the following SQL code:
SQL
SELECT Year, count(*) AS Sightings
FROM ufo
GROUP BY Year
End of explanation
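As a side note, the same per-year counts can be obtained more compactly with value_counts(), which is another way to express this COUNT(*) ... GROUP BY aggregation; a minimal equivalent sketch:
# Series indexed by Year with the number of sightings per year
sightings_per_year_alt = ufo['Year'].value_counts().sort_index()
sightings_per_year_alt.head()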
ufo = ufo[ufo['Year'] >= 1900]
Explanation: We can see the number of sightings is more representative after around 1900, so we will filter the dataframe for all years above this threshold.
End of explanation
%matplotlib inline
new_sightings_by_year = ufo.groupby('Year').size().reset_index()
new_sightings_by_year.columns = ['Year', 'Sightings']
new_sightings_by_year.describe()
%matplotlib inline
plt.xticks(rotation = 90)
sns.barplot(
data = new_sightings_by_year,
x = 'Year',
y = 'Sightings',
color= 'blue'
)
ax = plt.gca()
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5))
Explanation: Now let's see how the graph behaves
End of explanation
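Before moving on to locations, the second question (which month concentrates the most sightings) can be answered with the same aggregation applied to the Month column created earlier; a short sketch reusing the plotting style above:
sightings_by_month = ufo.groupby('Month').size().reset_index()
sightings_by_month.columns = ['Month', 'Sightings']
plt.xticks(rotation = 0)
sns.barplot(
data = sightings_by_month,
x = 'Month',
y = 'Sightings',
color = 'blue'
)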
locations = ufo['Location'].str.split(', ').apply(pd.Series)
ufo['City'] = locations[0]
ufo['State'] = locations[1]
Explanation: Handling location
Here we will take two steps: the first is splitting all locations into city and state, for the USA only; the second is loading a dataset with the latitude and longitude of each USA city for a future merge.
End of explanation |
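The second step mentioned above (bringing in coordinates for each city) would then be a straightforward merge. The sketch below is only illustrative: the file ../data/us_cities.csv and its column names are hypothetical placeholders for whatever coordinates dataset is actually used.
# Hypothetical coordinates file with City, State, Latitude and Longitude columns
cities = pd.read_csv('../data/us_cities.csv')
ufo_geo = ufo.merge(cities, on=['City', 'State'], how='inner')
ufo_geo.head()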
10,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
import numpy as np
def basic_sigmoid(x):
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = math.exp(-1 * x)
s = 1 / (1 + s)
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
#s = np.exp(np.multiply(-1, x))
#s = np.divide(1, np.add(1, s))
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
End of explanation
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
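A quick way to sanity-check an implementation of this derivative is to compare it against a centered finite-difference approximation; the sketch below assumes the sigmoid() and sigmoid_derivative() functions defined in this notebook.
eps = 1e-6
x_check = np.array([1.0, 2.0, 3.0])
numerical = (sigmoid(x_check + eps) - sigmoid(x_check - eps)) / (2 * eps)  # centered finite difference
analytical = sigmoid_derivative(x_check)
print("max abs difference = " + str(np.max(np.abs(numerical - analytical))))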
# GRADED FUNCTION: image2vector
def image2vector(image):
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \
2 & 6 & 4 \
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \
\sqrt{56} \
\end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
# GRADED FUNCTION: softmax
def softmax(x):
Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \
\vdots & \vdots & \vdots & \ddots & \vdots \
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \
\vdots & \vdots & \vdots & \ddots & \vdots \
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \
softmax\text{(second row of x)} \
... \
softmax\text{(last row of x)} \
\end{pmatrix} $$
End of explanation
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
# GRADED FUNCTION: L1
def L1(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
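To make the distinction between np.dot() and the * operator mentioned above concrete, here is a tiny sketch on 2x2 matrices (values chosen arbitrarily):
M = np.array([[1, 2], [3, 4]])
N = np.array([[10, 20], [30, 40]])
print(M * N)         # element-wise product: [[10, 40], [90, 160]]
print(np.dot(M, N))  # matrix product: [[70, 100], [150, 220]]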
# GRADED FUNCTION: L2
def L2(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(y - yhat, y - yhat)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
End of explanation |
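As a quick cross-check of the identity np.dot(x,x) = sum of squares quoted above, the L2 loss can be computed both ways; a minimal sketch using the yhat and y arrays defined above:
diff = y - yhat
print(np.dot(diff, diff))        # L2 via the dot product
print(np.sum((y - yhat) ** 2))   # L2 via an explicit sum of squares; both print 0.43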
10,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applications of Linear Algebra
Step2: Variance and covariance
Remember the formula for covariance
$$
\text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}
$$
where $\text{Cov}(X, X)$ is the sample variance of $X$.
Step3: Eigendecomposition of the covariance matrix
Step4: Covariance matrix as a linear transformation
The covariance matrix is a linear transformation that maps $\mathbb{R}^n$ in the direction of its eigenvectors with scaling factor given by the eigenvalues. Here we see it applied to a collection of random vectors in the box bounded by [-1, 1].
We will assume we have a covariance matrix
Step5: Create random vectors in a box
Step6: Apply covariance matrix as linear transformation
Step7: The linear transform maps the random vectors as described.
Step8: PCA
Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information.
Numerically, PCA is typically done using SVD on the data matrix rather than eigendecomposition on the covariance matrix. The next section explains why this works.
Data matrices that have zero mean for all feature vectors
\begin{align}
\text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \
&= \frac{\sum_{i=1}^nX_iY_i}{n-1} \
&= \frac{XY^T}{n-1}
\end{align}
and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$.
In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$.
Note that zeroing the feature vector does not affect the covariance matrix
Step9: Eigendecomposition of the covariance matrix
Step10: Change of basis via PCA
We can transform the original data set so that the eigenvectors are the basis vectors and find the new coordinates of the data points with respect to this new basis
This is the change of basis transformation covered in the Linear Alegebra module. First, note that the covariance matrix is a real symmetric matrix, and so the eigenvector matrix is an orthogonal matrix.
Step11: Linear algebra review for change of basis
Graphical illustration of change of basis
Suppose we have a vector $u$ in the standard basis $B$ , and a matrix $A$ that maps $u$ to $v$, also in $B$. We can use the eigenvalues of $A$ to form a new basis $B'$. As explained above, to bring a vector $u$ from $B$-space to a vector $u'$ in $B'$-space, we multiply it by $Q^{-1}$, the inverse of the matrix having the eigenvctors as column vectors. Now, in the eigenvector basis, the equivalent operation to $A$ is the diagonal matrix $\Lambda$ - this takes $u'$ to $v'$. Finally, we convert $v'$ back to a vector $v$ in the standard basis by multiplying with $Q$.
Step12: Principal components
Principal components are simply the eigenvectors of the covariance matrix used as basis vectors. Each of the original data points is expressed as a linear combination of the principal components, giving rise to a new set of coordinates.
Step13: For example, if we only use the first column of ys, we will have the projection of the data onto the first principal component, capturing the majority of the variance in the data with a single feature that is a linear combination of the original features.
Transform back to original coordinates
We may need to transform the (reduced) data set to the original feature coordinates for interpretation. This is simply another linear transform (matrix multiplication).
Step14: Dimension reduction via PCA
We have the sepctral decomposition of the covariance matrix
$$
A = Q^{-1}\Lambda Q
$$
Suppose $\Lambda$ is a rank $p$ matrix. To reduce the dimensionality to $k \le p$, we simply set all but the first $k$ values of the diagonal of $\Lambda$ to zero. This is equivalent to ignoring all except the first $k$ principal components.
What does this achieve? Recall that $A$ is a covariance matrix, and the trace of the matrix is the overall variability, since it is the sum of the variances.
Step15: Since the trace is invariant under change of basis, the total variability is also unchanged by PCA. By keeping only the first $k$ principal components, we can still "explain" $\sum_{i=1}^k e[i]/\sum{e}$ of the total variability. Sometimes, the degree of dimension reduction is specified as keeping enough principal components so that (say) $90\%$ of the total variability is explained.
Using Singular Value Decomposition (SVD) for PCA
SVD is a decomposition of the data matrix $X = U S V^T$ where $U$ and $V$ are orthogonal matrices and $S$ is a diagnonal matrix.
Recall that the transpose of an orthogonal matrix is also its inverse, so if we multiply on the right by $X^T$, we get the follwoing simplification
\begin{align}
X &= U S V^T \
X X^T &= U S V^T (U S V^T)^T \
&= U S V^T V S U^T \
&= U S^2 U^T
\end{align}
Comparing with the eigendecomposition of a matrix $A = W \Lambda W^{-1}$, we see that SVD gives us the eigendecomposition of the matrix $XX^T$, which, as we have just seen, is basically a scaled version of the covariance matrix for a data matrix with zero mean, with the eigenvectors given by $U$ and the eigenvalues by $S^2$ (scaled by $n-1$). | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Applications of Linear Algebra: PCA
We will explore 3 applications of linear algebra in data analysis - change of basis (for dimension reduction), projections (for solving linear systems) and the quadratic form (for optimization). The first application is the change of basis to the eigenvector basis that underlies Principal Components Analysis (PCA).
We will review the following in class:
The standard basis
Orthonormal basis and orthogonal matrices
Change of basis
Similar matrices
Eigendecomposition
Sample covariance
Covariance as a linear transform
PCA and dimension reduction
PCA and "explained variance"
SVD
End of explanation
def cov(x, y):
Returns the covariance of vectors x and y.
xbar = x.mean()
ybar = y.mean()
return np.sum((x - xbar)*(y - ybar))/(len(x) - 1)
X = np.random.random(10)
Y = np.random.random(10)
np.array([[cov(X, X), cov(X, Y)], [cov(Y, X), cov(Y,Y)]])
# This can of course be calculated using numpy's built in cov() function
np.cov(X, Y)
# Extension to more variables is done in a pair-wise way
Z = np.random.random(10)
np.cov([X, Y, Z])
Explanation: Variance and covariance
Remember the formula for covariance
$$
\text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}
$$
where $\text{Cov}(X, X)$ is the sample variance of $X$.
End of explanation
mu = [0,0]
sigma = [[0.6,0.2],[0.2,0.2]]
n = 1000
x = np.random.multivariate_normal(mu, sigma, n).T
A = np.cov(x)
m = np.array([[1,2,3],[6,5,4]])
ms = m - m.mean(1).reshape(2,1)
np.dot(ms, ms.T)/2
e, v = np.linalg.eig(A)
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e, v.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3])
plt.title('Eigenvectors of covariance matrix scaled by eigenvalue.');
Explanation: Eigendecomposition of the covariance matrix
End of explanation
covx = np.array([[1,0.6],[0.6,1]])
Explanation: Covariance matrix as a linear transformation
The covariance matrix is a linear transformation that maps $\mathbb{R}^n$ in the direction of its eigenvectors with scaling factor given by the eigenvalues. Here we see it applied to a collection of random vectors in the box bounded by [-1, 1].
We will assume we have a covariance matrix
End of explanation
u = np.random.uniform(-1, 1, (100, 2)).T
Explanation: Create random vectors in a box
End of explanation
y = covx @ u
e1, v1 = np.linalg.eig(covx)
Explanation: Apply covariance matrix as linear transformation
End of explanation
plt.scatter(u[0], u[1], c='blue')
plt.scatter(y[0], y[1], c='orange')
for e_, v_ in zip(e1, v1.T):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xticks([])
plt.yticks([])
pass
Explanation: The linear transform maps the random vectors as described.
End of explanation
np.set_printoptions(precision=3)
X = np.random.random((5,4))
X
### Subtract the row mean from each row
Y = X - X.mean(1)[:, None]
Y.mean(1)
Y
### Calculate the covariance
np.cov(X)
np.cov(Y)
Explanation: PCA
Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information.
Numerically, PCA is typically done using SVD on the data matrix rather than eigendecomposition on the covariance matrix. The next section explains why this works.
Data matrices that have zero mean for all feature vectors
\begin{align}
\text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\
&= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\
&= \frac{XY^T}{n-1}
\end{align}
and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$.
In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$.
Note that zeroing the feature vector does not affect the covariance matrix
End of explanation
e1, v1 = np.linalg.eig(np.dot(x, x.T)/(n-1))
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
Explanation: Eigendecomposition of the covariance matrix
End of explanation
e, v = np.linalg.eig(np.cov(x))
v.dot(v.T)
Explanation: Change of basis via PCA
We can transform the original data set so that the eigenvectors are the basis vectors and find the new coordinates of the data points with respect to this new basis
This is the change of basis transformation covered in the Linear Algebra module. First, note that the covariance matrix is a real symmetric matrix, and so the eigenvector matrix is an orthogonal matrix.
End of explanation
ys = np.dot(v1.T, x)
Explanation: Linear algebra review for change of basis
Graphical illustration of change of basis
Suppose we have a vector $u$ in the standard basis $B$, and a matrix $A$ that maps $u$ to $v$, also in $B$. We can use the eigenvectors of $A$ to form a new basis $B'$. As explained above, to bring a vector $u$ from $B$-space to a vector $u'$ in $B'$-space, we multiply it by $Q^{-1}$, the inverse of the matrix having the eigenvectors as column vectors. Now, in the eigenvector basis, the equivalent operation to $A$ is the diagonal matrix $\Lambda$ - this takes $u'$ to $v'$. Finally, we convert $v'$ back to a vector $v$ in the standard basis by multiplying with $Q$.
End of explanation
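The diagram described above can be verified numerically: applying $A$ directly should agree with changing to the eigenvector basis, scaling by the eigenvalues, and changing back. A small sketch using the covariance matrix A and the eigendecomposition e, v computed above (u0 is just an arbitrary test vector):
Q = v                                    # eigenvector matrix of A (columns are eigenvectors)
Lam = np.diag(e)                         # eigenvalues on the diagonal
u0 = np.array([1.0, 2.0])                # an arbitrary vector in the standard basis
v_std = A @ u0                           # apply A directly in the standard basis
v_eig = Q @ Lam @ np.linalg.inv(Q) @ u0  # change basis, scale by eigenvalues, change back
print(np.allclose(v_std, v_eig))         # True: both routes give the same vector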
plt.scatter(ys[0,:], ys[1,:], alpha=0.2)
for e_, v_ in zip(e1, np.eye(2)):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
Explanation: Principal components
Principal components are simply the eigenvectors of the covariance matrix used as basis vectors. Each of the original data points is expressed as a linear combination of the principal components, giving rise to a new set of coordinates.
End of explanation
zs = np.dot(v1, ys)
plt.scatter(zs[0,:], zs[1,:], alpha=0.2)
for e_, v_ in zip(e1, v1.T):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
u, s, v = np.linalg.svd(x)
u.dot(u.T)
Explanation: For example, if we only use the first column of ys, we will have the projection of the data onto the first principal component, capturing the majority of the variance in the data with a single feature that is a linear combination of the original features.
Transform back to original coordinates
We may need to transform the (reduced) data set to the original feature coordinates for interpretation. This is simply another linear transform (matrix multiplication).
End of explanation
A
A.trace()
e, v = np.linalg.eig(A)
D = np.diag(e)
D
D.trace()
D[0,0]/D.trace()
Explanation: Dimension reduction via PCA
We have the spectral decomposition of the covariance matrix
$$
A = Q^{-1}\Lambda Q
$$
Suppose $\Lambda$ is a rank $p$ matrix. To reduce the dimensionality to $k \le p$, we simply set all but the first $k$ values of the diagonal of $\Lambda$ to zero. This is equivalent to ignoring all except the first $k$ principal components.
What does this achieve? Recall that $A$ is a covariance matrix, and the trace of the matrix is the overall variability, since it is the sum of the variances.
End of explanation
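To make the 'explained variance' bookkeeping concrete, the eigenvalues computed above can be ranked and accumulated to pick the smallest number of components reaching a target share of the trace (the 90% threshold below is just an illustrative choice):
idx = np.argsort(e)[::-1]                  # order the eigenvalues of A from largest to smallest
explained = e[idx] / e.sum()               # fraction of the total variability per component
cumulative = np.cumsum(explained)
print(explained, cumulative)
k = int(np.argmax(cumulative >= 0.9)) + 1  # smallest k explaining at least 90% of the variability
print(k)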
u, s, v = np.linalg.svd(x)
e2 = s**2/(n-1)
v2 = u
plt.scatter(x[0,:], x[1,:], alpha=0.2)
for e_, v_ in zip(e2, v2):
plt.plot([0, 3*e_*v_[0]], [0, 3*e_*v_[1]], 'r-', lw=2)
plt.axis([-3,3,-3,3]);
v1 # from eigenvectors of covariance matrix
v2 # from SVD
e1 # from eigenvalues of covariance matrix
e2 # from SVD
a0 = np.random.normal(0,1,100)
a1 = a0 + np.random.normal(0,.5,100)
a2 = 2*a0 + a1 + np.random.normal(5,0.01,100)
xs = np.vstack([a0, a1, a2])
xs.shape
U, s, V = np.linalg.svd(xs)
(s**2)/(99)
U
Explanation: Since the trace is invariant under change of basis, the total variability is also unchanged by PCA. By keeping only the first $k$ principal components, we can still "explain" $\sum_{i=1}^k e[i]/\sum{e}$ of the total variability. Sometimes, the degree of dimension reduction is specified as keeping enough principal components so that (say) $90\%$ of the total variability is explained.
Using Singular Value Decomposition (SVD) for PCA
SVD is a decomposition of the data matrix $X = U S V^T$ where $U$ and $V$ are orthogonal matrices and $S$ is a diagonal matrix.
Recall that the transpose of an orthogonal matrix is also its inverse, so if we multiply on the right by $X^T$, we get the following simplification
\begin{align}
X &= U S V^T \\
X X^T &= U S V^T (U S V^T)^T \\
&= U S V^T V S U^T \\
&= U S^2 U^T
\end{align}
Comparing with the eigendecomposition of a matrix $A = W \Lambda W^{-1}$, we see that SVD gives us the eigendecomposition of the matrix $XX^T$, which, as we have just seen, is basically a scaled version of the covariance matrix for a data matrix with zero mean, with the eigenvectors given by $U$ and the eigenvalues by $S^2$ (scaled by $n-1$).
End of explanation |
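For comparison, the same decomposition is available off the shelf in scikit-learn (an extra dependency, not used elsewhere in this notebook); the sketch below runs PCA on the 3-feature data set xs defined above. Note that sklearn centers each feature before the decomposition, unlike the raw SVD of xs above, so the numbers will differ slightly for the uncentered a2 feature.
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(xs.T)                   # sklearn expects samples in rows, features in columns
print(pca.explained_variance_)  # variance explained by each component
print(pca.components_)          # principal directions (rows), comparable to the columns of U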
10,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Field sampling tutorial
The particle trajectories allow us to study fields like temperature, plastic concentration or chlorophyll from a Lagrangian perspective.
In this tutorial we will go through how particles can sample Fields, using temperature as an example. Along the way we will get to know the parcels class Variable (see here for the documentation) and some of its methods. This tutorial covers several applications of a sampling setup
Step1: Suppose we want to study the environmental temperature for plankton drifting around a peninsula. We have a dataset with surface ocean velocities and the corresponding sea surface temperature stored in netcdf files in the folder "Peninsula_data". Besides the velocity fields, we load the temperature field using extra_fields={'T'
Step2: To sample the temperature field, we need to create a new class of particles where temperature is a Variable. As an argument for the Variable class, we need to provide the initial values for the particles. The easiest option is to access fieldset.T, but this option has some drawbacks.
Step3: Using fieldset.T leads to the WARNING displayed above because Variable accesses the fieldset in the slower SciPy mode. Another problem can occur when using the repeatdt argument instead of time
Step4: Since the initial time is not defined, the Variable class does not know at what time to access the temperature field.
The solution to this initialisation problem is to leave the initial value zero and sample the initial condition in JIT mode with the sampling Kernel
Step5: To sample the initial values we can execute the Sample kernel over the entire particleset with dt = 0 so that time does not increase
Step6: The particle dataset now contains the particle trajectories and the corresponding environmental temperature
Step7: Sampling initial values
In some simulations only the particles' initial value within the field is of interest
Step8: Since all the particles are released at the same x-position and the temperature field is invariant in the y-direction, all particles have an initial temperature of 0.4$^\circ$C
Step9: Sampling with repeatdt
Some experiments require large sets of particles to be released repeatedly on the same locations. The particleset object has the option repeatdt for this, but when you want to sample the initial values this introduces some problems as we have seen here. For more advanced control over the repeated release of particles, you can manually write a for-loop using the function particleset.add(). Note that this for-loop is very similar to the one that repeatdt would execute under the hood in particleset.execute().
Adding particles to the particleset during the simulation reduces the memory used compared to specifying the delayed particle release times upfront, which improves the computational speed. In the loop, we want to initialise new particles and sample their initial temperature. If we want to write both the initialised particles with the sampled temperature and the older particles that have already been advected, we have to make sure both sets of particles find themselves at the same moment in time. The initial conditions must be written to the output file before advecting them, because during advection the particle.time will increase.
We do not specify the outputdt argument for the output_file and instead write the data with output_file.write(pset, time) on each iteration. A new particleset is initialised whenever time is a multiple of repeatdt. Because the particles are advected after being written, the last displacement must be written once more after the loop.
Step10: In each iteration of the loop, spanning six hours, we have added ten particles.
Step11: Let's check if the initial temperatures were sampled correctly for all particles
Step12: And see if the sampling of the temperature field is done correctly along the trajectories | Python Code:
# Modules needed for the Parcels simulation
from parcels import Variable, FieldSet, ParticleSet, JITParticle, AdvectionRK4
import numpy as np
from datetime import timedelta as delta
# To open and look at the temperature data
import xarray as xr
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: Field sampling tutorial
The particle trajectories allow us to study fields like temperature, plastic concentration or chlorophyll from a Lagrangian perspective.
In this tutorial we will go through how particles can sample Fields, using temperature as an example. Along the way we will get to know the parcels class Variable (see here for the documentation) and some of its methods. This tutorial covers several applications of a sampling setup:
* Basic along trajectory sampling
* Sampling initial conditions
* Sampling initial and along-trajectory values with repeated release
Basic sampling
We import the Variable class as well as the standard modules needed to set up a simulation.
End of explanation
# Velocity and temperature fields
fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'T': 'T'}, allow_time_extrapolation=True)
# Particle locations and initial time
npart = 10 # number of particles to be released
lon = 3e3 * np.ones(npart)
lat = np.linspace(3e3 , 45e3, npart, dtype=np.float32)
time = np.arange(0, npart) * delta(hours=2).total_seconds() # release each particle two hours later
# Plot temperature field and initial particle locations
T_data = xr.open_dataset("Peninsula_data/peninsulaT.nc")
plt.figure()
ax = plt.axes()
T_contour = ax.contourf(T_data.x.values, T_data.y.values, T_data.T.values[0,0], cmap=plt.cm.inferno)
ax.scatter(lon, lat, c='w')
plt.colorbar(T_contour, label='T [$^{\circ} C$]')
plt.show()
Explanation: Suppose we want to study the environmental temperature for plankton drifting around a peninsula. We have a dataset with surface ocean velocities and the corresponding sea surface temperature stored in netcdf files in the folder "Peninsula_data". Besides the velocity fields, we load the temperature field using extra_fields={'T': 'T'}. The particles are released on the left hand side of the domain.
End of explanation
class SampleParticle(JITParticle): # Define a new particle class
temperature = Variable('temperature', initial=fieldset.T) # Variable 'temperature' initialised by sampling the temperature
pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, time=time)
Explanation: To sample the temperature field, we need to create a new class of particles where temperature is a Variable. As an argument for the Variable class, we need to provide the initial values for the particles. The easiest option is to access fieldset.T, but this option has some drawbacks.
End of explanation
repeatdt = delta(hours=3)
pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, repeatdt=repeatdt)
Explanation: Using fieldset.T leads to the WARNING displayed above because Variable accesses the fieldset in the slower SciPy mode. Another problem can occur when using the repeatdt argument instead of time:
<a id='repeatdt_error'></a>
End of explanation
class SampleParticleInitZero(JITParticle): # Define a new particle class
temperature = Variable('temperature', initial=0) # Variable 'temperature' initially zero
pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=lon, lat=lat, time=time)
def SampleT(particle, fieldset, time):
particle.temperature = fieldset.T[time, particle.depth, particle.lat, particle.lon]
sample_kernel = pset.Kernel(SampleT) # Casting the SampleT function to a kernel.
Explanation: Since the initial time is not defined, the Variable class does not know at what time to access the temperature field.
The solution to this initialisation problem is to leave the initial value zero and sample the initial condition in JIT mode with the sampling Kernel:
End of explanation
pset.execute(sample_kernel, dt=0) # by only executing the sample kernel we record the initial temperature of the particles
output_file = pset.ParticleFile(name="InitZero.nc", outputdt=delta(hours=1))
pset.execute(AdvectionRK4 + sample_kernel, runtime=delta(hours=30), dt=delta(minutes=5),
output_file=output_file)
output_file.export() # export the trajectory data to a netcdf file
output_file.close()
Explanation: To sample the initial values we can execute the Sample kernel over the entire particleset with dt = 0 so that time does not increase
End of explanation
Particle_data = xr.open_dataset("InitZero.nc")
plt.figure()
ax = plt.axes()
ax.set_ylabel('Y')
ax.set_xlabel('X')
ax.set_ylim(1000, 49000)
ax.set_xlim(1000, 99000)
ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1)
T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat, c=Particle_data.temperature,
cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=20.),
edgecolor='k', zorder=2)
plt.colorbar(T_scatter, label='T [$^{\circ} C$]')
plt.show()
Explanation: The particle dataset now contains the particle trajectories and the corresponding environmental temperature
End of explanation
class SampleParticleOnce(JITParticle): # Define a new particle class
temperature = Variable('temperature', initial=0, to_write='once') # Variable 'temperature'
pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleOnce, lon=lon, lat=lat, time=time)
pset.execute(sample_kernel, dt=0) # by only executing the sample kernel we record the initial temperature of the particles
output_file = pset.ParticleFile(name="WriteOnce.nc", outputdt=delta(hours=1))
pset.execute(AdvectionRK4, runtime=delta(hours=24), dt=delta(minutes=5),
output_file=output_file)
output_file.close()
Explanation: Sampling initial values
In some simulations only the particles initial value within the field is of interest: the variable does not need to be known along the entire trajectory. To reduce computing we can specify the to_write argument to the temperature Variable. This argument can have three values: True, False or 'once'. It determines whether to write the Variable to the output file. If we want to know only the initial value, we can enter 'once' and only the first value will be written to the output file.
End of explanation
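As a quick illustration of the three to_write options, here is a minimal sketch with a hypothetical particle class (the class and variable names are illustrative, not part of the tutorial data):
from parcels import JITParticle, Variable

class ToWriteIllustration(JITParticle):
    # three hypothetical variables, one per to_write mode
    t_start = Variable('t_start', initial=0, to_write='once')     # written a single time per particle
    t_along = Variable('t_along', initial=0, to_write=True)       # written at every output step (the default)
    t_scratch = Variable('t_scratch', initial=0, to_write=False)  # usable in kernels, never written to file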
Particle_data = xr.open_dataset("WriteOnce.nc")
plt.figure()
ax = plt.axes()
ax.set_ylabel('Y')
ax.set_xlabel('X')
ax.set_ylim(1000, 49000)
ax.set_xlim(1000, 99000)
ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1)
T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat,
c=np.tile(Particle_data.temperature, (Particle_data.lon.shape[1], 1)).T,
cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=1.),
edgecolor='k', zorder=2)
plt.colorbar(T_scatter, label='Initial T [$^{\circ} C$]')
plt.show()
Explanation: Since all the particles are released at the same x-position and the temperature field is invariant in the y-direction, all particles have an initial temperature of 0.4$^\circ$C
End of explanation
outputdt = delta(hours=1).total_seconds() # write the particle data every hour
repeatdt = delta(hours=6).total_seconds() # release each set of particles six hours later
runtime = delta(hours=24).total_seconds()
pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=[], lat=[], time=[]) # Using SampleParticleInitZero
kernels = AdvectionRK4 + sample_kernel
output_file = pset.ParticleFile(name="RepeatLoop.nc") # Do not specify the outputdt yet, so we can manually write the output
for time in np.arange(0, runtime, outputdt):
if np.isclose(np.fmod(time, repeatdt), 0): # time is a multiple of repeatdt
pset_init = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=lon, lat=lat, time=time)
pset_init.execute(sample_kernel, dt=0) # record the initial temperature of the particles
pset.add(pset_init) # add the newly released particles to the total particleset
output_file.write(pset,time) # write the initialised particles and the advected particles
pset.execute(kernels, runtime=outputdt, dt=delta(minutes=5))
print('Length of pset at time %d: %d' % (time, len(pset)))
output_file.write(pset, time+outputdt)
output_file.close()
Explanation: Sampling with repeatdt
Some experiments require large sets of particles to be released repeatedly on the same locations. The particleset object has the option repeatdt for this, but when you want to sample the initial values this introduces some problems as we have seen here. For more advanced control over the repeated release of particles, you can manually write a for-loop using the function particleset.add(). Note that this for-loop is very similar to the one that repeatdt would execute under the hood in particleset.execute().
Adding particles to the particleset during the simulation reduces the memory used compared to specifying the delayed particle release times upfront, which improves the computational speed. In the loop, we want to initialise new particles and sample their initial temperature. If we want to write both the initialised particles with the sampled temperature and the older particles that have already been advected, we have to make sure both sets of particles find themselves at the same moment in time. The initial conditions must be written to the output file before advecting them, because during advection the particle.time will increase.
We do not specify the outputdt argument for the output_file and instead write the data with output_file.write(pset, time) on each iteration. A new particleset is initialised whenever time is a multiple of repeatdt. Because the particles are advected after being written, the last displacement must be written once more after the loop.
End of explanation
Particle_data = xr.open_dataset("RepeatLoop.nc")
print(Particle_data.time[:,0].values / np.timedelta64(1, 'h')) # The initial hour at which each particle is released
assert np.allclose(Particle_data.time[:,0].values / np.timedelta64(1, 'h'), [int(k/10)*6 for k in range(40)])
Explanation: In each iteration of the loop, spanning six hours, we have added ten particles.
End of explanation
print(Particle_data.temperature[:,0].values)
assert np.allclose(Particle_data.temperature[:,0].values, Particle_data.temperature[:,0].values[0])
Explanation: Let's check if the initial temperatures were sampled correctly for all particles
End of explanation
Release0 = Particle_data.where(Particle_data.time[:,0]==np.timedelta64(0, 's')) # the particles released at t = 0
plt.figure()
ax = plt.axes()
ax.set_ylabel('Y')
ax.set_xlabel('X')
ax.set_ylim(1000, 49000)
ax.set_xlim(1000, 99000)
ax.plot(Release0.lon.transpose(), Release0.lat.transpose(), c='k', zorder=1)
T_scatter = ax.scatter(Release0.lon, Release0.lat, c=Release0.temperature,
cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=20.),
edgecolor='k', zorder=2)
plt.colorbar(T_scatter, label='T [$^{\circ} C$]')
plt.show()
Explanation: And see if the sampling of the temperature field is done correctly along the trajectories
End of explanation |
10,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: 1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip
Step2: 2. What's the current wind speed? How much warmer does it feel than it actually is?
Step3: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step4: 4. What's the difference between the high and low temperatures for today?
Step5: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step6: 6. Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step7: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000? | Python Code:
import requests
Explanation: Grade: 7 / 7
End of explanation
#https://api.forecast.io/forecast/APIKEY/LATITUDE,LONGITUDE,TIME
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/12.971599,77.594563')
data = response.json()
#print(data)
#print(data.keys())
print("Bangalore is in", data['timezone'], "timezone")
timezone_find = data.keys()
#find representation
print("The longitude is", data['longitude'], "The latitude is", data['latitude'])
Explanation: 1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
print("The current windspeed at New York is", data['currently']['windSpeed'])
# TA-COMMENT: You want to compare apparentTemperature to another value here... It may feel colder! Or the same.
#print(data['currently']) - find how much warmer
print("It is",data['currently']['apparentTemperature'], "warmer it feels than it actually is")
Explanation: 2. What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
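Following the TA comment above, a minimal sketch of the intended comparison, reusing the data dictionary returned by the request in this cell (the variable names are mine; the difference can be negative when it feels colder):
currently = data['currently']
difference = currently['apparentTemperature'] - currently['temperature']
print("The current wind speed at New York is", currently['windSpeed'])
print("Apparent minus actual temperature:", round(difference, 2), "(negative means it feels colder)")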
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
#print(data['daily']['data'])
now_moon = data['daily']['data']
for i in now_moon:
print("The visibility of moon today in New York is", i['moonPhase'], "and is in the middle of new moon phase and the first quarter moon")
Explanation: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
TemMax = data['daily']['data']
for i in TemMax:
tem_diff = i['temperatureMax'] - i['temperatureMin']
print("The temparature difference for today approximately is", round(tem_diff))
Explanation: 4. What's the difference between the high and low temperatures for today?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941')
data = response.json()
temp = data['daily']['data']
#print(temp)
count = 0
for i in temp:
count = count+1
print("The high temperature for the day", count, "is", i['temperatureMax'], "and the low temperature is", i['temperatureMin'])
if float(i['temperatureMin']) < 40:
print("it's a cold weather")
elif (float(i['temperatureMin']) > 40) & (float(i['temperatureMin']) < 60):
print("It's a warm day!")
else:
print("It's very hot weather")
Explanation: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
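If the hot/warm/cold label should instead key off the daily high, a hedged variant of the loop above (same temp list, illustrative thresholds) might look like:
for day_number, day in enumerate(temp, start=1):
    high = day['temperatureMax']
    if high < 40:
        label = "it's a cold day"
    elif high < 60:
        label = "it's a warm day"
    else:
        label = "it's a hot day"
    print("Day", day_number, "high:", high, "low:", day['temperatureMin'], "-", label)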
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/25.761680,-80.191790, 2016-06-09T12:01:00-0400')
data = response.json()
#print(data['hourly']['data'])
Tem = data['hourly']['data']
count = 0
for i in Tem:
count = count +1
print("The temperature in Miami, Florida on 9th June in the", count, "hour is", i['temperature'])
if float(i['cloudCover']) > 0.5:
print("and is cloudy")
Explanation: 6. Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1980-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1980 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1990-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1990 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 2000-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 2000 was", Temp)
Explanation: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
End of explanation |
10,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2
Step1: Answer
Step2: 3
Step3: 4
Step4: 6
Step5: 7
Step6: 9
Step7: 10
Step8: 12
Step9: 13
Step10: 14
Step11: 15
Step12: Answer
Step13: 16
Step14: Answer
Step15: 18
Step16: Answer
Step17: 19
Step18: Answer
Step19: 20
Step20: Answer
Step21: 21
Step22: Answer
Step23: 23
Step24: Answer
Step25: 24
Step26: Answer | Python Code:
# We can use the set() function to convert lists into sets.
# A set is a data type, just like a list, but it only contains each value once.
car_makers = ["Ford", "Volvo", "Audi", "Ford", "Volvo"]
# Volvo and ford are duplicates
print(car_makers)
# Converting to a set
unique_car_makers = set(car_makers)
print(unique_car_makers)
# We can't index sets, so we need to convert back into a list first.
unique_cars_list = list(unique_car_makers)
print(unique_cars_list[0])
genders_list = []
unique_genders = set()
unique_genders_list = []
from legislators import legislators
Explanation: 2: Find the different genders
Instructions
Loop through the rows in legislators, and extract the gender column (fourth column)
Append the genders to genders_list.
Then turn genders_list into a set, and assign it to unique_genders
Finally, convert unique_genders back into a list, and assign it to unique_genders_list.
End of explanation
genders_list = []
for leg in legislators:
genders_list.append(leg[3])
unique_genders = set(genders_list)
unique_genders_list = list(unique_genders)
print(unique_genders_list)
Explanation: Answer
End of explanation
for leg in legislators:
if leg[3] == '':
leg[3] = 'M'
Explanation: 3: Replacing genders
Instructions
Loop through the rows in legislators and replace any gender values of "" with "M".
End of explanation
birth_years = []
for row in legislators:
birth_year = row[2].split("-")[0]
birth_years.append(birth_year)
print(birth_years)
Explanation: 4: Parsing birth years
Instructions
Loop through the rows in legislators
Inside the loop, get the birthday column from the row, and split the birthday.
After splitting the birthday, get the birth year, and append it to birth_years
At the end, birth_years will contain the birth years of all the congresspeople in the data.
End of explanation
dogs = ["labrador", "poodle", "collie"]
cats = ["siamese", "persian", "somali"]
# Enumerate the dogs list, and print the values.
for i, dog in enumerate(dogs):
# Will print the dog at the current loop iteration.
print(dog)
# This will equal dog. Prints the dog at index i.
print(dogs[i])
# Print the cat at index i.
print(cats[i])
ships = ["Andrea Doria", "Titanic", "Lusitania"]
cars = ["Ford Edsel", "Ford Pinto", "Yugo"]
for i, e in enumerate(ships):
print(e)
print(cars[i])
Explanation: 6: Practice with enumerate
Instructions
Use a for loop to enumerate the ships list.
In the body of the loop, print the ship at the current index, then the car at the current index.
Make sure you have two separate print statements.
End of explanation
lolists = [["apple", "monkey"], ["orange", "dog"], ["banana", "cat"]]
trees = ["cedar", "maple", "fig"]
for i, row in enumerate(lolists):
row.append(trees[i])
# Our list now has a new column containing the values from trees.
print(lolists)
# Legislators and birth_years have both been loaded in.
for i, e in enumerate(legislators):
e.append(birth_years[i])
Explanation: 7: Create a birth year column
Instructions
Loop through the rows in legislators list, and append the corresponding value in birth_years to each row.
End of explanation
# Define a list of lists
data = [["tiger", "lion"], ["duck", "goose"], ["cardinal", "bluebird"]]
# Extract the first column from the list
first_column = [row[0] for row in data]
apple_price = [100, 101, 102, 105]
apple_price_doubled = [2*p for p in apple_price]
apple_price_lowered = [p-100 for p in apple_price]
print(apple_price_doubled, apple_price_lowered)
Explanation: 9: Practice with list comprehensions
Double all of the prices in apple_price, and assign the resulting list to apple_price_doubled.
Subtract 100 from all of the prices in apple_price, and assign the resulting list to apple_price_lowered.
End of explanation
for row in legislators:
try:
row[7] = int(row[7])
except ValueError as verr:
row[7] = None
# Hmm, but the above code fails.
# It fails because there is a value in the column that can't be converted to an int.
# Remember how some genders were missing? It also looks like some birthdays were missing, which is giving us invalid values in the birth years column.
Explanation: 10: Convert birth years to integers
End of explanation
# Cannot be parsed into an int with the int() function.
invalid_int = ""
# Can be parsed into an int.
valid_int = "10"
# Parse the valid int
try:
valid_int = int(valid_int)
except Exception:
# This code is never run, because there is no error parsing valid_int into an integer.
valid_int = 0
# Try to parse the invalid int
try:
invalid_int = int(invalid_int)
except Exception:
# The parsing fails, so we end up here.
# The code here will be run, and will assign 0 to invalid_int.
invalid_int = 0
print(valid_int)
print(invalid_int)
another_invalid_int = "Oregon"
another_valid_int = "1000"
try:
another_invalid_int = int(another_invalid_int)
except Exception as ex:
another_invalid_int = 0
try:
another_valid_int = int(another_valid_int)
except Exception as ex:
another_valid_int = 0
print(another_invalid_int, another_valid_int)
Explanation: 12: Practice with try/except
Instructions
Use try/except statements to parse another_invalid_int and another_valid_int.
Assign 0 to another_invalid_int in the except block.
At the end, another_valid_int will be parsed properly, and another_invalid_int will be 0.
End of explanation
invalid_int = ""
try:
# This parsing will fail
invalid_int = int(invalid_int)
except Exception:
# Nothing will happen in the body of the except statement, because we are passing.
pass
# invalid_int still has the same value.
print(invalid_int)
# We can also use the pass statement with for loops.
# (although it's less useful in this example)
a = [1,4,5]
for i in a:
pass
# And if statements.
if 10 > 5:
pass
# We can use the pass keyword inside the body of any statement that ends with a colon.
valid_int = "10"
try:
valid_int = int(valid_int)
except:
pass
print(valid_int)
Explanation: 13: The pass keyword
Instructions
Use a try/except block to parse valid_int into an integer.
Use the pass keyword inside the except block.
End of explanation
for row in legislators:
try:
row[7] = int(row[7])
except Exception as ex:
row[7] = 0
print(legislators)
Explanation: 14: Convert birth years to integers
Instructions
Loop over the rows in legislators, and convert the values in the birth year column to integers.
In cases where parsing fails, assign 0 as the value.
End of explanation
data = [[1,1],[0,5],[10,7]]
last_value = 0
# There are some holes in this code -- it won't work properly if the first birth year is 0, for example, but it's fine for now.
# It keeps track of the last value in the column in the last_value variable.
# If it finds an item that equals 0, it replaces the value with the last value.
for row in data:
# Check if the item is 0.
if row[0] == 0:
# If it is, replace it with the last value.
row[0] = last_value
# Set last value equal to the item -- we need to do this in order to keep track of what the previous value was, so we can use it for replacement.
last_value = row[0]
# The 0 value in the second row, first column has been replaced with a 1.
print(data)
Explanation: 15: Fill in years without a value
Instructions
Loop through legislators, and replace any values in the birth_year column that are 0 with the previous value.
End of explanation
last_birth_year = 0
for row in legislators:
if row[7] == 0:
row[7] = last_birth_year
last_birth_year = row[7]
Explanation: Answer
End of explanation
names = ["Jim", "Bob", "Bob", "JimBob", "Joe", "Jim"]
name_counts = {}
for name in names:
if name in name_counts:
name_counts[name] = name_counts[name] + 1
else:
name_counts[name] = 1
female_name_counts = {}
Explanation: 16: Counting up the female names
Instructions
Count up how many times each female name occurs in legislators. First name is the second column.
You'll need to make sure that gender (fourth column) equals "F", and that birth year (eighth column) is greater than 1940.
Store the first name key and the counts in the female_name_counts dictionary.
You'll need to use nested if statements to first check if gender and birth year are valid, and then to check if the first name is in female_name_counts.
End of explanation
female_name_counts = {}
for row in legislators:
if row[3] == 'F' and row[7] > 1940:
fname = row[1]
if fname in female_name_counts:
female_name_counts[fname] += 1
else:
female_name_counts[fname] = 1
Explanation: Answer
End of explanation
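A hedged alternative using the standard library, with the same filtering logic and assuming the same legislators list of rows:
from collections import Counter

female_name_counts = Counter(row[1] for row in legislators
                             if row[3] == 'F' and row[7] > 1940)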
# Set a variable equal to the None type
a = None
# A normal variable
b = 1
# This is True
print(a is None)
# And this is False
print(b is None)
# a is of the None type
print(type(a))
# Assigns whether a equals None to a_none
a_none = a is None
# Evaluates to True
print(a_none)
c = None
d = "Bamboo"
Explanation: 18: Practicing with the None type
Instructions
Check whether c equals None, and assign the result to c_none.
Check whether d equals None, and assign the result to d_none.
End of explanation
c_none, d_none = c is None, d is None
print(c_none, d_none)
Explanation: Answer
End of explanation
max_val = None
data = [-10, -20, -50, -100]
for i in data:
# If max_val equals None, or i is greater than max_val, then set max_val equal to i.
# This ensures that no matter how small the values in data are, max_val will always get changed to a value in the list.
# If you are checking if a value equals None and you are using it with and or or, then the None check always needs to come first.
if max_val is None or i > max_val:
max_val = i
min_val = None
income = [100,700,100,50,100,40,56,31,765,1200,1400,32,6412,987]
Explanation: 19: Finding maximums with the None type
Instructions
Use a for loop to set min_val equal to the smallest value in income.
End of explanation
min_val = None
for inc in income:
if min_val is None or inc < min_val:
min_val = inc
print(min_val)
Explanation: Answer
End of explanation
# female_name_counts has been loaded in.
max_value = None
Explanation: 20: Finding how many times the top female names occur
Instructions
Loop through the keys in female_name_counts, and get the value associated with the key.
Assign the value to max_value if it is larger, or if max_value is None.
At the end of the loop, max_value will be the largest value in the dictionary.
End of explanation
for key in female_name_counts:
val = female_name_counts[key]
if max_value is None or val > max_value:
max_value = val
Explanation: Answer
End of explanation
# female_name_counts has been loaded in.
top_female_names = []
Explanation: 21: Finding the female names that occur the most
Instructions
Loop through the keys in female_name_counts.
If any value equals 2, append the key to top_female_names.
At the end, top_female_names will be a list of the most occurring female congressperson names.
End of explanation
for key in female_name_counts:
value = female_name_counts[key]
if value == 2:
top_female_names.append(key)
print(top_female_names)
Explanation: Answer
End of explanation
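As a design note, sections 20 and 21 can be combined into a more idiomatic sketch (assuming the same female_name_counts dictionary) using the built-in max() and a list comprehension:
max_value = max(female_name_counts.values())
top_female_names = [name for name, count in female_name_counts.items() if count == max_value]
print(top_female_names)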
animal_types = {"robin": "bird", "pug": "dog", "osprey": "bird"}
# The .items method lets us access a dictionary key and value in a loop.
for key,value in animal_types.items():
print(key)
print(value)
# This is equal to the value
print(animal_types[key])
plant_types = {"orchid": "flower", "cedar": "tree", "maple": "tree"}
Explanation: 23: Practice with the items method
Instructions
Use the .items() method along with a for loop to loop through plant_types.
Inside the loop, print the key, and then the value.
End of explanation
for key, val in plant_types.items():
print(key)
print(val)
Explanation: Answer
End of explanation
# legislators has been loaded in.
top_male_names = []
Explanation: 24: Finding the male names that occur the most
Instructions
Loop through legislators, and count up how many times each first name occurs in rows where the gender column equals "M" and the birth year is after 1940. Store the results in a dictionary.
Then find the highest value in that dictionary.
Finally, loop through the dictionary and append any keys where the value equals the highest value to top_male_names.
End of explanation
male_name_counts = {}
print(len(legislators))
for row in legislators:
if row[3] == 'M' and row[7] > 1940:
fname = row[1]
if fname in male_name_counts:
male_name_counts[fname] += 1
else:
male_name_counts[fname] = 1
print(male_name_counts)
max_value = None
for key in male_name_counts:
val = male_name_counts[key]
if max_value is None or val > max_value:
max_value = val
for key, val in male_name_counts.items():
if val == max_value:
top_male_names.append(key)
print(top_male_names)
Explanation: Answer
End of explanation |
10,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This section is a walk through the pre-alignment sequence filtering in ReproPhylo. We will start by several preliminaries discussed in the previous sections
Step1: 3.5.1 Filtering by sequence length or GC content
At this point we have record features belonging to the loci in our Project. We have to split them by locus
Step2: With this done, we can display length and %GC distribution for each locus
Step3: Now we'll exclude all the outliers
Step4: We can now confirm that the filter worked
Step5: 3.5.2 Excluding and including
It is possible to exclude and include sequences by record id
3.5.2.1 Excluding
By default, excluding is done by starting with a full bin (all the sequences are included). In this case, since we have already filtered some sequences out, we need to start excluding from the current state and not from a full bin. Starting from a full bin by using the default setting start_from_max=True would undo the filtering by GC content and sequence length we have done above. As an example we will exclude JX177918.1 from the MT-CO1 Locus bin.
Step6: The following line confirms that this record id is no longer in the MT-CO1 Locus bin.
Step7: 3.5.2.2 Including
By default, including starts from empty bins, however here we want to keep the current state and only add one sequence
Step8: The following line confirms that this record was added back to the MT-CO1 Locus bin.
Step9: 3.5.3 Quick reference | Python Code:
from reprophylo import *
pj = unpickle_pj('outputs/my_project.pkpj', git=False)
Explanation: This section is a walk through the pre-alignment sequence filtering in ReproPhylo. We will start by several preliminaries discussed in the previous sections:
End of explanation
pj.extract_by_locus()
Explanation: 3.5.1 Filtering by sequence length or GC content
At this point we have record features belonging to the loci in our Project. We have to split them by locus:
End of explanation
%matplotlib inline
pj.report_seq_stats()
Explanation: With this done, we can display length and %GC distribution for each locus:
End of explanation
# Define minima and maxima
gc_inliers = {
'18s': [50,54],
'28s': [57,67],
'MT-CO1': [35,43]
}
len_inliers = {
'18s': [1200,1800],
'28s': [500,900],
'MT-CO1': [500,1500]
}
# Apply to loci data
for locus in gc_inliers:
# trim GC outliers
pj.filter_by_gc_content(locus,
min_percent_gc=gc_inliers[locus][0],
max_percent_gc=gc_inliers[locus][1])
# trim length outlier
pj.filter_by_seq_length(locus,
min_length=len_inliers[locus][0],
max_length=len_inliers[locus][1])
Explanation: Now we'll exclude all the outliers:
End of explanation
pj.report_seq_stats()
Explanation: We can now confirm that the filter worked:
End of explanation
exclude = {'MT-CO1': ['JX177918.1']}
pj.exclude(start_from_max=False, **exclude)
Explanation: 3.5.2 Excluding and including
It is possible to exclude and include sequences by record id
3.5.2.1 Excluding
By default, excluding is done by starting with a full bin (all the sequences are included). In this case, since we have already filtered some sequences out, we need to start excluding from the current state and not from a full bin. Starting from a full bin by using the default setting start_from_max=True would undo the filtering by GC content and sequence length we have done above. As an example we will exclude JX177918.1 from the MT-CO1 Locus bin.
End of explanation
any(['JX177918.1' in feature.id for feature in pj.records_by_locus['MT-CO1']])
Explanation: The following line confirms that this record id is no longer in the MT-CO1 Locus bin.
End of explanation
include = {'MT-CO1': ['JX177918.1']}
pj.include(start_from_null=False, **include)
Explanation: 3.5.2.2 Including
By default, including starts from empty bins, however here we want to keep the current state and only add one sequence:
End of explanation
any(['JX177918.1' in feature.id for feature in pj.records_by_locus['MT-CO1']])
# Update the pickle file
pickle_pj(pj, 'outputs/my_project.pkpj')
Explanation: The following line confirms that this record was added back to the MT-CO1 Locus bin.
End of explanation
# Split records to loci bins
pj.extract_by_locus()
# Show length and %GC distributions
%matplotlib inline
pj.report_seq_stats()
# Filter by GC content
pj.filter_by_gc_content('LocusName',
min_percent_gc = 30,
max_percent_gc = 50)
# Filter by sequence length
pj.filter_by_seq_length('LocusName',
min_length = 200,
max_length = 1000)
# Include or exclude records in the loci bins
records = {'LocusName1': ['recordid1','recordid2'],
'LocusName2': ['recordid3','recordid4']}
pj.exclude(start_from_max=True, **records)
# or
pj.include(start_from_null=True, **records)
Explanation: 3.5.3 Quick reference
End of explanation |
10,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pipeline for microendoscopic data processing in CaImAn using the CNMF-E algorithm
This demo presents a complete pipeline for processing microendoscopic data using CaImAn. It includes
Step1: Select file(s) to be processed
The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fnames variable as a list. Note that the memory requirement of the CNMF-E algorithm are much higher compared to the standard CNMF algorithm. Test the limits of your system before trying to process very large amounts of data.
Step2: Setup a cluster
To enable parallel processing a (local) cluster needs to be set up. This is done with a cell below. The variable backend determines the type of cluster used. The default value 'local' uses the multiprocessing package. The ipyparallel option is also available. More information on these choices can be found here. The resulting variable dview expresses the cluster option. If you use dview=dview in the downstream analysis then parallel processing will be used. If you use dview=None then no parallel processing will be employed.
Step3: Setup some parameters
We first set some parameters related to the data and motion correction and create a params object. We'll modify this object with additional settings later on. You can also set all the parameters at once as demonstrated in the demo_pipeline.ipynb notebook.
Step4: Motion Correction
The background signal in micro-endoscopic data is very strong and makes the motion correction challenging.
As a first step the algorithm performs a high pass spatial filtering with a Gaussian kernel to remove the bulk of the background and enhance spatial landmarks.
The size of the kernel is given from the parameter gSig_filt. If this is left to the default value of None then no spatial filtering is performed (default option, used in 2p data).
After spatial filtering, the NoRMCorre algorithm is used to determine the motion in each frame. The inferred motion is then applied to the original data so no information is lost.
The motion corrected files are saved in memory mapped format. If no motion correction is being performed, then the file gets directly memory mapped.
Step5: Load memory mapped file
Step6: Parameter setting for CNMF-E
We now define some parameters for the source extraction step using the CNMF-E algorithm.
We construct a new dictionary and use this to modify the existing params object,
Step7: Inspect summary images and set parameters
Check the optimal values of min_corr and min_pnr by moving slider in the figure that pops up. You can modify them in the params object.
Note that computing the correlation pnr image can be computationally and memory demanding for large datasets. In this case you can compute
only on a subset of the data (the results will not change). You can do that by changing images[::1] to images[::5] or something similar.
Step8: You can inspect the correlation and PNR images to select the threshold values for min_corr and min_pnr. The algorithm will look for components only in places where these value are above the specified thresholds. You can adjust the dynamic range in the plots shown above by choosing the selection tool (third button from the left) and selecting the desired region in the histogram plots on the right of each panel.
Step9: Run the CNMF-E algorithm
Step10: Alternate way to run the pipeline at once
It is possible to run the combined steps of motion correction, memory mapping, and cnmf fitting in one step as shown below. The command is commented out since the analysis has already been performed. It is recommended that you familiriaze yourself with the various steps and the results of the various steps before using it.
Step11: Component Evaluation
The processing in patches creates several spurious components. These are filtered out by evaluating each component using three different criteria
Step12: Do some plotting
Step13: View traces of accepted and rejected components. Note that if you get data rate error you can start Jupyter notebooks using: 'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
Step14: Stop cluster
Step15: Some instructive movies
Play the reconstructed movie alongside the original movie and the (amplified) residual | Python Code:
try:
get_ipython().magic(u'load_ext autoreload')
get_ipython().magic(u'autoreload 2')
get_ipython().magic(u'matplotlib qt')
except:
pass
import logging
import matplotlib.pyplot as plt
import numpy as np
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.DEBUG)
import caiman as cm
from caiman.source_extraction import cnmf
from caiman.utils.utils import download_demo
from caiman.utils.visualization import inspect_correlation_pnr, nb_inspect_correlation_pnr
from caiman.motion_correction import MotionCorrect
from caiman.source_extraction.cnmf import params as params
from caiman.utils.visualization import plot_contours, nb_view_patches, nb_plot_contour
import cv2
try:
cv2.setNumThreads(0)
except:
pass
import bokeh.plotting as bpl
import holoviews as hv
bpl.output_notebook()
hv.notebook_extension('bokeh')
Explanation: Pipeline for microendoscopic data processing in CaImAn using the CNMF-E algorithm
This demo presents a complete pipeline for processing microendoscopic data using CaImAn. It includes:
- Motion Correction using the NoRMCorre algorithm
- Source extraction using the CNMF-E algorithm
- Deconvolution using the OASIS algorithm
Some basic visualization is also included. The demo illustrates how to params, MoctionCorrection and cnmf object for processing 1p microendoscopic data. For processing two-photon data consult the related demo_pipeline.ipynb demo. For more information see the companion CaImAn paper.
End of explanation
fnames = ['data_endoscope.tif'] # filename to be processed
fnames = [download_demo(fnames[0])]
Explanation: Select file(s) to be processed
The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fnames variable as a list. Note that the memory requirement of the CNMF-E algorithm are much higher compared to the standard CNMF algorithm. Test the limits of your system before trying to process very large amounts of data.
End of explanation
#%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened)
if 'dview' in locals():
cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
backend='local', n_processes=None, single_thread=False)
Explanation: Setup a cluster
To enable parallel processing a (local) cluster needs to be set up. This is done with a cell below. The variable backend determines the type of cluster used. The default value 'local' uses the multiprocessing package. The ipyparallel option is also available. More information on these choices can be found here. The resulting variable dview expresses the cluster option. If you use dview=dview in the downstream analysis then parallel processing will be used. If you use dview=None then no parallel processing will be employed.
End of explanation
# dataset dependent parameters
frate = 10 # movie frame rate
decay_time = 0.4 # length of a typical transient in seconds
# motion correction parameters
motion_correct = True # flag for performing motion correction
pw_rigid = False # flag for performing piecewise-rigid motion correction (otherwise just rigid)
gSig_filt = (3, 3) # size of high pass spatial filtering, used in 1p data
max_shifts = (5, 5) # maximum allowed rigid shift
strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels
overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)
max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts
border_nan = 'copy' # replicate values along the boundaries
mc_dict = {
'fnames': fnames,
'fr': frate,
'decay_time': decay_time,
'pw_rigid': pw_rigid,
'max_shifts': max_shifts,
'gSig_filt': gSig_filt,
'strides': strides,
'overlaps': overlaps,
'max_deviation_rigid': max_deviation_rigid,
'border_nan': border_nan
}
opts = params.CNMFParams(params_dict=mc_dict)
Explanation: Setup some parameters
We first set some parameters related to the data and motion correction and create a params object. We'll modify this object with additional settings later on. You can also set all the parameters at once as demonstrated in the demo_pipeline.ipynb notebook.
End of explanation
if motion_correct:
# do motion correction rigid
mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion'))
mc.motion_correct(save_movie=True)
fname_mc = mc.fname_tot_els if pw_rigid else mc.fname_tot_rig
if pw_rigid:
bord_px = np.ceil(np.maximum(np.max(np.abs(mc.x_shifts_els)),
np.max(np.abs(mc.y_shifts_els)))).astype(int)
else:
bord_px = np.ceil(np.max(np.abs(mc.shifts_rig))).astype(int)
plt.subplot(1, 2, 1); plt.imshow(mc.total_template_rig) # % plot template
plt.subplot(1, 2, 2); plt.plot(mc.shifts_rig) # % plot rigid shifts
plt.legend(['x shifts', 'y shifts'])
plt.xlabel('frames')
plt.ylabel('pixels')
bord_px = 0 if border_nan == 'copy' else bord_px
fname_new = cm.save_memmap(fname_mc, base_name='memmap_', order='C',
border_to_0=bord_px)
else: # if no motion correction just memory map the file
fname_new = cm.save_memmap(fnames, base_name='memmap_',
order='C', border_to_0=0, dview=dview)
Explanation: Motion Correction
The background signal in micro-endoscopic data is very strong and makes the motion correction challenging.
As a first step the algorithm performs a high pass spatial filtering with a Gaussian kernel to remove the bulk of the background and enhance spatial landmarks.
The size of the kernel is given from the parameter gSig_filt. If this is left to the default value of None then no spatial filtering is performed (default option, used in 2p data).
After spatial filtering, the NoRMCorre algorithm is used to determine the motion in each frame. The inferred motion is then applied to the original data so no information is lost.
The motion corrected files are saved in memory mapped format. If no motion correction is being performed, then the file gets directly memory mapped.
End of explanation
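As an optional sanity check, a sketch for comparing the raw and motion-corrected movies side by side (assumes the rigid path used above, i.e. pw_rigid=False, and an interactive display):
# visual inspection of the rigid motion correction result (optional)
m_orig = cm.load_movie_chain(fnames)
m_corr = cm.load(mc.fname_tot_rig)
cm.concatenate([m_orig, m_corr], axis=2).play(fr=30, magnification=2)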
# load memory mappable file
Yr, dims, T = cm.load_memmap(fname_new)
images = Yr.T.reshape((T,) + dims, order='F')
Explanation: Load memory mapped file
End of explanation
# parameters for source extraction and deconvolution
p = 1 # order of the autoregressive system
K = None # upper bound on number of components per patch, in general None
gSig = (3, 3) # gaussian width of a 2D gaussian kernel, which approximates a neuron
gSiz = (13, 13) # average diameter of a neuron, in general 4*gSig+1
Ain = None # possibility to seed with predetermined binary masks
merge_thr = .7 # merging threshold, max correlation allowed
rf = 40 # half-size of the patches in pixels. e.g., if rf=40, patches are 80x80
stride_cnmf = 20 # amount of overlap between the patches in pixels
# (keep it at least large as gSiz, i.e 4 times the neuron size gSig)
tsub = 2 # downsampling factor in time for initialization,
# increase if you have memory problems
ssub = 1 # downsampling factor in space for initialization,
# increase if you have memory problems
# you can pass them here as boolean vectors
low_rank_background = None # None leaves background of each patch intact,
# True performs global low-rank approximation if gnb>0
gnb = 0 # number of background components (rank) if positive,
# else exact ring model with following settings
# gnb= 0: Return background as b and W
# gnb=-1: Return full rank background B
# gnb<-1: Don't return background
nb_patch = 0 # number of background components (rank) per patch if gnb>0,
# else it is set automatically
min_corr = .8 # min peak value from correlation image
min_pnr = 10 # min peak to noise ratio from PNR image
ssub_B = 2 # additional downsampling factor in space for background
ring_size_factor = 1.4 # radius of ring is gSiz*ring_size_factor
opts.change_params(params_dict={'method_init': 'corr_pnr', # use this for 1 photon
'K': K,
'gSig': gSig,
'gSiz': gSiz,
'merge_thr': merge_thr,
'p': p,
'tsub': tsub,
'ssub': ssub,
'rf': rf,
'stride': stride_cnmf,
'only_init': True, # set it to True to run CNMF-E
'nb': gnb,
'nb_patch': nb_patch,
'method_deconvolution': 'oasis', # could use 'cvxpy' alternatively
'low_rank_background': low_rank_background,
'update_background_components': True, # sometimes setting to False improves the results
'min_corr': min_corr,
'min_pnr': min_pnr,
'normalize_init': False, # just leave as is
'center_psf': True, # leave as is for 1 photon
'ssub_B': ssub_B,
'ring_size_factor': ring_size_factor,
'del_duplicates': True, # whether to remove duplicates from initialization
'border_pix': bord_px}) # number of pixels to not consider in the borders)
Explanation: Parameter setting for CNMF-E
We now define some parameters for the source extraction step using the CNMF-E algorithm.
We construct a new dictionary and use this to modify the existing params object,
End of explanation
# compute some summary images (correlation and peak to noise)
cn_filter, pnr = cm.summary_images.correlation_pnr(images[::1], gSig=gSig[0], swap_dim=False) # change swap dim if output looks weird, it is a problem with tifffile
# inspect the summary images and set the parameters
nb_inspect_correlation_pnr(cn_filter, pnr)
Explanation: Inspect summary images and set parameters
Check the optimal values of min_corr and min_pnr by moving slider in the figure that pops up. You can modify them in the params object.
Note that computing the correlation pnr image can be computationally and memory demanding for large datasets. In this case you can compute
only on a subset of the data (the results will not change). You can do that by changing images[::1] to images[::5] or something similar.
This will compute the correlation pnr image
End of explanation
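For example, a hedged variant that computes the summary images on every 5th frame only, which is often sufficient for choosing min_corr and min_pnr on large datasets:
# temporally subsampled correlation and peak-to-noise images
cn_filter_sub, pnr_sub = cm.summary_images.correlation_pnr(images[::5], gSig=gSig[0], swap_dim=False)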
# print parameters set above, modify them if necessary based on summary images
print(min_corr) # min correlation of peak (from correlation image)
print(min_pnr) # min peak to noise ratio
Explanation: You can inspect the correlation and PNR images to select the threshold values for min_corr and min_pnr. The algorithm will look for components only in places where these value are above the specified thresholds. You can adjust the dynamic range in the plots shown above by choosing the selection tool (third button from the left) and selecting the desired region in the histogram plots on the right of each panel.
End of explanation
cnm = cnmf.CNMF(n_processes=n_processes, dview=dview, Ain=Ain, params=opts)
cnm.fit(images)
Explanation: Run the CNMF-E algorithm
End of explanation
# cnm1 = cnmf.CNMF(n_processes, params=opts, dview=dview)
# cnm1.fit_file(motion_correct=motion_correct)
Explanation: Alternate way to run the pipeline at once
It is possible to run the combined steps of motion correction, memory mapping, and cnmf fitting in one step as shown below. The command is commented out since the analysis has already been performed. It is recommended that you familiriaze yourself with the various steps and the results of the various steps before using it.
End of explanation
#%% COMPONENT EVALUATION
# the components are evaluated in three ways:
# a) the shape of each component must be correlated with the data
# b) a minimum peak SNR is required over the length of a transient
# c) each shape passes a CNN based classifier
min_SNR = 3 # adaptive way to set threshold on the transient size
r_values_min = 0.85 # threshold on space consistency (if you lower more components
# will be accepted, potentially with worst quality)
cnm.params.set('quality', {'min_SNR': min_SNR,
'rval_thr': r_values_min,
'use_cnn': False})
cnm.estimates.evaluate_components(images, cnm.params, dview=dview)
print(' ***** ')
print('Number of total components: ', len(cnm.estimates.C))
print('Number of accepted components: ', len(cnm.estimates.idx_components))
Explanation: Component Evaluation
The processing in patches creates several spurious components. These are filtered out by evaluating each component using three different criteria:
the shape of each component must be correlated with the data at the corresponding location within the FOV
a minimum peak SNR is required over the length of a transient
each shape passes a CNN based classifier
<img src="../../docs/img/evaluationcomponent.png"/>
After setting some parameters we again modify the existing params object.
End of explanation
#%% plot contour plots of accepted and rejected components
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components)
Explanation: Do some plotting
End of explanation
# accepted components
cnm.estimates.hv_view_components(img=cn_filter, idx=cnm.estimates.idx_components,
denoised_color='red', cmap='gray')
# rejected components
cnm.estimates.hv_view_components(img=cn_filter, idx=cnm.estimates.idx_components_bad,
denoised_color='red', cmap='gray')
Explanation: View traces of accepted and rejected components. Note that if you get data rate error you can start Jupyter notebooks using:
'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
End of explanation
cm.stop_server(dview=dview)
Explanation: Stop cluster
End of explanation
# with background
cnm.estimates.play_movie(images, q_max=99.5, magnification=2,
include_bck=True, gain_res=10, bpx=bord_px)
# without background
cnm.estimates.play_movie(images, q_max=99.9, magnification=2,
include_bck=False, gain_res=4, bpx=bord_px)
Explanation: Some instructive movies
Play the reconstructed movie alongside the original movie and the (amplified) residual
End of explanation |
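Finally, a hedged one-liner for persisting the analysis; recent CaImAn versions expose a save method on the CNMF object, and the filename here is illustrative:
cnm.save('cnmfe_analysis_results.hdf5')  # save the fitted cnmf object for later reuse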
10,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
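# Illustrative example only -- the author name and email below are hypothetical placeholders:
# DOC.set_author("Jane Doe", "jane.doe@example.org")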
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
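# Illustrative example of a free-text STRING property (hypothetical wording, to be replaced):
# DOC.set_value("Land surface component providing heat, water and carbon fluxes to the atmosphere.")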
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
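# Illustrative example, reusing the sample code name quoted in the property description below:
# DOC.set_value("MOSES2.2")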
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
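# Illustrative example of a multi-valued (0.N) ENUM -- one set_value call per selected choice,
# using strings taken verbatim from the valid choices listed above (the selection is hypothetical):
# DOC.set_value("water")
# DOC.set_value("energy")
# DOC.set_value("carbon")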
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
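# Illustrative example of a BOOLEAN property, picking one of the two valid choices above
# (the answer shown is hypothetical):
# DOC.set_value(True)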
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
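# Illustrative example of an INTEGER property -- the 1800 second time step is hypothetical:
# DOC.set_value(1800)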
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
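# Illustrative example of a single-valued (1.1) ENUM, using one of the valid choices listed above
# (the particular choice is hypothetical):
# DOC.set_value("Explicit diffusion")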
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
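# Illustrative example -- a hypothetical selection of vegetation types from the valid choices above:
# DOC.set_value("broadleaf tree")
# DOC.set_value("C3 grass")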
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
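# Illustrative example -- a hypothetical single selection from the choices above; repeat
# set_value to record additional evaporation formulations:
# DOC.set_value("combined")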
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
10,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stock Spans
Chapter 1 of Real World Algorithms.
Panos Louridas<br />
Athens University of Economics and Business
Stacks in Python
There is no special stack data structure in Python, as all the required functionality is provided by lists.
We push something on the stack by appending it to the end of the list, calling append() on the list.
We pop something from the stack by calling pop() on the list.
Step1: The Stock Span Problem in Python
We will work with data for the Dow Jones Industrial Average (DJIA) of the New York Stock Exchange.
Before we start, let us see how the DJIA has evolved over time.
The following lines require knowledge of special Python libraries; don't worry if you don't understand them.
If you do want to run them, you have to install the pandas and matplotlib libraries, and enter them in a Jupyter notebook.
The file djia.csv contains the DJIA from February 16, 1885 to 2022. The data was retrieved from MeasuringWorth (Samuel H. Williamson, "Daily Closing Values of the DJA in the United States, 1885 to Present," MeasuringWorth, 2020
URL
Step2: Now back to basics.
The following Python function implements the simple stock span algorithm.
It takes as input a list quotes with the DJIA closing values, one per day.
It returns a list spans with the stock span for every day.
Step3: To use this function we must have constructed the quotes list.
The following function takes a file that contains the DJIA data. The file has the following format
Step4: Then for the file djia.csv we do the following
Step5: Let's check how many lines we've read
Step6: We can call simple_stock_span(quotes) and print some stock spans.
Step7: Out of curiosity, let's find the greatest stock span, using Python's max() function.
Step8: Then we want to find the date when this happened. To do that, we need to get the index of the max_value in spans, as this will be the same with the index of the date in which it occurred in dates. To find the first index of an item in a list, we use the index() method.
Step9: Now we will examine how long simple_stock_span(quotes) takes to run.
We will run it 10 times and report the average.
Step10: We will contrast that result with a stack-based implementation.
The stack-based stock span algorithm in Python is as follows
Step11: We call it in the same way
Step12: We verify that the two results, spans_simple and spans_stack are the same.
Step13: And we can measure the time it takes
Step14: This is a very substantial decrease! The difference, of course, is owed to the fact that the second algorithm executes much fewer steps than the first.
In fact, we can have the computer count exactly the number of steps it executes when executing the code. First, we put all the code we want to count in one file; below we have copied the functions we've written and we call simple_stock_span() and stack_stock_span() once each. | Python Code:
stack = [3, 4, 5]
stack.append(6)
stack.append(7)
stack
stack.pop()
stack
stack.pop()
stack
stack.pop()
stack
Explanation: Stock Spans
Chapter 1 of Real World Algorithms.
Panos Louridas<br />
Athens University of Economics and Business
Stacks in Python
There is no special stack data structure in Python, as all the required functionality is provided by lists.
We push something on the stack by appending it to the end of the list, calling append() on the list.
We pop something from the stack by calling pop() on the list.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
plt.style.use('ggplot')
df = pd.read_csv("djia.csv", comment="#", parse_dates=[0], index_col=0, names=["date", "djia"])
plt = df.plot(figsize=(12, 6))
plt.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
min_x = df.index.min()
max_x = df.index.max()
plt.set_xlim([min_x, max_x])
ticks = pd.date_range(start=min_x, end=max_x, freq='10A')
_ = plt.set_xticks(ticks)
Explanation: The Stock Span Problem in Python
We will work with data for the Dow Jones Industrial Average (DJIA) of the New York Stock Exchange.
Before we start, let us see how the DJIA has evolved over time.
The following lines require knowledge of special Python libraries; don't worry if you don't understand them.
If you do want to run them, you have to install the pandas and matplotlib libraries, and enter them in a Jupyter notebook.
The file djia.csv contains the DJIA from February 16, 1885 to 2022. The data was retrieved from MeasuringWorth (Samuel H. Williamson, "Daily Closing Values of the DJA in the United States, 1885 to Present," MeasuringWorth, 2020
URL: https://www.measuringworth.com/datasets/DJA/).
End of explanation
def simple_stock_span(quotes: list[float]) -> list[int]:
spans = []
for i in range(len(quotes)):
k = 1
span_end = False
while i - k >= 0 and not span_end:
if quotes[i - k] <= quotes[i]:
k += 1
else:
span_end = True
spans.append(k)
return spans
Explanation: Now back to basics.
The following Python function implements the simple stock span algorithm.
It takes as input a list quotes with the DJIA closing values, one per day.
It returns a list spans with the stock span for every day.
End of explanation
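Before loading the real data, a quick sanity check on a tiny, made-up quote list helps confirm the function behaves as described (the numbers below are illustrative only):
print(simple_stock_span([7.0, 6.0, 8.0, 8.0, 9.0, 5.0]))  # expected spans: [1, 1, 3, 4, 5, 1]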
def read_quotes(filename: str) -> tuple[list[str], list[float]]:
dates = []
quotes = []
with open(filename) as quotes_file:
for line in quotes_file:
if line.startswith('#'):
continue
parts = line.split(',')
if len(parts) != 2:
continue
month, day, year = parts[0].split('/')
date = '/'.join((year, month, day))
dates.append(date)
quotes.append(float(parts[-1]))
return dates, quotes
Explanation: To use this function we must have constructed the quotes list.
The following function takes a file that contains the DJIA data. The file has the following format:
3/9/2022,33286.25
3/10/2022,33174.07
3/11/2022,32944.19
Each quote is preceded by the corresponding date, in month-day-year format. Also, some lines in the file start with #. These are comments and we will ignore them.
So our function will read the file line-by-line and return a list dates with the dates on which we have quotes and a list quotes that will contain the second item of each line (the DJIA value).
To split the line, we use the split() function, which splits a string into pieces, breaking the string at the places it finds the separator that we specify. The expression parts = line.split(',') breaks the line at the comma and returns the pieces in parts; parts[0] is the date and parts[1], or equivalently parts[-1], is the quote. We will ignore any lines that cannot be split into two parts (for example empty lines).
As we noted, the dates are in month-day-year format. We will convert them to the year-month-day format. To do that, we need to split each date at /. The expression month, day, year = parts[0].split('/') will break the date in three parts and assign each one of them to a separate variable. Then, we assemble them in the required order by calling '/'.join((year, month, day)), and we append the resulting date into the dates list. We also append the quote value in the quotes list. As this is a string when we read it from the file, we convert it to a float with a call to float().
At the end, the function will return a list of dates and a list containing the corresponding quotes.
End of explanation
dates, quotes = read_quotes("djia.csv")
Explanation: Then for the file djia.csv we do the following:
End of explanation
len(quotes)
Explanation: Let's check how many lines we've read:
End of explanation
spans_simple = simple_stock_span(quotes)
print(spans_simple[-10:])
Explanation: We can call simple_stock_span(quotes) and print some stock spans.
End of explanation
max_value = max(spans_simple)
max_value
Explanation: Out of curiosity, let's find the greatest stock span, using Python's max() function.
End of explanation
max_value_indx = spans_simple.index(max_value)
dates[max_value_indx]
Explanation: Then we want to find the date when this happened. To do that, we need to get the index of the max_value in spans, as this will be the same with the index of the date in which it occurred in dates. To find the first index of an item in a list, we use the index() method.
End of explanation
import time
iterations = 10
start = time.time()
for i in range(iterations):
simple_stock_span(quotes)
end = time.time()
time_simple = (end - start) / iterations
print(time_simple)
Explanation: Now we will examine how long simple_stock_span(quotes) takes to run.
We will run it 10 times and report the average.
End of explanation
def stack_stock_span(quotes: list[float]) -> list[int]:
spans = [1]
s = []
s.append(0)
for i in range(1, len(quotes)):
while len(s) != 0 and quotes[s[-1]] <= quotes[i]:
s.pop()
if len(s) == 0:
spans.append(i+1)
else:
spans.append(i - s[-1])
s.append(i)
return spans
Explanation: We will contrast that result with a stack-based implementation.
The stack-based stock span algorithm in Python is as follows:
End of explanation
spans_stack = stack_stock_span(quotes)
print(spans_stack[-10:])
Explanation: We call it in the same way:
End of explanation
spans_simple == spans_stack
Explanation: We verify that the two results, spans_simple and spans_stack are the same.
End of explanation
start = time.time()
for i in range(iterations):
stack_stock_span(quotes)
end = time.time()
time_stack = (end - start) / iterations
print(time_stack)
Explanation: And we can measure the time it takes:
End of explanation
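A one-line follow-up (not in the original notebook) makes the comparison explicit:
print('The stack-based version is roughly {:.1f} times faster.'.format(time_simple / time_stack))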
import time
def simple_stock_span(quotes: list[float]) -> list[int]:
spans = []
for i in range(len(quotes)):
k = 1
span_end = False
while i - k >= 0 and not span_end:
if quotes[i - k] <= quotes[i]:
k += 1
else:
span_end = True
spans.append(k)
return spans
def stack_stock_span(quotes: list[float]) -> list[int]:
spans = [1]
s = []
s.append(0)
for i in range(1, len(quotes)):
while len(s) != 0 and quotes[s[-1]] <= quotes[i]:
s.pop()
if len(s) == 0:
spans.append(i+1)
else:
spans.append(i - s[-1])
s.append(i)
return spans
def read_quotes(filename: str) -> tuple[list[str], list[float]]:
dates = []
quotes = []
with open(filename) as quotes_file:
for line in quotes_file:
if line.startswith('#'):
continue
parts = line.split(',')
if len(parts) != 2:
continue
month, day, year = parts[0].split('/')
date = '/'.join((year, month, day))
dates.append(date)
quotes.append(float(parts[-1]))
return dates, quotes
_, quotes = read_quotes("djia.csv") # we use _ for a variable that we
# don't care to use
spans_simple = simple_stock_span(quotes)
spans_stack = stack_stock_span(quotes)
print('spans_simple == spans_stack:', spans_simple == spans_stack)
Explanation: This is a very substantial decrease! The difference, of course, is owed to the fact that the second algorithm executes much fewer steps than the first.
In fact, we can have the computer count exactly the number of steps it executes when executing the code. First, we put all the code we want to count in one file; below we have copied the functions we've written and we call simple_stock_span() and stack_stock_span() once each.
End of explanation |
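As a sketch of what that counting could look like (the counter variable and its placement are illustrative additions of ours, not part of the original script), one can increment a module-level counter at every quote comparison inside the simple algorithm:
steps = 0

def simple_stock_span_counted(quotes):
    global steps
    spans = []
    for i in range(len(quotes)):
        k = 1
        span_end = False
        while i - k >= 0 and not span_end:
            steps += 1  # count one quote comparison as one step
            if quotes[i - k] <= quotes[i]:
                k += 1
            else:
                span_end = True
        spans.append(k)
    return spans

simple_stock_span_counted(quotes)
print('comparisons performed by the simple algorithm:', steps)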
10,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forward Modeling the X-ray Image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.
Step1: The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
Step2: im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
Step3: A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters
Step4: The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
Step5: The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
Step6: Sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions in to zero.
Step7: As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
Step8: Generative Model
The last piece we need is an assumption for the sampling distribution for the counts $N$ in each pixel | Python Code:
import astropy.io.fits as pyfits
import astropy.visualization as viz
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
Explanation: Forward Modeling the X-ray Image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.
End of explanation
imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')
im = imfits[0].data
Explanation: The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
End of explanation
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Explanation: im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
End of explanation
pbfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')
pb = pbfits[0].data
exfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')
ex = exfits[0].data
Explanation: A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model:
$S(r) = S_0 \left[1.0 + \left(\frac{r}{r_c}\right)^2\right]^{-3\beta + 1/2}$,
where $r$ is projected distance from the cluster center.
The parameters of this model are:
$x_0$, the $x$ coordinate of the cluster center
$y_0$, the $y$ coordinate of the cluster center
$S_0$, the normalization, in surface brightness units
$r_c$, a radial scale (called the "core radius")
$\beta$, which determines the slope of the profile
Note that this model describes a 2D surface brightness distribution, since $r^2 = x^2 + y^2$
Predicting the Data
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by an exposure map, ex.
Counts can be produced by:
X-rays from our source of interest (the galaxy cluster)
X-rays from other detected sources (i.e. the other sources we've masked out)
X-rays from unresolved background sources (the Cosmic X-ray Background)
Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Of these, the particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays.
In contrast, the X-ray background is vignetted in the same way as X-rays from a source of interest. We will lump these sources (2-4) together, to extend our model so that it is composed of a galaxy cluster, the X-ray background, and the particle background.
Counts from the Cluster
Since our data are counts in each pixel, our model needs to predict the counts in each pixel. However, physical models will not predict count distributions, but rather intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions).
Since the X-rays from the cluster are transformed according to the exposure map, the units of $S_0$ are counts/s/pixel, and the model prediction for the expected number of counts from the cluster is CL*ex, where CL is an image with pixel values computed from $S(r)$.
X-ray background model
The simplest assumption we can make about the X-ray background is that it is spatially uniform, on average. The model must account for the varying effective exposure as a function of position, however. So the model prediction associated with this component is b*ex, where b is a single number with units of counts/s/pixel.
Particle background model
We're given, from a blackbox, a prediction for the expected counts/pixel due to particles, so the model is simply this image, pb.
Full model
Combining these three components, the model (CL+b)*ex + pb predicts an expected number of counts/pixel across the field. What does this mean? It means the model gives the mean of a Poisson sampling distribution for the observed counts in each pixel, which is spelled out in the Generative Model section below.
The "exposure map" and the "particle background map" were supplied to us by the XMM reduction pipeline, along with the science image. Let's take a look at them now.
End of explanation
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
End of explanation
plt.imshow(pb, cmap='gray', origin='lower');
Explanation: The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
End of explanation
mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')
for reg in mask:
# this is inefficient but effective
for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:
                ex[int(i-1), int(j-1)] = 0.0
Explanation: Sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions in to zero.
End of explanation
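As an aside (an illustrative sketch, not part of the original notebook), the same circular masking can be done without the explicit double loop by testing all pixels at once:
# Vectorized alternative to the double loop above; each reg is (x, y, radius) in 1-based pixel coordinates.
nrows, ncols = ex.shape
jgrid, igrid = np.meshgrid(np.arange(1, ncols + 1), np.arange(1, nrows + 1))
for reg in mask:
    ex[(igrid - reg[1])**2 + (jgrid - reg[0])**2 <= reg[2]**2] = 0.0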
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
End of explanation
# import cluster_pgm
# cluster_pgm.forward()
from IPython.display import Image
Image(filename="cluster_pgm_forward.png")
def beta_model_profile(r, S0, rc, beta):
'''
The fabled beta model, radial profile S(r)
'''
return S0 * (1.0 + (r/rc)**2)**(-3.0*beta + 0.5)
def beta_model_image(x, y, x0, y0, S0, rc, beta):
'''
Here, x and y are arrays ("meshgrids" or "ramps") containing x and y pixel numbers,
and the other arguments are galaxy cluster beta model parameters.
Returns a surface brightness image of the same shape as x and y.
'''
return beta_model_profile(np.sqrt((x-x0)**2 + (y-y0)**2), S0, rc, beta)
def model_image(x, y, ex, pb, x0, y0, S0, rc, beta, b):
'''
Here, x, y, ex and pb are images, all of the same shape, and the other args are
cluster model and X-ray background parameters. ex is the (constant) exposure map
and pb is the (constant) particle background map.
'''
return (beta_model_image(x, y, x0, y0, S0, rc, beta) + b) * ex + pb
# Set up the ramp images, to enable fast array calculations:
nx,ny = ex.shape
x = np.outer(np.ones(ny),np.arange(nx))
y = np.outer(np.arange(ny),np.ones(nx))
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(x, cmap='gray', origin='lower')
ax[0].set_title('x')
fig.colorbar(left,ax=ax[0],shrink=0.9)
right = ax[1].imshow(y, cmap='gray', origin='lower')
ax[1].set_title('y')
fig.colorbar(right,ax=ax[1],shrink=0.9)
# Now choose parameters, compute model and plot, compared to data!
x0,y0 = 328,328 # The center of the image is 328,328
S0,b = 0.001,1e-2 # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0 # Canonical value is beta = 2/3
rc = 12 # Core radius, in pixels
# Realize the expected counts map for the model:
mu = model_image(x,y,ex,pb,x0,y0,S0,rc,beta,b)
# Draw a *sample image* from the Poisson sampling distribution:
mock = np.random.poisson(mu,mu.shape)
# The difference between the mock and the real data should be symmetrical noise if the model
# is a good match...
diff = im - mock
# Plot three panels:
fig,ax = plt.subplots(nrows=1, ncols=3)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(viz.scale_image(mock, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[0].set_title('Mock (log, rescaled)')
fig.colorbar(left,ax=ax[0],shrink=0.6)
center = ax[1].imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[1].set_title('Data (log, rescaled)')
fig.colorbar(center,ax=ax[1],shrink=0.6)
right = ax[2].imshow(diff, vmin=-40, vmax=40, cmap='gray', origin='lower')
ax[2].set_title('Difference (linear)')
fig.colorbar(right,ax=ax[2],shrink=0.6)
Explanation: Generative Model
The last piece we need is an assumption for the sampling distribution for the counts $N$ in each pixel: let's assume that this distribution is Poisson, since we expect X-ray photon arrivals to be "rare events."
${\rm Pr}(N_k|\mu_k) = \frac{{\rm e}^{-\mu_k} \mu_k^{N_k}}{N_k !}$
Here, $\mu_k(\theta)$ is the expected number of counts in the $k$th pixel:
$\mu_k(\theta) = \left( S(r_k;\theta) + b \right) \cdot$ ex + pb
At this point we can draw the PGM for a forward model of this dataset, using the exposure and particle background maps supplied, and some choices for the model parameters.
Then, we can go ahead and simulate some mock data and compare with the image we have.
End of explanation |
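A natural companion to this generative picture (a sketch we add here, not part of the original notebook) is the Poisson log-likelihood of the observed image given the model expectation, which is what a fitting step would maximize:
from scipy.special import gammaln

def poisson_log_likelihood(counts, mu):
    # Sum of log Pr(N_k | mu_k) over pixels with a non-zero model expectation;
    # gammaln(N + 1) is log(N!).
    valid = mu > 0.0
    N = counts[valid]
    m = mu[valid]
    return np.sum(N * np.log(m) - m - gammaln(N + 1.0))

print(poisson_log_likelihood(im, mu))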
10,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Py-EMDE
Python Email Data Entry
The following code can gather data from weather stations reporting to the CHORDS portal, package it up into the proper format for GLOBE Email Data Entry , and send it using the SparkPost API.
In order to send email, you'll need to setup SparkPost by creating an account and confirming you own the domain you'll be sending emails from. You'll also need to create a SparkPost API key and set the environment variable SPARKPOST_API_KEY equal to the value of your API key. This script can be further modified to use a different method for sending email if needed.
This code will contact the CHORDS Portal and collect all the measurement data from the specified instrument, in the specified date range.
Step1: Now the collected data can be viewed simply by issuing the following command
Step2: This code is useful for looking at a specific measurement dataset
Step3: A modified version of the above code will format the data properly for GLOBE Email Data Entry
Step4: To see the data formatted in GLOBE Email Data Entry format, comment out the return data_list command above, uncomment the print command right above it, then issue the following command
Step5: To email the data set to GLOBE's email data entry server, run the following code.
Step6: Finally, this command sends the email | Python Code:
import requests
import json
r = requests.get('http://3d-kenya.chordsrt.com/instruments/1.geojson?start=2016-09-01T00:00&end=2016-11-01T00:00')
if r.status_code == 200:
d = r.json()['Data']
else:
print("Please verify that the URL for the weather station is correct. You may just have to try again with a different/smaller date range or different dates.")
Explanation: Py-EMDE
Python Email Data Entry
The following code can gather data from weather stations reporting to the CHORDS portal, package it up into the proper format for GLOBE Email Data Entry , and send it using the SparkPost API.
In order to send email, you'll need to setup SparkPost by creating an account and confirming you own the domain you'll be sending emails from. You'll also need to create a SparkPost API key and set the environment variable SPARKPOST_API_KEY equal to the value of your API key. This script can be further modified to use a different method for sending email if needed.
This code will contact the CHORDS Portal and collect all the measurement data from the specified instrument, in the specified date range.
End of explanation
d
Explanation: Now the collected data can be viewed simply by issuing the following command
End of explanation
for o in d:
if o['variable_shortname'] == 'msl1':
print(o['time'], o['value'], o['units'])
Explanation: This code is useful for looking at a specific measurement dataset
End of explanation
davad_tuple = (
'f1',
'f2',
'f3',
'f4',
'f5',
'f6',
'f7',
'f8',
'f9',
'f10',
'f11',
'f12',
'f13',
'f14',
)
def make_data_set(d):
data_list = []
for o in d:
if o['variable_shortname'] == 'rain':
t = o['time'].split("T")
tdate = t[0].replace('-', '')
ttime = ''.join(t[1].split(':')[:-1])
rain = o['value']
if ttime.endswith('00') or ttime.endswith('15') or ttime.endswith('30') or ttime.endswith('45'):
davad_tuple = ['DAVAD', 'GLIDGDTR', 'SITE_ID:45015']+['X']*11
davad_tuple[3] = tdate + ttime
davad_tuple[11] = str(rain)
data_list.append('{}'.format(' '.join(davad_tuple)))
#print('//AA\n{}\n//ZZ'.format('\n'.join(data_list)))
return data_list
Explanation: A modified version of the above code will format the data properly for GLOBE Email Data Entry
End of explanation
make_data_set(d)
Explanation: To see the data formatted in GLOBE Email Data Entry format, comment out the return data_list command above, uncomment the print command right above it, then issue the following command
End of explanation
def email_data(data_list):
import os
from sparkpost import SparkPost
FROM_EMAIL = os.getenv('FROM_EMAIL')
BCC_EMAIL = os.getenv('BCC_EMAIL')
# Send email using the SparkPost api
sp = SparkPost() # uses environment variable named SPARKPOST_API_KEY
response = sp.transmission.send(
recipients=['[email protected]'],
bcc=[BCC_EMAIL],
text='//AA\n{}\n//ZZ'.format('\n'.join(data_list)),
from_email=FROM_EMAIL,
subject='DATA'
)
print(response)
Explanation: To email the data set to GLOBE's email data entry server, run the following code.
End of explanation
email_data(make_data_set(d))
Explanation: Finally, this command sends the email
End of explanation |
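A small, optional check (illustrative only) can confirm that the environment variables the functions above rely on are actually set before any email is attempted:
import os

for name in ('SPARKPOST_API_KEY', 'FROM_EMAIL', 'BCC_EMAIL'):
    if not os.getenv(name):
        print('Missing environment variable:', name)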
10,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Array Computing
Terminology
List
A sequence of values that can vary in length.
The values can be different data types.
The values can be modified (mutable).
Tuple
A sequence of values with a fixed length.
The values can be different data types.
The values cannot be modified (immutable).
Array
A sequence of values with a fixed length.
The values cannot be different data types.
The values can be modified (mutable).
Vector
Step1: Tuples
Step2: Mathematical Operations on Vectors
Review of vector operations
Step3: Vectors in Python programming
Our current solution
Step4: Repeat on your own
Step5: Or using zip, we did this already before
Step6: When to use lists and arrays?
In general, we'll use lists instead of arrays when elements have to be added (e.g., we don't know how the number of elements ahead of time, and must use methods like append and extend) or their types are heterogeneous.
Otherwise we'll use arrays for numerical calculations.
Basics of numpy arrays
Characteristics of numpy arrays
Step7: To convert a list to an array use the array method
Step8: Note the type!
To create an array of length n filled with zeros (to be filled later)
Step9: To create arrays with elements of a type other than the default float, use a second argument
Step10: We often want array elements equally spaced by some interval (delta).
numpy.linspace(start, end, number of elements) does this
Step11: Q. What will that do?
Array elements are accessed with square brackets, the same as lists
Step12: Slicing can also be done on arrays
Step13: For reference below
Step14: Let's edit one of the values in the z array
Step15: Now let's look at the y array again
Step16: The variable yArray is a reference (or view in Numpy lingo) to three elements (a slice) from zArray
Step17: Do not forget this -- check your array values frequently if you are unsure!
Computing coordinates and function values
Here's the distance function we did previously
Step18: We could convert timeList and distList from lists to arrays
Step19: We can do this directly by creating arrays (without converting from a list) with np.linspace
to create timeArray and np.zeros to create distArray.
(This is merely a demonstration, not superior to the above code for this simple example.)
Step20: Vectorization -- one of the great powers of arrays
The examples above are great, but they doesn't use the computation power of arrays by operating on all the elements simultaneously!
Loops are slow.
Operating on the elements simultaneously is much faster (and simpler!).
"Vectorization" is replacing a loop with vector or array expressions.
Step21: What just happened?
Let's look at what the function "distance" is doing to the values in timeArray
Step22: Caution
Step23: but do this for arrays | Python Code:
x = 2
y = 3
myList = [x, y]
myList
Explanation: Array Computing
Terminology
List
A sequence of values that can vary in length.
The values can be different data types.
The values can be modified (mutable).
Tuple
A sequence of values with a fixed length.
The values can be different data types.
The values cannot be modified (immutable).
Array
A sequence of values with a fixed length.
The values cannot be different data types.
The values can be modified (mutable).
Vector: A 1 dimensional (1D) array.
Matrix: A 2 dimensional (2D) array.
Arrays are like lists but less flexible and more
efficient for lengthy calculations (one data type,
stored in the same location in memory).
But first:
VECTORS -- very simple arrays
Vectors can have an arbitrary number of components,
existing in an n-dimensional space.
(x1, x2, x3, ... xn)
Or
(x0, x1, x2, ... x(n-1)) for Python...
In Python, vectors are represented by lists or tuples:
Lists:
End of explanation
myTuple = (-4, 7)
myTuple
Explanation: Tuples:
End of explanation
numList = [0.0, 1.0, 2.0]
numTuple = (0.0, 1.0, 2.0)
2 * numList
2 * numTuple
2.0 * numList
Explanation: Mathematical Operations on Vectors
Review of vector operations: textbook sections 5.1.2 & 5.1.3
In computing:
Applying a mathematical function to a vector
means applying it to each element in the vector.
(you may hear me use the phrase "element-wise,"
which means "performing some operation one element
at a time")
However, this is not true of lists and tuples
Q. What do these yield?
End of explanation
def distance(t, a = 9.8):
'''Calculate the distance given a time and acceleration.
Input: time in seconds <int> or <float>,
acceleration in m/s^2 <int> or <float>
Output: distance in m <float>
'''
return 0.5 * a * t**2
numPoints = 6 # number of points
delta = 1.0 / (numPoints - 1) # time interval between points
# Q. What do the two lines below do?
timeList = [index * delta for index in range(numPoints)]
distList = [distance(t) for t in timeList]
print("Time List: ", timeList)
print("Distance List:", distList)
Explanation: Vectors in Python programming
Our current solution:
* using lists for collecting function data
* convert to NumPy arrays for doing math with them.
As an example, a falling object in Earth's gravity:
End of explanation
timeDistList = []
for index in range(numPoints):
timeDistList.append([timeList[index], distList[index]])
for element in timeDistList:
    print(element)
Explanation: Repeat on your own: stitching results together:
End of explanation
timeDistList2 = [[time, dist] for time, dist in zip(timeList, distList)]
for element in timeDistList2:
print(element)
daveList = range(5)
for element in zip(timeList, distList):
print(element)
list(zip(timeList, distList, daveList))
Explanation: Or using zip, we did this already before:
End of explanation
import numpy as np
Explanation: When to use lists and arrays?
In general, we'll use lists instead of arrays when elements have to be added (e.g., we don't know how the number of elements ahead of time, and must use methods like append and extend) or their types are heterogeneous.
Otherwise we'll use arrays for numerical calculations.
Basics of numpy arrays
Characteristics of numpy arrays:
Elements are all the same type
Number of elements known when array is created
Numerical Python (numpy) must be imported to
manipulate arrays.
All array elements are operated on by numpy,
which eliminates loops and makes programs
much faster.
Arrays with one index are sometimes called vectors
(or 1D arrays). Arrays with two indices are
sometimes called matrices (or 2D arrays).
End of explanation
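A quick illustration of the "all elements share one type" rule (the values here are arbitrary): mixed inputs are silently upcast to a common dtype.
mixed = np.array([1, 2.5, 3])      # the integers are upcast to float
print(mixed, mixed.dtype)
strings = np.array([1, 'two'])     # everything becomes a string
print(strings, strings.dtype)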
myList = [1, 2, 3]
myArray = np.array(myList)
print(type(myArray))
myArray
Explanation: To convert a list to an array use the array method:
End of explanation
np.zeros?
myArray = np.zeros(10)
myArray
Explanation: Note the type!
To create an array of length n filled with zeros (to be filled later):
End of explanation
myArray = np.zeros(5, dtype=int)
myArray
Explanation: To create arrays with elements of a type other than the default float, use a second argument:
End of explanation
zArray = np.linspace(0, 5, 6)
zArray
Explanation: We often want array elements equally spaced by some interval (delta).
numpy.linspace(start, end, number of elements) does this:
NOTE #### HERE, THE "end" VALUE IS NOT (end - 1) #### NOTE
End of explanation
zArray[3]
Explanation: Q. What will that do?
Array elements are accessed with square brackets, the same as lists:
End of explanation
yArray = zArray[1:4]
yArray
Explanation: Slicing can also be done on arrays:
Q. What does this give us?
End of explanation
zArray
Explanation: For reference below:
End of explanation
zArray[3] = 10.0
zArray
Explanation: Let's edit one of the values in the z array
End of explanation
yArray
Explanation: Now let's look at the y array again
End of explanation
lList = [6, 7, 8, 9, 10, 11]
mList = lList[1:3]
print(mList)
lList[1] = 10
mList
Explanation: The variable yArray is a reference (or view in Numpy lingo) to three elements (a slice) from zArray: element indices 1, 2, and 3.
Here is a blog post which discusses this issue nicely:
http://nedbatchelder.com/text/names.html
The reason is, of course, memory efficiency: why copy data if it is not necessary?
End of explanation
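If an independent array is needed rather than a view, copy() makes one — a small sketch (the variable name here is ours):
yCopy = zArray[1:4].copy()   # an independent copy, not a view
zArray[2] = -99.0
print(yCopy)                 # unchanged
print(zArray[1:4])           # reflects the edit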
def distance(t, a = 9.8):
'''Calculate the distance given a time and acceleration.
Input: time in seconds <int> or <float>,
acceleration in m/s^2 <int> or <float>
Output: distance in m <float>
'''
return 0.5 * a * t**2
numPoints = 6 # number of points
delta = 1.0 / (numPoints - 1) # time interval between points
timeList = [index * delta for index in range(numPoints)] # Create the time list
distList = [distance(t) for t in timeList] # Create the distance list
Explanation: Do not forget this -- check your array values frequently if you are unsure!
Computing coordinates and function values
Here's the distance function we did previously:
End of explanation
timeArray = np.array(timeList)
distArray = np.array(distList)
print(type(timeArray), timeArray)
print(type(distArray), distArray)
Explanation: We could convert timeList and distList from lists to arrays:
End of explanation
def distance(t, a = 9.8):
'''Calculate the distance given a time and acceleration.
Input: time in seconds <int> or <float>,
acceleration in m/s^2 <int> or <float>
Output: distance in m <float>
'''
return 0.5 * a * t**2
numPoints = 6 # number of points
timeArray = np.linspace(0, 1, numPoints) # Create the time array
distArray = np.zeros(numPoints) # Create the distance array populated with 0's
print("Time Array: ", type(timeArray), timeArray)
print("Dist Array Zeros: ", type(distArray), distArray)
for index in range(numPoints):
distArray[index] = distance(timeArray[index]) # Populate the distance array with calculated values
print("Dist Array Populated:", type(distArray), distArray)
Explanation: We can do this directly by creating arrays (without converting from a list) with np.linspace
to create timeArray and np.zeros to create distArray.
(This is merely a demonstration, not superior to the above code for this simple example.)
End of explanation
def distance(t, a = 9.8):
'''Calculate the distance given a time and acceleration.
Input: time(s) in seconds <int> or <float> or <np.array>,
acceleration in m/s^2 <int> or <float>
Output: distance in m <float>
'''
return 0.5 * a * t**2
numPoints = 6 # number of points
timeArray = np.linspace(0, 1, numPoints) # Create the time array
distArray = distance(timeArray) # Create and populate the distance array using vectorization
print("Time Array:", type(timeArray), timeArray)
print("Dist Array:", type(distArray), distArray)
Explanation: Vectorization -- one of the great powers of arrays
The examples above are great, but they don't use the computational power of arrays by operating on all the elements simultaneously!
Loops are slow.
Operating on the elements simultaneously is much faster (and simpler!).
"Vectorization" is replacing a loop with vector or array expressions.
End of explanation
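The speed claim can be checked directly with a rough timing sketch (timings vary by machine; the array size below is arbitrary):
import time

bigTimeArray = np.linspace(0, 1, 10**6)

start = time.time()
loopDist = np.zeros(len(bigTimeArray))
for index in range(len(bigTimeArray)):
    loopDist[index] = distance(bigTimeArray[index])
loopSeconds = time.time() - start

start = time.time()
vectorDist = distance(bigTimeArray)
vectorSeconds = time.time() - start

print('Loop: %.3f s, vectorized: %.3f s' % (loopSeconds, vectorSeconds))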
numPoints = 6 # Number of points
a = 9.8 # Acceleration in m/s^2
timeArray = np.linspace(0, 1, numPoints) # The values a created like before
print("Original ", timeArray)
timeArray = timeArray**2 # Once in the function, they are first squared
print("Squared ", timeArray)
print(distArray)
timeArray = timeArray * 0.5 # Next they are multiplied by 0.5
print("Times 0.5", timeArray)
timeArray = timeArray * a # Finally, they are multiplied by a and the entire modified
print("Times a ", timeArray) # array is returned
Explanation: What just happened?
Let's look at what the function "distance" is doing to the values in timeArray
End of explanation
import math
math.sin(0.5)
Explanation: Caution: numpy has its own math functions, such as sin, cos, pi, exp, and some of these are slightly different from Python's math module.
Also, the math module does not accept numpy array as arguments, i.e. it is NOT vectorized.
Conclusion: Use numpy's built-in math whenever dealing with arrays, but be aware that if you repeatedly (in a loop) calculate only 1 value at a time, the math library would be faster (because numpy has some overhead costs to do element-wise math automatically).
So, do this for single calculations:
End of explanation
np.sin([0.1, 0.2, 0.3, 0.4, 0.5])
Explanation: but do this for arrays:
End of explanation |
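To see the contrast concretely (an illustrative check), math.sin rejects a multi-element array while np.sin applies element-wise:
values = np.array([0.1, 0.2, 0.3])
print(np.sin(values))       # vectorized, works
try:
    math.sin(values)        # not vectorized
except TypeError as err:
    print('math.sin cannot handle an array:', err)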
10,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
0. Preparing Data
Before digging into the parser notebook, the version of the CWE xml file within this notebook is v3.0, which can be downloaded from this link. Here we loaded CWE v3.0 xml file. Therefore, if there is any new version of XML raw file, please make change for the following code. If the order of weakness table is changed, please change the code for function <b>extract_target_field_elements</b> in section 2.1.
Step1: 1. Introduction
The purpose of this notebook is to build the fields parser and extract the contents from various fields in the CWE 3.0 XML file so that the field content can be directly analyzed and stored into database. Guided by CWE Introduction notebook, this notebook will focus on the detail structure under Weakness table and how parser functions work within the weakness table.
To preserve the semantic information and not lose details during the transformation from the representation on website and XML file to the output file, we build a 3-step pipeline to modularize the parser functions for various fields in different semantic format. The 3-step pipeline contains the following steps
Step2: 2.2 XML Node Field Parser
Once the node is provided by the former function, its XML structure is consistent for that field and CWE version, but it may be inconsistent across different versions. For example, a table represented in XML is different than a paragraph in XML. However, even for the same expected structure such as a table, the XML may have different tags used within it.
The associated parser function then is tested to cover all possible cases of this field in the XML, and also interpreted against its .html format to understand its purpose. We again refer to the introductory notebook Sections 4 and 5 which convey respectively the potential usage of the field for PERCEIVE and the overall format of the structure when compared to others.
The purpose is then documented as part of the functions documentation (sub-sections of this Section), conveying the rationale behind what was deemed potentially useful in the content to be kept, as well as how certain tags were mapped to a data structure while being removed.
The parser function outputs one of the known data structures (i.e. memory representation) that is shared among the different fields of what is deemed relevant. For example, while 5 field nodes may have their relevant information in different ways, they may all be at the end of the day tables, named lists, or single blocks of texts. Sharing a common representation in this stage decouples the 3rd step of the pipeline from understanding the many ways the same information is stored in the XML.
Because different CWE versions may organize a .html table, bullet list etc in different ways even for the same field, this organization also decouples the previous and following section functions from being modified on every new version if necessary.
The following fields have parser functions as of today
Step3: Through function <b> parse_potential_mitigations</b>, the above Potential_Mitigations node for cwe_1022 will be parsed into the following data format in memory.
<b> How content represent in memory (cwe-1022)</b>
2.2.2 Parse Common_Consequences
Common_Consequences field has a nested structure under the field element.
To understand the nesting structure, here we use the Common_Consequences field in cwe-103 as an example. Under the Common_Consequences element, there are two field entries named 'Consequence', which represent two different consequences associated with the weakness. Under each consequence element, three entry elements constitute one weakness consequence, including scope, impact, and note, which hold the contents that our parser is intended to extract.
To preserve the table format of Common_Consequences, we use the dictionary to pair the CWE id and the content we parse from Common_Consequences field. Since there are multiple consequences for one weakness, a list of dictionaries will be used to store the content, where the number of dictionaries is equal to the number of consequences. Since one consequence may have multiple impacts and scopes but only one note, we use tuple to store the content of impact and scope, while directly store the content of note. In summary, the data structure in memory can be represented as the following format
Step4: Through function <b> parse_common_consequences</b>, the above Common_Consequences node for cwe_103 will be parsed into the following data format in memory.
<b> How content represent in memory (cwe-103)</b>
3. Export Data Structure
At the point this notebook is being created, it is still an open-ended question how tables, bullet lists, and other potential structures in CWE will be used for topic modeling. For example, should rows in a table be concatenated into a paragraph and used as a document? What if the same word is repeated so as to indicate a lifecycle?
In order to keep this notebook future-proof, this section abstracts how we will handle each field representation (e.g. table, bullet list, etc.) from the memory data structure. It also keeps the pipeline flexible for multiple purposes.
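To make the idea concrete, the sketch below shows one possible shape for such an export step; the function name, CSV layout, and column choices are illustrative assumptions, not the notebook's final export_data.
import csv

def export_data_sketch(parsed_dict, output_file):
    '''A hypothetical exporter: parsed_dict pairs a CWE id with a list of
    {tag: content} dictionaries, matching the memory representation in Section 2.2.'''
    with open(output_file, 'w', newline='') as handle:
        writer = csv.writer(handle)
        writer.writerow(['cwe_id', 'tag', 'content'])
        for cwe_id, entries in parsed_dict.items():
            for entry in entries:
                for tag, content in entry.items():
                    writer.writerow([cwe_id, tag, content])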
Step5: 4. Main Execution
After developing the 3 steps parsing pipeline, this section will combine these 3 steps and produce the output file for different fields. As introduced in Section 2, although the parsing procedure stays the same for all fields, each field will have its own parsing function, while fields that share the same format may share a single exporting function. As a result, the main execution varies for each field.
4.1 Main execution for Potential_Mitigations
The main execution will combine the above 3 steps parsing pipeline for Potential_Mitigations. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Potential_Mitigations for all CWE_id.
Step6: 4.2 Main execution for Common_Consequences
The main execution will combine the above 3 steps parsing pipeline for Common_Consequences. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Common_Consequences for all CWE_id.
cwe_xml_file='cwec_v3.0.xml'
Explanation: 0. Preparing Data
Before digging into the parser notebook, the version of the CWE xml file within this notebook is v3.0, which can be downloaded from this link. Here we loaded CWE v3.0 xml file. Therefore, if there is any new version of XML raw file, please make change for the following code. If the order of weakness table is changed, please change the code for function <b>extract_target_field_elements</b> in section 2.1.
End of explanation
import lxml.etree

def extract_target_field_elements(target_field, cwe_xml_file):
'''
    This function's responsibility is to abstract how nodes are found given their field name and should be used together with the histogram.
Args:
- target_field: the arg defines which nodes are found given by the field that we are aiming to target
- cwe_xml_file: the CWE xml file that this function will work and extract the target field nodes
Outcome:
- a list of nodes that have the pre-defined target field as the element tag
'''
# read xml file and store as the root element in memory
tree = lxml.etree.parse(cwe_xml_file)
root = tree.getroot()
# Remove namespaces from XML.
for elem in root.getiterator():
if not hasattr(elem.tag, 'find'): continue # (1)
i = elem.tag.find('}') # Counts the number of characters up to the '}' at the end of the XML namespace within the XML tag
if i >= 0:
elem.tag = elem.tag[i+1:] # Starts the tag a character after the '}'
# define the path of target field. Here we select all element nodes that the tag is the target field
target_field_path='Weakness/./'+target_field
# extract weakness table in the XML // if the order of weakness table is changed, please make change for the following code
weakness_table = root[0]
# generate all elements with the target field name
target_field_nodes=weakness_table.findall(target_field_path)
return target_field_nodes
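# Usage sketch (an illustrative addition): collect every node for one field and
# check how many weaknesses document it.
potential_mitigations_nodes = extract_target_field_elements('Potential_Mitigations', cwe_xml_file)
print('Number of Potential_Mitigations nodes found:', len(potential_mitigations_nodes))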
Explanation: 1. Introduction
The purpose of this notebook is to build the fields parser and extract the contents from various fields in the CWE 3.0 XML file so that the field content can be directly analyzed and stored into database. Guided by CWE Introduction notebook, this notebook will focus on the detail structure under Weakness table and how parser functions work within the weakness table.
To preserve the semantic information and not lose details during the transformation from the representation on website and XML file to the output file, we build a 3-step pipeline to modularize the parser functions for various fields in different semantic format. The 3-step pipeline contains the following steps: searching XML Field node location, XML field node parser, and exporting the data structure to the output file based on the semantic format in Section 4 of CWE Introduction Notebook. More details will be explained in Section 2.
2. Parser Architecture
The overall parser architecture is constituted by the following three procedures: 1) extracting the nodes with the target field tag, 2) parsing the target field node to the representation in memory, and 3) exporting the data structure to the output file.
Section 2.1 explains how to search for XML field nodes with the target field tag. No matter which field we parse, the first step is to use XPath to locate all XML field nodes carrying the tag we intend to parse. The function in section 2.1 has been tested for all fields and can therefore locate XML nodes for any given field name. However, the function is version-dependent, since the order of the weakness table may differ between versions, as it does between v2.9 and v3.0.
Section 2.2 explains how to parse and extract the content of the target field into a representation in memory. Since different fields have various nested structures in the raw XML file and the content we parse varies field by field, the worst case is one parser function per field. However, as Section 4 of the CWE Introduction Notebook shows, certain fields share the same format on the website, such as a table or a bullet list, so ideally we would need only 4 or 5 functions to represent the data in memory.
Section 3 addresses how to export the data representation from Section 2.2. The number of functions in Section 3 should equal the number of data structures defined in Section 2.2.
2.1 XML Field Node Location
This function searches the tree for the specified field (e.g. Potential_Mitigations) provided as input and returns the associated XML nodes of that field. The string containing the field name can be found in the Introductory Notebook's histogram in Section 4. As can be observed in that histogram, only certain fields are worthwhile parsing due to their occurrence frequency.
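For instance, a quick sanity check of this helper (a minimal usage sketch; the field name and the cwe_xml_file variable come from the cells above) could be:
mitigation_nodes = extract_target_field_elements('Potential_Mitigations', cwe_xml_file)
print(len(mitigation_nodes))  # number of weaknesses that define a Potential_Mitigations field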
End of explanation
def parse_potential_mitigations(potential_mitigation_node):
'''
The parser function concern is abstracting how the Potential_Mitigations field is stored in XML,
and provide it in a common and simpler data structure
Args:
- potential_mitigations_node: the node that has Potential_Mitigations tag, such as the above image
Outcomes:
- A dictionary that pairs cwe_id as key and the mitigation list as value.
In the dictionary, the mitigation list will be a list of dictionaries that each dictionary pairs tag and the corresponding content for each mitigation.
More details can be found in the following example for cwe-1022
'''
# extract cwe_id from the attribute of potential_mitigations element's parent node
cwe_id=potential_mitigations_node.getparent().attrib.get('ID')
cwe_id='CWE_'+cwe_id
    # the mitigation list, in which each element represents an individual mitigation element
mitigation_list=[]
target_field=potential_mitigations_node.tag
# for each mitigation node under the potential_mitigations node
for mitigation in list(potential_mitigations_node):
# the dictionary that contain the information for each mitigation element
mitigation_dict=dict()
# traverse all mitigation_element nodes under each mitigation node
for mitigation_element in list(mitigation):
# generate tag and content of each mitigation_element
mitigation_element_tag=mitigation_element.tag.lower()
mitigation_element_content=mitigation_element.text
            ## in case there are nested elements under mitigation_element that store content under the same tag
# check whether there is an element under mitigation_element
if mitigation_element_content.isspace():
entry_element_content=''
# iterate all child elements below mitigation_element,
for mitigation_element_child in mitigation_element.iter():
# extract the content
mitigation_element_child_content=mitigation_element_child.text
# if there is no content under the element or if this a nested element that contain one more element, then move to the next
if mitigation_element_child_content.isspace():
continue
# if not, merge the content
else:
mitigation_element_content+=mitigation_element_child_content
# store the tag and content for each mitigation element to the dictionary
mitigation_dict[mitigation_element_tag]=mitigation_element_content.strip()
# add each mitigation element dictionary to mitigation_list
mitigation_list.append(mitigation_dict)
# pair the cwe_id with the mitigation contents
potential_mitigations_dict=dict()
potential_mitigations_dict[cwe_id]=mitigation_list
return potential_mitigations_dict
Explanation: 2.2 XML Node Field Parser
Once the node is provided by the former function, its XML structure is consistent for that field within a given CWE version, but it may differ across versions. For example, a table represented in XML is different than a paragraph in XML. However, even for the same expected structure such as a table, the XML may have different tags used within it.
The associated parser function is then tested to cover all possible cases of this field in the XML, and also interpreted against its .html format to understand its purpose. We again refer to the introductory notebook Sections 4 and 5, which convey respectively the potential usage of the field for PERCEIVE and the overall format of the structure when compared to others.
The purpose is then documented as part of the functions documentation (sub-sections of this Section), conveying the rationale behind what was deemed potentially useful in the content to be kept, as well as how certain tags were mapped to a data structure while being removed.
The parser function outputs one of the known data structures (i.e. memory representation) that is shared among the different fields of what is deemed relevant. For example, while 5 field nodes may have their relevant information in different ways, they may all be at the end of the day tables, named lists, or single blocks of texts. Sharing a common representation in this stage decouples the 3rd step of the pipeline from understanding the many ways the same information is stored in the XML.
Because different CWE versions may organize a .html table, bullet list etc in different ways even for the same field, this organization also decouples the previous and following section functions from being modified on every new version if necessary.
The following fields have parser functions as of today:
|Field Name| Function Name|
|:---:|:----:|
|Potential_Mitigations|parse_potential_mitigations|
|Common_Consequences|parse_common_consequences|
2.2.1 Parse Potential_Mitigations
Potential_Mitigations field has a nested structure under the field element. To understand the nesting structure, here we use the following image for cwe-1022 as an example. Under the Potential_Mitigations element, there are two mitigation entries named 'Mitigation', which represent ways to mitigate the weakness in the development cycle. Under each mitigation node, there are multiple sub-entries that constitute one mitigation (phase and description in the cwe-1022 example), which hold the contents that our parser is intended to extract.
To preserve the named list format of Potential_Mitigations, we use the dictionary to pair the CWE id and the content we parse from Potential_Mitigations field. Since there are multiple mitigation methods to mitigate the weakness, a list of dictionaries will be used to store the content, where the number of dictionaries is equal to the number of mitigation methods. And then the tag and the corresponding content will be paired in each dictionary. In summary, the data structure in memory can be represented as the following format: {CWE_id: [{tag1:content1, tag2: content2..}, {tag1:content3, tag2:content4..}...]}. More details can be found in the example of cwe-1022.
There are two special cases when parsing Potential_Mitigations field:
1) Various sub-entries:
Some Mitigation nodes may contain additional sub-entries beyond phase and description, such as strategy, effectiveness and effectiveness_notes. These entries can be found in cwe-1004 and cwe-106. In this case, the parser stores the tag and content in the same way as phase and description.
2) HTML tags under Description node:
In some cases, the content under the Description node will be stored in multiple html elements, such as p, li, div, and ul. These html tags are used to separate the sentences of a paragraph. For example, there are two html elements <p> under the description of the second mitigation node in the following images. By comparing to how the contents are represented on the website, we conclude the tag <p> is not worth keeping. Therefore, in this case, the parser will concatenate the content of the description under the same mitigation node and remove the tag <p>.
Since the number of elements varies depending on the CWE_id, here is the cardinality of these tags:
|Tag|Cardinality|
|:---:|:---:|
|Phase|1|
|Description|1|
|Strategy|0 or 1|
|Effectiveness|0 or 1|
|Effectiveness_Notes|0 or 1|
<b>How the content is represented on the website (cwe-1022)</b>
<b>How the content is represented in the xml file (cwe-1022)</b>
End of explanation
def parse_common_consequences(common_consequences_node):
'''
The parser function concern is abstracting how the Common_Consequences field is stored in XML,
and provide it in a common and simpler data structure
Args:
- common_consequences_node: the node that has Common_Consequences tag, such as the above image
Outcomes:
- A dictionary that pairs cwe_id as key and the consequence list as value.
In the dictionary, the consequence list will be a list of dictionaries that each dictionary pairs tag and the corresponding content for each consequence.
More details can be found in the following example for cwe-103.
'''
# extract cwe_id from the attribute of common_consequences element's parent node
cwe_id=common_consequences_node.getparent().attrib.get('ID')
cwe_id='CWE_'+cwe_id
    # the consequence list, in which each element represents an individual consequence element
consequence_list=[]
target_field=common_consequences_node.tag
# for each consequence node under the common_consequence node
for consequence in list(common_consequences_node):
# the dictionary that contain the information for each consequence element
consequence_dict=dict()
# traverse all consequence_element nodes under each consequence node
for consequence_element in list(consequence):
# generate tag and content of each consequence_element
consequence_element_tag=consequence_element.tag.lower()
consequence_element_content=consequence_element.text.strip()
# parse the note content directly as the value
if consequence_element_tag=='note':
consequence_dict[consequence_element_tag]=consequence_element_content
# for scope and impact, parse the content for scope and impact as tuple
else:
# if the tag is already in the dictionary, add the content to the existing tuple
if consequence_element_tag in consequence_dict:
consequence_dict[consequence_element_tag]+=(consequence_element_content,)
# if not, create a tuple to contain the content
else:
consequence_dict[consequence_element_tag]=(consequence_element_content,)
        # add each consequence element dictionary to consequence_list
consequence_list.append(consequence_dict)
# pair the cwe_id with the consequence contents
common_consequences_dict=dict()
common_consequences_dict[cwe_id]=consequence_list
return common_consequences_dict
Explanation: Through function <b> parse_potential_mitigations</b>, the above Potential_Mitigations node for cwe_1022 will be parsed into the following data format in memory.
<b> How the content is represented in memory (cwe-1022)</b>
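Schematically (with placeholder text instead of the real cwe-1022 wording, since the original screenshot is not reproduced here), the returned structure looks like:
{'CWE_1022': [{'phase': '<phase of the first mitigation>',
               'description': '<its concatenated description text>'},
              {'phase': '<phase of the second mitigation>',
               'description': '<its concatenated description text>'}]}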
2.2.2 Parse Common_Consequences
Common_Consequences field has a nested structure under the field element.
To understand the nesting structure, here we use the Common_Consequences field in cwe-103 as an example. Under the Common_Consequences element, there are two field entries named 'Consequence', which represent two different consequences associated with the weakness. Under each consequence element, three entry elements constitute one weakness consequence, including scope, impact, and note, which hold the contents that our parser is intended to extract.
To preserve the table format of Common_Consequences, we use a dictionary to pair the CWE id with the content we parse from the Common_Consequences field. Since there are multiple consequences for one weakness, a list of dictionaries will be used to store the content, where the number of dictionaries is equal to the number of consequences. Since one consequence may have multiple impacts and scopes but only one note, we use a tuple to store the content of impact and scope, while storing the content of note directly. In summary, the data structure in memory can be represented in the following format: {CWE_id: [{'Scope': scope tuple, 'Impact': impact tuple, 'Note': Text}, {'Scope': scope tuple, 'Impact': impact tuple, 'Note': Text}...]}. More details can be found in the example of cwe-103.
Since the number of elements varies depending on the CWE entry, here is the cardinality of these tags:
|Tag|Cardinality|
|:---:|:---:|
|Scope|1 or more|
|Impact|1 or more|
|Note|0 or 1|
<b>How the content is represented on the website (cwe-103)</b>
<b>How the content is represented in the xml file (cwe-103)</b>
End of explanation
def export_data(target_field_node):
'''This section code will be done in the future.'''
pass
Explanation: Through function <b> parse_common_consequences</b>, the above Common_Consequences node for cwe_103 will be parsed into the following data format in memory.
<b> How the content is represented in memory (cwe-103)</b>
3. Export Data Structure
At the point this notebook is being created, it is still an open question how tables, bullet lists and other potential structures in CWE will be used for topic modeling. For example, should rows in a table be concatenated into a paragraph and used as a document? What if the same word is repeated so as to indicate a lifecycle?
In order to keep this notebook future-proof, this section abstracts how we will handle each field representation (e.g. table, bullet list, etc.) from the memory data structure. It also keeps things flexible for multiple purposes: a table may be parsed for content for topic modeling, but also for extracting graph relationships (e.g. the Related Attack Pattern and Related Weaknesses fields contain hyperlinks to other CWE entries which could be reshaped as a graph).
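Until that decision is made, one possible placeholder (purely illustrative; the function name export_data_draft and the plain-text output format are assumptions, not the final design) could simply dump each parsed entry to a text file named after its CWE id:
import os

def export_data_draft(parsed_dict, out_dir='parsed_output'):
    # Illustrative sketch only: write each CWE entry's parsed content to a plain-text file.
    os.makedirs(out_dir, exist_ok=True)
    for cwe_id, entries in parsed_dict.items():
        with open(os.path.join(out_dir, cwe_id + '.txt'), 'w') as out_file:
            for entry in entries:
                for tag, content in entry.items():
                    out_file.write('{}: {}\n'.format(tag, content))
                out_file.write('\n')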
End of explanation
if __name__ == "__main__":
# extract the nodes, whose tag is Potential_Mitigations,from cwe_xml_file
potential_mitigations_nodes=extract_target_field_elements('Potential_Mitigations',cwe_xml_file)
# read each Potential_Mitigation node
for potential_mitigations_node in potential_mitigations_nodes:
# parse the content for each potential_mitigation node
potential_mitigations_info=parse_potential_mitigations(potential_mitigations_node)
# export the parsed content TO-DO
export_data(potential_mitigations_info)
Explanation: 4. Main Execution
After developing the 3-step parsing pipeline, this section combines these 3 steps and produces the output file for the different fields. As introduced in Section 2, although the parsing procedure stays the same for all fields, each field has its own parsing function, while fields with the same format may share the same exporting function. As a result, the main execution varies for each field.
4.1 Main execution for Potential_Mitigations
The main execution will combine the above 3-step parsing pipeline for Potential_Mitigations. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Potential_Mitigations for all CWE_ids.
End of explanation
if __name__ == "__main__":
# extract the nodes, whose tag is Common_Consequences, from cwe_xml_file
common_consequences_nodes=extract_target_field_elements('Common_Consequences',cwe_xml_file)
# read each Common_Consequences node
for common_consequences_node in common_consequences_nodes:
# parse the content for each common_consequence node
common_consequence_info=parse_common_consequences(common_consequences_node)
# export the parsed content TO-DO
export_data(common_consequence_info)
Explanation: 4.2 Main execution for Common_Consequences
The main execution will combine the above 3-step parsing pipeline for Common_Consequences. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Common_Consequences for all CWE_ids.
End of explanation |
10,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy essentials
NumPy is a Python library for manipulation of vectors and arrays. We import it just like any Python module
Step1: Vector creation | Python Code:
import numpy as np
Explanation: NumPy essentials
NumPy is a Python library for manipulation of vectors and arrays. We import it just like any Python module:
End of explanation
# From Python lists or iterators
n1 = np.array( [0,1,2,3,4,5,6] )
n2 = np.array( range(6) )
# Using numpy's arange to build evenly spaced values with a float step
n3 = np.arange( 10, 20, 0.1)
n3
Explanation: Vector creation
End of explanation |
10,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2019 Book Update
Step1: Read in my Goodreads Export
Go to https
Step2: Select the books from the goodreads dump
Get relevant columns
Sort by newest
Remove unsightly NaNs,
Filter to only 'read' books and then get rid of filter column
Only books added to Goodreads in 2019
Step3: Everyone loves star ratings
Step4: Create the style for display
Step5: Add the HTML to my clipboard to copy to the blog | Python Code:
import pandas as pd
import numpy as np
Explanation: 2019 Book Update
End of explanation
book_df = pd.read_csv('goodreads_library_export.csv')
book_df['Date Added'] = pd.to_datetime(book_df['Date Added'],
format="%Y/%m/%d")
book_df.columns
Explanation: Read in my Goodreads Export
Go to https://www.goodreads.com/review/import and hit 'export library'
Convert string datetime to datetime object
End of explanation
cols = ['Title', 'Author', 'Date Added', 'Number of Pages',
'My Rating', 'Exclusive Shelf']
export_df = (
book_df[cols]
.sort_values('Date Added', ascending=False)
.fillna('')
[book_df['Exclusive Shelf'] == "read"]
    [book_df['Date Added'] > pd.Timestamp(2019, 1, 1)]
.drop('Exclusive Shelf', axis=1)
)
export_df.loc[14, 'Number of Pages'] = 576.0 # Not in the GR db, apparently
export_df
Explanation: Select the books from the goodreads dump
Get relevant columns
Sort by newest
Remove unsightly NaNs,
Filter to only 'read' books and then get rid of filter column
Only books added to Goodreads in 2019
End of explanation
def ratings_to_stars(rating):
return ("★" * rating +
"✩" * (5-rating))
export_df['My Rating'] = export_df['My Rating'].apply(ratings_to_stars)
export_df.head()
Explanation: Everyone loves star ratings
End of explanation
# Define hover behavior
hover_props = [("background-color", "#CCC")]
# Set CSS properties for th elements in dataframe
th_props = [
('font-size', '13px'),
('text-align', 'left'),
('font-weight', 'bold'),
('color', '#6d6d6d'),
('background-color', '#f7f7f9'),
]
# Set CSS properties for td elements in dataframe
td_props = [
('font-size', '12px'),
('padding', '0.75em 0.75em'),
('max-width', "250px")
]
# Set table styles
styles = [
dict(selector="tr:hover", props=hover_props),
dict(selector="th", props=th_props),
dict(selector="td", props=td_props)
]
style_df = (
export_df
.reset_index(drop=True)
.style
.set_table_styles(styles)
.bar(subset=['Number of Pages'], color='#999')
.format({"Date Added": lambda x: x.strftime(format="%Y-%m-%d")})
.format({"Number of Pages": lambda x: int(x)})
)
style_df
Explanation: Create the style for display
End of explanation
import clipboard
clipboard.copy(style_df.render())
Explanation: Add the HTML to my clipboard to copy to the blog
End of explanation |
10,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scaling Gaussian Processes to big datasets
This notebook was made with the following version of george
Step1: One of the biggest technical challenges faced when using Gaussian Processes to model big datasets is that the computational cost naïvely scales as $\mathcal{O}(N^3)$ where $N$ is the number of points in your dataset. This cost can be prohibitive even for moderately sized datasets. There are a lot of methods for making these types of problems tractable by exploiting structure or making approximations. George comes equipped with one approximate method with controllable precision that works well with one-dimensional inputs (time series, for example). The method comes from this paper and it can help speed up many—but not all—Gaussian Process models.
To demonstrate this method, in this tutorial, we'll benchmark the two Gaussian Process "solvers" included with george. For comparison, we'll also measure the computational cost of the same operations using the popular GPy library and the new scikit-learn interface. Note that GPy is designed as a Gaussian Process toolkit and it comes with a huge number of state-of-the-art algorithms for the application of Gaussian Processes, and it is not meant for efficiently computing marginalized likelihoods, so the comparison isn't totally fair.
As usual, we'll start by generating a large fake dataset
Step2: The standard method for computing the marginalized likelihood of this dataset under a GP model is
Step3: When using only 100 data points, this computation is very fast but we could also use the approximate solver as follows
Step4: The new scikit-learn interface is quite similar (you'll need to install a recent version of scikit-learn to execute this cell)
Step5: To implement this same model in GPy, you would do something like (I've never been able to get the heteroscedastic regression to work in GPy)
Step6: Now that we have working implementations of this model using all of the different methods and modules, let's run a benchmark to look at the computational cost and scaling of each option. The code here doesn't matter too much but we'll compute the best-of-"K" runtime for each method where "K" depends on how long I'm willing to wait. This cell takes a few minutes to run.
Step7: Finally, here are the results of the benchmark plotted on a logarithmic scale | Python Code:
import george
george.__version__
Explanation: Scaling Gaussian Processes to big datasets
This notebook was made with the following version of george:
End of explanation
import numpy as np
import matplotlib.pyplot as pl
np.random.seed(1234)
x = np.sort(np.random.uniform(0, 10, 50000))
yerr = 0.1 * np.ones_like(x)
y = np.sin(x)
Explanation: One of the biggest technical challenges faced when using Gaussian Processes to model big datasets is that the computational cost naïvely scales as $\mathcal{O}(N^3)$ where $N$ is the number of points in your dataset. This cost can be prohibitive even for moderately sized datasets. There are a lot of methods for making these types of problems tractable by exploiting structure or making approximations. George comes equipped with one approximate method with controllable precision that works well with one-dimensional inputs (time series, for example). The method comes from this paper and it can help speed up many—but not all—Gaussian Process models.
To demonstrate this method, in this tutorial, we'll benchmark the two Gaussian Process "solvers" included with george. For comparison, we'll also measure the computational cost of the same operations using the popular GPy library and the new scikit-learn interface. Note that GPy is designed as a Gaussian Process toolkit and it comes with a huge number of state-of-the-art algorithms for the application of Gaussian Processes, and it is not meant for efficiently computing marginalized likelihoods, so the comparison isn't totally fair.
As usual, we'll start by generating a large fake dataset:
End of explanation
from george import kernels
kernel = np.var(y) * kernels.ExpSquaredKernel(1.0)
gp_basic = george.GP(kernel)
gp_basic.compute(x[:100], yerr[:100])
print(gp_basic.log_likelihood(y[:100]))
Explanation: The standard method for computing the marginalized likelihood of this dataset under a GP model is:
End of explanation
gp_hodlr = george.GP(kernel, solver=george.HODLRSolver, seed=42)
gp_hodlr.compute(x[:100], yerr[:100])
print(gp_hodlr.log_likelihood(y[:100]))
Explanation: When using only 100 data points, this computation is very fast but we could also use the approximate solver as follows:
End of explanation
import sklearn
print("sklearn version: {0}".format(sklearn.__version__))
from sklearn.gaussian_process.kernels import RBF
from sklearn.gaussian_process import GaussianProcessRegressor
kernel_skl = np.var(y) * RBF(length_scale=1.0)
gp_skl = GaussianProcessRegressor(kernel_skl,
alpha=yerr[:100]**2,
optimizer=None,
copy_X_train=False)
gp_skl.fit(x[:100, None], y[:100])
print(gp_skl.log_marginal_likelihood(kernel_skl.theta))
Explanation: The new scikit-learn interface is quite similar (you'll need to install a recent version of scikit-learn to execute this cell):
End of explanation
import GPy
print("GPy version: {0}".format(GPy.__version__))
kernel_gpy = GPy.kern.RBF(input_dim=1, variance=np.var(y), lengthscale=1.)
gp_gpy = GPy.models.GPRegression(x[:100, None], y[:100, None], kernel_gpy)
gp_gpy['.*Gaussian_noise'] = yerr[0]**2
print(gp_gpy.log_likelihood())
Explanation: To implement this same model in GPy, you would do something like (I've never been able to get the heteroscedastic regression to work in GPy):
End of explanation
import time
ns = np.array([50, 100, 200, 500, 1000, 5000, 10000, 50000], dtype=int)
t_basic = np.nan + np.zeros(len(ns))
t_hodlr = np.nan + np.zeros(len(ns))
t_gpy = np.nan + np.zeros(len(ns))
t_skl = np.nan + np.zeros(len(ns))
for i, n in enumerate(ns):
# Time the HODLR solver.
best = np.inf
for _ in range(100000 // n):
strt = time.time()
gp_hodlr.compute(x[:n], yerr[:n])
gp_hodlr.log_likelihood(y[:n])
dt = time.time() - strt
if dt < best:
best = dt
t_hodlr[i] = best
# Time the basic solver.
best = np.inf
for _ in range(10000 // n):
strt = time.time()
gp_basic.compute(x[:n], yerr[:n])
gp_basic.log_likelihood(y[:n])
dt = time.time() - strt
if dt < best:
best = dt
t_basic[i] = best
# Compare to the proposed scikit-learn interface.
best = np.inf
if n <= 10000:
gp_skl = GaussianProcessRegressor(kernel_skl,
alpha=yerr[:n]**2,
optimizer=None,
copy_X_train=False)
gp_skl.fit(x[:n, None], y[:n])
for _ in range(10000 // n):
strt = time.time()
gp_skl.log_marginal_likelihood(kernel_skl.theta)
dt = time.time() - strt
if dt < best:
best = dt
t_skl[i] = best
# Compare to GPy.
best = np.inf
for _ in range(5000 // n):
kernel_gpy = GPy.kern.RBF(input_dim=1, variance=np.var(y), lengthscale=1.)
strt = time.time()
gp_gpy = GPy.models.GPRegression(x[:n, None], y[:n, None], kernel_gpy)
gp_gpy['.*Gaussian_noise'] = yerr[0]**2
gp_gpy.log_likelihood()
dt = time.time() - strt
if dt < best:
best = dt
t_gpy[i] = best
Explanation: Now that we have working implementations of this model using all of the different methods and modules, let's run a benchmark to look at the computational cost and scaling of each option. The code here doesn't matter too much but we'll compute the best-of-"K" runtime for each method where "K" depends on how long I'm willing to wait. This cell takes a few minutes to run.
End of explanation
pl.loglog(ns, t_gpy, "-o", label="GPy")
pl.loglog(ns, t_skl, "-o", label="sklearn")
pl.loglog(ns, t_basic, "-o", label="basic")
pl.loglog(ns, t_hodlr, "-o", label="HODLR")
pl.xlim(30, 80000)
pl.ylim(1.1e-4, 50.)
pl.xlabel("number of datapoints")
pl.ylabel("time [seconds]")
pl.legend(loc=2, fontsize=16);
Explanation: Finally, here are the results of the benchmark plotted on a logarithmic scale:
End of explanation |
10,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: Python is an interpreted language
fire up an interpreter
assign a value to a variable
print something
switch to ipython
Fundamentals
Python uses whitespaces (tabs and spaces) to structure code rather than braces.
Terminating semicolons are not required.
Step2: Why should one do this when designing a language?
When putting several statements into one line you must use semicolons to seperate them.
Step3: But for the sake of readability you shouldn't do so.
Comments
Step4: Functions and methods
Functions are called using parentheses with zero or more arguments. A value may be returned.
Step5: Methods are called using this syntax
Step6: Positional and keyword arguments
Step7: is equal to
Step8: References
Step9: functions are also called by reference.
Dynamic references
In contrast to Java, Python references are typeless. There is no problem with this
Step10: Strong typing
Step11: Types can be checked using isinstance
Step12: You can also pass it a list of types
Step13: Attributes and methods
Attributes and methods are accessed using the obj.attribute or the obj.methods() notation.
Step14: Imports
Python modules are simple <name>.py files container functions, classes and variables.
Step15: Operators
Most operators work as you might expect
Step16: Equality
== tests for equivalence
is tests for identity
Step17: Mutability of objects
In Python there are mutable objects like lists, dicts and user-defined types (classes).
Step18: But there are also immutable objects like 'tuples' and strings
Step20: Scalar Types
Python has a couple of build in scalar types | Python Code:
from IPython.display import Image
Image('images/mem0.jpg')
Image('images/mem1.jpg')
Image('images/C++_machine_learning.png')
Image('images/Java_machine_learning.png')
Image('images/Python_machine_learning.png')
Image('images/R_machine_learning.png')
Explanation: Note: We are using Python here, not Python 2. This means we are using Python 3!
Note: This lecture comprises only those parts of Python that are fundamental to Machine Learning purposes.
Note: Stop writing, listen to me.
Why do I have to learn Python?
End of explanation
def minimum(x, y):
if x < y:
return x
else:
return y
Explanation: Python is an interpreted language
fire up an interpreter
assign a value to a variable
print something
switch to ipython
Fundamentals
Python uses whitespaces (tabs and spaces) to structure code rather than braces.
Terminating semicolons are not required.
End of explanation
a = 42; b = 7; c = 23
print(a, b)
Explanation: Why should one do this when designing a language?
When putting several statements into one line you must use semicolons to separate them.
End of explanation
# This is a comment
#
# This is a multi line comment
#
Explanation: But for the sake of readability you shouldn't do so.
Comments
End of explanation
foo = min(2, 3)
foo
Explanation: Functions and methods
Functions are called using parentheses with zero or more arguments. A value may be returned.
End of explanation
obj.method(parameters)
Explanation: Methods are called using this syntax
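For example, list objects expose methods such as append and sort:
numbers = [3, 1, 2]
numbers.append(4)   # method call with one argument
numbers.sort()      # method call with no arguments
numbers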
End of explanation
f(a, b, c=5, d='foo')
Explanation: Positional and keyword arguments
End of explanation
f(a, b, d='foo', c=5)
Explanation: is equal to
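To make this concrete, here is a hypothetical f with two positional and two keyword arguments; both call orders give the same result:
def f(a, b, c=0, d=''):
    return (a + b + c, d)

f(1, 2, c=5, d='foo') == f(1, 2, d='foo', c=5)  # True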
End of explanation
a = [1, 2, 3]
b = a
b.append(4)
b
a
Explanation: References
End of explanation
a = 5
print(type(a))
a = 'foo'
print(type(a))
Explanation: functions are also called by reference.
Dynamic references
In contrast to Java, Python references are typeless. There is no problem with this:
End of explanation
5 + '5'
Explanation: Strong typing
End of explanation
isinstance(4, int)
isinstance(4.5, int)
Explanation: Types can be checked using isinstance
End of explanation
isinstance(4.5, (int, float))
minimum.bar = lambda x: x
minimum.bar('foo')
Explanation: You can also pass it a list of types
End of explanation
a = 'foo'
a.count('o')
a.<TAB>
Explanation: Attributes and methods
Attributes and methods are accessed using the obj.attribute or the obj.methods() notation.
End of explanation
%%bash
ls -la material/foo.py
%%bash
cat material/foo.py
from foo import PI as pi
from foo import f
import foo
f()
pi
foo.f()
foo.PI
Explanation: Imports
Python modules are simple <name>.py files containing functions, classes and variables.
End of explanation
7 + 4
5 / 2
4 < 3
4 ** 2
id('this is a somewhat long text')
id('this is a somewhat long text')
Explanation: Operators
Most operators work as you might expect
End of explanation
10000000 + 10000000 is 20000000
10000000 + 10000000 == 20000000
Explanation: Equality
== tests for equivalence
is tests for identity
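A small example with lists makes the difference clear:
x = [1, 2, 3]
y = x          # y refers to the very same object
z = [1, 2, 3]  # z is an equivalent but distinct object
x is y, x == y, x is z, x == z  # (True, True, False, True)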
End of explanation
a_list = [0, 1, 2]
a_list
a_list[1] = 3
a_list
Explanation: Mutability of objects
In Python there are mutable objects like lists, dicts and user-defined types (classes).
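Dicts behave the same way, for example:
a_dict = {'answer': 41}
a_dict['answer'] = 42     # existing entries can be changed in place
a_dict['question'] = '?'  # and new entries can be added
a_dict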
End of explanation
a_tuple = (0, 1, 2)
a_tuple
a_tuple[0] = 4
foo = 'bar'
id(foo)
foo = foo + 'bar'
id(foo)
Explanation: But there are also immutable objects like 'tuples' and strings
End of explanation
4
4 / 2
7.3e-4
float(43)
str('foobar')
'foobar'
"foobar"
"""multiline
foobar"""
foo = 'bar'
foo[1]
str(4)
int('4')
s = 'foo bar'
s[0:3]
list(s)
s = 'bob\'s bar'
print(s)
a = 'foo'
b = 'bar'
a + b
pi = 3.14159
'pi is {:.2f}'.format(pi)
Explanation: Scalar Types
Python has a couple of built-in scalar types:
* int (arbitrary precision in Python 3)
* bool
* None
* float (64-bit)
* str
End of explanation |
10,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
text
Header
TeX syntax is used to edit the formula below
$$ c = \sqrt{a^2 + b^2}$$
Step1: Below are the equivalents of the above commands for Windows users
Step2: Delete the directory if it is not needed (Windows)
! echo 'hello, world!'
!echo $t
%%bash
mkdir test_directory
cd test_directory/
ls -a
# delete the directory if it is not needed
! rm -r test_directory
Explanation: text
Header
TeX syntax is used to edit the formula below
$$ c = \sqrt{a^2 + b^2}$$
End of explanation
%%cmd
mkdir test_directory
cd test_directory
dir
Explanation: Below are the equivalents of the above commands for Windows users:
End of explanation
%%cmd
rmdir test_directory
%lsmagic
%pylab inline
y = range(11)
y
plot(y)
Explanation: Delete the directory if it is not needed (Windows)
End of explanation |
10,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 0.5, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
import pickle
with open('../../data/simple_q_learner_fast_learner_3_actions.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation |
10,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skyrme example
Step1: Link the O$_2$scl library
Step2: Get the value of $\hbar c$ from an O$_2$scl find_constants object
Step3: Get a copy (a pointer to) the O$_2$scl unit conversion object
Step4: Create neutron and proton objects and set their spin degeneracy and
masses. The masses are expected to be in units of inverse
femtometers.
Step5: Create the Skyrme EOS object and load the NRAPR parameterization
Step6: Compute nuclear saturation and output the saturation density
and binding energy
Step7: Create the nstar_cold object for automatically computing the
beta-equilibrium EOS and solving the TOV equations.
Step8: Let the nstar_cold object know we want to use the NRAPR EOS
Step9: Compute the EOS
Step10: Summarize the columns in the EOS table
Step11: Compute the M-R curve using the TOV equations
Step12: Get the table for the TOV results
Step13: Summarize the columns in the TOV table | Python Code:
import o2sclpy
Explanation: Skyrme example
End of explanation
link=o2sclpy.linker()
link.link_o2scl_o2graph(True,True)
Explanation: Link the O$_2$scl library
End of explanation
fc=o2sclpy.find_constants(link)
hc=fc.find_unique('hbarc','MeV*fm')
print('hbarc = %7.6e' % (hc))
Explanation: Get the value of $\hbar c$ from an O$_2$scl find_constants object
End of explanation
cu=link.o2scl_settings.get_convert_units()
Explanation: Get a copy (a pointer to) the O$_2$scl unit conversion object
End of explanation
neut=o2sclpy.fermion(link)
neut.g=2.0
neut.m=cu.convert('g','1/fm',fc.find_unique('massneutron','g'))
prot=o2sclpy.fermion(link)
prot.g=2.0
prot.m=cu.convert('g','1/fm',fc.find_unique('massproton','g'))
Explanation: Create neutron and proton objects and set their spin degeneracy and
masses. The masses are expected to be in units of inverse
femtometers.
End of explanation
sk=o2sclpy.eos_had_skyrme(link)
o2sclpy.skyrme_load(link,sk,'NRAPR',False,0)
Explanation: Create the Skyrme EOS object and load the NRAPR parameterization
End of explanation
sk.saturation()
print('NRAPR: n0=%7.6e 1/fm^3, E/A=%7.6e MeV' % (sk.n0,sk.eoa*hc))
print('')
Explanation: Compute nuclear saturation and output the saturation density
and binding energy
End of explanation
nc=o2sclpy.nstar_cold(link)
Explanation: Create the nstar_cold object for automatically computing the
beta-equilibrium EOS and solving the TOV equations.
End of explanation
nc.set_eos(sk)
Explanation: Let the nstar_cold object know we want to use the NRAPR EOS
End of explanation
ret1=nc.calc_eos(0.01)
Explanation: Compute the EOS
End of explanation
eos_table=nc.get_eos_results()
print('EOS table:')
for i in range(0,eos_table.get_ncolumns()):
col=eos_table.get_column_name(i)
unit=eos_table.get_unit(col)
print('Column',i,str(col,'UTF-8'),str(unit,'UTF-8'))
print('')
# from https://stackoverflow.com/questions/24277488/in-python-how-to-capture-
# the-stdout-from-a-c-shared-library-to-a-variable
import ctypes
import os
import sys
import threading
# Create pipe and dup2() the write end of it on top of stdout, saving a copy
# of the old stdout
#stdout_fileno = sys.stdout.fileno()
stdout_fileno=1
stdout_save = os.dup(stdout_fileno)
stdout_pipe = os.pipe()
os.dup2(stdout_pipe[1], stdout_fileno)
os.close(stdout_pipe[1])
captured_stdout = ''
def drain_pipe():
global captured_stdout
while True:
data = os.read(stdout_pipe[0], 1024)
if not data:
break
captured_stdout += str(data,'UTF-8')
t = threading.Thread(target=drain_pipe)
t.start()
ret2=nc.calc_nstar()
# Close the write end of the pipe to unblock the reader thread and trigger it
# to exit
os.close(stdout_fileno)
t.join()
# Clean up the pipe and restore the original stdout
os.close(stdout_pipe[0])
os.dup2(stdout_save, stdout_fileno)
os.close(stdout_save)
print('y')
print(captured_stdout)
Explanation: Summarize the columns in the EOS table
End of explanation
# ret2=nc.calc_nstar()
Explanation: Compute the M-R curve using the TOV equations
End of explanation
tov_table=nc.get_tov_results()
print('')
Explanation: Get the table for the TOV results
End of explanation
print('TOV table:')
for i in range(0,tov_table.get_ncolumns()):
col=tov_table.get_column_name(i)
unit=tov_table.get_unit(col)
print('Column',i,str(col,'UTF-8'),str(unit,'UTF-8'))
print('')
Explanation: Summarize the columns in the TOV table
End of explanation |
10,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simon #metoo step 4
For sentiment analysis, we will use the VADER library.
Step1: We can get the tweets to analyse by reading the text column from our metoo dataset. We also read the dates column.
Step2: To skip some text cleaning and filtering steps here, we re-read the tweets from a pre-processed file.
Step3: Next, use VADER to calculate sentiment scores.
Step4: Iterate over the sentences list and the vader_scores list in parallel, to be able to add each sentence as a key to the dictionary of its scores.
Step5: Now vader_scores is a list of dictionaries with scores and sentences. We write it to a pandas dataframe. | Python Code:
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
import pandas as pd
pd.set_option('display.max_colwidth', -1)
Explanation: Simon #metoo step 4
For sentiment analysis, we will use the VADER library.
End of explanation
df = pd.read_csv("metoo_full_backup_3M.csv", index_col=0)
sentences = df.text
dates = df.created_at
Explanation: We can get the tweets to analyse by reading the text column from our metoo dataset. We also read the dates column.
End of explanation
sentences = open('metoo_tweets.txt', 'r').readlines()
Explanation: To skip some text cleaning and filtering steps here, we re-read the tweets from a pre-processed file.
End of explanation
vader_scores = []
numdocs = len(sentences)
for c,sentence in enumerate(sentences):
score = analyzer.polarity_scores(sentence)
vader_scores.append(score)
if c % 1000 == 0:
print("\r" + str(c) + "/" + str(numdocs) ,end = "")
Explanation: Next, use VADER to calculate sentiment scores.
End of explanation
for sentence, score_dict in zip(sentences, vader_scores):
score_dict['text'] = sentence
for date, score_dict in zip(dates, vader_scores):
score_dict['created_at'] = date
Explanation: Iterate over the sentences list and the vader_scores list in parallel, so that each sentence (and, in the second loop, its date) is added to its corresponding dictionary of scores.
End of explanation
vader_df = pd.DataFrame(vader_scores)[['text', 'created_at','compound', 'neg', 'neu', 'pos']]
vader_df = vader_df.sort_values('compound', ascending=True)
vader_df.head(7)
vader_df = pd.DataFrame(vader_scores)[['text', 'compound', 'created_at']]
vader_df = vader_df.sort_values('compound', ascending=True)
vader_df.head(7)
df = df.sort_values(by="text")
vader_df = vader_df.sort_values(by="text")
df = df.reset_index(drop=True)
vader_df = vader_df.reset_index(drop=True)
df.head()
df.tail()
vader_df.dtypes
firstday = (vader_df['created_at'] > '2017-10-28') & (vader_df['created_at'] < '2017-10-31')
firstday_df = df[firstday]
firstday_df = firstday_df.sort_values(by="sentiment", ascending = False)
firstday_df.to_csv("firstday.csv")
firstday_df
firstday_df = firstday_df.sort_values(by="created_at", ascending = True)
firstday_df
firstday_df.dtypes
vader_df.head()
vader_df.tail()
df['neg'] = vader_df['neg']
df['pos'] = vader_df['pos']
df['text2'] = vader_df.text
df.head()
df.to_csv("sentiment_dataframe.csv")
sentiments = df[['created_at', 'neg', 'pos']]
sentiments = sentiments.sort_values(by="created_at")
sentiments = sentiments.reset_index(drop=True)
sentiments.head()
sentiments.tail()
sentiments['created_at'] = pd.to_datetime(sentiments['created_at'])
groups = sentiments.groupby([sentiments['created_at'].dt.date])
daycol = []
posmeancol = []
negmeancol = []
for name, group in groups:
posmeancol.append(group.pos.mean())
negmeancol.append(group.neg.mean())
date = group.created_at.tolist()[0]
daycol.append(str(date)[:-9])
daycol = pd.Series(daycol)
posmeancol = pd.Series(posmeancol)
negmeancol = pd.Series(negmeancol)
sentdata = pd.concat([daycol, posmeancol, negmeancol], axis=1)
sentdata.columns=['day', 'posmean', 'negmean']
import matplotlib.pyplot as plt
from matplotlib import dates, pyplot
import matplotlib.ticker as ticker
%matplotlib inline
# Create a new figure
plt.figure(figsize=(10,6), dpi=100)
# Define x
#x = sentdata.day.tolist() # the day col in list-of-strings format
x = range(68)
xn = range(len(x)) # make it numerical 1 to n
plt.xticks(xn, x) # name the ticks
# What columns to plot
pos = sentdata.posmean
neg = sentdata.negmean
# Plot them
plt.plot(xn, pos, color="gray", linewidth=2.0, linestyle=":", label="Positive scores")
plt.plot(xn, neg, color="black", linewidth=2.0, linestyle="-", label = "Negative scores")
plt.legend(loc='upper left', frameon=False)
# Set axis ranges
plt.xlim(1,60)
plt.ylim(0,0.2)
# Label orientation and size
plt.xticks(rotation=0, fontsize = 8)
plt.yticks(rotation=0, fontsize = 8)
# Tweak axes more
ax = plt.gca() # get current axes in the plot
# Loop over the x labels and hide all
for label in ax.xaxis.get_ticklabels():
label.set_visible(False)
# Loop over every nth x label and set it to visible
for label in ax.xaxis.get_ticklabels()[::7]:
label.set_visible(True)
# Also, set the very first label to visible
ax.xaxis.get_ticklabels()[1].set_visible(True)
plt.ylabel("Mean weighted normalised composite score")
plt.xlabel("Day")
plt.savefig('sentiments.pdf')
sentdata_beginning = sentdata[(sentdata.day > '2017-10-25')]
sentdata_beginning.plot(kind='area')
october = sentiments[(sentiments.created_at < '2017-11-01')]
november = sentiments[(sentiments.created_at > '2017-10-31') & (sentiments.created_at <= '2017-11-30')]
december = sentiments[(sentiments.created_at > '2017-11-30')]
import seaborn as sns
%matplotlib inline
sns.violinplot(data=october, inner="box", orient = "h", bw=.03)
#sns.violinplot(data=corr_df, palette="Set3", bw=.2, cut=1, linewidth=1)
%matplotlib inline
sns.violinplot(data=november, inner="box", orient="h", bw=.03)
%matplotlib inline
sns.violinplot(data=december, inner="box", orient = "h", bw=.03)
dta = sentiments.head(100) # testdata
dta['item'] = dta.index
dta.head()
hexbin = sns.jointplot(x="item", y="sentiment", data=dta, kind="scatter")
#bins='log', cmap='inferno'
Explanation: Now vader_scores is a list of dictionaries with scores and sentences. We write it to a pandas dataframe.
End of explanation |
10,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resonant excitation
We want to study the behaviour of an undercritically damped SDOF system when it is
subjected to a harmonic force $p(t) = p_o \sin\omega_nt$, i.e., when the excitation frequency equals the free vibration frequency of the system.
Of course, $\beta=1$, $D(\beta,\zeta)|_{\beta=1}=\displaystyle\frac{1}{2\zeta}$
and $\theta=\pi/2$, hence $$\xi(t)=\Delta_{st}\,\frac{1}{2\zeta}\cos\omega_n t.$$
Starting from rest conditions, we have
$$\frac{x(t)}{\Delta_{st}} = \exp(-\zeta\omega_n t)\left(
-\frac{\omega_n}{2\omega_D}\sin(\omega_n t)
-\frac{1}{2\zeta}\cos(\omega_n t)\right) + \frac{1}{2\zeta}\cos(\omega_n t)$$
and, multiplying both sides by $2\zeta$
\begin{align}
x(t)\frac{2\zeta}{\Delta_{st}} = \bar{x}(t)& =
\exp(-\zeta\omega_n t)\left(
-\zeta\frac{\omega_n}{\omega_D}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t)\
& = \exp(-\zeta\omega_n t)\left(
-\frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t).
\end{align}
We have now a normalized function of time that grows, oscillating, from 0 to 1,
where the free parameters are just $\omega_n$ and $\zeta$.
To go further, we set arbitrarily $\omega_n=2\pi$ (our plots will be nicer...)
and have just a dependency on $t$ and $\zeta$.
Eventually, we define a function of $\zeta$ that returns a function of $t$ only,
here it is...
Step1: Above we compute some constants that depend on $\zeta$,
i.e., the damped frequency and the coefficient in
front of the sine term, then we define a function of time
in terms of these constants and of $\zeta$ itself.
Because we are going to use this function with a vector argument,
the last touch is to vectorize the function just before returning it
to the caller.
Plotting our results
We start by using a function defined in the numpy aka np module to
generate a vector whose entries are 1001 equispaced real numbers, starting from
zero and up to 20, inclusive of both ends, and assigning the name t to this vector.
Step2: We want to see what happens for different values of $\zeta$, so we create
a list of values and assign the name zetas to this list.
Step3: Now, the real plotting
Step4: Wait a minute!
So, after all this work, we have that the greater the damping, the smaller the
number of cycles that's needed to reach the maximum value of the response...
Yes, it's exactly like that, and there is a reason. Think of it.
.
.
.
.
.
.
.
We have normalized the response functions to have always a maximum absolute
value of one, but in effect the max values are different, and a heavily damped
system needs less cycles to reach steady-state because the maximum value is much,
much smaller.
Let's plot the unnormalized (well, there's still the $\Delta_{st}$ normalization)
responses.
Note the differences with above | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi, sqrt, sin, cos, exp

def x_normalized(t, z):
wn = w = 2*pi
wd = wn*sqrt(1-z*z)
# Clough Penzien p. 43
A = z/sqrt(1-z*z)
return (-cos(wd*t)-A*sin(wd*t))*exp(-z*wn*t) + cos(w*t)
Explanation: Resonant excitation
We want to study the behaviour of an undercritically damped SDOF system when it is
subjected to a harmonic force $p(t) = p_o \sin\omega_nt$, i.e., when the excitation frequency equals the free vibration frequency of the system.
Of course, $\beta=1$, $D(\beta,\zeta)|_{\beta=1}=\displaystyle\frac{1}{2\zeta}$
and $\theta=\pi/2$, hence $$\xi(t)=\Delta_{st}\,\frac{1}{2\zeta}\cos\omega_n t.$$
Starting from rest conditions, we have
$$\frac{x(t)}{\Delta_{st}} = \exp(-\zeta\omega_n t)\left(
-\frac{\omega_n}{2\omega_D}\sin(\omega_n t)
-\frac{1}{2\zeta}\cos(\omega_n t)\right) + \frac{1}{2\zeta}\cos(\omega_n t)$$
and, multiplying both sides by $2\zeta$
\begin{align}
x(t)\frac{2\zeta}{\Delta_{st}} = \bar{x}(t)& =
\exp(-\zeta\omega_n t)\left(
-\zeta\frac{\omega_n}{\omega_D}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t)\
& = \exp(-\zeta\omega_n t)\left(
-\frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_n t)
-\cos(\omega_n t)\right) + \cos(\omega_n t).
\end{align}
We have now a normalized function of time that grows, oscillating, from 0 to 1,
where the free parameters are just $\omega_n$ and $\zeta$.
To go further, we set arbitrarily $\omega_n=2\pi$ (our plots will be nicer...)
and have just a dependency on $t$ and $\zeta$.
Eventually, we define a function of $\zeta$ that returns a function of $t$ only,
here it is...
End of explanation
t = np.linspace(0,20,1001)
print(t)
Explanation: Above we compute some constants that depend on $\zeta$,
i.e., the damped frequency and the coefficient in
front of the sine term, then we define a function of time
in terms of these constants and of $\zeta$ itself.
Because we are going to use this function with a vector argument,
the last touch is to vectorize the function just before returning it
to the caller.
Plotting our results
We start by using a function defined in the numpy aka np module to
generate a vector whose entries are 1001 equispaced real numbers, starting from
zero and up to 20, inclusive of both ends, and assigning the name t to this vector.
End of explanation
zetas = (.02, .05, .10, .20)
print(zetas)
Explanation: We want to see what happens for different values of $\zeta$, so we create
a list of values and assign the name zetas to this list.
End of explanation
for z in zetas:
plt.plot(t, x_normalized(t, z))
plt.ylim((-1.0, 1.0))
plt.title(r'$\zeta=%4.2f$'%(z,))
plt.show()
Explanation: Now, the real plotting:
z takes in turn each of the values in zetas,
then we generate a function of time for the current z
we generate a plot with a line that goes through the point
(a(0),b(0)), (a(1),b(1)), (a(2),b(2)), ...
where, in our case, a is the vector t and b is the vector
returned from the vectorized function bar_x
we make a slight adjustement to the extreme values of the y-axis
of the plot
we give a title to the plot
we FORCE (plt.show()) the plot to be produced.
End of explanation
t = np.linspace(0,5,501)
for z in zetas:
plt.plot(t, x_normalized(t, z)/(2*z), label=r'$\zeta=%4.2f$'%(z,))
plt.legend(ncol=5,loc='lower center', fancybox=1, shadow=1, framealpha=.95)
plt.grid()
Explanation: Wait a minute!
So, after all this work, we have that the greater the damping, the smaller the
number of cycles that's needed to reach the maximum value of the response...
Yes, it's exactly like that, and there is a reason. Think of it.
.
.
.
.
.
.
.
We have normalized the response functions to have always a maximum absolute
value of one, but in effect the max values are different, and a heavily damped
system needs less cycles to reach steady-state because the maximum value is much,
much smaller.
Let's plot the unnormalized (well, there's still the $\Delta_{st}$ normalization)
responses.
Note the differences with above:
we focus on a shorter interval of time and, in each step
we don't add a title
we don't force the creation of a distinct plot in each cycle,
we add a label to each curve
at the end of the cycle,
we ask for the generation of a legend that uses the labels
we specified for the curves (because we never forced separate figures,
all the properly labeled curves drawn by plt.plot() end up in a single one), and we add a grid with plt.grid().
End of explanation |
10,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro. to Snorkel
Step1: We repeat our definition of the Spouse Candidate subclass from Parts II and III.
Step2: Using a labeled development set
In our setting here, we will use the phrase "development set" to refer to a small set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions.
In our case, we already loaded existing labels for a development set (split 1), so we can load them again now
Step3: Creating and Modeling a Noisy Training Set
Our biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps
Step4: Pattern-based LFs
These LFs express some common sense text patterns which indicate that a person pair might be married. For example, LF_husband_wife looks for words in spouses between the person mentions, and LF_same_last_name checks to see if the two people have the same last name (but aren't the same whole name).
Step5: Distant Supervision LFs
In addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these.
Step6: For later convenience we group the labeling functions into a list.
Step8: Developing Labeling Functions
Above, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF
Step9: One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. For example, we can easily get every candidate that this LF labels as true
Step10: We can then easily put this into the Viewer as usual (try it out!)
Step11: 2. Applying the Labeling Functions
Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. Note that this will delete any existing Labels and LabelKeys for this candidate set. We start by setting up the class
Step12: Finally, we run the labeler. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. Again, this can be run in parallel, given an appropriate database like Postgres is being used
Step13: If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too
Step14: Note that the returned matrix is a special subclass of the scipy.sparse.csr_matrix class, with some special features which we demonstrate below
Step15: We can also view statistics about the resulting label matrix.
Coverage is the fraction of candidates that the labeling function emits a non-zero label for.
Overlap is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a non-zero label for.
Conflict is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a conflicting non-zero label for.
Step16: 3. Fitting the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
Step17: We now apply the generative model to the training candidates to get the noise-aware training label set. We'll refer to these as the training marginals
Step18: We'll look at the distribution of the training marginals
Step19: We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model
Step20: Using the Model to Iterate on Labeling Functions
Now that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set
Step21: And finally, we get the score of the generative model
Step22: Interpreting Generative Model Performance
At this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here!
In fact, it is probably somewhat overfit to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a blind test set (i.e. one we never looked at during development).
Doing Some Error Analysis
At this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. To do this, we can again just use the Viewer
Step23: We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax
Step24: We can also now explore some of the additional functionalities of the lf_stats method for our dev set LF labels, L_dev
Step25: Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy.
Saving our training labels
Finally, we'll save the training_marginals, which are our probabilistic training labels, so that we can use them in the next tutorial to train our end extraction model | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Intro. to Snorkel: Extracting Spouse Relations from the News
Part II: Generating and modeling noisy training labels
In this part of the tutorial, we will write labeling functions which express various heuristics, patterns, and weak supervision strategies to label our data.
In most real-world settings, hand-labeled training data is prohibitively expensive and slow to collect. A common scenario, though, is to have access to tons of unlabeled training data, and have some idea of how to label it programmatically. For example:
We may be able to think of text patterns that would indicate two people mentioned in a sentence are married, such as seeing the word "spouse" between the mentions.
We may have access to an external knowledge base (KB) that lists some known pairs of married people, and can use these to heuristically label some subset of our data.
Our labeling functions will capture these types of strategies. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.
This will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.
End of explanation
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
Explanation: We repeat our definition of the Spouse Candidate subclass from Parts II and III.
End of explanation
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
Explanation: Using a labeled development set
In our setting here, we will use the phrase "development set" to refer to a small set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions.
In our case, we already loaded existing labels for a development set (split 1), so we can load them again now:
End of explanation
import re
from snorkel.lf_helpers import (
get_left_tokens, get_right_tokens, get_between_tokens,
get_text_between, get_tagged_text,
)
Explanation: Creating and Modeling a Noisy Training Set
Our biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps:
Creating labeling functions (LFs): This is where most of our development time would actually go into if this were a real application. Labeling functions encode our heuristics and weak supervision signals to generate (noisy) labels for our training candidates.
Applying the LFs: Here, we actually use them to label our candidates!
Training a generative model of our training set: Here we learn a model over our LFs, learning their respective accuracies automatically. This will allow us to combine them into a single, higher-quality label set.
We'll also add some detail on how to go about developing labeling functions and then debugging our model of them to improve performance.
1. Creating Labeling Functions
In Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing labeling functions (LFs) (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below.
A labeling function is just a Python function that accepts a Candidate and returns 1 to mark the Candidate as true, -1 to mark the Candidate as false, and 0 to abstain from labeling the Candidate (note that the non-binary classification setting is covered in the advanced tutorials!).
In the next stages of the Snorkel pipeline, we'll train a model to learn the accuracies of the labeling functions and reweight them accordingly, and then use them to train a downstream model. It turns out that by doing this, we can get high-quality models even with lower-quality labeling functions. So they don't need to be perfect! Now on to writing some:
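For instance, a tiny, purely hypothetical LF built on the get_between_tokens helper imported above might look like this (the real LFs for this task are defined in the cells that follow):
def LF_honeymoon(c):
    # hypothetical illustration: vote "true" when 'honeymoon' appears between the two mentions
    return 1 if 'honeymoon' in get_between_tokens(c) else 0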
End of explanation
spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'}
family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter',
'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'}
family = family | {f + '-in-law' for f in family}
other = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'}
# Helper function to get last name
def last_name(s):
name_parts = s.split(' ')
return name_parts[-1] if len(name_parts) > 1 else None
def LF_husband_wife(c):
return 1 if len(spouses.intersection(get_between_tokens(c))) > 0 else 0
def LF_husband_wife_left_window(c):
if len(spouses.intersection(get_left_tokens(c[0], window=2))) > 0:
return 1
elif len(spouses.intersection(get_left_tokens(c[1], window=2))) > 0:
return 1
else:
return 0
def LF_same_last_name(c):
p1_last_name = last_name(c.person1.get_span())
p2_last_name = last_name(c.person2.get_span())
if p1_last_name and p2_last_name and p1_last_name == p2_last_name:
if c.person1.get_span() != c.person2.get_span():
return 1
return 0
def LF_no_spouse_in_sentence(c):
return -1 if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else 0
def LF_and_married(c):
return 1 if 'and' in get_between_tokens(c) and 'married' in get_right_tokens(c) else 0
def LF_familial_relationship(c):
return -1 if len(family.intersection(get_between_tokens(c))) > 0 else 0
def LF_family_left_window(c):
if len(family.intersection(get_left_tokens(c[0], window=2))) > 0:
return -1
elif len(family.intersection(get_left_tokens(c[1], window=2))) > 0:
return -1
else:
return 0
def LF_other_relationship(c):
return -1 if len(other.intersection(get_between_tokens(c))) > 0 else 0
Explanation: Pattern-based LFs
These LFs express some common sense text patterns which indicate that a person pair might be married. For example, LF_husband_wife looks for words in spouses between the person mentions, and LF_same_last_name checks to see if the two people have the same last name (but aren't the same whole name).
End of explanation
import bz2
# Function to remove special characters from text
def strip_special(s):
return ''.join(c for c in s if ord(c) < 128)
# Read in known spouse pairs and save as set of tuples
with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f:
known_spouses = set(
tuple(strip_special(x.decode('utf-8')).strip().split(',')) for x in f.readlines()
)
# Last name pairs for known spouses
last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)])
def LF_distant_supervision(c):
p1, p2 = c.person1.get_span(), c.person2.get_span()
return 1 if (p1, p2) in known_spouses or (p2, p1) in known_spouses else 0
def LF_distant_supervision_last_names(c):
p1, p2 = c.person1.get_span(), c.person2.get_span()
p1n, p2n = last_name(p1), last_name(p2)
return 1 if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else 0
Explanation: Distant Supervision LFs
In addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these.
End of explanation
LFs = [
LF_distant_supervision, LF_distant_supervision_last_names,
LF_husband_wife, LF_husband_wife_left_window, LF_same_last_name,
LF_no_spouse_in_sentence, LF_and_married, LF_familial_relationship,
LF_family_left_window, LF_other_relationship
]
Explanation: For later convenience we group the labeling functions into a list.
End of explanation
def LF_wife_in_sentence(c):
    """A simple example of a labeling function"""
return 1 if 'wife' in c.get_parent().words else 0
Explanation: Developing Labeling Functions
Above, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF:
End of explanation
labeled = []
for c in session.query(Spouse).filter(Spouse.split == 1).all():
if LF_wife_in_sentence(c) != 0:
labeled.append(c)
print("Number labeled:", len(labeled))
Explanation: One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. For example, we can easily get every candidate that this LF labels as true:
End of explanation
from snorkel.lf_helpers import test_LF
tp, fp, tn, fn = test_LF(session, LF_wife_in_sentence, split=1, annotator_name='gold')
Explanation: We can then easily put this into the Viewer as usual (try it out!):
SentenceNgramViewer(labeled, session)
We also have a simple helper function for getting the empirical accuracy of a single LF with respect to the development set labels for example. This function also returns the evaluation buckets of the candidates (true positive, false positive, true negative, false negative):
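As a quick sanity check, those buckets can be turned into summary numbers (a sketch, assuming test_LF hands the buckets back as collections of candidates):
precision = len(tp) / max(len(tp) + len(fp), 1)
recall = len(tp) / max(len(tp) + len(fn), 1)
print('precision: %.3f  recall: %.3f' % (precision, recall))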
End of explanation
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
Explanation: 2. Applying the Labeling Functions
Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. Note that this will delete any existing Labels and LabelKeys for this candidate set. We start by setting up the class:
End of explanation
np.random.seed(1701)
%time L_train = labeler.apply(split=0)
L_train
Explanation: Finally, we run the labeler. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. Again, this can be run in parallel, given an appropriate database like Postgres is being used:
End of explanation
%time L_train = labeler.load_matrix(session, split=0)
L_train
Explanation: If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too:
End of explanation
L_train.get_candidate(session, 0)
L_train.get_key(session, 0)
Explanation: Note that the returned matrix is a special subclass of the scipy.sparse.csr_matrix class, with some special features which we demonstrate below:
End of explanation
L_train.lf_stats(session)
Explanation: We can also view statistics about the resulting label matrix.
Coverage is the fraction of candidates that the labeling function emits a non-zero label for.
Overlap is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a non-zero label for.
Conflict is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a conflicting non-zero label for.
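These statistics can also be reproduced by hand from the sparse label matrix; coverage, for example, is just the fraction of non-zero entries per column (a sketch using plain numpy/scipy operations):
import numpy as np
coverage = np.ravel((L_train != 0).sum(axis=0)) / float(L_train.shape[0])
print(coverage)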
End of explanation
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
gen_model.train(L_train, epochs=100, decay=0.95, step_size=0.1 / L_train.shape[0], reg_param=1e-6)
gen_model.weights.lf_accuracy
Explanation: 3. Fitting the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
End of explanation
train_marginals = gen_model.marginals(L_train)
Explanation: We now apply the generative model to the training candidates to get the noise-aware training label set. We'll refer to these as the training marginals:
End of explanation
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
Explanation: We'll look at the distribution of the training marginals:
End of explanation
gen_model.learned_lf_stats()
Explanation: We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model:
End of explanation
L_dev = labeler.apply_existing(split=1)
Explanation: Using the Model to Iterate on Labeling Functions
Now that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set:
End of explanation
tp, fp, tn, fn = gen_model.error_analysis(session, L_dev, L_gold_dev)
Explanation: And finally, we get the score of the generative model:
End of explanation
from snorkel.viewer import SentenceNgramViewer
# NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook
# You should ignore this!
import os
if 'CI' not in os.environ:
sv = SentenceNgramViewer(fn, session)
else:
sv = None
sv
c = sv.get_selected() if sv else list(fp.union(fn))[0]
c
Explanation: Interpreting Generative Model Performance
At this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here!
In fact, it is probably somewhat overfit to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a blind test set (i.e. one we never looked at during development).
Doing Some Error Analysis
At this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. To do this, we can again just use the Viewer:
End of explanation
c.labels
Explanation: We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax:
End of explanation
L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
Explanation: We can also now explore some of the additional functionalities of the lf_stats method for our dev set LF labels, L_dev: we can plug in the gold labels that we have, and the accuracies that our generative model has learned:
End of explanation
from snorkel.annotations import save_marginals
%time save_marginals(session, L_train, train_marginals)
Explanation: Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy.
Saving our training labels
Finally, we'll save the training_marginals, which are our probabilistic training labels, so that we can use them in the next tutorial to train our end extraction model:
End of explanation |
10,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 5.1
Step1: Import datasets
Using a dictionary of pandas dataframes, with the key as the language. A better way would be to have a tidy dataframe.
Step2: Monthly bot edit counts are in the format
Step3: Monthly bot revert counts are a bit more complicated and in a slightly different format
Step4: Processing
We're going to do this in a pretty messy and so-not-best-practice way (which would be a single tidy dataframe with nice hierarchical indexes) by having two dictionaries of dataframes.
Step5: Preview the dataframes in the dictionaries
Step7: Clean and combine the two datasets
Convert dates
Remember that they used different formats for representing months? Gotta fix that.
Step8: Test function
Step9: Yay!
Apply the transformation
Step10: Combine the datasets, looking at only articles / ns0
Step11: Preview the combined dataframe
FYI, all things Wikipedia database related (especially bots) are generally way less consistent before 2004.
Step12: The results | Python Code:
import pandas as pd
import seaborn as sns
import mwapi
import numpy as np
import glob
%matplotlib inline
Explanation: Section 5.1: proportion of all bot edits to articles that are bot-bot reverts
This is a data analysis script for an analysis presented in section 5.1, which you can run based entirely off the files in this GitHub repository. It loads datasets/monthly_bot_edits/[language]wiki_20170427.tsv and datasets/monthly_bot_reverts/[language]wiki_20170420.tsv.
End of explanation
!ls -lah ../../datasets/monthly_bot_edits/
!ls -lah ../../datasets/monthly_bot_reverts/
Explanation: Import datasets
Using a dictionary of pandas dataframes, with the key as the language. A better way would be to have a tidy dataframe.
End of explanation
!head ../../datasets/monthly_bot_edits/enwiki_20170427.tsv
Explanation: Monthly bot edit counts are in the format: month (YYYYMM), page namespace, and total number of bot edits in that language's namespace that month (n).
End of explanation
!head ../../datasets/monthly_bot_reverts/enwiki_20170420.tsv
Explanation: Monthly bot revert counts are a bit more complicated and in a slightly different format:
- month (YYYYMM01)
- page namespace
- number of total reverts by all editors (reverts)
- number of reverts by bot accounts (bot_reverts)
- number of edits by bots that were reverted (bot_reverteds)
- number of reverts by bots of edits made by bots (bot2bot_reverts)
End of explanation
df_edits_dict = {}
for filename in glob.glob("../../datasets/monthly_bot_edits/??wiki_2017042?.tsv"):
lang_code = filename[33:35]
df_edits_dict[lang_code] = pd.read_csv(filename, sep="\t")
df_edits_dict[lang_code] = df_edits_dict[lang_code].drop_duplicates()
for lang, lang_df in df_edits_dict.items():
print(lang, len(lang_df))
df_rev_dict = {}
for filename in glob.glob("../../datasets/monthly_bot_reverts/??wiki_2017042?.tsv"):
lang_code = filename[35:37]
df_rev_dict[lang_code] = pd.read_csv(filename, sep="\t")
df_rev_dict[lang_code] = df_rev_dict[lang_code].drop_duplicates()
for lang, lang_df in df_rev_dict.items():
print(lang, len(lang_df))
langs = ["de", "en", "es", "fr", "ja", "pt", "zh"]
Explanation: Processing
We're going to do this in a pretty messy and so-not-best-practice way, namely two dictionaries of dataframes, rather than the cleaner alternative, which would be a single tidy dataframe with a nice hierarchical index.
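For reference, the tidy alternative would look something like this sketch (not used below), which stacks the per-language frames into one dataframe with the language as an index level:
df_edits_tidy = pd.concat(df_edits_dict, names=['lang', 'row']).reset_index()
df_rev_tidy = pd.concat(df_rev_dict, names=['lang', 'row']).reset_index()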
End of explanation
df_edits_dict['en'][0:5]
df_rev_dict['en'][0:5]
Explanation: Preview the dataframes in the dictionaries
End of explanation
def truncate_my(s):
    """Truncate YYYYMMDD format to YYYYMM. For use with df.apply()"""
s = str(s)
return int(s[0:6])
Explanation: Clean and combine the two datasets
Convert dates
Remember that they used different formats for representing months? Gotta fix that.
End of explanation
truncate_my(20100101)
Explanation: Test function
End of explanation
for lang in langs:
df_edits_dict[lang] = df_edits_dict[lang].set_index('month')
df_rev_dict[lang]['month_my'] = df_rev_dict[lang]['month'].apply(truncate_my)
df_rev_dict[lang] = df_rev_dict[lang].set_index('month_my')
Explanation: Yay!
Apply the transformation
End of explanation
combi_ns0_dict = {}
combi_dict = {}
for lang in langs:
print(lang)
combi_ns0_dict[lang] = pd.concat([df_rev_dict[lang].query("page_namespace == 0"), df_edits_dict[lang].query("page_namespace == 0")], axis=1, join='outer')
combi_ns0_dict[lang]['bot_edits'] = combi_ns0_dict[lang]['n']
combi_ns0_dict[lang]['prop_bot2bot_rv'] = combi_ns0_dict[lang]['bot2bot_reverts']/combi_ns0_dict[lang]['bot_edits']
Explanation: Combine the datasets, looking at only articles / ns0
End of explanation
combi_ns0_dict['en'][29:39]
Explanation: Preview the combined dataframe
FYI, all things Wikipedia database related (especially bots) are generally way less consistent before 2004.
End of explanation
sum_dict = {}
for lang in langs:
#print(lang)
sum_dict[lang] = combi_ns0_dict[lang][['bot_edits','bot2bot_reverts']].sum()
print(lang, "ns0 proportion:", (sum_dict[lang]['bot2bot_reverts']/sum_dict[lang]['bot_edits']*100).round(4), "%")
print(sum_dict[lang])
print("")
Explanation: The results: proportion of bot-bot reverts out of all bot edits, articles/ns0 only
End of explanation |
10,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Currencies Trend Following Portfolio
1. The Security closes with 50/100 ma > 0, buy.
2. If the Security closes 50/100 ma < 0, sell your long position.
(For a Portfolio of currencies.)
NOTE
Step1: MICRO FUTURES
Step2: Run Strategy
Step3: View log DataFrames
Step4: Generate strategy stats - display all available stats
Step5: View Performance by Symbol
Step6: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step7: Plot Equity Curves
Step8: Bar Graph
Step9: Analysis | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
from talib.abstract import *
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
pd.set_option('display.max_rows', None)
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: Currencies Trend Following Portfolio
1. The Security closes with 50/100 ma > 0, buy.
2. If the Security closes 50/100 ma < 0, sell your long position.
(For a Portfolio of currencies.)
NOTE: pinkfish does not yet have full support for currencies backtesting, and
the currency data from yahoo finance isn't very good.
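For illustration only, the 50/100 moving-average rule described above boils down to something like the following pandas sketch; the actual logic lives in the external strategy module, whose implementation is not shown here:
def ma_crossover_signal(close):
    # +1 = be long while the 50-day SMA is above the 100-day SMA, 0 = be flat
    sma50 = close.rolling(50).mean()
    sma100 = close.rolling(100).mean()
    return (sma50 > sma100).astype(int)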
End of explanation
# symbol: description
currencies = {
    # 'BTCUSD=X': 'Bitcoin USD Futures',
    # 'ETHUSD=X': 'Ethereum USD Futures',
'EURUSD=X': 'EUR/USD Futures',
'JPY=X': 'USD/JPY Futures',
'GBPUSD=X': 'GBP/USD Futures',
'AUDUSD=X': 'AUD/USD Futures',
'NZDUSD=X': 'NZD/USD Futures',
'EURJPY=X': 'EUR/JPY Futures',
'GBPJPY=X': 'GBP/JPY Futures',
'EURGBP=X': 'EUR/GBP Futures',
'EURCAD=X': 'EUR/CAD Futures',
'EURSEK=X': 'EUR/SEK Futures',
'EURCHF=X': 'EUR/CHF Futures',
'EURHUF=X': 'EUR/HUF Futures',
'EURJPY=X': 'EUR/JPY Futures',
'CNY=X': 'USD/CNY Futures',
'HKD=X': 'USD/HKD Futures',
'SGD=X': 'USD/SGD Futures',
'INR=X': 'USD/INR Futures',
'MXN=X': 'USD/MXN Futures',
'PHP=X': 'USD/PHP Futures',
'IDR=X': 'USD/IDR Futures',
'THB=X': 'USD/THB Futures',
'MYR=X': 'USD/MYR Futures',
'ZAR=X': 'USD/ZAR Futures',
'RUB=X': 'USD/RUB Futures'
}
symbols = list(currencies)
#symbols = ['EURUSD=X']
capital = 100_000
start = datetime.datetime(1900, 1, 1)
end = datetime.datetime.now()
options = {
'use_adj' : False,
'use_cache' : True,
'force_stock_market_calendar' : True,
'stop_loss_pct' : 1.0,
'margin' : 1,
'lookback' : 1,
'sma_timeperiod': 20,
'sma_pct_band': 0,
'use_regime_filter' : True,
'use_vola_weight' : False
}
Explanation: MICRO FUTURES
End of explanation
s = strategy.Strategy(symbols, capital, start, end, options=options)
s.run()
Explanation: Run Strategy
End of explanation
s.rlog.head()
s.tlog.head()
s.dbal.tail()
Explanation: View log DataFrames: raw trade log, trade log, and daily balance
End of explanation
pf.print_full(s.stats)
Explanation: Generate strategy stats - display all available stats
End of explanation
weights = {symbol: 1 / len(symbols) for symbol in symbols}
totals = s.portfolio.performance_per_symbol(weights=weights)
totals
corr_df = s.portfolio.correlation_map(s.ts)
corr_df
Explanation: View Performance by Symbol
End of explanation
benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
End of explanation
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
Explanation: Plot Equity Curves: Strategy vs Benchmark
End of explanation
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
Explanation: Bar Graph: Strategy vs Benchmark
End of explanation
kelly = pf.kelly_criterion(s.stats, benchmark.stats)
kelly
Explanation: Analysis: Kelly Criterion
End of explanation |
10,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
17 - Natural Language Processing
by Alejandro Correa Bahnsen and Jesus Solano
version 1.5, March 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
What is NLP?
Using computers to process (analyze, understand, generate) natural human languages
Most knowledge created by humans is unstructured text, and we need a way to make sense of it
Build probabilistic model using data about a language
What are some of the higher level task areas?
Information retrieval
Step1: Tokenization
What
Step2: create document-term matrices
Step3: lowercase
Step4: ngram_range
Step5: Predict shares
Step6: Stopword Removal
What
Step7: Other CountVectorizer Options
max_features
Step8: min_df
Step9: Stemming and Lemmatization
Stemming
Step10: Lemmatization
What
Step11: Term Frequency-Inverse Document Frequency (TF-IDF)
What
Step12: More details | Python Code:
import pandas as pd
import numpy as np
import scipy as sp
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
# from textblob import TextBlob, Word
from nltk.stem.snowball import SnowballStemmer
%matplotlib inline
df = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/mashable_texts.csv', index_col=0)
df.head()
Explanation: 17 - Natural Language Processing
by Alejandro Correa Bahnsen and Jesus Solano
version 1.5, March 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
What is NLP?
Using computers to process (analyze, understand, generate) natural human languages
Most knowledge created by humans is unstructured text, and we need a way to make sense of it
Build probabilistic model using data about a language
What are some of the higher level task areas?
Information retrieval: Find relevant results and similar results
Google
Information extraction: Structured information from unstructured documents
Events from Gmail
Machine translation: One language to another
Google Translate
Text simplification: Preserve the meaning of text, but simplify the grammar and vocabulary
Rewordify
Simple English Wikipedia
Predictive text input: Faster or easier typing
My application
A much better application
Sentiment analysis: Attitude of speaker
Hater News
Automatic summarization: Extractive or abstractive summarization
autotldr
Natural Language Generation: Generate text from data
How a computer describes a sports match
Publishers withdraw more than 120 gibberish papers
Speech recognition and generation: Speech-to-text, text-to-speech
Google's Web Speech API demo
Vocalware Text-to-Speech demo
Question answering: Determine the intent of the question, match query with knowledge base, evaluate hypotheses
How did supercomputer Watson beat Jeopardy champion Ken Jennings?
IBM's Watson Trivia Challenge
The AI Behind Watson
What are some of the lower level components?
Tokenization: breaking text into tokens (words, sentences, n-grams)
Stopword removal: a/an/the
Stemming and lemmatization: root word
TF-IDF: word importance
Part-of-speech tagging: noun/verb/adjective
Named entity recognition: person/organization/location
Spelling correction: "New Yrok City"
Word sense disambiguation: "buy a mouse"
Segmentation: "New York City subway"
Language detection: "translate this page"
Machine learning
Why is NLP hard?
Ambiguity:
Hospitals are Sued by 7 Foot Doctors
Juvenile Court to Try Shooting Defendant
Local High School Dropouts Cut in Half
Non-standard English: text messages
Idioms: "throw in the towel"
Newly coined words: "retweet"
Tricky entity names: "Where is A Bug's Life playing?"
World knowledge: "Mary and Sue are sisters", "Mary and Sue are mothers"
NLP requires an understanding of the language and the world.
Data
End of explanation
y = df.shares
y.describe()
y = pd.cut(y, [0, 893, 1200, 2275, 63200], labels=[0, 1, 2, 3])
y.value_counts()
df['y'] = y
Explanation: Tokenization
What: Separate text into units such as sentences or words
Why: Gives structure to previously unstructured text
Notes: Relatively easy with English language text, not easy with some languages
Create the target feature (number of shares)
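The cut points used above look like the quartiles reported by y.describe(); an equivalent, more self-documenting way to build the same kind of four-class target would be quantile-based binning (sketch only, result should closely match y):
y_quartiles = pd.qcut(df.shares, 4, labels=[0, 1, 2, 3])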
End of explanation
X = df.text
# use CountVectorizer to create document-term matrices from X
vect = CountVectorizer()
X_dtm = vect.fit_transform(X)
temp=X_dtm.todense()
vect.vocabulary_
# rows are documents, columns are terms (aka "tokens" or "features")
X_dtm.shape
# 50 features from near the end of the vocabulary
print(vect.get_feature_names()[-150:-100])
# show vectorizer options
vect
Explanation: create document-term matrices
End of explanation
vect = CountVectorizer(lowercase=False)
X_dtm = vect.fit_transform(X)
X_dtm.shape
X_dtm.todense()[0].argmax()
vect.get_feature_names()[8097]
Explanation: lowercase: boolean, True by default
Convert all characters to lowercase before tokenizing.
End of explanation
# include 1-grams and 2-grams
vect = CountVectorizer(ngram_range=(1, 4))
X_dtm = vect.fit_transform(X)
X_dtm.shape
# 50 features from near the end of the vocabulary
print(vect.get_feature_names()[-1000:-950])
Explanation: ngram_range: tuple (min_n, max_n)
The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
End of explanation
# Default CountVectorizer
vect = CountVectorizer()
X_dtm = vect.fit_transform(X)
# use Naive Bayes to predict the shares category
nb = MultinomialNB()
pd.Series(cross_val_score(nb, X_dtm, y, cv=10)).describe()
# define a function that accepts a vectorizer and calculates the accuracy
def tokenize_test(vect):
X_dtm = vect.fit_transform(X)
print('Features: ', X_dtm.shape[1])
nb = MultinomialNB()
print(pd.Series(cross_val_score(nb, X_dtm, y, cv=10)).describe())
# include 1-grams and 2-grams
vect = CountVectorizer(ngram_range=(1, 2))
tokenize_test(vect)
Explanation: Predict shares
End of explanation
# remove English stop words
vect = CountVectorizer(stop_words='english')
tokenize_test(vect)
# set of stop words
print(vect.get_stop_words())
Explanation: Stopword Removal
What: Remove common words that will likely appear in any text
Why: They don't tell you much about your text
stop_words: string {'english'}, list, or None (default)
If 'english', a built-in stop word list for English is used.
If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens.
If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms.
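For example (illustration only), corpus-specific stop words can be filtered that way instead of, or in addition to, the built-in English list:
vect = CountVectorizer(max_df=0.7)
tokenize_test(vect)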
End of explanation
# remove English stop words and only keep 100 features
vect = CountVectorizer(stop_words='english', max_features=100)
tokenize_test(vect)
# all 100 features
print(vect.get_feature_names())
# include 1-grams and 2-grams, and limit the number of features
vect = CountVectorizer(ngram_range=(1, 2), max_features=1000)
tokenize_test(vect)
Explanation: Other CountVectorizer Options
max_features: int or None, default=None
If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus.
End of explanation
# include 1-grams and 2-grams, and only include terms that appear at least 2 times
vect = CountVectorizer(ngram_range=(1, 2), min_df=2)
tokenize_test(vect)
Explanation: min_df: float in range [0.0, 1.0] or int, default=1
When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts.
End of explanation
# initialize stemmer
stemmer = SnowballStemmer('english')
# words
vect = CountVectorizer()
vect.fit(X)
words = list(vect.vocabulary_.keys())[:100]
# stem each word
print([stemmer.stem(word) for word in words])
Explanation: Stemming and Lemmatization
Stemming:
What: Reduce a word to its base/stem/root form
Why: Often makes sense to treat related words the same way
Notes:
Uses a "simple" and fast rule-based approach
Stemmed words are usually not shown to users (used for analysis/indexing)
Some search engines treat words with the same stem as synonyms
End of explanation
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
import nltk
nltk.download('wordnet')
# assume every word is a noun
print([wordnet_lemmatizer.lemmatize(word) for word in words])
# assume every word is a verb
print([wordnet_lemmatizer.lemmatize(word,pos='v') for word in words])
# define a function that accepts text and returns a list of lemmas
def split_into_lemmas(text):
text = text.lower()
words = text.split()
return [wordnet_lemmatizer.lemmatize(word) for word in words]
# use split_into_lemmas as the feature extraction function (WARNING: SLOW!)
vect = CountVectorizer(analyzer=split_into_lemmas)
tokenize_test(vect)
Explanation: Lemmatization
What: Derive the canonical form ('lemma') of a word
Why: Can be better than stemming
Notes: Uses a dictionary-based approach (slower than stemming)
End of explanation
# example documents
simple_train = ['call you tonight', 'Call me a cab', 'please call me... PLEASE!']
# Term Frequency
vect = CountVectorizer()
tf = pd.DataFrame(vect.fit_transform(simple_train).toarray(), columns=vect.get_feature_names())
tf
# Document Frequency
vect = CountVectorizer(binary=True)
df_ = vect.fit_transform(simple_train).toarray().sum(axis=0)
pd.DataFrame(df_.reshape(1, 6), columns=vect.get_feature_names())
# Term Frequency-Inverse Document Frequency (simple version)
tf/df_
# TfidfVectorizer
vect = TfidfVectorizer()
pd.DataFrame(vect.fit_transform(simple_train).toarray(), columns=vect.get_feature_names())
Explanation: Term Frequency-Inverse Document Frequency (TF-IDF)
What: Computes "relative frequency" that a word appears in a document compared to its frequency across all documents
Why: More useful than "term frequency" for identifying "important" words in each document (high frequency in that document, low frequency in other documents)
Notes: Used for search engine scoring, text summarization, document clustering
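Under its default settings (smooth_idf=True, norm='l2'), TfidfVectorizer's output can roughly be reproduced by hand from the tf counts and document frequencies df_ computed above (a sketch):
import numpy as np
idf = np.log((1 + len(simple_train)) / (1 + df_)) + 1
tfidf_manual = tf * idf
tfidf_manual = tfidf_manual.div(np.sqrt((tfidf_manual ** 2).sum(axis=1)), axis=0)
tfidf_manual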
End of explanation
# create a document-term matrix using TF-IDF
vect = TfidfVectorizer(stop_words='english')
dtm = vect.fit_transform(X)
features = vect.get_feature_names()
dtm.shape
# choose a random text
review_id = 40
review_text = X[review_id]
review_length = len(review_text)
# create a dictionary of words and their TF-IDF scores
word_scores = {}
for word in vect.vocabulary_.keys():
word = word.lower()
if word in features:
word_scores[word] = dtm[review_id, features.index(word)]
# print words with the top 5 TF-IDF scores
print('TOP SCORING WORDS:')
top_scores = sorted(word_scores.items(), key=lambda x: x[1], reverse=True)[:5]
for word, score in top_scores:
print(word)
# print 5 random words
print('\n' + 'RANDOM WORDS:')
random_words = np.random.choice(list(word_scores.keys()), size=5, replace=False)
for word in random_words:
print(word)
Explanation: More details: TF-IDF is about what matters
Using TF-IDF to Summarize a text
End of explanation |
10,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, [None, real_dim], name='inputs_real')
inputs_z = tf.placeholder(tf.float32, [None, z_dim], name='inputs_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
    out: the tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None, name='logits')
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
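Since the MNIST loader used above returns pixel values in [0, 1], that rescaling is just a one-liner applied to each batch later in the training loop (shown here only as a reminder):
python
batch_images = batch_images*2 - 1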
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
# Logits and sigmoid output
logits = tf.layers.dense(h1, 1, activation=None, name='logits')
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [ v for v in t_vars if v.name.startswith('generator') ]
d_vars = [ v for v in t_vars if v.name.startswith('discriminator') ]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
[ v.name for v in t_vars]
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}\r".format(train_loss_g), end='')
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
10,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework
Step1: Part 1
Step2: Part 3 | Python Code:
import requests
import json
file='US-Senators.json'
senators = requests.get('https://www.govtrack.us/api/v2/role?current=true&role_type=senator').json()['objects']
with open(file,'w') as f:
f.write(json.dumps(senators))
print(f"Saved: {file}")
Explanation: Homework: US Senator Lookup
The Problem
Let's write a program similar to this unit's End-To-End Example. Instead of European countries this program will provide a drop-down of US states. When a state is selected, the program should display the US senators for that state.
What information should you display? Here is a sample of the 2 senators from the State of NY:
Sen. Charles “Chuck” Schumer [D-NY]
Senior Senator for New York
PARTY: Democrat
PHONE: 202-224-6542
WEBSITE: https://www.schumer.senate.gov
CONTACT: https://www.schumer.senate.gov/contact/email-chuck
## Sen. Kirsten Gillibrand [D-NY]
Junior Senator for New York
PARTY: Democrat
PHONE: 202-224-4451
WEBSITE: https://www.gillibrand.senate.gov
CONTACT: https://www.gillibrand.senate.gov/contact/email-me
HINTS:
Everything you will display for a senator can be found in the dictionary for that senator. Look at the keys available for a single senator as reference.
You will need to make a list of unique states from the senators.json file, similar to how you approached the problem in last week's homework for product categories (a small sketch follows below).
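For example, that hint might be approached with a set comprehension, roughly like this (an illustrative sketch only -- it assumes each senator dictionary exposes a 'state' key, which you should verify as the first hint suggests):
```python
states = sorted({senator['state'] for senator in senators})
```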
This Code will fetch the current US Senators from the web and save the results to a US-Senators.json file.
End of explanation
# Step 2: Write code here
Explanation: Part 1: Problem Analysis
Inputs:
TODO: Inputs
Outputs:
TODO: Outputs
Algorithm (Steps in Program):
```
TODO:Steps Here
```
Part 2: Code Solution
You may write your code in several cells, but place the complete, final working copy of your code solution within this single cell below. Only the within this cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
End of explanation
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Part 3: Questions
What are the advantages of using a dictionary for this information instead of a delimited file like in the previous homework?
--== Double-Click and Write Your Answer Below This Line ==--
How easy would it be to write a similar program for World Leaders? Or College Professors? or NBA Players? What is different about each case?
--== Double-Click and Write Your Answer Below This Line ==--
Explain your approach to figuring out which dictionary keys you needed to complete the program.
--== Double-Click and Write Your Answer Below This Line ==--
Part 4: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
--== Double-Click and Write Your Reflection Below Here ==--
End of explanation |
10,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: This notebook showcases the analysis applied to LLC outputs. Here the calculations are performed for a single snapshot. The full LLC model outputs can be obtained from the ECCO Project. All fields used in this paper take about 700 GB!
The analysis leverages other pieces of code developed by the first author
Step3: Vorticity, divergence, and rate of strain
Step4: Discretization error
Step5: Quick-and-dirty, sanity-check plots
Step6: Spectra | Python Code:
import datetime
import numpy as np
import scipy as sp
from scipy import interpolate
import matplotlib.pyplot as plt
%matplotlib inline
import cmocean
import seawater as sw
from netCDF4 import Dataset
from llctools import llc_model
from pyspec import spectrum as spec
c1 = 'slateblue'
c2 = 'tomato'
c3 = 'k'
c4 = 'indigo'
plt.rcParams['lines.linewidth'] = 1.5
ap = .75
plt.style.use('seaborn-colorblind')
def leg_width(lg,fs):
" Sets the linewidth of each legend object
for legobj in lg.legendHandles:
legobj.set_linewidth(fs)
def parse_time(times):
"""Convert an array of strings that defines
the LLC outputs into datetime arrays,
e.g., '20110306T010000' --> datetime.datetime(2011, 3, 6, 1, 0)
Input
------
times: array of strings that define LLC model time
Output
------
time: array of datetime associated with times
"""
time = []
for i in range(times.size):
yr = times[i][:4]
mo = times[i][4:6]
day = times[i][6:8]
hr = times[i][9:11]
time.append(datetime.datetime(int(yr),int(mo),int(day),int(hr)))
return np.array(time)
grid_path = '../data/llc/2160/grid/'
data_path = '../data/llc/2160/uv/'
# Kuroshio Extension model class
m = llc_model.LLCRegion(grid_dir=grid_path,data_dir=data_path,Nlon=480,Nlat=466,Nz=1)
m.load_grid()
# model sub-region surface fields files
fileu = m.data_dir+'U_480x466x1.20110308T220000'
filev = m.data_dir+'V_480x466x1.20110308T220000'
fileeta = m.data_dir[:-3]+'Eta/Eta_480x466x1.20110308T220000'
time_string = fileu[-15:]
time=llc_model.parse_time(time_string)
time
# important note: U,V are relative to the LLC model grid,
# not geostrophical coordinates. Thus, on
# faces 4 and 5, U = meridional component
# and V = -zonal component (see Dimitris's llc.readme).
u, v, eta = m.load_2d_data(filev), -m.load_2d_data(fileu), m.load_2d_data(fileeta)
lon,lat = m.lon[m.Nlat//2],m.lat[:,m.Nlon//2]
# create a regular Cartesian grid
dd = 6. # grid spacing [km]
dlon = dd/111.320*np.cos(np.abs(m.lat[m.Nlat//2,m.Nlon//2])*np.pi/180.)
dlat = dd/110.574
lonimin,lonimax = lon.min()+dlon,lon.max()-dlon
latimin,latimax = lat.min()+dlat,lat.max()-dlat
loni = np.arange(m.lon.min(),m.lon.max()+dlon,dlon)
lati = np.arange(m.lat.min(),m.lat.max()+dlat,dlat)
long,latg = np.meshgrid(loni,lati)
f0 = sw.f(latg)
interpu, interpv, interpeta = sp.interpolate.interp2d(lon,lat,u), sp.interpolate.interp2d(lon,lat,v), sp.interpolate.interp2d(lon,lat,eta)
ui, vi,etai = interpu(loni,lati), interpv(loni,lati), interpeta(loni,lati)
Explanation: This notebook showcases the analysis applied to LLC outputs. Here the calculations are performed for a single snapshot. The full LLC model outputs can be obtained from the ECCO Project. All fields used in this paper take about 700 GB!
The analysis leverages other pieces of code developed by the first author: llctools and pyspec.
End of explanation
def calc_gradu(u,v,dd = 6.):
uy,ux = np.gradient(u,dd,dd)
vy,vx = np.gradient(v,dd,dd)
vort, div, strain = (vx - uy), ux+vy, ( (ux-vy)**2 + (vx+uy)**2 )**.5
return vort, div, strain
# double mirror ui and vi
def double_mirror(a,forward='True'):
if forward:
A = np.hstack([a,np.fliplr(a)])
A = np.vstack([A,np.fliplr(A)])
else:
iy,ix = a.shape
A = a[:iy//2,:ix//2]
return A
def calc_gradu2(u,v,dd = 6.):
u, v = double_mirror(u), double_mirror(v)
iy,ix = u.shape
Lx, Ly = (ix-1)*dd, (iy-1)*dd
dk = 1./Lx
dl = 1./Ly
l = 2*np.pi*dl*np.append( np.arange(0.,iy//2), np.arange(-iy//2,0.) )
k = 2*np.pi*dk*np.arange(0.,ix//2+1)
k,l = np.meshgrid(k,l)
uh, vh = np.fft.rfft2(u), np.fft.rfft2(v)
ux, uy = np.fft.irfft2(1j*k*uh), np.fft.irfft2(1j*l*uh)
vx, vy = np.fft.irfft2(1j*k*vh), np.fft.irfft2(1j*l*vh)
vort, div, strain = (vx - uy), ux+vy, ( (ux-vy)**2 + (vx+uy)**2 )**.5
return vort, div, strain
def rms(field):
return ((field**2).mean())**.5
vort, div, strain = calc_gradu(ui,vi,dd = 6.e3)
vort, div, strain = vort/f0, div/f0, strain/f0
vort2, div2, strain2 = calc_gradu2(ui,vi,dd = 6.e3)
vort2,div2, strain2 = double_mirror(vort2,forward=False),double_mirror(div2,forward=False), double_mirror(strain2,forward=False)
vort2, div2, strain2 = vort2/f0, div2/f0, strain2/f0
vort.mean()/np.abs(vort).max(), div.mean()/np.abs(div).max(), strain.mean()/np.abs(strain).max()
vort2.mean()/np.abs(vort2).max(), div2.mean()/np.abs(div2).max(), strain2.mean()/np.abs(strain2).max()
Explanation: Vorticity, divergence, and rate of strain
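For reference, the functions above compute the relative vorticity, horizontal divergence, and strain rate from the interpolated velocities as $\zeta = v_x - u_y$, $\delta = u_x + v_y$, and $\sigma = \sqrt{(u_x - v_y)^2 + (v_x + u_y)^2}$, each subsequently normalized by the local Coriolis parameter $f_0$.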
End of explanation
fig = plt.figure(figsize=(14,4))
cv = np.linspace(-1.5,1.5,20)
cd = np.linspace(-.5,.5,20)
cs = np.linspace(0.,1.5,10)
ax = fig.add_subplot(131)
plt.contourf(vort,cv,vmin=cv.min(),vmax=cv.max(),cmap=cmocean.cm.balance,extend='both')
plt.title('vorticity, rms = %f' % rms(vort))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
ax = fig.add_subplot(132)
plt.contourf(vort2,cv,vmin=cv.min(),vmax=cv.max(),cmap=cmocean.cm.balance,extend='both')
plt.title('vorticity, rms = %f' % rms(vort2))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
fig = plt.figure(figsize=(14,4))
ax = fig.add_subplot(131)
plt.contourf(div,cd,vmin=cd.min(),vmax=cd.max(),cmap=cmocean.cm.balance,extend='both')
plt.title('divergence, rms = %f' % rms(div))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
ax = fig.add_subplot(132)
plt.contourf(div2,cd,vmin=cd.min(),vmax=cd.max(),cmap=cmocean.cm.balance,extend='both')
plt.title('divergence, rms = %f' % rms(div2))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
fig = plt.figure(figsize=(14,4))
ax = fig.add_subplot(131)
plt.contourf(strain,cs,vmin=cs.min(),vmax=cs.max(),cmap=cmocean.cm.amp,extend='both')
plt.title('divergence, rms = %f' % rms(strain))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
ax = fig.add_subplot(132)
plt.contourf(strain2,cs,vmin=cs.min(),vmax=cs.max(),cmap=cmocean.cm.amp,extend='both')
plt.title('strain, rms = %f' % rms(strain2))
#plt.colorbar()
plt.xticks([]); plt.yticks([])
stats_4320 = np.load(__depends__[1])
stats_2160 = np.load(__depends__[2])
llc = Dataset(__depends__[0])
time2160 = parse_time(llc['2160']['hourly']['time'][:])
timed2160 = time2160[::24]
time4320 = parse_time(llc['4320']['hourly']['time'][:])
timed4320 = time4320[::24]
Explanation: Discretization error
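One quick way to put a number on this error is to compare the finite-difference and spectral estimates computed earlier (a rough sketch, reusing the fields defined above):
```python
# relative rms difference between the two vorticity estimates
print(rms(vort - vort2) / rms(vort2))
```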
End of explanation
cv = np.linspace(-1.5,1.5,20)
cd = np.linspace(-.5,.5,20)
cs = np.linspace(0.,1.5,10)
fig = plt.figure(figsize=(19,4))
ax = fig.add_subplot(131)
plt.contourf(vort,cv,vmin=cv.min(),vmax=cv.max(),cmap='RdBu_r',extend='both')
plt.title('vorticity, rms = %f' % rms(vort))
plt.colorbar()
plt.xticks([]); plt.yticks([])
ax = fig.add_subplot(132)
plt.title('divergence, rms = %f' % rms(div))
plt.contourf(div,cd,vmin=cd.min(),vmax=cd.max(),cmap='RdBu_r',extend='both')
plt.colorbar()
plt.xticks([]); plt.yticks([])
ax = fig.add_subplot(133)
plt.title('strain rate, rms %f' % rms(strain))
plt.contourf(strain,cs,vmax=cs.max(),cmap='viridis',extend='max')
plt.colorbar()
plt.xticks([]); plt.yticks([])
Explanation: Quick-and-dirty, sanity-check plots
End of explanation
specU = spec.TWODimensional_spec(ui.copy(),d1=dd,d2=dd)
specV = spec.TWODimensional_spec(vi.copy(),d1=dd,d2=dd)
specEta = spec.TWODimensional_spec(etai.copy(),d1=dd,d2=dd)
iEu,iEv, iEeta = specU.ispec,specV.ispec, specEta.ispec
iE = 0.5*(iEu+iEv)
kr = np.array([1.e-4,1.])
e2 = kr**-2/1.e4
e3 = kr**-3/1.e7
e5 = kr**-5/1.e9
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(121)
plt.loglog(specU.ki,iE)
plt.loglog(kr,12.*e2,'.5',linewidth=2); plt.text(1/17.5,5.e-1,'-2',fontsize=14)
plt.loglog(kr,35*e3,'.5',linewidth=2); plt.text(1/30.,2.e-2,'-3',fontsize=14)
plt.xlim(1.e-3,1.e-1)
plt.ylim(1.e-2,1.e2)
plt.xlabel('Wavenumber [cpkm]')
plt.ylabel(r'KE density [m$^2$ s$^{-2}$/cpkm]')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=.45, hspace=None)
ax = fig.add_subplot(122)
plt.loglog(specEta.ki,iEeta)
plt.loglog(kr,e2/.5e1,'.5',linewidth=2); plt.text(1/17.5,1.e-2,'-2',fontsize=14)
plt.loglog(kr,3*e5/1.5e2,'.5',linewidth=2); plt.text(1/25.5,1.e-5,'-5',fontsize=14)
plt.xlim(1.e-3,1.e-1)
plt.ylim(1.e-6,1.e2)
plt.ylabel(r'SSH variance density [m$^2$/cpkm]')
plt.xlabel('Wavenumber [cpkm]')
Explanation: Spectra
End of explanation |
10,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
INSTRUCTIONS
Continue by clicking the "Run cell" button (above). DO NOT click "Run cell" again until you have moved on to the next cell. To continue with the questionnaire, click manually on the next cell and then click "Run cell".
Definitions
Step1: DISPLAY RETRIEVED DRUM SAMPLE.
Step2: EVALUATION
WRITE YOUR NAME AND SELECT YOUR MUSICAL EXPERIENCE
Step3: ANSWER THE FOLLOWING QUESTIONS
Step4: GATHER ANSWERS INFORMATION.
Step5: ADD TO CSV.
Step6: RESET VARIABLES AND START AGAIN TO EVALUATE A NEW DRUM SAMPLE. | Python Code:
instrument, category, accordion = load_interface1()
check1, slider1, check2, slider2, check3, slider3, check4, slider4 = load_interface2()
display(accordion)
display(check1,slider1)
display(check2,slider2)
display(check3,slider3)
display(check4,slider4)
Explanation: INSTRUCTIONS
Continue by clicking the "Run cell" button (above). DO NOT click "Run cell" again until you have moved on to the next cell. To continue with the questionnaire, click manually on the next cell and then click "Run cell".
Definitions:
- Brightness: indicator of the amount of high-frequency content in a sound.
- Hardness: forcefulness of a sound's attack.
- Depth (as a timbral attribute): emphasis on low-frequency content.
- Roughness: regulation of pleasantness of a sound (sensory dissonance).
DRUM RETRIEVAL INTERFACE
DISPLAY PRELIMINARY INTERFACE. CHOOSE INSTRUMENT AND CATEGORY CLASSES AND AS MANY HIGH-LEVEL DESCRIPTORS AS YOU WANT.
End of explanation
search_sounds_and_show_results(instrument, category, check1, slider1, check2, slider2, check3, slider3, check4, slider4)
Explanation: DISPLAY RETRIEVED DRUM SAMPLE.
End of explanation
name,exp,exp2 = load_personal()
display(name,exp,exp2)
Explanation: EVALUATION
WRITE YOUR NAME AND SELECT YOUR MUSICAL EXPERIENCE
End of explanation
response1, response2, response3 = load_questions()
print "Does the retrieved sample really correspond to the expected drum instrument class?"
display(response1)
print "Does the retrieved sample really correspond to the expected drum category class?"
display(response2)
print "Select how you think the system has interpreted your selection based on High-Level Descriptors."
display(response3)
Explanation: ANSWER THE FOLLOWING QUESTIONS:
End of explanation
if check1.value is True:
bright = slider1.value
else:
bright = 'NaN'
if check2.value is True:
depth = slider2.value
else:
depth = 'NaN'
if check3.value is True:
hard = slider3.value
else:
hard = 'NaN'
if check4.value is True:
rough = slider4.value
else:
rough = 'NaN'
print "User name: " + name.value, "\n", "Musical experience as: " + exp.value[0], "\n", "Years of experience: " + exp2.value[0], "\n"
print "Instrument: " + instrument.value, "\n", "Category: " + category.value, "\n"
print "Brightness: " + str(bright), "\n", "Depth: " + str(depth), "\n", "Hardness: " + str(hard), "\n", "Roughness: " + str(rough), "\n"
print "Correct instrument? " + response1.value, "\n", "Correct category? " + response2.value, "\n", "High-level descriptors? "+response3.value, "\n"
Explanation: GATHER ANSWERS INFORMATION.
End of explanation
df = pd.read_csv('test.csv',index_col=0)
d = [name.value,exp.value[0],exp2.value[0],instrument.value,category.value,bright,depth,hard,rough,response1.value,response2.value,response3.value]
df.loc[len(df)] = [d[n] for n in range(len(df.columns))]
df.to_csv('test.csv')
df
Explanation: ADD TO CSV.
End of explanation
%reset
Explanation: RESET VARIABLES AND START AGAIN TO EVALUATE A NEW DRUM SAMPLE.
End of explanation |
10,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this exercise, you'll work through several applications of PCA to the Ames dataset.
Run this cell to set everything up!
Step1: Let's choose a few features that are highly correlated with our target, SalePrice.
Step2: We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features.
Run this cell to apply PCA and extract the loadings.
Step3: 1) Interpret Component Loadings
Look at the loadings for components PC1 and PC3. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution.
Step4: Your goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of X_pca to X).
2) Create New Features
Add one or more new features to the dataset X. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the hint below!)
Step5: The next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. PCA in particular can show you anomalous variation which might not be apparent from the original features
Step6: As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is.
Now run the next cell to see those houses that sit at the extremes of a component
Step7: 3) Outlier Detection
Do you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data?
After you've thought about your answer, run the next cell for the solution and some discussion. | Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering_new.ex5 import *
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
def apply_pca(X, standardize=True):
# Standardize
if standardize:
X = (X - X.mean(axis=0)) / X.std(axis=0)
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
# Create loadings
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
return pca, X_pca, loadings
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
def make_mi_scores(X, y):
X = X.copy()
for colname in X.select_dtypes(["object", "category"]):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes
discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
for colname in X.select_dtypes(["category", "object"]):
X[colname], _ = X[colname].factorize()
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_squared_log_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
df = pd.read_csv("../input/fe-course-data/ames.csv")
Explanation: Introduction
In this exercise, you'll work through several applications of PCA to the Ames dataset.
Run this cell to set everything up!
End of explanation
features = [
"GarageArea",
"YearRemodAdd",
"TotalBsmtSF",
"GrLivArea",
]
print("Correlation with SalePrice:\n")
print(df[features].corrwith(df.SalePrice))
Explanation: Let's choose a few features that are highly correlated with our target, SalePrice.
End of explanation
X = df.copy()
y = X.pop("SalePrice")
X = X.loc[:, features]
# `apply_pca`, defined above, reproduces the code from the tutorial
pca, X_pca, loadings = apply_pca(X)
print(loadings)
Explanation: We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features.
Run this cell to apply PCA and extract the loadings.
End of explanation
# View the solution (Run this cell to receive credit!)
q_1.check()
Explanation: 1) Interpret Component Loadings
Look at the loadings for components PC1 and PC3. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution.
End of explanation
X = df.copy()
y = X.pop("SalePrice")
# YOUR CODE HERE: Add new features to X.
# ____
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
X = df.copy()
y = X.pop("SalePrice")
X["Feature1"] = X.GrLivArea - X.TotalBsmtSF
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
# Solution 1: Inspired by loadings
X = df.copy()
y = X.pop("SalePrice")
X["Feature1"] = X.GrLivArea + X.TotalBsmtSF
X["Feature2"] = X.YearRemodAdd * X.TotalBsmtSF
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
# Solution 2: Uses components
X = df.copy()
y = X.pop("SalePrice")
X = X.join(X_pca)
score = score_dataset(X, y)
print(f"Your score: {score:.5f} RMSLE")
q_2.assert_check_passed()
Explanation: Your goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of X_pca to X).
2) Create New Features
Add one or more new features to the dataset X. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the hint below!)
End of explanation
sns.catplot(
y="value",
col="variable",
data=X_pca.melt(),
kind='boxen',
sharey=False,
col_wrap=2,
);
Explanation: The next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. PCA in particular can show you anomalous variation which might not be apparent from the original features: neither small houses nor houses with large basements are unusual, but it is unusual for small houses to have large basements. That's the kind of thing a principal component can show you.
Run the next cell to show distribution plots for each of the principal components you created above.
End of explanation
# You can change PC1 to PC2, PC3, or PC4
component = "PC1"
idx = X_pca[component].sort_values(ascending=False).index
df.loc[idx, ["SalePrice", "Neighborhood", "SaleCondition"] + features]
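# A possible numeric follow-up (a sketch, not part of the exercise): flag houses more
# than three standard deviations out along the chosen component as candidate outliers.
outlier_mask = X_pca[component].abs() > 3 * X_pca[component].std()
df.loc[outlier_mask.values, ["SalePrice", "Neighborhood", "SaleCondition"]]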
Explanation: As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is.
Now run the next cell to see those houses that sit at the extremes of a component:
End of explanation
# View the solution (Run this cell to receive credit!)
q_3.check()
Explanation: 3) Outlier Detection
Do you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data?
After you've thought about your answer, run the next cell for the solution and some discussion.
End of explanation |
10,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ex34-Correlations between SOI and SLP, Temperature and Precipitation
This tutorial will reproduce and extend the NCL
Step1: 1. Basic information
Data files
All of these data are publicly available from NCEP/NCAR Reanalysis 1 and GPCP Version 2.3 Combined Precipitation Data Set
Step2: Specification for reference years
Step3: Grid points near Tahiti and Darwin
These two points are used to construct an SOI time series spanning 1950-2018.
Step4: 2. Calculate SOI Index
Step7: 3. lag-0 correlation
At present, I have not found a good way to return multiple xarray.dataarray from xarray.apply_ufunc().
Therefore, I have to calculate Pearson correlation twice, which wastes half the time.
3.1 Functions to calculate Pearson correlation
Step8: 3.2 Calculate lag-0 correlation between SOI and (slp, temperature, precipitation), respectively
Step9: 4. Visualization
4.1 SOI
Step10: 4.2 Correlations maps
4.2.1 SOI vs. SLP
Mask correlation with pvalues
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
slp_corr.where(slp_pval < 0.01).plot(ax=ax, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
ax.set_title('SOI SLP')
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
Step11: 4.2.2 SOI vs. TMP
Step12: 4.2.3 SOI vs. PRC | Python Code:
%matplotlib inline
import numpy as np
import xarray as xr
import pandas as pd
from numba import jit
from functools import partial
from scipy.stats import pearsonr
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Set some parameters to apply to all plots. These can be overridden
import matplotlib
# Plot size to 12" x 7"
matplotlib.rc('figure', figsize = (15, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Do not display top and right frame lines
matplotlib.rc('axes.spines', top = True, right = True)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set backgound color to white
matplotlib.rc('axes', facecolor = 'white')
Explanation: ex34-Correlations between SOI and SLP, Temperature and Precipitation
This tutorial will reproduce and extend the NCL:Correlations example with python packages.
Read gridded sea level pressure from the 20th Century Reanalysis
use proxy grid points near Tahiti and Darwin to construct an SOI time series spanning 1950-2018
perform lag-0 correlations between
SOI and SLP
SOI and temperature
SOI and precipitation using GPCP data which spans 1979-2018.
In addition, the significance level (p < 0.01) is dotted on the correlation maps.
End of explanation
filslp = "data/prmsl.mon.mean.nc"
filtmp = "data/air.sig995.mon.mean.nc"
filprc = "data/precip.mon.mean.nc"
Explanation: 1. Basic information
Data files
All of these data are publicly available from NCEP/NCAR Reanalysis 1 and GPCP Version 2.3 Combined Precipitation Data Set
End of explanation
yrStrt = 1950 # manually specify for convenience
yrLast = 2018 # 20th century ends 2018
clStrt = 1950 # reference climatology for SOI
clLast = 1979
yrStrtP = 1979 # 1st year GPCP
yrLastP = yrLast # match 20th century
Explanation: Specification for reference years
End of explanation
latT = -17.6 # Tahiti
lonT = 210.75
latD = -12.5 # Darwin
lonD = 130.83
Explanation: Grid points near Tahiti and Darwin
These two points are used to construct an SOI time series spanning 1950-2018.
End of explanation
# read slp data
ds_slp = xr.open_dataset(filslp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
# select grids of T and D
T = ds_slp.sel(lat=latT, lon=lonT, method='nearest')
D = ds_slp.sel(lat=latD, lon=lonD, method='nearest')
# monthly reference climatologies
TClm = T.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31'))
DClm = D.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31'))
# anomalies reference clim
TAnom = T.groupby('time.month') - TClm.groupby('time.month').mean('time')
DAnom = D.groupby('time.month') - DClm.groupby('time.month').mean('time')
# stddev of anomalies over clStrt & clLast
TAnomStd = np.std(TAnom.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31')))
DAnomStd = np.std(DAnom.sel(time=slice(str(clStrt)+'-01-01', str(clLast)+'-12-31')))
# signal and noise
soi_signal = ((TAnom/TAnomStd) - (DAnom/DAnomStd)).rename({'slp':'SOI'})
Explanation: 2. Calculate SOI Index
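For reference, the code above computes the index as $\mathrm{SOI} = T'/\sigma_{T'} - D'/\sigma_{D'}$, where $T'$ and $D'$ are the monthly SLP anomalies at the Tahiti and Darwin grid points and the standard deviations are taken over the 1950-1979 reference climatology.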
End of explanation
@jit(nogil=True)
def pr_cor_corr(x, y):
"""Use the scipy stats module to calculate a Pearson correlation test.
:x vector: Input pixel vector to run tests on
:y vector: The date input vector
"""
# Check NA values
co = np.count_nonzero(~np.isnan(x))
if co < len(y): # If fewer than length of y observations return np.nan
return np.nan
corr, _ = pearsonr(x, y)
return corr
@jit(nogil=True)
def pr_cor_pval(x, y):
"""Use the scipy stats module to calculate a Pearson correlation test.
:x vector: Input pixel vector to run tests on
:y vector: The date input vector
"""
# Check NA values
co = np.count_nonzero(~np.isnan(x))
if co < len(y): # If fewer than length of y observations return np.nan
return np.nan
# Run the pearson correlation test
_, p_value = pearsonr(x, y)
return p_value
# The function we are going to use for applying our pearson test per pixel
def pearsonr_corr(x, y, func=pr_cor_corr, dim='time'):
# x = Pixel value, y = a vector containing the date, dim == dimension
return xr.apply_ufunc(
func, x , y,
input_core_dims=[[dim], [dim]],
vectorize=True,
output_dtypes=[float]
)
Explanation: 3. lag-0 correlation
At present, I have not found a good way to return multiple xarray.dataarray from xarray.apply_ufunc().
Therefore, I have to calculate Pearson correlation twice, which wastes half the time.
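A possible workaround (an untested sketch, not from the original notebook): xarray.apply_ufunc can return several DataArrays in one pass if the wrapped function returns a tuple and output_core_dims lists one (empty) entry per output, which would halve the cost:
```python
def pr_cor_both(x, y):
    """Return (correlation, p-value) in a single pass."""
    if np.count_nonzero(~np.isnan(x)) < len(y):
        return np.nan, np.nan
    return pearsonr(x, y)

def pearsonr_corr_pval(x, y, dim='time'):
    return xr.apply_ufunc(
        pr_cor_both, x, y,
        input_core_dims=[[dim], [dim]],
        output_core_dims=[[], []],   # one entry per returned output
        vectorize=True,
        output_dtypes=[float, float],
    )
```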
3.1 Functions to calculate Pearson correlation
End of explanation
ds_tmp = xr.open_dataset(filtmp).sel(time=slice(str(yrStrt)+'-01-01', str(yrLast)+'-12-31'))
ds_prc = xr.open_dataset(filprc).sel(time=slice(str(yrStrtP)+'-01-01', str(yrLastP)+'-12-31'))
# slp
da_slp = ds_slp.slp.stack(point=('lat', 'lon')).groupby('point')
slp_corr = pearsonr_corr(da_slp, soi_signal.SOI).unstack('point')
slp_pval = pearsonr_corr(da_slp, soi_signal.SOI, func= pr_cor_pval).unstack('point')
# tmp
da_tmp = ds_tmp.air.stack(point=('lat', 'lon')).groupby('point')
tmp_corr = pearsonr_corr(da_tmp, soi_signal.SOI).unstack('point')
tmp_pval = pearsonr_corr(da_tmp, soi_signal.SOI, func= pr_cor_pval).unstack('point')
# prc
soi_prc = soi_signal.sel(time=slice(str(yrStrtP)+'-01-01', str(yrLastP)+'-12-31'))
da_prc = ds_prc.precip.stack(point=('lat', 'lon')).groupby('point')
prc_corr = pearsonr_corr(da_prc, soi_prc.SOI).unstack('point')
prc_pval = pearsonr_corr(da_prc, soi_prc.SOI, func= pr_cor_pval).unstack('point')
Explanation: 3.2 Calculate lag-0 correlation between SOI and (slp, temperature, precipitation), respectively
End of explanation
# Convert to pandas.dataframe
df_soi = soi_signal.to_dataframe().drop('month', axis=1)
# 11-point smoother: Use reflective boundaries to fill out plot
window = 11
weights = [0.0270, 0.05856, 0.09030, 0.11742, 0.13567,
0.1421, 0.13567, 0.11742, 0.09030, 0.05856,
0.027]
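# Note: these 11 weights sum to ~1, so the weighted rolling average below
# acts as a low-pass smoother of the monthly SOI values.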
ewma = partial(np.average, weights=weights)
rave = df_soi.rolling(window).apply(ewma).fillna(df_soi)
fig, ax = plt.subplots()
rave.plot(ax=ax, color='black', alpha=1.00, linewidth=2, legend=False)
d = rave.index
ax.fill_between(d, 0, rave['SOI'],
where=rave['SOI'] >0,
facecolor='blue', alpha=0.75, interpolate=True)
ax.fill_between(d, 0, rave['SOI'],
where=rave['SOI']<0,
facecolor='red', alpha=0.75, interpolate=True)
_ = ax.set_ylim(-5, 5)
_ = ax.set_title('SOI: %s-%s \n Based on NCEP/NCAR Reanalysis 1' %(str(yrStrt), str(yrLast)))
_ = ax.set_xlabel('')
Explanation: 4. Visualization
4.1 SOI
End of explanation
lons, lats = np.meshgrid(slp_corr.lon, slp_corr.lat)
sig_area = np.where(slp_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
slp_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and SLP (%s-%s) \n Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrt), str(yrLast)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
Explanation: 4.2 Correlations maps
4.2.1 SOI vs. SLP
Mask correlation with pvalues
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
slp_corr.where(slp_pval < 0.01).plot(ax=ax, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
ax.set_title('SOI SLP')
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
End of explanation
lons, lats = np.meshgrid(tmp_corr.lon, tmp_corr.lat)
sig_area = np.where(tmp_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
tmp_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and TMP (%s-%s) \n Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrt), str(yrLast)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
Explanation: 4.2.2 SOI vs. TMP
End of explanation
lons, lats = np.meshgrid(prc_corr.lon, prc_corr.lat)
sig_area = np.where(prc_pval < 0.01)
ax = plt.axes(projection=ccrs.Robinson(central_longitude=180))
prc_corr.plot(ax=ax, vmax=0.7, vmin=-0.7, cmap='RdYlBu_r', transform=ccrs.PlateCarree())
_ = ax.scatter(lons[sig_area], lats[sig_area], marker = '.', s = 1, c = 'k', alpha = 0.6, transform = ccrs.PlateCarree())
ax.set_title('Correlation Between SOI and GPCP Precipitation (%s-%s) \n (Based on NCEP/NCAR Reanalysis 1 \n p < 0.01 has been dotted' %(str(yrStrtP), str(yrLastP)))
ax.add_feature(cfeature.BORDERS)
ax.add_feature(cfeature.COASTLINE)
Explanation: 4.2.3 SOI vs. PRC
End of explanation |
10,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Styles
You can set particular styles
Step2: Spine Removal
Step3: Size and Aspect
You can use matplotlib's plt.figure(figsize=(width,height)) to change the size of most seaborn plots.
You can control the size and aspect ratio of most seaborn grid plots by passing in parameters
Step4: Scale and Context
The set_context() allows you to override default parameters
Step5: Check out the documentation page for more info on these topics | Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
tips = sns.load_dataset('tips')
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Style and Color
We've shown a few times how to control figure aesthetics in seaborn, but let's now go over it formally:
End of explanation
sns.countplot(x='sex',data=tips)
sns.set_style('white')
sns.countplot(x='sex',data=tips)
sns.set_style('ticks')
sns.countplot(x='sex',data=tips,palette='deep')
Explanation: Styles
You can set particular styles:
End of explanation
sns.countplot(x='sex',data=tips)
sns.despine()
sns.countplot(x='sex',data=tips)
sns.despine(left=True)
Explanation: Spine Removal
End of explanation
# Non Grid Plot
plt.figure(figsize=(12,3))
sns.countplot(x='sex',data=tips)
# Grid Type Plot
sns.lmplot(x='total_bill',y='tip',size=2,aspect=4,data=tips)
Explanation: Size and Aspect
You can use matplotlib's plt.figure(figsize=(width,height)) to change the size of most seaborn plots.
You can control the size and aspect ratio of most seaborn grid plots by passing in parameters: size, and aspect. For example:
End of explanation
sns.set_context('poster',font_scale=4)
sns.countplot(x='sex',data=tips,palette='coolwarm')
Explanation: Scale and Context
The set_context() allows you to override default parameters:
End of explanation
# Note: sns.puppyplot() from the original course notebook is not a real seaborn function
# and would raise an AttributeError; see the documentation link below for the actual API.
Explanation: Check out the documentation page for more info on these topics:
https://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.html
End of explanation |
10,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Siro Moreno Martín
How to destroy and corrupt everything that is good and beautiful
Sometimes, we have to use Excel
It has to be said that there are people out there who do wonders with it, even if thinking about it gives me the shivers
Step1: We will start by loading data into Python from an Excel document.
Step2: Now that we have loaded data from an Excel file into Python, and we can still use other libraries despite such an atrocity, we will create a new empty Excel file to store all our junk and send it to someone who cares.
Step3: That's all, craft-maniacs! Now you can play with Excel and Python to create abominations that will populate the earth!
I hope this mini-talk helps you build the Definitive Kludge of the Universe that you surely want to make, but if you still want to know more about this package, you can check its documentation | Python Code:
# Let's import numpy and pandas to play around a little
import numpy as np
import pandas as pd
from openpyxl import Workbook
from openpyxl import load_workbook
# We will also use functions to draw charts
from openpyxl.chart import (
ScatterChart,
Reference,
Series,
)
Explanation: Siro Moreno Martín
How to destroy and corrupt everything that is good and beautiful
Sometimes, we have to use Excel
It has to be said that there are people out there who do wonders with it, even if thinking about it gives me the shivers:
https://www.youtube.com/watch?v=AmlqgQidXtk
But the fact is that, to a greater or lesser extent, we all have to deal with Excel files at some point in life. And if you have to work with the data and fear that trying to program a macro might summon Shub-Niggurath, we can always count on the power of Anaconda!
For this talk I have used openpyxl, but there are other packages that also let you work with Excel files, such as xlrd and xlsxwriter.
Note: for the notebook styles to render properly, go to File -> Trust Notebook. If the colored letters still do not show up, run the last cell. The colors matter because reasons.
End of explanation
# Load the existing workbook
wb_cargado = load_workbook(filename='datos.xlsx', read_only=True)  # wb is an iterable workbook
ws_cargado = wb_cargado['datos']  # each element of the workbook is an iterable worksheet
# Check that our extremely important scientific data has loaded
for row in ws_cargado.rows:  # we can iterate over the rows of the worksheet
for cell in row:  # each row is iterable in turn; each element is a cell
print(cell.value)  # cells are not iterable any further. Thank the Gods.
# Let's dump the data into something more Pythonic, a list of lists, which is always cool.
datos = []
for row in ws_cargado.rows:
fila = []
for cell in row:
fila.append(cell.value)
datos.append(fila)
# We can now pass this data to a pandas DataFrame, for example. Pandas is cool too.
pd.DataFrame(datos)
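# If the first row of the sheet held column names, a possible variant (sketch) would be:
# pd.DataFrame(datos[1:], columns=datos[0])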
# If we try to use indexing as if it were an array, we can't.
# This prevents the demons of dimension J from invading the earth.
ws_cargado.rows[1:5]
# We have to use the built-in method to trace an alchemical rectangle of cells:
for row in ws_cargado.iter_rows('A1:B5'):
for cell in row:
print (cell.value)
# Inspecting the scientific data, we have discovered that it is not entirely correct.
# We must correct it to something more appropriate to the act of using Excel
nombres = []
for row in ws_cargado.rows:
nombres.append(row[1].value)
new_data = []
for word in nombres:
word = word.replace('mariposas', 'avispas')
word =word.replace('luz', 'oscuridad')
word =word.replace('amor', 'sufrimiento')
word =word.replace('paz','odio')
word =word.replace('felicidad', 'desesperación')
new_data.append(word)
# We can also generate new data as usual, which we will save later.
# Now we will use numpy to compute vectors and stuff like that.
# Mathematics is sort of scientific, so this data is science too.
theta = np.linspace(-0.5 * np.pi, 1.5 *np.pi, 100)
theta_2 = np.linspace(1.5 * np.pi, 7.5*np.pi, 6)
theta = np.concatenate((theta, theta_2))
x = np.cos(theta)
y = np.sin(theta)
Explanation: We will start by loading data into Python from an Excel document.
End of explanation
# Create a new workbook
wb = Workbook()
# Select the active worksheet
ws = wb.active
# Let's create more sheets, simply because we can.
ws1 = wb.create_sheet()
ws1 = wb.create_sheet()
# Want to know the names of the new worksheets? You can!
wb.get_sheet_names()
# Another way to get the names of those things is to iterate over the workbook:
# each element is a worksheet.
# The people who made this really loved this iterating-over-things business.
for sheet in wb:
print(sheet.title)
# Let's load the corrected data into the new workbook using a loop
for row in range(len(new_data)):
cell_name = 'A' + str(row + 1)
ws[cell_name] = new_data[row]
# With another loop we will load the data generated by science.
colname = ['B','C']
data = np.array([x, y])
for col in range(2):
for row in range(1, len(x)+1):
letter = colname[col]
name = letter + str(row)
ws[name] = data[col, row - 1]
# Now let's add a chart to the workbook. That's what Excel is for, right?
chart = ScatterChart()
chart.title = "Scatter Chart"
chart.style = 44
chart.x_axis.title = 'Maldad'
chart.y_axis.title = 'Obscenidad'
# Add a data series to plot the science on the sheet
xvalues = Reference(ws, min_col=2, min_row=1, max_row=len(x)+1)
values = Reference(ws, min_col=3, min_row=1, max_row=len(x)+1)
series = Series(values, xvalues, title_from_data=False)
chart.series.append(series)
# Size matters
chart.height= 15
chart.width = 18
# After defining it, we slap it onto the workbook
ws.add_chart(chart, "C1")
# Let's save our horrible thingy before we collapse
# and the world misses out on enjoying our masterpiece
wb.save('resultado.xlsx')
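# Optional sanity check (a sketch): reopen the file we just saved and list its sheets
wb_check = load_workbook('resultado.xlsx')
print(wb_check.get_sheet_names())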
Explanation: Now that we have loaded data from an Excel file into Python, and we can still use other libraries despite such an atrocity, we will create a new empty Excel file to store all our junk and send it to someone who cares.
End of explanation
# Notebook style. So the colorful letters show up.
from IPython.core.display import HTML
css_file = './static/style.css'
HTML(open(css_file, "r").read())
Explanation: That's all, craft-maniacs! Now you can play with Excel and Python to create abominations that will populate the earth!
I hope this mini-talk helps you build the Definitive Kludge of the Universe that you surely want to make, but if you still want to know more about this package, you can check its documentation:
https://openpyxl.readthedocs.org/en/default/index.html
End of explanation |
10,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Making new layers and models via subclassing
<table class="tfo-notebook-buttons" align="left">
<td data-segment-approved="false"> <a target="_blank" href="https
Step2: The Layer class: the combination of state (weights) and some computation
One of the central abstractions in Keras is the Layer class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass).
Here's a densely-connected layer. It has a state: the variables w and b.
Step3: You would use a layer by calling it on some tensor input(s), much like a Python function.
Step4: Note that the weights w and b are automatically tracked by the layer upon being set as layer attributes:
Step5: Note you also have access to a quicker shortcut for adding weight to a layer: the add_weight() method:
Step6: Layers can have non-trainable weights
Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are not meant to be taken into account during backpropagation, when you are training the layer.
Here's how to add and use a non-trainable weight:
Step7: It's part of layer.weights, but it gets categorized as a non-trainable weight:
Step8: Best practice: deferring weight creation until the shape of the inputs is known
The Linear layer above took an input_dim argument that was used to compute the shape of the weights w and b in __init__():
Step9: In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer.
In the Keras API, we recommend creating layer weights in the build(self, inputs_shape) method of your layer. Like this:
Step10: The __call__() method of your layer will automatically run build the first time it is called. You now have a layer that's lazy and thus easier to use:
Step11: Layers are recursively composable
If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer.
We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build method, they will be built when the outer layer gets built).
Step12: The add_loss() method
When writing the call() method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling self.add_loss(value):
Step13: These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass.
Step14: In addition, the loss property also contains regularization losses created for the weights of any inner layer:
Step15: These losses are meant to be taken into account when writing training loops, like this:
```python
Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset
Step16: The add_metric() method
Similarly to add_loss(), layers also have an add_metric() method for tracking the moving average of a quantity during training.
Consider the following "logistic endpoint" layer. It takes predictions and targets as inputs, computes a loss that it tracks via add_loss(), and computes an accuracy scalar that it tracks via add_metric().
Step17: Metrics tracked in this way are accessible via layer.metrics:
Step18: Just like for add_loss(), these metrics are tracked by fit():
Step19: You can optionally enable serialization on your layers
If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config() method:
Step20: Note that the __init__() method of the base Layer class takes some keyword arguments, in particular a name and a dtype. It's good practice to pass these arguments to the parent class in __init__() and to include them in the layer config:
Step21: If you need more flexibility when deserializing the layer from its config, you can also override the from_config() class method. This is the base implementation of from_config():
python
def from_config(cls, config)
Step26: Privileged mask argument in the call() method
The other privileged argument supported by call() is the mask argument.
You will find it in all Keras RNN layers. A mask is a boolean tensor (one boolean value per timestep in the input) used to skip certain input timesteps when processing timeseries data.
Keras will automatically pass the correct mask argument to __call__() for layers that support it, when a mask is generated by a prior layer. Mask-generating layers are the Embedding layer configured with mask_zero=True, and the Masking layer.
To learn more about masking and how to write masking-enabled layers, please check out the guide on understanding padding and masking.
The Model class
In general, you will use the Layer class to define inner computation blocks, and will use the Model class to define the outer model -- the object you will train.
For instance, in a ResNet50 model, you would have several ResNet blocks subclassing Layer, and a single Model encompassing the entire ResNet50 network.
The Model class has the same API as Layer, with the following differences:
It exposes built-in training, evaluation, and prediction loops (model.fit(), model.evaluate(), model.predict()).
It exposes the list of its inner layers, via the model.layers property.
It exposes saving and serialization APIs (save(), save_weights()...)
Effectively, the Layer class corresponds to what we refer to in the literature as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block").
Meanwhile, the Model class corresponds to what is referred to in the literature as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network").
So if you're wondering "should I use the Layer class or the Model class?", ask yourself: will I need to call fit() on it? Will I need to call save() on it? If so, go with Model. If not (either because your class is just a block in a bigger system, or because you are writing training & saving code yourself), use Layer.
For instance, we could take our mini-resnet example above, and use it to build a Model that we could train with fit(), and that we could save with save_weights():
```python
class ResNet(tf.keras.Model)
Step27: Let's write a simple training loop on MNIST:
Step28: Note that since the VAE is subclassing Model, it features built-in training loops. So you could also have trained it like this:
Step29: Beyond object-oriented development: the Functional API
Was this example too much object-oriented development for you? You can also build models using the Functional API. Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match.
For instance, the Functional API example below reuses the same Sampling layer we defined in the example above: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow import keras
Explanation: Making new layers and models via subclassing
<table class="tfo-notebook-buttons" align="left">
<td data-segment-approved="false"> <a target="_blank" href="https://tensorflow.google.cn/guide/keras/custom_layers_and_models"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a> </td>
<td data-segment-approved="false"><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_layers_and_models.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td data-segment-approved="false"> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_layers_and_models.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a> </td>
<td data-segment-approved="false"> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/custom_layers_and_models.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a> </td>
</table>
Setup
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: The Layer class: a combination of state (weights) and some computation
One of the central abstractions in Keras is the Layer class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass).
Below is a densely-connected layer. It has a state: the variables w and b.
End of explanation
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
Explanation: You would use a layer by calling it on some tensor input(s), much like a Python function.
End of explanation
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
Explanation: Note that the weights w and b are automatically tracked by the layer upon being set as layer attributes:
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
Explanation: Note you also have access to a quicker shortcut for adding weights to a layer: the add_weight() method:
End of explanation
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
Explanation: Layers can have non-trainable weights
Besides trainable weights, you can also add non-trainable weights to a layer. Such weights are not taken into account during backpropagation when you are training the layer.
Here's how to add and use a non-trainable weight:
End of explanation
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
Explanation: It's part of layer.weights, but it gets categorized as a non-trainable weight:
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: Best practice: deferring weight creation until the shape of the inputs is known
The Linear layer above took an input_dim argument that was used to compute the shape of the weights w and b in __init__():
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer.
In the Keras API, we recommend creating layer weights in the build(self, inputs_shape) method of your layer. Like this:
End of explanation
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
Explanation: The __call__() method of your layer will automatically run build the first time it is called. You now have a layer that's lazy and thus easier to use:
End of explanation
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
Explanation: Layers are recursively composable
If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer.
We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build method, they will be built when the outer layer gets built).
End of explanation
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
Explanation: The add_loss() method
When writing the call() method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling self.add_loss(value):
End of explanation
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # This is the loss created during the call above
Explanation: These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass.
End of explanation
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
Explanation: In addition, the loss property also contains regularization losses created for the weights of any inner layer:
End of explanation
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
Explanation: These losses are meant to be taken into account when writing training loops, like this:
```python
Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset:
with tf.GradientTape() as tape:
logits = layer(x_batch_train) # Logits for this minibatch
# Loss value for this minibatch
loss_value = loss_fn(y_batch_train, logits)
# Add extra losses created during this forward pass:
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
```
For a detailed guide about writing training loops, see the guide on writing a training loop from scratch.
These losses also work seamlessly with fit() (they get automatically summed and added to the main loss, if any):
End of explanation
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
Explanation: The add_metric() method
Similarly to add_loss(), layers also have an add_metric() method for tracking the moving average of a quantity during training.
Consider the following "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via add_loss(), and it computes an accuracy scalar, which it tracks via add_metric().
End of explanation
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
Explanation: Metrics tracked in this way are accessible via layer.metrics:
End of explanation
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
Explanation: Just like for add_loss(), these metrics are also tracked by fit():
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
Explanation: You can optionally enable serialization on your layers
If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config() method:
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
Explanation: Note that the __init__() method of the base Layer class takes some keyword arguments, in particular a name and a dtype. It's good practice to pass these arguments to the parent class in __init__() and to include them in the layer config:
End of explanation
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
Explanation: If you need more flexibility when deserializing the layer from its config, you can also override the from_config() class method. This is the base implementation of from_config():
python
def from_config(cls, config):
return cls(**config)
To learn more about serialization and saving, see the complete guide to saving and serializing models.
Privileged training argument in the call() method
Some layers, in particular the BatchNormalization layer and the Dropout layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a training (boolean) argument in the call() method.
By exposing this argument in call(), you enable the built-in training and evaluation loops (e.g. fit()) to correctly use the layer in training and inference.
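For instance, one case where an override helps is keeping old saved configs working after an argument was renamed. The sketch below is illustrative and not part of the original guide; the "n_units" legacy key name is hypothetical, and the rest of the class comes from the Linear layer defined above.
```python
class LinearCompat(Linear):
    @classmethod
    def from_config(cls, config):
        config = dict(config)        # avoid mutating the caller's dict
        if "n_units" in config:      # hypothetical legacy key -> current name
            config["units"] = config.pop("n_units")
        return cls(**config)

layer = LinearCompat.from_config({"units": 64})
```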
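A quick illustrative check of the CustomDropout layer defined above (this snippet is an addition, not part of the original guide): dropout only kicks in when training=True.
```python
layer = CustomDropout(0.5)
x = tf.ones((2, 4))
print(layer(x, training=False))  # inputs pass through unchanged
print(layer(x, training=True))   # roughly half the entries zeroed, the rest scaled by 1/(1 - rate)
```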
End of explanation
from tensorflow.keras import layers
class Sampling(layers.Layer):
Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
Maps MNIST digits to a triplet (z_mean, z_log_var, z).
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
Converts z, the encoded digit vector, back into a readable digit.
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
Combines the encoder and decoder into an end-to-end model for training.
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
Explanation: Privileged mask argument in the call() method
The other privileged argument supported by call() is the mask argument.
You will find it in all Keras RNN layers. A mask is a boolean tensor (one boolean value per input timestep) used to skip certain input timesteps when processing timeseries data.
Keras will automatically pass the correct mask argument to __call__() for layers that support it, when a mask is generated by a prior layer. Mask-generating layers are the Embedding layer configured with mask_zero=True, and the Masking layer.
To learn more about masking and how to write masking-enabled layers, please check out the guide on understanding padding and masking.
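As a small illustrative sketch (an addition, not part of the original guide): a mask produced by an Embedding layer with mask_zero=True is forwarded automatically to any layer whose call() signature accepts a mask argument. The TemporalSum layer name here is made up for the example.
```python
class TemporalSum(keras.layers.Layer):
    def call(self, inputs, mask=None):
        if mask is not None:
            # zero out the padded timesteps before summing over time
            inputs = inputs * tf.cast(mask, inputs.dtype)[:, :, tf.newaxis]
        return tf.reduce_sum(inputs, axis=1)

inputs = keras.Input(shape=(None,), dtype="int32")
x = keras.layers.Embedding(input_dim=100, output_dim=8, mask_zero=True)(inputs)
outputs = TemporalSum()(x)  # receives the mask generated by the Embedding layer
model = keras.Model(inputs, outputs)
```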
The Model class
In general, you will use the Layer class to define inner computation blocks, and will use the Model class to define the outer model -- the object you will train.
For instance, in a ResNet50 model, you would have several ResNet blocks subclassing Layer, and a single Model encompassing the entire ResNet50 network.
The Model class has the same API as Layer, with the following differences:
It exposes built-in training, evaluation, and prediction loops (model.fit(), model.evaluate(), model.predict()).
It exposes the list of its inner layers, via the model.layers property.
It exposes saving and serialization APIs (save(), save_weights()...)
Effectively, the Layer class corresponds to what we refer to in the literature as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block").
Meanwhile, the Model class corresponds to what is referred to in the literature as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network").
So if you're wondering, "should I use the Layer class or the Model class?", ask yourself: will I need to call fit() on it? Will I need to call save() on it? If so, go with Model. If not (either because your class is just a block in a bigger system, or because you are writing training & saving code yourself), use Layer.
For instance, we could take our mini-resnet example above, and use it to build a Model that we could train with fit(), and that we could save with save_weights():
```python
class ResNet(tf.keras.Model):
def __init__(self, num_classes=1000):
super(ResNet, self).__init__()
self.block_1 = ResNetBlock()
self.block_2 = ResNetBlock()
self.global_pool = layers.GlobalAveragePooling2D()
self.classifier = Dense(num_classes)
def call(self, inputs):
x = self.block_1(inputs)
x = self.block_2(x)
x = self.global_pool(x)
return self.classifier(x)
resnet = ResNet()
dataset = ...
resnet.fit(dataset, epochs=10)
resnet.save(filepath)
```
Putting it all together: an end-to-end example
Here's what you've learned so far:
A Layer encapsulates a state (created in __init__() or build()) and some computation (defined in call()).
Layers can be recursively nested to create new, bigger computation blocks.
Layers can create and track losses (typically regularization losses) as well as metrics, via add_loss() and add_metric().
The outer container, the thing you want to train, is a Model. A Model is just like a Layer, but with added training and serialization utilities.
Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE) and train it on MNIST digits.
Our VAE will be a subclass of Model, built as a nested composition of layers that subclass Layer. It will feature a regularization loss (KL divergence).
End of explanation
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
Explanation: Let's write a simple training loop on MNIST:
End of explanation
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
Explanation: Note that since the VAE is subclassing Model, it features built-in training loops. So you could also have trained it like this:
End of explanation
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
Explanation: Beyond object-oriented development: the Functional API
Was this example too much object-oriented development for you? You can also build models using the Functional API. Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix and match.
For instance, the Functional API example below reuses the same Sampling layer we defined in the example above:
End of explanation |
10,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saw a great post by Igor Viktorovich on VK
Step1: X = [Titan, Naga, Djinn, Mage, Golem, Gargoyle, Gremlin, Sold gems]
Step2: A solution is found almost instantly.
Step3: The last element of the answer is 0, so no gems need to be sold. The strength of the assembled army
Step4: Resources spent
Step5: Mercury and gems were used up completely, while some gold is still left.
An attentive reader may have noticed that the solution is not entirely honest. I came across the problem late at night and reached for the tool I already knew. It was pure luck that the solution turned out to be integer-valued. In general, that is not always the case. Here is the result for a problem with the same conditions, but where Denis has only 100k gold
Step6: In the general ILP case, you can't always just nudge the solution to integers like this. Which is a great excuse to forget about the familiar L-BFGS and dive into the world of integer linear problems.
Integer Linear Problem
In principle, it's known how to reduce the problem to an integer one using additional cutting constraints (the cutting-plane method), but doing that on top of scipy isn't worth it.
I started digging from this page
Step7: Now we had to sell gems.
Step8: Ranged, flying
Step9: Resources spent | Python Code:
import scipy.optimize
import numpy as np
import pandas as pd
gold = int(2 * 1e5)
gems = 115
mercury = 80
distant_min_health = 4000
air_min_health = 2000
gem_price = 500
units = [
{'name': 'titan', 'health': 300, 'gold': 5000, 'mercury': 1, 'gems': 3, 'available': 10},
{'name': 'naga', 'health': 120, 'gold': 1500, 'mercury': 0, 'gems': 2, 'available': 20},
{'name': 'djinn', 'health': 60, 'gold': 750, 'mercury': 1, 'gems': 1, 'available': 30},
{'name': 'mage', 'health': 40, 'gold': 500, 'mercury': 1, 'gems': 1, 'available': 55},
{'name': 'golem', 'health': 35, 'gold': 400, 'mercury': 1, 'gems': 0, 'available': 60},
{'name': 'gargoyle', 'health': 20, 'gold': 200, 'mercury': 0, 'gems': 0, 'available': 110},
{'name': 'gremlin', 'health': 4, 'gold': 70, 'mercury': 0, 'gems': 0, 'available': 500},
]
distant = ['titan', 'mage', 'gremlin']
air = ['djinn', 'gargoyle']
units = pd.DataFrame(units)
units.index = units.name
units['distant'] = 0
units['air'] = 0
units.loc[distant, 'distant'] = 1
units.loc[air, 'air'] = 1
units
Explanation: Saw a great post by Igor Viktorovich on VK: https://vk.com/wall137669108_516
Problem
Simple solution in continuous space
End of explanation
loss_function = -np.hstack([units.health, [0]])
A = [(-units.health * units.distant).tolist() + [0],
(-units.health * units.air).tolist() + [0],
units.mercury.tolist() + [0],
units.gems.tolist() + [1],
units.gold.tolist() + [-gem_price]]
b = [-distant_min_health, -air_min_health, mercury, gems, gold]
bounds = [(0, available) for available in units.available] + [(0, gems)]
%%time
result = scipy.optimize.linprog(loss_function, A, b, bounds=bounds)
Explanation: X = [Titan, Naga, Djinn, Mage, Golem, Gargoyle, Gremlin, Sold gems]
End of explanation
result
Explanation: A solution is found almost instantly.
End of explanation
-np.dot(result.x, np.array(A[0])), -np.dot(result.x, A[1])
Explanation: The last element of the answer is 0, so no gems need to be sold. The strength of the assembled army: 12875.
Strength of the ranged / flying units:
End of explanation
np.dot(result.x, A[2]), np.dot(result.x, A[3]), np.dot(result.x, A[4])
Explanation: Resources spent:
End of explanation
result.x
Explanation: Mercury and gems were used up completely, while some gold is still left.
An attentive reader may have noticed that the solution is not entirely honest. I came across the problem late at night and reached for the tool I already knew. It was pure luck that the solution turned out to be integer-valued. In general, that is not always the case. Here is the result for a problem with the same conditions, but where Denis has only 100k gold:
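A quick illustrative check (an addition to the original post) of why naive rounding is not a safe fix: flooring the fractional solution can break the minimum-strength constraints, while rounding up can blow the gold/mercury/gem budgets. Unit availability bounds are not checked here.
```python
for x_int in (np.floor(result.x), np.ceil(result.x)):
    feasible = all(np.dot(x_int, a) <= b_i + 1e-9 for a, b_i in zip(A, b))
    print(x_int.astype(int), 'feasible:', feasible)
```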
End of explanation
import pulp
problem = pulp.LpProblem("Heroes III", pulp.LpMaximize)
titan = pulp.LpVariable('titan', lowBound=0, upBound=units.loc['titan'].available, cat='Integer')
naga = pulp.LpVariable('naga', lowBound=0, upBound=units.loc['naga'].available, cat='Integer')
djinn = pulp.LpVariable('djinn', lowBound=0, upBound=units.loc['djinn'].available, cat='Integer')
mage = pulp.LpVariable('mage', lowBound=0, upBound=units.loc['mage'].available, cat='Integer')
golem = pulp.LpVariable('golem', lowBound=0, upBound=units.loc['golem'].available, cat='Integer')
gargoyle = pulp.LpVariable('gargoyle', lowBound=0, upBound=units.loc['gargoyle'].available, cat='Integer')
gremlin = pulp.LpVariable('gremlin', lowBound=0, upBound=units.loc['gremlin'].available, cat='Integer')
sold_gems = pulp.LpVariable('sold gems', lowBound=0, upBound=gems, cat='Integer')
army = [titan, naga, djinn, mage, golem, gargoyle, gremlin]
# gain function
problem += np.dot(army, units.health.values)
# restrictions
problem += np.dot(army, units.health * units.distant) >= distant_min_health
problem += np.dot(army, units.health * units.air) >= air_min_health
problem += np.dot(army, units.mercury) <= mercury
problem += np.dot(army + [sold_gems], units.gems.tolist() + [1]) <= gems
problem += np.dot(army + [sold_gems], units.gold.tolist() + [-gem_price]) <= gold
%%time
pulp.LpStatus[problem.solve()]
solution = pd.DataFrame([{'value': parameter.value()} for parameter in problem.variables()],
index=[parameter.name
for parameter in problem.variables()])
solution.loc[['sold_gems'] + units.name.tolist()]
Explanation: In the general ILP case, you can't always just nudge the solution to integers like this. Which is a great excuse to forget about the familiar L-BFGS and dive into the world of integer linear problems.
Integer Linear Problem
In principle, it's known how to reduce the problem to an integer one using additional cutting constraints (the cutting-plane method), but doing that on top of scipy isn't worth it.
I started digging from this page: http://prod.sandia.gov/techlib/access-control.cgi/2013/138847.pdf
And I stopped as soon as I found the word GNU. Interestingly, the project's founder is from MAI. A list of Python bindings: https://en.wikibooks.org/wiki/GLPK/Python
End of explanation
optimal_army = [unit.value() for unit in army]
Explanation: Now we had to sell gems.
End of explanation
np.dot(optimal_army, units.health * units.distant), np.dot(optimal_army, units.health * units.air)
Explanation: Ranged, flying
End of explanation
np.dot(optimal_army, units.mercury), \
np.dot(optimal_army + [sold_gems.value()], units.gems.tolist() + [1]), \
np.dot(optimal_army + [sold_gems.value()], units.gold.tolist() + [-gem_price]), \
Explanation: Resources spent
End of explanation |
10,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installing Python and GraphLab Create
Please follow the installation instructions here before getting started
Step1: Create some variables in Python
Step2: Advanced python types
Step3: Advanced printing
Step4: Conditional statements in python
Step5: Conditional loops
Step6: Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
Step7: Creating functions in Python
Again, we don't use {}, but just indent the lines that are part of the function.
Step8: We can also define simple functions with lambdas | Python Code:
print 'Hello World!'
Explanation: Installing Python and GraphLab Create
Please follow the installation instructions here before getting started:
So far we have done the following:
Installed Python
Started an IPython Notebook
Getting started with Python
End of explanation
i = 4 #int
type(i)
f = 4.1 #float
type(f)
b = True #boolean variable
s = "This is a string!"
print s
Explanation: Create some variables in Python
End of explanation
l = [3,1,2] #list
print l
d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} #dictionary
print d
print d['foo'] #element of a dictionary
n = None #Python's null type
type(n)
Explanation: Advanced python types
End of explanation
print "Our float value is %s. Our int value is %s." % (f,i) #Python is pretty good with strings
Explanation: Advanced printing
End of explanation
if i == 1 and f > 4:
print "The value of i is 1 and f is greater than 4."
elif i > 4 or f > 4:
print "i or f are both greater than 4."
else:
print "both i and f are less than or equal to 4"
Explanation: Conditional statements in python
End of explanation
print l
for e in l:
print e
Explanation: Conditional loops
End of explanation
counter = 6
while counter < 10:
print counter
counter += 1
Explanation: Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
End of explanation
def add2(x):
y = x + 2
return y
i = 6
add2(i)
Explanation: Creating functions in Python
Again, we don't use {}, but just indent the lines that are part of the function.
End of explanation
square = lambda x: x*x
square(5)
Explanation: We can also define simple functions with lambdas:
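As an illustrative aside (not part of the original tutorial), a lambda is also handy as a throwaway key function:
```python
# Sort in descending order using a lambda as the key.
sorted([3, 1, 2], key=lambda x: -x)
```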
End of explanation |
10,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step2: Inspecting Quantization Errors with Quantization Debugger
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: We can see that the original model has a much higher top-5 accuracy for our
small dataset, while the quantized model has a significant accuracy loss.
Step 1. Debugger preparation
The easiest way to use the quantization debugger is to provide the
tf.lite.TFLiteConverter that you have been using to quantize the model.
Step4: Step 2. Running the debugger and getting the results
When you call QuantizationDebugger.run(), the debugger will log differences
between float tensors and quantized tensors for the same op location, and
process them with given metrics.
Step5: The processed metrics can be accessed with
QuantizationDebugger.layer_statistics, or can be dumped to a text file in CSV
format with QuantizationDebugger.layer_statistics_dump().
Step6: For each row in the dump, the op name and index come first, followed by
quantization parameters and error metrics (including
user-defined error metrics, if any). The resulting CSV file
can be used to pick problematic layers with large quantization error metrics.
With pandas or other data processing libraries, we can inspect detailed
per-layer error metrics.
Step7: Step 3. Data analysis
There are various ways to analyze the results. First, let's add some useful
metrics derived from the debugger's outputs. (scale means the quantization
scale factor for each tensor.)
Range (256 / scale)
RMSE / scale (sqrt(mean_squared_error) / scale)
The RMSE / scale is close to 1 / sqrt(12) (~ 0.289) when quantized
distribution is similar to the original float distribution, indicating a good
quantized model. The larger the value, the more likely it is that the layer is not
being quantized well.
Step8: There are many layers with wide ranges, and some layers that have high
RMSE/scale values. Let's get the layers with high error metrics.
Step9: With these layers, you can try selective quantization to see if not quantizing
those layers improves model quality.
Step10: In addition to these, skipping quantization for the first few layers also helps
improve the quantized model's quality.
Step11: Selective Quantization
Selective quantization skips quantization for some nodes, so that the
calculation can happen in the original floating-point domain. When correct
layers are skipped, we can expect some model quality recovery at the cost of
increased latency and model size.
However, if you're planning to run quantized models on integer-only accelerators
(e.g. Hexagon DSP, EdgeTPU), selective quantization would cause fragmentation of
the model and would result in slower inference latency mainly caused by data
transfer cost between CPU and those accelerators. To prevent this, you can
consider running
quantization aware training
to keep all the layers in integer while preserving the model accuracy.
Quantization debugger's option accepts denylisted_nodes and denylisted_ops
options for skipping quantization for specific layers, or all instances of
specific ops. Using suspected_layers we prepared from the previous step, we
can use quantization debugger to get a selectively quantized model.
Step12: The accuracy is still lower compared to the original float model, but we have
notable improvement from the whole quantized model by skipping quantization for
~10 layers out of 111 layers.
You can also try to not quantized all ops in the same class. For example, to
skip quantization for all mean ops, you can pass MEAN to denylisted_ops.
Step13: With these techniques, we are able to improve the quantized MobileNet V3 model
accuracy. Next we'll explore advanced techniques to improve the model accuracy
even more.
Advanced usages
With the following features, you can further customize your debugging pipeline.
Custom metrics
By default, the quantization debugger emits five metrics for each float-quant
difference
Step14: The result of model_debug_metrics can be separately seen from
debugger.model_statistics.
Step15: Using (internal) mlir_quantize API to access in-depth features
Note
Step16: Whole model verify mode
The default behavior for the debug model generation is per-layer verify. In this
mode, the input for float and quantize op pair is from the same source (previous
quantized op). Another mode is whole-model verify, where the float and quantize
models are separated. This mode would be useful to observe how the error is
being propagated down the model. To enable, enable_whole_model_verify=True to
convert.mlir_quantize while generating the debug model manually.
Step17: Selective quantization from an already calibrated model
You can directly call convert.mlir_quantize to get the selectively quantized
model from an already calibrated model. This would be particularly useful when you
want to calibrate the model once, and experiment with various denylist
combinations. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
# Quantization debugger is available from TensorFlow 2.7.0
!pip uninstall -y tensorflow
!pip install tf-nightly
!pip install tensorflow_datasets --upgrade # imagenet_v2 needs latest checksum
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
#@title Boilerplates and helpers
MODEL_URI = 'https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5'
def process_image(data):
data['image'] = tf.image.resize(data['image'], (224, 224)) / 255.0
return data
# Representative dataset
def representative_dataset(dataset):
def _data_gen():
for data in dataset.batch(1):
yield [data['image']]
return _data_gen
def eval_tflite(tflite_model, dataset):
Evaluates tensorflow lite classification model with the given dataset.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_idx = interpreter.get_input_details()[0]['index']
output_idx = interpreter.get_output_details()[0]['index']
results = []
for data in representative_dataset(dataset)():
interpreter.set_tensor(input_idx, data[0])
interpreter.invoke()
results.append(interpreter.get_tensor(output_idx).flatten())
results = np.array(results)
gt_labels = np.array(list(dataset.map(lambda data: data['label'] + 1)))
accuracy = (
np.sum(np.argsort(results, axis=1)[:, -5:] == gt_labels.reshape(-1, 1)) /
gt_labels.size)
print(f'Top-5 accuracy (quantized): {accuracy * 100:.2f}%')
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(224, 224, 3), batch_size=1),
hub.KerasLayer(MODEL_URI)
])
model.compile(
loss='sparse_categorical_crossentropy',
metrics='sparse_top_k_categorical_accuracy')
model.build([1, 224, 224, 3])
# Prepare dataset with 100 examples
ds = tfds.load('imagenet_v2', split='test[:1%]')
ds = ds.map(process_image)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset(ds)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
test_ds = ds.map(lambda data: (data['image'], data['label'] + 1)).batch(16)
loss, acc = model.evaluate(test_ds)
print(f'Top-5 accuracy (float): {acc * 100:.2f}%')
eval_tflite(quantized_model, ds)
Explanation: Inspecting Quantization Errors with Quantization Debugger
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/quantization_debugger"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Although full-integer quantization provides improved model size and latency, the
quantized model won't always work as expected. It's usually expected for the
model quality (e.g. accuracy, mAP, WER) to be slightly lower than the original
float model. However, there are cases where the model quality can go below your
expectation or generated completely wrong results.
When this problem happens, it's tricky and painful to spot the root cause of the
quantization error, and it's even more difficult to fix the quantization error.
To assist this model inspection process, quantization debugger can be used
to identify problematic layers, and selective quantization can leave those
problematic layers in float so that the model accuracy can be recovered at the
cost of reduced benefit from quantization.
Note: This API is experimental, and there might be breaking changes in the API
in the course of improvements.
Quantization Debugger
Quantization debugger makes it possible to do quantization quality metric
analysis in the existing model. Quantization debugger can automate processes for
running model with a debug dataset, and collecting quantization quality metrics
for each tensors.
Note: Quantization debugger and selective quantization currently only works for
full-integer quantization with int8 activations.
Prerequisites
If you already have a pipeline to quantize a model, you have all necessary
pieces to run quantization debugger!
Model to quantize
Representative dataset
In addition to model and data, you will need to use a data processing framework
(e.g. pandas, Google Sheets) to analyze the exported results.
Setup
This section prepares libraries, MobileNet v3 model, and test dataset of 100
images.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset(ds)
# my_debug_dataset should have the same format as my_representative_dataset
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter, debug_dataset=representative_dataset(ds))
Explanation: We can see that the original model has a much higher top-5 accuracy for our
small dataset, while the quantized model has a significant accuracy loss.
Step 1. Debugger preparation
The easiest way to use the quantization debugger is to provide the
tf.lite.TFLiteConverter that you have been using to quantize the model.
End of explanation
debugger.run()
Explanation: Step 2. Running the debugger and getting the results
When you call QuantizationDebugger.run(), the debugger will log differences
between float tensors and quantized tensors for the same op location, and
process them with given metrics.
End of explanation
RESULTS_FILE = '/tmp/debugger_results.csv'
with open(RESULTS_FILE, 'w') as f:
debugger.layer_statistics_dump(f)
!head /tmp/debugger_results.csv
Explanation: The processed metrics can be accessed with
QuantizationDebugger.layer_statistics, or can be dumped to a text file in CSV
format with QuantizationDebugger.layer_statistics_dump().
End of explanation
layer_stats = pd.read_csv(RESULTS_FILE)
layer_stats.head()
Explanation: For each row in the dump, the op name and index come first, followed by
quantization parameters and error metrics (including
user-defined error metrics, if any). The resulting CSV file
can be used to pick problematic layers with large quantization error metrics.
With pandas or other data processing libraries, we can inspect detailed
per-layer error metrics.
End of explanation
layer_stats['range'] = 255.0 * layer_stats['scale']
layer_stats['rmse/scale'] = layer_stats.apply(
lambda row: np.sqrt(row['mean_squared_error']) / row['scale'], axis=1)
layer_stats[['op_name', 'range', 'rmse/scale']].head()
plt.figure(figsize=(15, 5))
ax1 = plt.subplot(121)
ax1.bar(np.arange(len(layer_stats)), layer_stats['range'])
ax1.set_ylabel('range')
ax2 = plt.subplot(122)
ax2.bar(np.arange(len(layer_stats)), layer_stats['rmse/scale'])
ax2.set_ylabel('rmse/scale')
plt.show()
Explanation: Step 3. Data analysis
There are various ways to analyze the results. First, let's add some useful
metrics derived from the debugger's outputs. (scale means the quantization
scale factor for each tensor.)
Range (256 / scale)
RMSE / scale (sqrt(mean_squared_error) / scale)
The RMSE / scale is close to 1 / sqrt(12) (~ 0.289) when quantized
distribution is similar to the original float distribution, indicating a good
quantized model. The larger the value, the more likely it is that the layer is not
being quantized well.
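A quick numerical check of where the 1/sqrt(12) figure comes from (this snippet is an illustrative addition, not part of the original guide): rounding values to a grid of step `scale` gives an error roughly uniform on [-scale/2, scale/2], whose RMS is scale/sqrt(12).
```python
# Monte Carlo sanity check: RMSE / scale for simple rounding is ~0.289.
scale = 0.1
x = np.random.normal(size=100000)
x_q = np.round(x / scale) * scale
print(np.sqrt(np.mean((x - x_q) ** 2)) / scale)  # ~ 0.289
```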
End of explanation
layer_stats[layer_stats['rmse/scale'] > 0.7][[
'op_name', 'range', 'rmse/scale', 'tensor_name'
]]
Explanation: There are many layers with wide ranges, and some layers that have high
RMSE/scale values. Let's get the layers with high error metrics.
End of explanation
suspected_layers = list(
layer_stats[layer_stats['rmse/scale'] > 0.7]['tensor_name'])
Explanation: With these layers, you can try selective quantization to see if not quantizing
those layers improves model quality.
End of explanation
suspected_layers.extend(list(layer_stats[:5]['tensor_name']))
Explanation: In addition to these, skipping quantization for the first few layers also helps
improve the quantized model's quality.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
denylisted_nodes=suspected_layers)
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
selective_quantized_model = debugger.get_nondebug_quantized_model()
eval_tflite(selective_quantized_model, ds)
Explanation: Selective Quantization
Selective quantization skips quantization for some nodes, so that the
calculation can happen in the original floating-point domain. When correct
layers are skipped, we can expect some model quality recovery at the cost of
increased latency and model size.
However, if you're planning to run quantized models on integer-only accelerators
(e.g. Hexagon DSP, EdgeTPU), selective quantization would cause fragmentation of
the model and would result in slower inference latency mainly caused by data
transfer cost between CPU and those accelerators. To prevent this, you can
consider running
quantization aware training
to keep all the layers in integer while preserving the model accuracy.
Quantization debugger's option accepts denylisted_nodes and denylisted_ops
options for skipping quantization for specific layers, or all instances of
specific ops. Using suspected_layers we prepared from the previous step, we
can use quantization debugger to get a selectively quantized model.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
denylisted_ops=['MEAN'])
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
selective_quantized_model = debugger.get_nondebug_quantized_model()
eval_tflite(selective_quantized_model, ds)
Explanation: The accuracy is still lower compared to the original float model, but we have
notable improvement from the whole quantized model by skipping quantization for
~10 layers out of 111 layers.
You can also try to not quantized all ops in the same class. For example, to
skip quantization for all mean ops, you can pass MEAN to denylisted_ops.
End of explanation
debug_options = tf.lite.experimental.QuantizationDebugOptions(
layer_debug_metrics={
'mean_abs_error': (lambda diff: np.mean(np.abs(diff)))
},
layer_direct_compare_metrics={
'correlation':
lambda f, q, s, zp: (np.corrcoef(f.flatten(),
(q.flatten() - zp) / s)[0, 1])
},
model_debug_metrics={
'argmax_accuracy': (lambda f, q: np.mean(np.argmax(f) == np.argmax(q)))
})
debugger = tf.lite.experimental.QuantizationDebugger(
converter=converter,
debug_dataset=representative_dataset(ds),
debug_options=debug_options)
debugger.run()
CUSTOM_RESULTS_FILE = '/tmp/debugger_results.csv'
with open(CUSTOM_RESULTS_FILE, 'w') as f:
debugger.layer_statistics_dump(f)
custom_layer_stats = pd.read_csv(CUSTOM_RESULTS_FILE)
custom_layer_stats[['op_name', 'mean_abs_error', 'correlation']].tail()
Explanation: With these techniques, we are able to improve the quantized MobileNet V3 model
accuracy. Next we'll explore advanced techniques to improve the model accuracy
even more.
Advanced usages
With the following features, you can further customize your debugging pipeline.
Custom metrics
By default, the quantization debugger emits five metrics for each float-quant
difference: tensor size, standard deviation, mean error, max absolute error, and
mean squared error. You can add more custom metrics by passing them to options.
For each metrics, the result should be a single float value and the resulting
metric will be an average of metrics from all examples.
layer_debug_metrics: calculate metric based on diff for each op outputs
from float and quantized op outputs.
layer_direct_compare_metrics: rather than getting diff only, this will
calculate metric based on raw float and quantized tensors, and its
quantization parameters (scale, zero point)
model_debug_metrics: only used when float_model_(path|content) is
passed to the debugger. In addition to the op-level metrics, final layer
output is compared to the reference output from the original float model.
End of explanation
debugger.model_statistics
Explanation: The result of model_debug_metrics can be separately seen from
debugger.model_statistics.
End of explanation
from tensorflow.lite.python import convert
Explanation: Using (internal) mlir_quantize API to access in-depth features
Note: Some features in the folowing section,
TFLiteConverter._experimental_calibrate_only and converter.mlir_quantize are
experimental internal APIs, and subject to change in a non-backward compatible
way.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset(ds)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter._experimental_calibrate_only = True
calibrated_model = converter.convert()
# Note that enable_numeric_verify and enable_whole_model_verify are set.
quantized_model = convert.mlir_quantize(
calibrated_model,
enable_numeric_verify=True,
enable_whole_model_verify=True)
debugger = tf.lite.experimental.QuantizationDebugger(
quant_debug_model_content=quantized_model,
debug_dataset=representative_dataset(ds))
Explanation: Whole model verify mode
The default behavior for the debug model generation is per-layer verify. In this
mode, the input for float and quantize op pair is from the same source (previous
quantized op). Another mode is whole-model verify, where the float and quantize
models are separated. This mode would be useful to observe how the error is
being propagated down the model. To enable, enable_whole_model_verify=True to
convert.mlir_quantize while generating the debug model manually.
End of explanation
selective_quantized_model = convert.mlir_quantize(
calibrated_model, denylisted_nodes=suspected_layers)
eval_tflite(selective_quantized_model, ds)
Explanation: Selective quantization from an already calibrated model
You can directly call convert.mlir_quantize to get the selectively quantized
model from an already calibrated model. This would be particularly useful when you
want to calibrate the model once, and experiment with various denylist
combinations.
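For example, a rough sketch of such a sweep (an illustrative addition; the two candidate denylists below are arbitrary choices, and only APIs already used in this notebook are called):
```python
for denylist in [suspected_layers, suspected_layers[:5]]:
    candidate = convert.mlir_quantize(calibrated_model, denylisted_nodes=denylist)
    eval_tflite(candidate, ds)
```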
End of explanation |
10,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EOF analysis - global hgt500
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. It is similar to performing a principal components analysis on the data, except that the EOF method finds both temporal projections and spatial patterns. The term is also interchangeable with the geographically weighted PCAs in geophysics (https
Step1: 2. Load hgt500 data
Step2: 3. Detrend
Here we used the detrend from scipy.signal. See help
Step3: 4. Carry out EOF analysis
4.1 Create an EOF solver to do the EOF analysis
Cosine of latitude weights are applied before the computation of EOFs.
Step4: 4.3 Retrieve the leading EOFs
Here we retrieve the leading EOF spatial patterns and the corresponding PC time series from the solver, along with
the fraction of total variance (and the eigenvalue) associated with each mode.
Step5: 5. Visualize leading EOFs
5.1 Plot EOFs and PCs
Step6: 5.2 Check variances explained by leading EOFs | Python Code:
% matplotlib inline
import numpy as np
from scipy import signal
import numpy.polynomial.polynomial as poly
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from eofs.standard import Eof
Explanation: EOF analysis - global hgt500
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. It is similar to performing a principal components analysis on the data, except that the EOF method finds both temporal projections and spatial patterns. The term is also interchangeable with the geographically weighted PCAs in geophysics (https://en.wikipedia.org/wiki/Empirical_orthogonal_functions). The spatial patterns are the EOFs, and can be thought of as basis functions in terms of variance. The associated temporal projections are the principal components (PCs) and are the temporal coefficients of the EOF patterns.
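Schematically (a brief addition, with notation used loosely), the field is approximated by a sum of fixed spatial patterns modulated by time-varying coefficients:
$$X(t, s) \approx \sum_{k} \mathrm{PC}_k(t)\,\mathrm{EOF}_k(s),$$
where $s$ indexes grid points and $t$ indexes time.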
Python EOF library
The EOF library is used here from Dawson (http://ajdawson.github.io/eofs/).
eofs is a Python package for EOF analysis of spatial-temporal data. Using EOFs (empirical orthogonal functions) is a common technique to decompose a signal varying in time and space into a form that is easier to interpret in terms of spatial and temporal variance. Some of the key features of eofs are:
Suitable for large data sets: computationally efficient for the large output data sets of modern climate models.
Transparent handling of missing values: missing values are removed automatically during computations and placed back into output fields.
Automatic metadata: metadata from input fields is used to construct metadata for output fields.
No Compiler required: a fast implementation written in pure Python using the power of numpy, no Fortran or C dependencies.
Data Source
The hgt data is downloaded from https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.pressure.html.
We select hgt at 500mb level from 1979 to 2003, and convert them into yearly climatology using CDO.
cdo yearmean -sellevel,500/500, -selyear,1979/2003 hgt.mon.mean.nc hgt500.mon.mean.nc
1. Load basic libraries
End of explanation
infile = 'data/hgt500.mon.mean.nc'
ncin = Dataset(infile, 'r')
hgt = ncin.variables['hgt'][:,0,:,:]
lat = ncin.variables['lat'][:]
lon = ncin.variables['lon'][:]
ncin.close()
Explanation: 2. Load hgt500 data
End of explanation
nt,nlat,nlon = hgt.shape
hgt = hgt.reshape((nt,nlat*nlon), order='F')
hgt_detrend = signal.detrend(hgt, axis=0, type='linear', bp=0)
hgt_detrend = hgt_detrend.reshape((nt,nlat,nlon), order='F')
print(hgt_detrend.shape)
Explanation: 3. Detrend
Here we used the detrend from scipy.signal. See help: signal.detrend?
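A tiny illustrative check of what detrend does, on synthetic data (this snippet is an addition, not part of the original notebook):
```python
# signal.detrend removes the best-fit linear trend along the chosen axis.
t = np.arange(100)
series = 0.5 * t + np.random.normal(size=t.size)
print(np.polyfit(t, signal.detrend(series), 1)[0])  # slope ~ 0 after detrending
```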
End of explanation
wgts = np.cos(np.deg2rad(lat))
wgts = wgts.reshape(len(wgts), 1)
solver = Eof(hgt_detrend, weights=wgts)
Explanation: 4. Carry out EOF analysis
4.1 Create an EOF solver to do the EOF analysis
Cosine of latitude weights are applied before the computation of EOFs.
End of explanation
eof1 = solver.eofs(neofs=10)
pc1 = solver.pcs(npcs=10, pcscaling=0)
varfrac = solver.varianceFraction()
lambdas = solver.eigenvalues()
Explanation: 4.3 Retrieve the leading EOFs
Here we retrieve the leading EOF spatial patterns and the corresponding PC time series from the solver, along with
the fraction of total variance (and the eigenvalue) associated with each mode.
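Optionally (an illustrative addition, assuming your eofs version provides this method), the solver can also express each EOF as the correlation between its PC and the input field at every grid point, which is often easier to interpret:
```python
eof1_corr = solver.eofsAsCorrelation(neofs=10)
```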
End of explanation
parallels = np.arange(-90,90,30.)
meridians = np.arange(-180,180,30)
for i in range(0,2):
fig = plt.figure(figsize=(12,9))
plt.subplot(211)
m = Basemap(projection='cyl', llcrnrlon=min(lon), llcrnrlat=min(lat), urcrnrlon=max(lon), urcrnrlat=max(lat))
x, y = m(*np.meshgrid(lon, lat))
clevs = np.linspace(np.min(eof1[i,:,:].squeeze()), np.max(eof1[i,:,:].squeeze()), 21)
cs = m.contourf(x, y, eof1[i,:,:].squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
m.drawparallels(parallels, labels=[1,0,0,0])
m.drawmeridians(meridians, labels=[1,0,0,1])
cb = m.colorbar(cs, 'right', size='5%', pad='2%')
cb.set_label('EOF', fontsize=12)
plt.title('EOF ' + str(i+1), fontsize=16)
plt.subplot(212)
days = np.linspace(1979,2003,nt)
plt.plot(days, pc1[:,i], linewidth=2)
plt.axhline(0, color='k')
plt.xlabel('Year')
plt.ylabel('PC Amplitude')
plt.ylim(np.min(pc1.squeeze()), np.max(pc1.squeeze()))
Explanation: 5. Visualize leading EOFs
5.1 Plot EOFs and PCs
End of explanation
plt.figure(figsize=(11,6))
eof_num = range(1, 16)
plt.plot(eof_num, varfrac[0:15], linewidth=2)
plt.plot(eof_num, varfrac[0:15], linestyle='None', marker="o", color='r', markersize=8)
plt.axhline(0, color='k')
plt.xticks(range(1, 16))
plt.title('Fraction of the total variance represented by each EOF')
plt.xlabel('EOF #')
plt.ylabel('Variance Fraction')
plt.xlim(1, 15)
plt.ylim(np.min(varfrac), np.max(varfrac)+0.01)
Explanation: 5.2 Check variances explained by leading EOFs
End of explanation |
10,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Introduction
In this assignment, we'll be returning to the scenario we started analyzing in the Model Evaluation assignment -- analyzing the obesity epidemic in the United States. Obesity rates vary across the nation by geographic location. In this colab, we'll be exploring how obesity rates vary with different health or societal factors across US cities.
In the Model Evaluation assignment, we limited our analysis to high (>30%) and low (<30%) categories. Today we'll go one step further and predict the obesity rates themselves.
Our data science question
Step2: Questions
Q0A) Look through the dataframe and make sure you understand what each variable is describing.
Q0B) What are the units for each variable?
Part 1
Step3: Q1B) For each model, what are the units of the weights? Intercepts?
Q1C) Mathematically, what does the weight represent?
Q1D) Mathematically, what does the intercept represent?
Q1E) Looking visually at the plots of the regression models, which model do you think will be better at predicting obesity rates for new, unseen data points (cities)? Why?
Prediction Error
Step4: Q1F) What are the units of the RMSE for each model?
Q1G) Which model had better RMSE?
Out-Of-Sample Prediction
In contrast to in-sample prediction, we can also perform out-of-sample prediction, by using our model to make predictions on new, previously unseen data. This is akin to applying our model on a test set, in machine learning parlance. Out-of-sample prediction error measures how well our model can generalize to new data.
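A minimal sketch of how the two errors might be computed with the scikit-learn imports provided below (the names X_train, y_train, X_new, and y_new are placeholders for data prepared elsewhere in the notebook, not defined here):
```python
model_a = linear_model.LinearRegression().fit(X_train, y_train)
rmse_in = np.sqrt(mean_squared_error(y_train, model_a.predict(X_train)))   # in-sample
rmse_out = np.sqrt(mean_squared_error(y_new, model_a.predict(X_new)))      # out-of-sample
```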
Q1H) In general, would you expect in-sample prediction error, or out-of-sample prediction error to be higher?
Let's see how well our models perform on some US cities not included in the CDC500.
Step5: Q1I) How well did these models predict the obesity rates? Which model had better accuracy?
Q1J) For the model you selected in the question above, how much would you trust this model? What are its limitations?
Q1K) Can you think of any ways to create an even better model?
Part 2
Step6: Fit a Model
Now let's fit a linear regression model using all of the features in our dataframe.
Step7: Q2A) Look at the coefficients for each of the features. Which features contribute most to the prediction?
Prediction Error
Let's now analyze the in-sample and out-of-sample prediction error for our multiple linear regression model.
Step8: Q2B) How does the in-sample prediction RMSE compare with that of the single variable models A and B?
We'll also take a look at out-of-sample prediction error.
Step9: Q2B) How does the out-of-sample RMSE compare with that of the single variable models A and B?
Q2C) In general, how would you expect adding more variables to affect the resulting prediction error
Step10: Q2D) Take a look at the list of variables we'll be using this time. Do you think all of them will be useful/predictive?
Q2E) Based on your intuition, do you think adding all these variables will help or hinder predictive accuracy?
Let's now build a model and see what happens.
Step11: Let's also look at prediction error | Python Code:
# Setup/Imports
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
# Data Commons Python and Pandas APIs
import datacommons
import datacommons_pandas
# For manipulating data
import numpy as np
import pandas as pd
# For implementing models and evaluation methods
from sklearn import linear_model
from sklearn.metrics import r2_score, mean_squared_error
# For plotting/printing
from matplotlib import pyplot as plt
import seaborn as sns
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/intro_data_science/Regression_Basics_and_Prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Regression: Basics and Prediction
Regression analysis is a powerful process for finding statistical relationships between variables. It's one of the most commonly used tools seen in the data science world, often used for prediction and forecasting.
In this assignment, we'll be focusing on linear regression, which forms the basis for most regression models. In particular, we'll explore linear regression as a tool for prediction. We'll cover interpreting regression models, in part 2.
Learning Objectives:
Linear regression for prediction
Mean-Squared error
In-sample vs out-of-sample prediction
Single-variable vs. multivariate regression
The effect of increasing variables
Need extra help?
If you're new to Google Colab, take a look at this getting started tutorial.
To build more familiarity with the Data Commons API, check out these Data Commons Tutorials.
And for help with Pandas and manipulating data frames, take a look at the Pandas Documentation.
We'll be using the scikit-learn library for implementing our models today. Documentation can be found here.
As usual, if you have any other questions, please reach out to your course staff!
Part 0: Getting Set Up
Run the following code boxes to load the python libraries and data we'll be using today.
End of explanation
# Load the data we'll be using
city_dcids = datacommons.get_property_values(["CDC500_City"],
"member",
limit=500)["CDC500_City"]
# We've compiled a list of some nice Data Commons Statistical Variables
# to use as features for you
stat_vars_to_query = [
"Count_Person",
"Percent_Person_PhysicalInactivity",
"Percent_Person_SleepLessThan7Hours",
"Percent_Person_WithHighBloodPressure",
"Percent_Person_WithMentalHealthNotGood",
"Percent_Person_WithHighCholesterol",
"Percent_Person_Obesity"
]
# Query Data Commons for the data and remove any NaN values
raw_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,stat_vars_to_query)
raw_features_df.dropna(inplace=True)
# order columns alphabetically
raw_features_df = raw_features_df.reindex(sorted(raw_features_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
df = raw_features_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
df.insert(0, 'City Name', pd.Series(city_name_dict))
# Display results
display(df)
Explanation: Introduction
In this assignment, we'll be returning to the scenario we started analyzing in the Model Evaluation assignment -- analyzing the obesity epidemic in the United States. Obesity rates vary across the nation by geographic location. In this colab, we'll be exploring how obesity rates vary with different health or societal factors across US cities.
In the Model Evaluation assignment, we limited our analysis to high (>30%) and low (<30%) categories. Today we'll go one step further and predict the obesity rates themselves.
Our data science question: Can we predict the obesity rates of various US Cities based on other health or lifestyle factors?
Run the following code box to load the data. We've done some basic data cleaning and manipulation for you, but look through the code to make sure you understand what's going on.
End of explanation
var1 = "Count_Person"
var2 = "Percent_Person_PhysicalInactivity"
dep_var = "Percent_Person_Obesity"
df_single_vars = df[[var1, var2, dep_var]].copy()
x_a = df_single_vars[var1].to_numpy().reshape(-1, 1)
x_b = df_single_vars[var2].to_numpy().reshape(-1,1)
y = df_single_vars[dep_var].to_numpy().reshape(-1, 1)
# Fit models
model_a = linear_model.LinearRegression().fit(x_a,y)
model_b = linear_model.LinearRegression().fit(x_b,y)
# Make Predictions
predictions_a = model_a.predict(x_a)
predictions_b = model_b.predict(x_b)
df_single_vars["Prediction_A"] = predictions_a
df_single_vars["Prediction_B"] = predictions_b
# Plot Model A
print("Model A:")
print("---------")
print("Weights:", model_a.coef_)
print("Intercept:", model_a.intercept_)
fig, ax = plt.subplots()
p1 = sns.scatterplot(data=df_single_vars, x=var1, y=dep_var, ax=ax, color="orange")
p2 = sns.lineplot(data=df_single_vars, x=var1, y="Prediction_A", ax=ax, color="blue")
plt.show()
# Plot Model B
print("Model B:")
print("---------")
print("Weights:", model_b.coef_)
print("Intercept:", model_b.intercept_)
fig, ax = plt.subplots()
p1 = sns.scatterplot(data=df_single_vars, x=var2, y=dep_var, ax=ax, color="orange")
p2 = sns.lineplot(data=df_single_vars, x=var2, y="Prediction_B", ax=ax, color="blue")
plt.show()
Explanation: Questions
Q0A) Look through the dataframe and make sure you understand what each variable is describing.
Q0B) What are the units for each variable?
Part 1: Single Linear Regression
To help us build some intuition for how regression works, we'll start by using a single variable. We'll create two models, Model A and Model B. Model A will use Count_Person variable to predict obesity rates, while Model B will use the Percent_Person_PhysicalInactivity variable.
Model | Independent Variable | Dependent Variable
--- | --- | ---
Model A | Count_Person | Percent_Person_Obesity
Model B | Percent_Person_PhysicalInactivity | Percent_Person_Obesity
Q1A) Just using your intuition, which model do you think will be better at predicting obesity rates? Why?
Fit a Model
Let's now check your intuition by fitting linear regression models to our data.
Run the following code box to fit Model A and Model B.
End of explanation
print("Model A RMSE:", mean_squared_error(y, predictions_a, squared=False))
print("Model B RMSE:", mean_squared_error(y, predictions_b, squared=False))
Explanation: Q1B) For each model, what are the units of the weights? Intercepts?
Q1C) Mathematically, what does the weight represent?
Q1D) Mathematically, what does the intercept represent?
Q1E) Looking visually at the plots of the regression models, which model do you think will be better at predicting obesity rates for new, unseen data points (cities)? Why?
Prediction Error: MSE and RMSE
To quantify predictive accuracy, we find the prediction error, a metric of how far off our model predictions are from the true value. One of the most common metrics used is mean squared error:
$$ MSE = \frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2$$
MSE is a measure of the average squared difference between the predicted value and the actual value. The square ($^2$) can seem counterintuitive at first, but offers some nice mathematical properties.
There's also root mean squared error, the square root of the MSE, which scales the error metric to match the scale of the data points:
$$ RMSE = \sqrt{MSE} = \sqrt{\frac{1}{\text{# total data points}}\sum_{\text{all data points}}(\text{predicted value} - \text{actual value})^2}$$
Prediction error can actually refer to one of two things: in-sample prediction error or out-of-sample prediction error. We'll explore both in the sections below.
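As a small illustrative sketch (plain NumPy; the assignment itself uses sklearn's mean_squared_error in the code below), the two formulas translate almost literally into code:
errors = predictions_a - y          # predicted value - actual value
mse = np.mean(errors**2)            # mean squared error
rmse = np.sqrt(mse)                 # root mean squared error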
In-Sample Prediction
In-sample prediction refers to forecasting or predicting for a data point that was used to fit the model. This is akin to applying your model to the training set, in machine learning parlance. In-sample prediction error measures how well our model is able to reproduce the data we currently have.
Run the following code block to calculate the in-sample prediction RMSE for both models.
End of explanation
# make a prediction
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,stat_vars_to_query)
x_a_new = new_df[var1].to_numpy().reshape(-1,1)
x_b_new = new_df[var2].to_numpy().reshape(-1,1)
y_new = new_df[dep_var].to_numpy().reshape(-1, 1)
predicted_a_new = model_a.predict(x_a_new)
predicted_b_new = model_b.predict(x_b_new)
new_df["Model A Prediction"] = predicted_a_new
new_df["Model B Prediction"] = predicted_b_new
print("Model A:")
print("--------")
display(new_df[["Model A Prediction", dep_var]])
fig, ax = plt.subplots()
p0 = sns.scatterplot(data=df_single_vars, x=var1, y=dep_var, ax=ax, color="orange")
p1 = sns.scatterplot(data=new_df, x=var1, y=dep_var, ax=ax, color="red")
p2 = sns.lineplot(data=df_single_vars, x=var1, y="Prediction_A", ax=ax, color="blue")
plt.show()
print("RMSE:", mean_squared_error(y_new, predicted_a_new, squared=False))
print("")
print("Model B:")
print("--------")
display(new_df[["Model B Prediction", dep_var]])
fig, ax = plt.subplots()
p0 = sns.scatterplot(data=df_single_vars, x=var2, y=dep_var, ax=ax, color="orange")
p1 = sns.scatterplot(data=new_df, x=var2, y=dep_var, ax=ax, color="red")
p2 = sns.lineplot(data=df_single_vars, x=var2, y="Prediction_B", ax=ax, color="blue")
plt.show()
print("RMSE:", mean_squared_error(y_new, predicted_b_new, squared=False))
print("")
Explanation: Q1F) What are the units of the RMSE for each model?
Q1G) Which model had better RMSE?
Out-Of-Sample Prediction
In contrast to in-sample prediction, we can also perform out-of-sample prediction, by using our model to make predictions on new, previously unseen data. This is akin to applying our model on a test set, in machine learning parlance. Out-of-sample prediction error measures how well our model can generalize to new data.
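As an aside, a common way to obtain an out-of-sample estimate is to hold out a random test set before fitting. A minimal sketch with scikit-learn (for illustration only; this assignment instead evaluates on cities outside the CDC500):
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_b, y, test_size=0.2, random_state=0)
holdout_model = linear_model.LinearRegression().fit(x_train, y_train)
print("Held-out RMSE:", mean_squared_error(y_test, holdout_model.predict(x_test), squared=False))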
Q1H) In general, would you expect in-sample prediction error, or out-of-sample prediction error to be higher?
Let's see how well our models perform on some US cities not included in the CDC500.
End of explanation
display(df)
Explanation: Q1I) How well did these models predict the obesity rates? Which model had better accuracy?
Q1J) For the model you selected in the question above, how much would you trust this model? What are its limitations?
Q1K) Can you think of any ways to create an even better model?
Part 2: Multiple Linear Regression
Let's now see what happens if we increase the number of independent variables used to make our prediction. Using multiple independent variables is referred to as multiple linear regression.
Now let's use all the data we loaded at the beginning of the assignment. The following code box will display our dataframe in its entirety again, so you can refamiliarize yourself with the data we have available.
End of explanation
# fit a regression model
dep_var = "Percent_Person_Obesity"
y = df[dep_var].to_numpy().reshape(-1, 1)
x = df.loc[:, ~df.columns.isin([dep_var, "City Name"])]
model = linear_model.LinearRegression().fit(x,y)
predictions = model.predict(x)
df["Predicted"] = predictions
print("Features in Order:\n\n\t", x.columns)
print("\nWeights:\n\n\t", model.coef_)
print("\nIntercept:\n\n\t", model.intercept_)
Explanation: Fit a Model
Now let's fit a linear regression model using all of the features in our dataframe.
End of explanation
# Analyze in sample MSE
print("In-sample Prediction RMSE:", mean_squared_error(y, predictions, squared=False))
Explanation: Q2A) Look at the coefficients for each of the features. Which features contribute most to the prediction?
Prediction Error
Let's now analyze the in-sample and out-of-sample prediction error for our multiple linear regression model.
End of explanation
# Apply model to some out-of-sample datapoints
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,stat_vars_to_query)
# sort columns alphabetically
new_df = new_df.reindex(sorted(new_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
new_df = new_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(new_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
new_df.insert(0, 'City Name', pd.Series(city_name_dict))
display(new_df)
new_y = new_df[dep_var].to_numpy().reshape(-1, 1)
new_x = new_df.loc[:, ~new_df.columns.isin([dep_var, "City Name"])]
predicted = model.predict(new_x)
new_df["Prediction"] = predicted
display(new_df[["Prediction", dep_var]])
print("Out-of-sample RMSE:", mean_squared_error(new_y, predicted, squared=False))
Explanation: Q2B) How does the in-sample prediction RMSE compare with that of the single variable models A and B?
We'll also take a look at out-of-sample prediction error.
End of explanation
# Load new data
new_stat_vars = [
'Percent_Person_Obesity',
'Count_Household',
'Count_HousingUnit',
'Count_Person',
'Count_Person_1OrMoreYears_DifferentHouse1YearAgo',
'Count_Person_BelowPovertyLevelInThePast12Months',
'Count_Person_EducationalAttainmentRegularHighSchoolDiploma',
'Count_Person_Employed',
'GenderIncomeInequality_Person_15OrMoreYears_WithIncome',
'Median_Age_Person',
'Median_Income_Household',
'Median_Income_Person',
'Percent_Person_PhysicalInactivity',
'Percent_Person_SleepLessThan7Hours',
'Percent_Person_WithHighBloodPressure',
'Percent_Person_WithHighCholesterol',
'Percent_Person_WithMentalHealthNotGood',
'UnemploymentRate_Person'
]
# Query Data Commons for the data and remove any NaN values
large_features_df = datacommons_pandas.build_multivariate_dataframe(city_dcids,new_stat_vars)
large_features_df.dropna(axis='index', inplace=True)
# order columns alphabetically
large_features_df = large_features_df.reindex(sorted(large_features_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
large_df = large_features_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(city_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
large_df.insert(0, 'City Name', pd.Series(city_name_dict))
# Display results
display(large_df)
Explanation: Q2B) How does the out-of-sample RMSE compare with that of the single variable models A and B?
Q2C) In general, how would you expect adding more variables to affect the resulting prediction error: increase, decrease, or no substantial change?
Variables: The More, the Merrier?
As we've seen in the sections above, adding more variables to our regression model tends to increase model accuracy. But is adding more and more variables always a good thing?
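One quick way to probe this (a sketch for illustration, not part of the assignment itself, and usable once the expanded large_df frame has been loaded) is to look at how strongly the candidate features are correlated with one another, since highly redundant variables add little new information:
feature_corr = large_df.drop(columns=["City Name"]).corr()
print(feature_corr.round(2))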
Let's explore what happens when we add even more variables. We've compiled a new list of statistical variables to predict obesity rates with. Run the code boxes below to load some more data.
End of explanation
# Build a new model
dep_var = "Percent_Person_Obesity"
y = large_df[dep_var].to_numpy().reshape(-1, 1)
x = large_df.loc[:, ~large_df.columns.isin([dep_var, "City Name"])]
large_model = linear_model.LinearRegression().fit(x,y)
predictions = large_model.predict(x)
large_df["Predicted"] = predictions
# Get out-of-sample RMSE
print("Features in Order:\n\n\t", x.columns)
print("\nWeights:\n\n\t", model.coef_)
print("\nIntercept:\n\n\t", model.intercept_)
Explanation: Q2D) Take a look at the list of variables we'll be using this time. Do you think all of them will be useful/predictive?
Q2E) Based on your intuition, do you think adding all these variables will help or hinder predictive accuracy?
Let's now build a model and see what happens.
End of explanation
# Apply model to some out-of-sample datapoints
new_dcids = [
"geoId/0617610", # Cupertino, CA
"geoId/0236400", # Juneau, AK
"geoId/2467675", # Rockville, MD
"geoId/4530850", # Greenville, SC
"geoId/3103950", # Bellevue, NE
]
new_df = datacommons_pandas.build_multivariate_dataframe(new_dcids,new_stat_vars)
new_df.dropna(inplace=True)
# sort columns alphabetically
new_df = new_df.reindex(sorted(new_df.columns), axis=1)
# Add city name as a column for readability.
# --- First, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
new_df = new_df.copy(deep=True)
city_name_dict = datacommons.get_property_values(new_dcids, 'name')
city_name_dict = {key:value[0] for key, value in city_name_dict.items()}
new_df.insert(0, 'City Name', pd.Series(city_name_dict))
display(new_df)
new_y = new_df[dep_var].to_numpy().reshape(-1, 1)
new_x = new_df.loc[:, ~new_df.columns.isin([dep_var, "City Name"])]
new_predicted = large_model.predict(new_x)
new_df["Prediction"] = new_predicted
display(new_df[["Prediction", dep_var]])
print("In-sample Prediction RMSE:", mean_squared_error(y, predictions, squared=False))
print("Out-of-sample RMSE:", mean_squared_error(new_y, new_predicted, squared=False))
Explanation: Let's also look at prediction error:
End of explanation |
10,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial, we'll walk through downloading and preprocessing the compendium of ENCODE and Epigenomics Roadmap data.
This part won't be very iPython tutorial-ly...
First cd in the terminal over to the data directory and run the script get_dnase.sh.
That will download all of the BED files from ENCODE and Epigenomics Roadmap. Read the script to see where I'm getting those files from. Perhaps there will be more in the future, and you'll want to manipulate the links.
Once that has finished, we need to merge all of the BED files into one BED and an activity table.
I typically use the -y option to avoid the Y chromosome, since I don't know which samples were sequenced from male or female cells.
I'll use my default of extending the sequences to 600 bp, and merging sites that overlap by more than 200 bp. But you might want to edit these.
Step1: To convert the sequences to the format needed by Torch, we'll first convert to FASTA.
Step2: Finally, we convert to HDF5 for Torch and set aside some data for validation and testing.
-r permutes the sequences.
-c informs the script we're providing raw counts.
-v specifies the size of the validation set.
-t specifies the size of the test set. | Python Code:
!cd ../data; preprocess_features.py -y -m 200 -s 600 -o er -c genomes/human.hg19.genome sample_beds.txt
Explanation: In this tutorial, we'll walk through downloading and preprocessing the compendium of ENCODE and Epigenomics Roadmap data.
This part won't be very iPython tutorial-ly...
First cd in the terminal over to the data directory and run the script get_dnase.sh.
That will download all of the BED files from ENCODE and Epigenomics Roadmap. Read the script to see where I'm getting those files from. Perhaps there will be more in the future, and you'll want to manipulate the links.
Once that has finished, we need to merge all of the BED files into one BED and an activity table.
I typically use the -y option to avoid the Y chromosome, since I don't know which samples were sequenced from male or female cells.
I'll use my default of extending the sequences to 600 bp, and merging sites that overlap by more than 200 bp. But you might want to edit these.
End of explanation
!bedtools getfasta -fi ../data/genomes/hg19.fa -bed ../data/er.bed -s -fo ../data/er.fa
Explanation: To convert the sequences to the format needed by Torch, we'll first convert to FASTA.
End of explanation
!seq_hdf5.py -c -t 71886 -v 70000 ../data/er.fa ../data/er_act.txt ../data/er.h5
Explanation: Finally, we convert to HDF5 for Torch and set aside some data for validation and testing.
-r permutes the sequences.
-c informs the script we're providing raw counts.
-v specifies the size of the validation set.
-t specifies the size of the test set.
End of explanation |
10,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>
Download
Step1: Exercícios - Métodos e Funções | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>
Download: http://github.com/dsacademybr
End of explanation
# Exercise 1 - Create a function that prints the sequence of even numbers between 1 and 20 (the function takes no parameters) and
# then call the function to list the numbers
def listaPar():
for i in range(2, 21, 2):
print(i)
listaPar()
# Exercise 2 - Create a function that receives a string as an argument and returns the same string in uppercase letters.
# Call the function, passing a string as a parameter
def listaString(texto):
print(texto.upper())
return
listaString('Rumo à Análise de Dados')
# Exercise 3 - Create a function that receives a list of 4 elements as a parameter, adds 2 elements to the list and
# prints the list
def novaLista(lista):
lista.append(5)  # list.append() returns None, so append first and print the list afterwards
lista.append(6)
print(lista)
lista1 = [1, 2, 3, 4]
novaLista(lista1)
print(lista1)
# Exercise 4 - Create a function that receives one formal argument and an optional list of elements. Make two calls
# to the function, one with just 1 element and the second call with 4 elements
def printNum( arg1, *lista ):
print (arg1)
for i in lista:
print (i)
return;
# Call the function
printNum( 100 )
printNum( 'A', 'B', 'C' )
# Exercise 5 - Create an anonymous function and assign its return value to a variable called soma. The expression will receive 2
# numbers as parameters and return their sum
soma = lambda arg1, arg2: arg1 + arg2
print ("A soma é : ", soma( 452, 298 ))
# Exercise 6 - Run the code below and make sure you understand the difference between global and local variables
total = 0
def soma( arg1, arg2 ):
total = arg1 + arg2;
print ("Dentro da função o total é: ", total)
return total;
soma( 10, 20 );
print ("Fora da função o total é: ", total)
# Exercise 7 - Below you find a list of temperatures in degrees Celsius
# Create an anonymous function that converts each temperature to Fahrenheit
# Hint: to complete this exercise, you should create your lambda function inside another function
# (which will be studied in the next chapter). This lets you apply your function to each element of the list
# How do you find the mathematical formula that converts Celsius to Fahrenheit? Look it up!
Celsius = [39.2, 36.5, 37.3, 37.8]
Fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius)
print (list(Fahrenheit))
# Exercise 8
# Create a dictionary and list all of the dictionary's methods and attributes
dic = {'k1': 'Natal', 'k2': 'Recife'}
dir(dic)
import pandas as pd
pd.__version__
# Exercise 9
# Below you find the import of Pandas, one of the main Python packages for data analysis.
# Look carefully at all the available methods. You will use one of them in the next exercise.
import pandas as pd
dir(pd)
# ************* Challenge ************* (look it up in the Python documentation)
# Exercise 10 - Create a function that receives the file below as an argument and returns a descriptive statistical summary
# of the file. Hint: use Pandas and one of its methods, describe()
# File: "binary.csv"
import pandas as pd
file_name = "binary.csv"
def retornaArq(file_name):
df = pd.read_csv(file_name)
return df.describe()
retornaArq(file_name)
Explanation: Exercises - Methods and Functions
End of explanation |
10,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting wind profiles to energy potential
Wind turbines convert the kinetic energy of the wind to electrical energy. The amount of energy produced thus depends on the wind speed and the rotor area. If the wind speed is assumed to be uniform, then the calculation of the wind power is quite straightforward. However, if the vertical variability of the wind is taken into account, the power estimate is a bit different. In this document I will do some simple tests to assess the sensitivity of the wind power estimate to the wind speed and the method that is used to calculate it.
A conceptual figure
Step1: Basic power estimate
The total kinetic energy of the mean flow that passes through the rotor plane in a given unit of time is equal to
Step2: With this simple set-up it is easy to see that a small difference in wind speed translates to a large difference in wind energy
Step3: and that an accurate estimate of $u_*$ is essential
Step4: A more sophisticated approach
As can be seen from the figure above, a large difference in wind speed can occur between the top and the bottom of the rotor plane, and therefore a more realistic approach might give different results.
The wind speed is a function of altitude. The area of the rotor plane can also be expressed as a function of altitude, namely by integrating the width of the rotor plane with respect to $z$
Step5: A simple uncertainty analysis
Below I will create 10000 randomly disturbed wind profiles, and perform the two power calculations for all these wind profiles. With this, I can make an estimate of the uncertainty in the power calculation that results from the uncertainty in the wind. Moreover, by comparing the two power estimates, I can compute some statistics on the error that is introduced by neglecting the vertical variability of the wind. | Python Code:
# Initialization
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
%matplotlib inline
plt.style.use('fivethirtyeight')
def logprofile(z,ust):
''' Return u as function of z(array) and u_star
Uses Charnock relation for wind-wave interactions'''
z0 = 0.011/9.81*ust # Charnock
return ust/0.4*np.log(z/z0)
# Create artifical wind profile
ust = 0.25
z = np.arange(0,305,5)+5
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Create an ellipse that visualizes the rotor disk in the figure
rotor = Ellipse(xy=(6.5,100), width=.5, height=150, angle=0,alpha=.3)
# Create the figure
fig,ax = plt.subplots()
ax.plot(u,z)
ax.add_artist(rotor)
ax.annotate('Rotor plane',(6.5,100),(6.7,200),arrowprops=dict(facecolor='black',width=2,headwidth=8),fontsize=16)
ax.fill_between(ax.get_xlim(),175,25,color='g',alpha=.2)
ax.set_xlabel('Wind speed (m/s)')
ax.set_ylabel('Altitude (m)')
plt.show()
Explanation: Converting wind profiles to energy potential
Wind turbines convert the kinetic energy of the wind to electrical energy. The amount of energy produced thus depends on the wind speed and the rotor area. If the wind speed is assumed to be uniform, then the calculation of the wind power is quite straightforward. However, if the vertical variability of the wind is taken into account, the power estimate is a bit different. In this document I will do some simple tests to assess the sensitivity of the wind power estimate to the wind speed and the method that is used to calculate it.
A conceptual figure
End of explanation
def power(rho,r,u):
'''Return total wind power in MW as function of air density, rotor radius and wind speed at hub height'''
return .5 * rho * np.pi * r**2 * u**3 / 1e6
r = 75.
u = 8.
rho = 1.2
print 'Power estimate: %.2f MW'%power(rho,r,u)
Explanation: Basic power estimate
The total kinetic energy of the mean flow that passes through the rotor plane in a given unit of time is equal to:
$$
E = \frac{1}{2} m u^2 = \frac{1}{2} \rho V u^2 = \frac{1}{2} \rho A u t \, u^2 = \frac{1}{2} \rho A t u^3
$$
Where $m$ is the mass of the air, $u$ the mean horizontal wind speed, $\rho$ is air density, and $V$ is the volume of the air passing through the rotor plane $A$ in time $t$. The power $P$ in the mean wind is equal to
$$
P_{total} = E/t = \frac{1}{2} \rho A u^3
$$
and the fraction of it that is extracted by wind turbines and converted to electrical energy is
$$
P_{harvested} = \frac{1}{2} \rho A u^3 c_p
$$
where $c_p$, the power coefficient of the turbine, is a sort of overall turbine efficiency. Since I'm mainly interested in the meteorological input, I will not discuss the value of $c_p$ and will use the total power rather than the harvested power in the remainder.
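As a small hypothetical illustration (not part of the original analysis), an assumed $c_p$ would simply scale the total power; the value 0.4 below is an assumption for illustration, roughly in the range reached by modern turbines and below the Betz limit of about 0.59:
def power_harvested(rho, r, u, cp=0.4):
    '''Return harvested wind power in MW for an assumed power coefficient cp'''
    return .5 * rho * np.pi * r**2 * u**3 * cp / 1e6
print 'Harvested power estimate: %.2f MW'%power_harvested(1.2, 75., 8.)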
Obviously, $A$ is determined by the rotor radius, which is assumed to be 75 m in the sketch above. Often, a constant wind speed at hub-height is used in first order estimates of $P$. In the sketch, the hub-height is 100 m, where $u\approx$ 8 m/s. If we assume a constant air density of 1.2 kg m$^{-3}$, the total power can be calculated easily as
$$
P_{total} = \frac{1}{2} \rho \pi R^2 u^3 \approx 5.4 \text{ MW}
$$
End of explanation
for u in [7,8,9]:
print 'Wind speed: %.1f m/s, wind power: %.2f MW'%(u,power(rho,r,u))
Explanation: With this simple set-up it is easy to see that a small difference in wind speed translates to a large difference in wind energy:
End of explanation
for ust in [.2,.25,.3]:
u = logprofile(100,ust)
p = power(rho,r,u)
print 'u*: %.2f, wind speed: %.2f m/s, wind power: %.2f MW'%(ust,u,p)
Explanation: and that an accurate estimate of $u_*$ is essential:
End of explanation
# Vertical levels
dz = 1
z = np.arange(0,300+dz,dz)+dz
# Turbine characteristics
r = 75 # rotor radius
h = 100 # hub height
x = np.where(np.abs(z-h)<r,np.sqrt(r**2-(z-h)**2),0) # half-width of the rotor plane; np.where may warn about sqrt of negative values outside the rotor, but those entries are replaced by 0
# Logprofile
ust = 0.25
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Energy
rho = 1.2
es = sum(u**3*x)*rho*dz*1e-6 #sophisticated method
eb = .5 * rho * np.pi * r**2 * u[h]**3 / 1e6 #basic method
print 'Wind power with basic formula: %.2f MW'%eb
print 'Wind power with sophisticated formula: %.2f MW'%es
print 'Difference: %.2f MW'%(es-eb)
Explanation: A more sophisticated approach
As can be seen from the figure above, a large difference in wind speed can occur between the top and the bottom of the rotor plane, and therefore a more realistic approach might give different results.
The wind speed is a function of altitude. The area of the rotor plane can also be expressed as a function of altitude, namely by integrating the width of the rotor plane with respect to $z$:
$$
A = \int_{h-r}^{h+r} 2x(z) dz
$$
Here, $h$ is the hub height of the wind turbine, $r$ is the rotor radius, and $x(z)$ is the horizontal distance from the edges of the rotor plane to the vertical axis. An expression for $x(z)$ is given below:
$$
x(z) = \sqrt{r^2-(z-h)^2}
$$
If $|z-h|$ becomes larger than $r$ in the formula above, the square root has no real solutions. Thus, $x(z)$ is only defined in the rotor plane and is set to 0 at all other altitudes. To find the estimated wind power, we evaluate the following summation (note that the factor $\frac{1}{2}$ from the kinetic energy cancels against the factor 2 in the rotor-plane width $2x(z)$):
$$
\sum_z \rho x(z)u(z)^3 \Delta z
$$
End of explanation
# Vertical levels
dz = 1
z = np.arange(0,300+dz,dz)+dz
# Turbine characteristics
r = 75 # rotor radius
h = 100 # hub height
x = np.where(np.abs(z-h)<r,np.sqrt(r**2-(z-h)**2),0) # half-width of the rotor plane; np.where may warn about sqrt of negative values outside the rotor, but those entries are replaced by 0
# Store the output in these lists:
output_basic = []
output_sophisticated = []
# Perform 10000 load calculations
for i in range(10000):
# Logprofile
ust = 0.25
u = logprofile(z,ust)+np.random.normal(0,.1,len(z))
# Energy
rho = 1.2
es = sum(u**3*x)*rho*dz*1e-6
output_sophisticated.append(es)
eb = .5 * rho * np.pi * r**2 * u[h]**3 / 1e6
output_basic.append(eb)
# Some statistics
output_sophisticated = np.asarray(output_sophisticated)
print 'Sophisticated method:'
print 'Mean power: %.2f MW'%output_sophisticated.mean()
print 'Standard deviation: %.4f MW'%output_sophisticated.std()
output_basic = np.asarray(output_basic)
print '\nBasic method:'
print 'Mean power: %.2f MW'%output_basic.mean()
print 'Standard deviation: %.4f MW'%output_basic.std()
errors = output_basic-output_sophisticated
print '\nError statistics:'
print 'Mean absolute error: %.2f MW'%(np.mean(np.abs(errors)))
print 'Root mean square error: %.2f MW' %(np.sqrt(np.mean(errors*errors)))
Explanation: A simple uncertainty analysis
Below I will create 10000 randomly disturbed wind profiles, and perform the two power calculations for all these wind profiles. With this, I can make an estimate of the uncertainty in the power calculation that results from the uncertainty in the wind. Moreover, by comparing the two power estimates, I can compute some statistics on the error that is introduced by neglecting the vertical variability of the wind.
End of explanation |
10,351 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
i need to create a dataframe containing tuples from a series of dataframes arrays. What I need is the following: | Problem:
import pandas as pd
import numpy as np
a = pd.DataFrame(np.array([[1, 2],[3, 4]]), columns=['one', 'two'])
b = pd.DataFrame(np.array([[5, 6],[7, 8]]), columns=['one', 'two'])
def g(a,b):
return pd.DataFrame(np.rec.fromarrays((a.values, b.values)).tolist(),columns=a.columns,index=a.index)
result = g(a.copy(),b.copy()) |
10,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
第5回 ランキング学習(Ranking SVM)
この演習課題ページでは,Ranking SVMの実装であるSVM-rankの使い方を説明します.この演習ページの目的は,SVM-rankを用いてモデルの学習,テストデータに対するランク付けが可能になることです.
この演習ページでは以下のツールを使用します.
- SVM-rank (by Prof. Thorsten Joachims)
- https
Step1: 正しく学習できていれば, ../data/svmrank_sample/model というファイルが生成されているはずです.
Step2: 2.3 テストデータへの適用
先ほど学習したモデルを使って,実際にテストデータに対してランキングを行うには,svm_rank_classifyを利用します.
Step3: なお,テストデータ中の1列目の値は,正解順位(正確には重要度)です. テストデータに対する精度(テストデータ中のペアの順序関係をどれだけ正しく再現できたか)を計算する際に利用されます.
Step4: テストデータに対する実際のランキングはpredictionファイルを確認します. | Python Code:
! ../bin/svm_rank_learn -c 0.03 ../data/svmrank_sample/train.dat ../data/svmrank_sample/model
Explanation: 第5回 ランキング学習(Ranking SVM)
この演習課題ページでは,Ranking SVMの実装であるSVM-rankの使い方を説明します.この演習ページの目的は,SVM-rankを用いてモデルの学習,テストデータに対するランク付けが可能になることです.
この演習ページでは以下のツールを使用します.
- SVM-rank (by Prof. Thorsten Joachims)
- https://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html
1. SVM-rankのインストール
SVM-rankのページに従って,SVM-rankをインストールします.
まずは,svm_rank.tar.gzをダウンロードします.
- http://download.joachims.org/svm_rank/current/svm_rank.tar.gz
ダウンロードしたらファイルを解凍し,コンパイルしてください.
以下はその一例です.
$ mkdir svm_rank # 解凍するファイルを入れるフォルダを作成
$ mv svm_rank.tar.gz svm_rank #ダウンロードしたアーカイブを今作成したフォルダに移動
$ cd svm_rank
$ tar xzvf svm_rank.tar.gz #ファイルを解凍
$ make
正しくコンパイルができていれば, svm_rank_learn と svm_rank_classify という2つのファイルが生成されているはずです.
作成したsvm_rank_learn と svm_rank_classifyを適当な場所にコピーします.この演習ページでは, h29iro/bin/にコピーした前提でコードを進めます.
2.サンプルファイルの実行
h29iro/data/svmrank_sample/ にサンプルファイルを置いています.これは,SVM-rankのページで配布されている以下のファイルをコピーしたものです.
- http://download.joachims.org/svm_light/examples/example3.tar.gz
このサンプルファイルには,training.dat(訓練データ)と test.dat(テストデータ)が含まれています.
2.1 訓練データ
訓練データ(../data/svmrank_sample/train.dat)の中身はこのようになっています
3 qid:1 1:1 2:1 3:0 4:0.2 5:0 # 1A
2 qid:1 1:0 2:0 3:1 4:0.1 5:1 # 1B
1 qid:1 1:0 2:1 3:0 4:0.4 5:0 # 1C
1 qid:1 1:0 2:0 3:1 4:0.3 5:0 # 1D
1 qid:2 1:0 2:0 3:1 4:0.2 5:0 # 2A
2 qid:2 1:1 2:0 3:1 4:0.4 5:0 # 2B
1 qid:2 1:0 2:0 3:1 4:0.1 5:0 # 2C
1 qid:2 1:0 2:0 3:1 4:0.2 5:0 # 2D
2 qid:3 1:0 2:0 3:1 4:0.1 5:1 # 3A
3 qid:3 1:1 2:1 3:0 4:0.3 5:0 # 3B
4 qid:3 1:1 2:0 3:0 4:0.4 5:1 # 3C
1 qid:3 1:0 2:1 3:1 4:0.5 5:0 # 3D
詳しいフォーマットの中身は,SVM-rankのページを参照してください.
各行1列目の数値が,その文書のクエリqidに対する重要性を表しており,SVM-rankはこの値を元にpairwise preference集合を生成し,学習データとします.
たとえば,上記訓練データは,下記のpairwise preference集合を訓練データとして与えていることになります.
1A>1B, 1A>1C, 1A>1D, 1B>1C, 1B>1D, 2B>2A, 2B>2C, 2B>2D, 3C>3A, 3C>3B, 3C>3D, 3B>3A, 3B>3D, 3A>3D
(SVM-rankのページより引用)
また, 3列目以降の x:y という文字列は特徴量を表しており,x次元目の値がyであることを示しています.
たとえば,1行目のデータは,クエリ$q_1$に対して, $f_1 = 1.0, f_2=1.0, f_3=0.0, f_4=0.2, f_5=0.0$という特徴ベクトルを持った文書1Aの重要性が3であることを示しています.
2.2 訓練データの学習
訓練データを学習し,モデルを生成するには, svm_rank_learn を用います.
End of explanation
!cat ../data/svmrank_sample/model
Explanation: 正しく学習できていれば, ../data/svmrank_sample/model というファイルが生成されているはずです.
End of explanation
!cat ../data/svmrank_sample/test.dat
Explanation: 2.3 テストデータへの適用
先ほど学習したモデルを使って,実際にテストデータに対してランキングを行うには,svm_rank_classifyを利用します.
End of explanation
! ../bin/svm_rank_classify ../data/svmrank_sample/test.dat ../data/svmrank_sample/model ../data/svmrank_sample/prediction
Explanation: なお,テストデータ中の1列目の値は,正解順位(正確には重要度)です. テストデータに対する精度(テストデータ中のペアの順序関係をどれだけ正しく再現できたか)を計算する際に利用されます.
End of explanation
!cat ../data/svmrank_sample/prediction
Explanation: テストデータに対する実際のランキングはpredictionファイルを確認します.
End of explanation |
10,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PySnpTools Tutorial
Step up notebook
Step1: Reading Bed files
Use "Bed" to access file "all.bed"
Step2: Find out about iids and sids
Step3: Read all the SNP data in to memory
Step4: Print the SNP data
snpdata.val is a NumPy array. We can apply any NumPy functions.
Step5: If all you want is to read data in a Numpy array, here it is one line
Step6: You can also create a SnpData object from scratch (without reading from a SnpReader)
Step7: <font color='red'>see PowerPoint summary</font>
Reading subsets of data, reading with re-ordering iids & sids (rows & cols), stacking
Reading SNP data for just one SNP
Step8: Print the data for iid #9 (in one line)
Step9: Read the data for the first 5 iids AND the first 5 sids
Step10: Stacking indexing is OK and efficient
Recall NumPy slice notation
Step11: Fancy indexing - list of indexes, slices, list of booleans, negatives(?)
on iid or sid or both
Step12: Question
Step13: Answer
Step14: Negatives
NumPy slices
Step15: <font color='red'>see PowerPoint summary</font>
More properties and attributes of SnpReaders
read() supports both NumPy memory layouts and 8-byte or 4-byte floats
Step16: Every reader includes an array of SNP properties called ".pos"
Step17: [chromosome, genetic distance, basepair distance]
Accessable without a SNP data read.
So, using Python-style fancy indexing, how to we read all SNPs at Chrom 5?
Step18: In one line
Step19: You can turn iid or sid names into indexes
Step20: Can use both .pos and iid_to_index (sid_to_index) at once
Step21: <font color='red'>see PowerPoint summary</font>
Other SnpReaders and how to write
Read from the PLINK phenotype file (text) instead of a Bed file
Looks like
Step22: Write 1st 10 iids and sids of Bed data into Pheno format
Step23: Write the snpdata to Bed format
Step24: Create a snpdata on the fly and write to Bed
Step25: The SnpNpz and SnpHdf5 SnpReaders
Pheno is slow because it's txt. Bed format can only hold 0,1,2,missing.
Use SnpNpz for fastest read/write times, smallest file size.
Step26: Use SnpHdf5 for low-memory random-access reads, good speed and size, and compatiblity outside Python
Step27: <font color='red'>see PowerPoint summary</font>
Intersecting iids
What if we have two data sources with slightly different iids in different orders?
Step28: Create an intersecting and reordering reader
Step29: Example of use with NumPy's built-in linear regression
Step30: <font color='red'>see PowerPoint summary</font>
Standardization, Kernels
To Unit standardize
Step31: Sets means per sid to 0 and stdev to 1 and fills nan with 0.
In one line
Step32: Beta standardization
Step33: To create an kernel (the relateness of each iid pair as the dot product of their standardized SNP values)
Step34: <font color='red'>see PowerPoint summary</font>
PstReader
Every SnpReader is a PstReader
Step35: Can also create PstData from scratch, on the fly
Step36: Two new PstReaders
Step37: <font color='red'>see PowerPoint summary</font>
IntRangeSet
Union of two sets
<img src="example1.png">
Step38: Set difference
Suppose we want to find the intron regions of a gene but we are given only the transcription region and the exon regions.
<img src="example2.png">
Step39: Parse the exon start and last lists from strings to lists of integers (converting ‘last’ to ‘stop’)
Step40: Zip together the two lists to create an iterable of exon_start,exon_stop tuples. Then ‘set subtract’ all these ranges from int_range_set.
Step41: Create the desired output by iterating through each contiguous range of integers.
Step42: <font color='red'>see PowerPoint summary</font>
FastLMM | Python Code:
# set some ipython notebook properties
%matplotlib inline
# set degree of verbosity (adapt to INFO for more verbose output)
import logging
logging.basicConfig(level=logging.WARNING)
# set figure sizes
import pylab
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
Explanation: PySnpTools Tutorial
Set up notebook
End of explanation
import os
import numpy as np
from pysnptools.snpreader import Bed
snpreader = Bed("all.bed", count_A1=False)
# What is snpreader?
print snpreader
Explanation: Reading Bed files
Use "Bed" to access file "all.bed"
End of explanation
print snpreader.iid_count
print snpreader.sid_count
print snpreader.iid[:3]
print snpreader.sid
Explanation: Find out about iids and sids
End of explanation
snpdata = snpreader.read()
#What is snpdata?
print snpdata
#What do the iids and sid of snprdata look like?
print snpdata.iid_count, snpdata.sid_count
print snpdata.iid[:3]
print snpdata.sid
Explanation: Read all the SNP data in to memory
End of explanation
print snpdata.val[:7,:7]
print np.mean(snpdata.val)
Explanation: Print the SNP data
snpdata.val is a NumPy array. We can apply any NumPy functions.
End of explanation
print np.mean(Bed("all.bed",count_A1=False).read().val)
Explanation: If all you want is to read data in a Numpy array, here it is one line:
End of explanation
from pysnptools.snpreader import SnpData
snpdata1 = SnpData(iid=[['f1','c1'],['f1','c2'],['f2','c1']],
sid=['snp1','snp2'],
val=[[0,1],[2,.5],[.5,np.nan]])
print np.nanmean(snpdata1.val)
Explanation: You can also create a SnpData object from scratch (without reading from a SnpReader)
End of explanation
snpreader = Bed("all.bed",count_A1=False)
snp0reader = snpreader[:,0]
print snp0reader
print snp0reader.iid_count, snp0reader.sid_count
print snp0reader.sid
print snpreader # Is not changed
snp0data = snp0reader.read()
print snp0data
print snp0data.iid_count, snp0data.sid_count
print snp0data.sid
print snp0data.val[:10,:]
Explanation: <font color='red'>see PowerPoint summary</font>
Reading subsets of data, reading with re-ordering iids & sids (rows & cols), stacking
Reading SNP data for just one SNP
End of explanation
print Bed("all.bed",count_A1=False)[9,:].read().val
Explanation: Print the data for iid #9 (in one line)
End of explanation
snp55data = Bed("all.bed",count_A1=False)[:5,:5].read()
print snp55data
print snp55data.iid_count, snp55data.sid_count
print snp55data.iid
print snp55data.sid
print snp55data.val
Explanation: Read the data for the first 5 iids AND the first 5 sids:
End of explanation
snpreaderA = Bed("all.bed",count_A1=False) # read all
snpreaderB = snpreaderA[:,:250] #read first 250 sids
snpreaderC = snpreaderB[:10,:] # reader first 10 iids
snpreaderD = snpreaderC[::2,::2]
print snpreaderD
print snpreaderD.iid_count, snpreaderD.sid_count
print snpreaderD.iid
print snpreaderD.read().val[:10,:10] #only reads the final values desired (if practical)
Explanation: Stacking indexing is OK and efficient
Recall NumPy slice notation: start:stop:step, so ::2 is every other
End of explanation
# List of indexes (can use to reorder)
snpdata43210 = Bed("all.bed",count_A1=False)[[4,3,2,1,0],:].read()
print snpdata43210.iid
# List of booleans to select
snp43210B = snpdata43210[[False,True,True,False,False],:]
print snp43210B
print snp43210B.iid
Explanation: Fancy indexing - list of indexes, slices, list of booleans, negatives(?)
on iid or sid or both
End of explanation
print hasattr(snp43210B,'val')
Explanation: Question: Does snp43210B have a val property?
End of explanation
snpdata4321B = snp43210B.read(view_ok=True) #view_ok means ok to share memory
print snpdata4321B.val
Explanation: Answer: No. It's a subset of a SnpData, so it will read from a SnpData, but it is not a SnpData.
Use .read() to get the values.
End of explanation
print Bed("all.bed",count_A1=False)[::-10,:].iid[:10]
Explanation: Negatives
NumPy slices: start:stop:step
'start','stop': negative means counting from the end
'step': negative means count backwards
Indexes:
-1 means last, -2 means second from the end [Not Yet Implemented]
Lists of indexes can have negatives [Not Yet Implemented]
End of explanation
print Bed("all.bed",count_A1=False).read().val.flags
snpdata32c = Bed("all.bed",count_A1=False).read(order='C',dtype=np.float32)
print snpdata32c.val.dtype
print snpdata32c.val.flags
Explanation: <font color='red'>see PowerPoint summary</font>
More properties and attributes of SnpReaders
read() supports both NumPy memory layouts and 8-byte or 4-byte floats
End of explanation
print Bed("all.bed",count_A1=False).pos
Explanation: Every reader includes an array of SNP properties called ".pos"
End of explanation
snpreader = Bed("all.bed",count_A1=False)
print snpreader.pos[:,0]
chr5_bools = (snpreader.pos[:,0] == 5)
print chr5_bools
chr5reader = snpreader[:,chr5_bools]
print chr5reader
chr5data = chr5reader.read()
print chr5data.pos
Explanation: [chromosome, genetic distance, basepair distance]
Accessable without a SNP data read.
So, using Python-style fancy indexing, how to we read all SNPs at Chrom 5?
End of explanation
chr5data = Bed("all.bed",count_A1=False)[:,snpreader.pos[:,0] == 5].read()
print chr5data.val
Explanation: In one line
End of explanation
snpreader = Bed("all.bed",count_A1=False)
iid0 =[['cid499P1','cid499P1'],
['cid489P1','cid489P1'],
['cid479P1','cid479P1']]
indexes0 = snpreader.iid_to_index(iid0)
print indexes0
snpreader0 = snpreader[indexes0,:]
print snpreader0.iid
# more condensed
snpreader0 = snpreader[snpreader.iid_to_index(iid0),:]
print snpreader0.iid
Explanation: You can turn iid or sid names into indexes
End of explanation
snpdata0chr5 = snpreader[snpreader.iid_to_index(iid0),snpreader.pos[:,0] == 5].read()
print np.mean(snpdata0chr5.val)
Explanation: Can use both .pos and iid_to_index (sid_to_index) at once
End of explanation
from pysnptools.snpreader import Pheno
phenoreader = Pheno("pheno_10_causals.txt")
print phenoreader
print phenoreader.iid_count, phenoreader.sid_count
print phenoreader.sid
print phenoreader.pos
phenodata = phenoreader.read()
print phenodata.val[:10,:]
Explanation: <font color='red'>see PowerPoint summary</font>
Other SnpReaders and how to write
Read from the PLINK phenotype file (text) instead of a Bed file
Looks like:
cid0P0 cid0P0 0.4853395139922632
cid1P0 cid1P0 -0.2076984565752155
cid2P0 cid2P0 1.4909084058931985
cid3P0 cid3P0 -1.2128996652683697
cid4P0 cid4P0 0.4293203431508744
...
End of explanation
snpdata1010 = Bed("all.bed",count_A1=False)[:10,:10].read()
Pheno.write("deleteme1010.txt",snpdata1010)
print os.path.exists("deleteme1010.txt")
Explanation: Write 1st 10 iids and sids of Bed data into Pheno format
End of explanation
Bed.write("deleteme1010.bed",snpdata1010)
print os.path.exists("deleteme1010.bim")
Explanation: Write the snpdata to Bed format
End of explanation
snpdata1 = SnpData(iid=[['f1','c1'],['f1','c2'],['f2','c1']],
sid=['snp1','snp2'],
val=[[0,1],[2,1],[1,np.nan]])
Bed.write("deleteme1.bed",snpdata1)
print os.path.exists("deleteme1.fam")
Explanation: Create a snpdata on the fly and write to Bed
End of explanation
from pysnptools.snpreader import SnpNpz
SnpNpz.write("deleteme1010.snp.npz", snpdata1010)
print os.path.exists("deleteme1010.snp.npz")
Explanation: The SnpNpz and SnpHdf5 SnpReaders
Pheno is slow because it's txt. Bed format can only hold 0,1,2,missing.
Use SnpNpz for fastest read/write times, smallest file size.
End of explanation
from pysnptools.snpreader import SnpHdf5
SnpHdf5.write("deleteme1010.snp.hdf5", snpdata1010)
print os.path.exists("deleteme1010.snp.hdf5")
Explanation: Use SnpHdf5 for low-memory random-access reads, good speed and size, and compatibility outside Python
End of explanation
snpreader = Bed("all.bed",count_A1=False)
phenoreader = Pheno("pheno_10_causals.txt")[::-2,:] #half the iids, and in reverse order
print snpreader.iid_count, phenoreader.iid_count
print snpreader.iid[:5]
print phenoreader.iid[:5]
Explanation: <font color='red'>see PowerPoint summary</font>
Intersecting iids
What if we have two data sources with slightly different iids in different orders?
End of explanation
import pysnptools.util as pstutil
snpreader_i,phenoreader_i = pstutil.intersect_apply([snpreader,phenoreader])
print np.array_equal(snpreader_i.iid,phenoreader_i.iid)
snpdata_i = snpreader_i.read()
phenodata_i = phenoreader_i.read()
print np.array_equal(snpdata_i.iid,phenodata_i.iid)
print snpdata_i.val[:10,:]
print phenodata_i.val[:10,:]
Explanation: Create an intersecting and reordering reader
End of explanation
weights = np.linalg.lstsq(snpdata_i.val, phenodata_i.val)[0] #usually would add a 1's column
predicted = snpdata_i.val.dot(weights)
import matplotlib.pyplot as plt
plt.plot(phenodata_i.val, predicted, '.', markersize=10)
plt.show() #Easy to 'predict' seen 250 cases with 5000 variables.
# How does it predict unseen cases?
phenoreader_unseen = Pheno("pheno_10_causals.txt")[-2::-2,:]
snpreader_u,phenoreader_u = pstutil.intersect_apply([snpreader,phenoreader_unseen])
snpdata_u = snpreader_u.read()
phenodata_u = phenoreader_u.read()
predicted_u = snpdata_u.val.dot(weights)
plt.plot(phenodata_u.val, predicted_u, '.', markersize=10)
plt.show() #Hard to predict unseen 250 cases with 5000 variables.
Explanation: Example of use with NumPy's built-in linear regression
End of explanation
snpreader = Bed("all.bed",count_A1=False)
snpdata = snpreader.read()
snpdata = snpdata.standardize() #In place AND returns self
print snpdata.val[:,:5]
Explanation: <font color='red'>see PowerPoint summary</font>
Standardization, Kernels
To Unit standardize: read data, ".standardize()"
End of explanation
snpdata = Bed("all.bed",count_A1=False).read().standardize()
print snpdata.val[:,:5]
Explanation: Sets means per sid to 0 and stdev to 1 and fills nan with 0.
In one line:
End of explanation
from pysnptools.standardizer import Beta
snpdataB = Bed("all.bed",count_A1=False).read().standardize(Beta(1,25))
print snpdataB.val[:,:4]
Explanation: Beta standardization
End of explanation
from pysnptools.standardizer import Unit
kerneldata = Bed("all.bed",count_A1=False).read_kernel(standardizer=Unit())
print kerneldata.val[:,:4]
kerneldata = Bed("all.bed",count_A1=False).read_kernel(standardizer=Unit(),block_size=500)
print kerneldata.val[:,:4]
Explanation: To create a kernel (the relatedness of each iid pair as the dot product of their standardized SNP values)
End of explanation
from pysnptools.snpreader import Bed
pstreader = Bed("all.bed",count_A1=False)
print pstreader.row_count, pstreader.col_count
print pstreader.col_property
Explanation: <font color='red'>see PowerPoint summary</font>
PstReader
Every SnpReader is a PstReader
End of explanation
from pysnptools.pstreader import PstData
data1 = PstData(row=['a','b','c'],
col=['y','z'],
val=[[1,2],[3,4],[np.nan,6]],
row_property=['A','B','C'])
reader2 = data1[data1.row < 'c', data1.col_to_index(['z','y'])]
print reader2
print reader2.read().val
print reader2.row_property
print reader2.col_property.shape, reader2.col_property.dtype
Explanation: Can also create PstData from scratch, on the fly
End of explanation
from pysnptools.pstreader import PstNpz, PstHdf5
fnnpz = "delme.pst.npz"
PstNpz.write(fnnpz,data1)
data2 = PstNpz(fnnpz).read()
fnhdf5 = "delme.pst.hdf5"
PstHdf5.write(fnhdf5,data2)
data3 = PstHdf5(fnhdf5).read()
print data2, data3
print data2.val
print data3.val
Explanation: Two new PstReaders: PstNpz and PstHdf5
End of explanation
from pysnptools.util import IntRangeSet
a = IntRangeSet("100:500,501:1000") # a is the set of integers from 100 to 500 (exclusive) and 501 to 1000 (exclusive)
b = IntRangeSet("-20,400:600") # b is the set of integers -20 and the range 400 to 600 (exclusive)
c = a | b # c is the union of a and b, namely -20 and 100 to 1000 (exclusive)
print c
print 200 in c
print -19 in c
Explanation: <font color='red'>see PowerPoint summary</font>
IntRangeSet
Union of two sets
<img src="example1.png">
End of explanation
from pysnptools.util import IntRangeSet
line = "chr15 29370 37380 29370,32358,36715 30817,32561,37380"
chr,trans_start,trans_last,exon_starts,exon_lasts = line.split() # split the line on white space
trans_start = int(trans_start)
trans_stop = int(trans_last) + 1 # add one to convert the inclusive "last" value into a Pythonesque exclusive "stop" value
int_range_set = IntRangeSet((trans_start,trans_stop)) # creates a IntRangeSet from 29370 (inclusive) to 37381 (exclusive)
print int_range_set # print at any time to see the current value
Explanation: Set difference
Suppose we want to find the intron regions of a gene but we are given only the transcription region and the exon regions.
<img src="example2.png">
End of explanation
exon_starts = [int(start) for start in exon_starts.strip(",").split(',')]
exon_stops = [int(last)+1 for last in exon_lasts.strip(",").split(',')]
assert len(exon_starts) == len(exon_stops)
Explanation: Parse the exon start and last lists from strings to lists of integers (converting ‘last’ to ‘stop’)
End of explanation
from itertools import izip
int_range_set -= izip(exon_starts,exon_stops)
print int_range_set # See what it looks like
Explanation: Zip together the two lists to create an iterable of exon_start,exon_stop tuples. Then ‘set subtract’ all these ranges from int_range_set.
End of explanation
for start, stop in int_range_set.ranges():
print "{0} {1} {2}".format(chr, start, stop-1)
Explanation: Create the desired output by iterating through each contiguous range of integers.
End of explanation
# import the algorithm
from fastlmm.association import single_snp_leave_out_one_chrom
from pysnptools.snpreader import Bed
# set up data
##############################
snps = Bed("all.bed",count_A1=False)
pheno_fn = "pheno_10_causals.txt"
cov_fn = "cov.txt"
# run gwas
###################################################################
results_df = single_snp_leave_out_one_chrom(snps, pheno_fn, covar=cov_fn)
# print head of results data frame
import pandas as pd
pd.set_option('display.width', 1000)
results_df.head(n=10)
# manhattan plot
import pylab
import fastlmm.util.util as flutil
flutil.manhattan_plot(results_df[["Chr", "ChrPos", "PValue"]],pvalue_line=1e-5,xaxis_unit_bp=False)
pylab.show()
Explanation: <font color='red'>see PowerPoint summary</font>
FastLMM
End of explanation |
10,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (5025, 5036)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
# len({word: None for word in source_text.split()}))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] for sentence in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learn_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learn_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
td_end_removed = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
td_start_added = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), td_end_removed], 1)
return td_start_added
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# Encoder embedding
# source_vocab_size = len(source_letter_to_int)
# enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, 1000, rnn_size)
# Encoder
# enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
enc_LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size)
enc_LSTM = tf.contrib.rnn.DropoutWrapper(enc_LSTM, output_keep_prob=keep_prob)
enc_LSTM = tf.contrib.rnn.MultiRNNCell([enc_LSTM] * num_layers)
enc_RNN_out, enc_RNN_state = tf.nn.dynamic_rnn(enc_LSTM, rnn_inputs, dtype=tf.float32)
return enc_RNN_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State *
:param dec_cell: Decoder RNN Cell *
:param dec_embed_input: Decoder embedded input *
:param sequence_length: Sequence Length *
:param decoding_scope: TenorFlow Variable Scope for decoding *
:param output_fn: Function to apply the output layer *
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state, name=None)
train_pred, fin_state, fin_cntxt_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,\
train_decoder_fn,inputs=dec_embed_input,sequence_length=sequence_length,\
parallel_iterations=None, swap_memory=False,time_major=False, scope=decoding_scope, name=None)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state *
:param dec_cell: Decoder RNN Cell *
:param dec_embeddings: Decoder embeddings *
:param start_of_sequence_id: GO ID *
:param end_of_sequence_id: EOS Id *
:param maximum_length: The maximum allowed time steps to decode *
:param vocab_size: Size of vocabulary *
:param decoding_scope: TensorFlow Variable Scope for decoding *
:param output_fn: Function to apply the output layer *
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
    # Use the function's own start/end-of-sequence parameters instead of reaching for the global dictionary
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings,\
        start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, dtype=tf.int32, name=None)
infer_logits, fin_state, fin_cntxt_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,\
infer_decoder_fn, inputs=None, sequence_length=maximum_length,\
parallel_iterations=None, swap_memory=False,time_major=False, scope=decoding_scope, name=None)
# infer_logits = output_fn(infer_pred)
return infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length *
:param rnn_size: RNN Size *
:param num_layers: Number of layers *
:param target_vocab_to_int: Dictionary to go from the target words to an id *
:param keep_prob: Dropout keep probability *
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Decoder RNNs
dec_LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_LSTM = tf.contrib.rnn.DropoutWrapper(dec_LSTM, output_keep_prob=keep_prob)
dec_LSTM = tf.contrib.rnn.MultiRNNCell([dec_LSTM] * num_layers)
# Create Output Function
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Train Logits
train_logits = decoding_layer_train(encoder_state, dec_LSTM,\
dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Infer Logits
infer_logits = decoding_layer_infer(encoder_state, dec_LSTM,\
dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder **
:param target_data: Target placeholder **
:param keep_prob: Dropout keep probability placeholder **
:param batch_size: Batch Size **
:param sequence_length: Sequence Length **
:param source_vocab_size: Source vocabulary size **
:param target_vocab_size: Target vocabulary size **
:param enc_embedding_size: Decoder embedding size **
:param dec_embedding_size: Encoder embedding size **
:param rnn_size: RNN Size **
:param num_layers: Number of layers **
:param target_vocab_to_int: Dictionary to go from the target words to an id **
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Apply embedding to the input data for the encoder
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data
p_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, p_target_data)
# Decode the encoded input
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state,\
target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.8
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence = sentence.lower()
sequence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()]
return sequence
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'She dislikes lions, but loves grapes in Paris in the winter.'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
10,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1
Step1: Exercise 2
Step2: Exercise 3
Step3: Exercise 4
Step4: Exercise 5
Step5: Exercise 5b
Step6: Exercise 6
Step7: Exercise 7 | Python Code:
print (Triangulo(5,5,5)) # equilateral
print (Triangulo(5,5,7)) # isosceles
print (Triangulo(3,4,5)) # scalene
print (Triangulo(5,5,11)) # not a triangle
Explanation: Exercise 1: Create three functions:
1) A function named VerificaTriangulo() that receives the lengths of the three sides of a possible triangle as parameters. It must return True if these lengths can form a triangle and False otherwise.
For 3 segments of lengths x, y and z, respectively, to form a triangle, they must satisfy ALL of the following conditions:
x + y > z
x + z > y
y + z > x
2) A function named TipoTriangulo() that receives the same parameters and returns the type of triangle the segments would form:
"equilátero" (equilateral) if all three sides are equal
"isóceles" (isosceles) if two of the three sides are equal
"escaleno" (scalene) if all three sides are different
3) A function named Triangulo() that also receives the same parameters and returns the triangle type if the segments form a triangle, or the string "não é triângulo" (not a triangle) otherwise.
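A minimal sketch of one possible solution is shown below. The function names follow the calls in the cell above and the returned strings match the expected outputs, but this is an illustrative implementation, not the original author's.
python
def VerificaTriangulo(x, y, z):
    # All three triangle-inequality conditions must hold
    return x + y > z and x + z > y and y + z > x

def TipoTriangulo(x, y, z):
    if x == y == z:
        return "equilátero"
    if x == y or x == z or y == z:
        return "isóceles"
    return "escaleno"

def Triangulo(x, y, z):
    if VerificaTriangulo(x, y, z):
        return TipoTriangulo(x, y, z)
    return "não é triângulo"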
End of explanation
print (Bissexto(2000)) # True
print (Bissexto(2004)) # True
print (Bissexto(1900)) # False
Explanation: Exercise 2: Create a function to determine whether a year is a leap year.
A year is a leap year if it is a multiple of 400, or a multiple of 4 and not a multiple of 100. Use the remainder (modulo) operator (%) to determine whether one number is a multiple of another.
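A possible implementation following the rule above (an illustrative sketch, not the author's original):
python
def Bissexto(ano):
    # Leap year: multiple of 400, or multiple of 4 but not of 100
    return ano % 400 == 0 or (ano % 4 == 0 and ano % 100 != 0)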
End of explanation
print (Crescente(1,2,3))
print (Crescente(1,3,2))
print (Crescente(2,1,3))
print (Crescente(2,3,1))
print (Crescente(3,1,2))
print (Crescente(3,2,1))
print (Crescente(1,2,2))
Explanation: Exercise 3: Create a function that receives three values x, y, z as parameters and returns them in ascending order.
Python lets you chain relational comparisons between the 3 variables in a single expression:
Python
x < y < z
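A sketch that leans on chained comparisons; returning the values as a tuple in ascending order is my assumption about the expected output:
python
def Crescente(x, y, z):
    if x <= y <= z:
        return x, y, z
    if x <= z <= y:
        return x, z, y
    if y <= x <= z:
        return y, x, z
    if y <= z <= x:
        return y, z, x
    if z <= x <= y:
        return z, x, y
    return z, y, x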
End of explanation
print (PesoIdeal("masculino", 1.87, 75)) # True
print (PesoIdeal("masculino", 1.92, 200)) # False
print (PesoIdeal("feminino", 1.87, 90)) # False
print (PesoIdeal("feminino", 1.6, 40)) # True
Explanation: Exercise 4: A person's ideal weight follows the table below:
|Height|Men's weight|Women's weight|
|--|--|--|
|1.5 m|50 kg|48 kg|
|1.7 m|74 kg|68 kg|
|1.9 m|98 kg|88 kg|
|2.1 m|122 kg|108 kg|
Write a function that receives the person's gender, height and weight as parameters and returns True if they are at their ideal weight.
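One way to read the table is as a linear relation between height and the maximum ideal weight (it grows by 24 kg for men and 20 kg for women per 0.2 m). A sketch under that assumption — the "at or below the tabulated value" cutoff is my interpretation, not stated by the author:
python
def PesoIdeal(genero, altura, peso):
    # Linear interpolation of the table: men start at 50 kg at 1.5 m (+120 kg/m),
    # women at 48 kg at 1.5 m (+100 kg/m)
    if genero == "masculino":
        ideal = 50 + 120 * (altura - 1.5)
    else:
        ideal = 48 + 100 * (altura - 1.5)
    return peso <= ideal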
End of explanation
print (Circunferencia(0,0,10,5,5) ) # True
print (Circunferencia(0,0,10,15,5)) # False
Explanation: Exercise 5: Create a function that receives the coordinates cx, cy and the radius r of a circle's centre, and also receives the coordinates x, y of a point.
The function must return True if the point lies inside the circle and False otherwise.
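A minimal sketch using the squared distance to the centre (illustrative implementation):
python
def Circunferencia(cx, cy, r, x, y):
    # Inside (or on) the circle when the squared distance does not exceed r**2
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2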
End of explanation
Verifica = Circunferencia(0,0,10)
print (Verifica(5,5))
print (Verifica(15,5))
Explanation: Exercise 5b: Create a function named Circunferencia that takes as input the centre coordinates cx and cy and the radius r of a circle. This function must create another function, VerificaPonto, that takes the coordinates x and y of a point as input and returns True if the point lies inside the circle, or False otherwise.
The Circunferencia function must return this inner function (bound to Verifica in the test code above).
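A sketch using a closure, matching the calls in the cell above (illustrative implementation):
python
def Circunferencia(cx, cy, r):
    # Return a function that remembers the circle's centre and radius
    def VerificaPonto(x, y):
        return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
    return VerificaPonto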
End of explanation
import math
print (EstrelaMorte(0,0,20,3,3,10))
print (EstrelaMorte(0,0,200,3,3,10))
print (EstrelaMorte(0,0,200,195,3,10))
Explanation: Exercise 6:
The Death Star is a weapon developed by the Empire to dominate the universe.
A digital telescope was developed by the rebel forces to detect its location.
However, this telescope can only show the outline of the circles it finds, reporting their centre and radius.
Knowing that a Death Star is defined by:
The radius of one circle being 10 times larger than the radius of the other
The smaller circle lying entirely inside the larger one
The outline of the smaller circle being at least 2 units away from the outline of the larger one
Write a function (reusing the previous exercises) to detect whether two circles defined by (cx1,cy1,r1) and (cx2,cy2,r2) can form a Death Star.
Bonus: plot the circles using the plotting library.
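A sketch of one possible check; reading "10 times larger" as a ratio of at least 10 and "at least 2 units" as the gap between the two contours is my interpretation of the conditions:
python
import math

def EstrelaMorte(cx1, cy1, r1, cx2, cy2, r2):
    # Order the circles so that (bx, by, br) is the larger one
    (bx, by, br), (sx, sy, sr) = sorted(
        [(cx1, cy1, r1), (cx2, cy2, r2)], key=lambda c: c[2], reverse=True)
    dist = math.hypot(bx - sx, by - sy)
    ratio_ok = br >= 10 * sr            # one radius at least 10 times the other
    inside = dist + sr <= br            # smaller circle entirely inside the larger
    gap_ok = br - (dist + sr) >= 2      # contours at least 2 units apart
    return ratio_ok and inside and gap_ok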
End of explanation
import math, cmath
print (RaizSegundoGrau(2,4,2) ) # -1.0
print (RaizSegundoGrau(2,2,2)) # -0.5 - 0.9j, -0.5+0.9j
print (RaizSegundoGrau(2,6,2)) # -2.6, -0.38
Explanation: Exercise 7: Create a function to determine the roots of the quadratic equation:
$$
a x^{2} + b x + c = 0
$$
Make the function return:
A single root when $b^2 = 4ac$
Complex roots when $b^2 < 4ac$
Real roots otherwise
Use the cmath library to compute the square root when the discriminant is negative (complex roots).
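A sketch consistent with the expected outputs in the cell above (illustrative implementation):
python
import math, cmath

def RaizSegundoGrau(a, b, c):
    delta = b ** 2 - 4 * a * c
    if delta == 0:
        return -b / (2 * a)              # single real root
    # cmath.sqrt handles the negative-discriminant (complex) case
    raiz = cmath.sqrt(delta) if delta < 0 else math.sqrt(delta)
    return (-b - raiz) / (2 * a), (-b + raiz) / (2 * a)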
End of explanation |
10,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
In this notebook, we train a CNN to classify images from the CIFAR-10 database.
The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
<img src='notebook_ims/cifar_data.png' width=70% height=70% />
Test for CUDA
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
Step1: Load the Data
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
Step2: Visualize a Batch of Training Data
Step3: View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
Step4: Define the Network Architecture
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following
Step5: Specify Loss Function and Optimizer
Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; this PyTorch classification example or this, more complex Keras example. Pay close attention to the value for learning rate as this value determines how your model converges to a small error.
TODO
Step6: Train the Network
Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting. (In fact, in the below example, we could have stopped around epoch 33 or so!)
Step7: Load the Model with the Lowest Validation Loss
Step8: Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
Step9: Question | Python Code:
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
Explanation: Convolutional Neural Networks
In this notebook, we train a CNN to classify images from the CIFAR-10 database.
The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
<img src='notebook_ims/cifar_data.png' width=70% height=70% />
Test for CUDA
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
End of explanation
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
Explanation: Load the Data
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
Explanation: Visualize a Batch of Training Data
End of explanation
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
Explanation: View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
End of explanation
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
Explanation: Define the Network Architecture
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* Convolutional layers, which can be thought of as a stack of filtered images.
* Maxpooling layers, which reduce the x-y size of an input, keeping only the most active pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
<img src='notebook_ims/2_layer_conv.png' height=50% width=50% />
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.
It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at this PyTorch classification example or this more complex Keras example to help decide on a final structure.
Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from Stanford's cs231n course):
We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by (W−F+2P)/S+1.
For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
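As a quick numeric illustration of the formula (a hypothetical helper, not part of the original notebook):
python
def conv_output_size(w, f, s=1, p=0):
    # (W - F + 2P) / S + 1 from the formula above
    return (w - f + 2 * p) // s + 1

print(conv_output_size(7, 3, s=1, p=0))   # 5
print(conv_output_size(7, 3, s=2, p=0))   # 3
print(conv_output_size(32, 3, s=1, p=1))  # 32 -> 3x3 kernels with padding 1 preserve width/height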
End of explanation
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
Explanation: Specify Loss Function and Optimizer
Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: this PyTorch classification example or this more complex Keras example. Pay close attention to the value of the learning rate, as it determines how your model converges to a small error.
TODO: Define the loss and optimizer and see how these choices change the loss over time.
End of explanation
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for data, target in train_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_cifar.pt')
valid_loss_min = valid_loss
Explanation: Train the Network
Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting. (In fact, in the below example, we could have stopped around epoch 33 or so!)
End of explanation
model.load_state_dict(torch.load('model_cifar.pt'))
Explanation: Load the Model with the Lowest Validation Loss
End of explanation
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for data, target in test_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
Explanation: Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
End of explanation
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
Explanation: Question: What are your model's weaknesses and how might they be improved?
Answer: This model seems to do best on vehicles rather than animals. For example, it does best on the automobile class and worst on the cat class. I suspect it's because animals vary in color and size and so it would improve this model if I could increase the number of animal images in the first place or perhaps if I added another convolutional layer to detect finer patterns in these images. I could also experiment with a smaller learning rate so that the model takes small steps in the right direction as it is training.
Visualize Sample Test Results
End of explanation |
10,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create environment
Step1: Random action
Step2: No action | Python Code:
import json
from os import path
import pandas as pd
import gym.envs
import numpy as np
num_steps = 100
gym.envs.register(id='obs-v2',
entry_point='gym_bs.envs:EuropeanOptionEnv',
kwargs={'t': num_steps,
'n': 1,
's0': 49,
'k': 50,
'max_stock': 1,
'sigma': .1})
params = dict(n_iter=10000, batch_size=50, elite_frac=0.3)
env = gym.make('obs-v2')
env = gym.wrappers.Monitor(env, "/tmp/gym-results/obs-v2", video_callable=False, write_upon_reset=True, force=True)
observation = env.reset()
Explanation: Create environment
End of explanation
%%time
df = pd.DataFrame.from_dict({'reward': [], 'observation': []})
for _ in range(10):
observation = env.reset()
done = False
while not done:
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
df = df.append( pd.DataFrame.from_dict({'reward': reward, 'observation': [observation]}))
%matplotlib inline
df.reward.clip_lower(-15).hist(bins=100)
Explanation: Random action
End of explanation
%%time
df = pd.DataFrame.from_dict({'reward': [], 'underlying': [], 'tau': [], 'stocks': []})
action = np.array([0.])
for _ in range(1000):
observation = env.reset()
done = False
while not done:
# action = env.action_space.sample()
observation, reward, done, info = env.step(action)
df = df.append( pd.DataFrame.from_dict({'reward': reward,
'underlying': observation[0],
'tau': observation[1],
'stocks': observation[2]}))
%matplotlib inline
df.reward.clip_lower(-1500).hist(bins=100)
%matplotlib inline
# fig = plt.Figure()
df.underlying.hist(bins=20, figsize=(10, 6))
done = False
df = pd.DataFrame.from_dict({'reward': [], 'observation': []})
while not done:
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
    df = df.append( pd.DataFrame.from_dict({'reward': reward, 'observation': [observation]}))
Explanation: No action
End of explanation |
10,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Permeability of Berea
Berea Sandstone Simulation Using PoreSpy and OpenPNM
The example explains effective permeability calculations using the PoreSpy and OpenPNM software. The simulation is performed on an X-ray tomography image of Berea sandstone. The calculated effective permeability value can be compared with the value reported in Dong et al.
Start by importing the necessary packages
Step1: Load Berea Sandstone Image file
Give path to image file and load the image. Please note image should be binarized or in boolean format before performing next steps.
Step2: Confirm image and check image porosity
Be patient, this might take ~30 seconds (depending on your CPU)
Step3: Extract pore network using SNOW algorithm in PoreSpy
The SNOW algorithm (an acronym for Sub-Network from an Oversegmented Watershed) was presented by Gostick. The algorithm was used to extract the pore network from the Berea sandstone image.
Step4: Import network in OpenPNM
The output from the SNOW algorithm above is a plain python dictionary containing all the extracted pore-scale data, but it is NOT yet an OpenPNM network. We need to create an empty network in OpenPNM, then populate it with the data from SNOW
Step5: Now we can print the network to see how the transferred worked.
Note to developers
Step6: Check network health
Remove isolated pores or clusters of pores from the network by checking its network health. Make sure ALL keys in the network health dictionary have no values.
Step7: Assign phase
In this example air is considered as fluid passing through porous channels.
Step8: Assign physics
Step9: Assign Algorithm and boundary conditions
Select stokes flow algorithm for simulation and assign dirichlet boundary conditions in top and bottom faces of the network.
Step10: Calculate effective permeability
Calculate the effective permeability using the Hagen-Poiseuille equation. Use the cross-sectional area and flow length taken manually from the image dimensions.
Step11: Note to developers | Python Code:
import os
import imageio
import scipy as sp
import numpy as np
import openpnm as op
import porespy as ps
import matplotlib.pyplot as plt
np.set_printoptions(precision=4)
np.random.seed(10)
%matplotlib inline
Explanation: Predicting Permeability of Berea
Berea Sandstone Simulation Using PoreSpy and OpenPNM
The example explains effective permeability calculations using the PoreSpy and OpenPNM software. The simulation is performed on an X-ray tomography image of Berea sandstone. The calculated effective permeability value can be compared with the value reported in Dong et al.
Start by importing the necessary packages
End of explanation
path = '../../_fixtures/ICL-Sandstone(Berea)/'
file_format = '.tif'
file_name = 'Berea'
file = file_name + file_format
fetch_file = os.path.join(path, file)
im = imageio.mimread(fetch_file)
im = ~np.array(im, dtype=bool)[:250, :250, :250] # Make image a bit smaller
Explanation: Load Berea Sandstone Image file
Give path to image file and load the image. Please note image should be binarized or in boolean format before performing next steps.
End of explanation
# NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(1, 3, figsize=(12,5))
ax[0].imshow(im[:, :, 100]);
ax[1].imshow(ps.visualization.show_3D(im));
ax[2].imshow(ps.visualization.sem(im));
ax[0].set_title("Slice No. 100 View");
ax[1].set_title("3D Sketch");
ax[2].set_title("SEM View");
print(ps.metrics.porosity(im))
Explanation: Confirm image and check image porosity
Be patient, this might take ~30 seconds (depending on your CPU)
End of explanation
# NBVAL_IGNORE_OUTPUT
resolution = 5.345e-6
snow = ps.networks.snow2(im, voxel_size=resolution)
Explanation: Extract pore network using SNOW algorithm in PoreSpy
The SNOW algorithm (an acronym for Sub-Network from an Oversegmented Watershed) was presented by Gostick. The algorithm was used to extract the pore network from the Berea sandstone image.
End of explanation
settings = {'pore_shape': 'pyramid',
'throat_shape': 'cuboid',
'pore_diameter': 'equivalent_diameter',
'throat_diameter': 'inscribed_diameter'}
pn, geo = op.io.PoreSpy.import_data(snow.network, settings=settings)
Explanation: Import network in OpenPNM
The output from the SNOW algorithm above is a plain python dictionary containing all the extracted pore-scale data, but it is NOT yet an OpenPNM network. We need to create an empty network in OpenPNM, then populate it with the data from SNOW:
End of explanation
# NBVAL_IGNORE_OUTPUT
print(pn)
Explanation: Now we can print the network to see how the transfer worked.
Note to developers: We need to ignore the output of the following cell since the number of pores differs depending on whether the code is run on a windows or linux machine.
End of explanation
h = pn.check_network_health()
op.topotools.trim(network=pn, pores=h['trim_pores'])
h = pn.check_network_health()
print(h)
Explanation: Check network health
Remove isolated pores or clusters of pores from the network by checking its network health. Make sure ALL keys in the network health dictionary have no values.
End of explanation
air = op.phases.Air(network=pn)
Explanation: Assign phase
In this example air is considered as fluid passing through porous channels.
End of explanation
phys_air = op.physics.Basic(network=pn, phase=air, geometry=geo)
Explanation: Assign physics
End of explanation
perm = op.algorithms.StokesFlow(network=pn, phase=air)
perm.set_value_BC(pores=pn.pores('zmax'), values=0)
perm.set_value_BC(pores=pn.pores('zmin'), values=101325)
perm.run()
air.update(perm.results())
Explanation: Assign Algorithm and boundary conditions
Select stokes flow algorithm for simulation and assign dirichlet boundary conditions in top and bottom faces of the network.
End of explanation
resolution = 5.345e-6
Q = perm.rate(pores=pn.pores('zmin'), mode='group')[0]
A = (im.shape[0] * im.shape[1]) * resolution**2
L = im.shape[2] * resolution
mu = air['pore.viscosity'].max()
delta_P = 101325 - 0
K = Q * L * mu / (A * delta_P)
Explanation: Calculate effective permeability
Calculate the effective permeability using the Hagen-Poiseuille equation. Use the cross-sectional area and flow length taken manually from the image dimensions.
End of explanation
# NBVAL_IGNORE_OUTPUT
print(f'The value of K is: {K/0.98e-12*1000:.2f} mD')
Explanation: Note to developers: We need to ignore the output of the following cell since the results are slightly different on different platforms (windows vs linux)
End of explanation |
10,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We will discover the best actor/director according to imdb ratings
First we import all the necessary libraries to process and plot data.
Step1: We read our data and use the actordirector variable as a choice for whether the user wants to find the best actor or director.
Step2: Using the groupby methds, we can group our data based on the feature chosen by the user (actordirector variable). Then we get statistics that help us sort them accordingly.
Step3: And there we have it. The beautiful plot in ggplot style, where we have ordered the actors/directors with more than 'nrMovies' movies, sorted by average imdb movie rating. One can of course use different kinds of performance measures to sort them. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
Explanation: We will discover the best actor/director according to imdb ratings
First we import all the necessary libraries to process and plot data.
End of explanation
df = pd.read_csv('movie_metadata.csv')
actordirector = 'director_name' # actor_1_name
assert actordirector in ('actor_1_name','director_name'), 'Specify director_name or actor_1_name'
if actordirector == 'director_name':
nrMovies = 10
elif actordirector == 'actor_1_name':
nrMovies = 15
Explanation: We read our data and use the actordirector variable as a choice for whether the user wants to find the best actor or director.
End of explanation
grouped = df.groupby(df[actordirector], as_index=False)
groupdf = grouped.imdb_score.agg([np.mean, np.std, len]) # mean, standard deviation, and nr-of-movies columns
groupdf['se'] = groupdf['std'] / np.sqrt(groupdf.len) # standard error column
groupdf.dropna(axis=0, inplace=True)
groupdf = groupdf[groupdf.len>=nrMovies] # select actors/directors with more than nrMovies movies
groupdf.sort_values(['mean'],ascending=True,inplace=True) # sorted by average imdb movie rating
groupdf.reset_index(inplace=True)
groupdf['names'] = groupdf.index
Explanation: Using the groupby methods, we can group our data based on the feature chosen by the user (actordirector variable). Then we get statistics that help us sort them accordingly.
End of explanation
fig = groupdf.plot(kind='scatter', x='mean', y='names',yticks=range(len(groupdf)),xerr='se',figsize=(11,11))
fig.set_yticklabels(groupdf[actordirector] , rotation=0)
plt.show()
Explanation: And there we have it. The beautiful plot in ggplot style, where we have ordered the actors/directors with more than 'nrMovies' movies, sorted by average imdb movie rating. One can of course use different kinds of performance measures to sort them.
End of explanation |
10,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Waves in magnetized Plasmas
Step1: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM fields $E_y$ (X-wave) and $E_z$ (O-wave) at every timestep so we can analyze them later
Step2: EM Waves
As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below
Step3: O-Wave
To analyze the dispersion relation of the O-waves we use a 2D (Fast) Fourier transform of $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum, alongside with the theoretical prediction for the dispersion relation (in simulation units)
Step4: X-wave
To analyze the dispersion relation of the X-waves we use a 2D (Fast) Fourier transform of $E_y(x,t)$ field values that we stored during the simulation. The theoretical prediction has 2 branches | Python Code:
import em1ds as zpic
electrons = zpic.Species( "electrons", -1.0, ppc = 64, uth=[0.005,0.005,0.005])
sim = zpic.Simulation( nx = 1000, box = 100.0, dt = 0.05, species = electrons )
#Bz0 = 0.5
Bz0 = 1.0
#Bz0 = 4.0
sim.emf.set_ext_fld('uniform', B0= [0.0, 0.0, Bz0])
Explanation: Waves in magnetized Plasmas: O-waves and X-waves
To study electromagnetic waves in a magnetized plasma, in particular polarized either along the applied magnetic fields (O-waves) or perpendicular to it (X-waves) we initialize the simulation with a uniform thermal plasma, effectively injecting waves of all possible wavelengths into the simulation.
The external magnetic field is applied along the z direction, and can be controlled through the Bz0 variable:
End of explanation
import numpy as np
niter = 1000
Ey_t = np.zeros((niter,sim.nx))
Ez_t = np.zeros((niter,sim.nx))
print("\nRunning simulation up to t = {:g} ...".format(niter * sim.dt))
while sim.n < niter:
print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r')
Ey_t[sim.n,:] = sim.emf.Ey
Ez_t[sim.n,:] = sim.emf.Ez
sim.iter()
print("\nDone.")
Explanation: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM fields $E_y$ (X-wave) and $E_z$ (O-wave) at every timestep so we can analyze them later:
End of explanation
import matplotlib.pyplot as plt
iter = sim.n//2
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ez_t[iter,:], label = "$E_z$")
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ey_t[iter,:], label = "$E_y$")
plt.grid(True)
plt.xlabel("$x_1$ [$c/\omega_n$]")
plt.ylabel("$E$ field []")
plt.title("$E_z$, $E_y$, t = {:g}".format( iter * sim.dt))
plt.legend()
plt.show()
Explanation: EM Waves
As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below:
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ez_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ez_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-4, vmax = 0.1),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
plt.colorbar().set_label('$|FFT(E_z)|^2$')
# Theoretical prediction
k = np.linspace(-k_max, k_max, num = 512)
plt.plot( k, np.sqrt( 1 + k**2), label = "theoretical", ls = "--" )
plt.ylim(0,12)
plt.xlim(0,12)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("O-Wave dispersion relation")
plt.legend()
plt.show()
Explanation: O-Wave
To analyze the dispersion relation of the O-waves we use a 2D (Fast) Fourier transform of the $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum, alongside the theoretical prediction for the dispersion relation (in simulation units):
$\omega = \sqrt{(1 + k^2)}$
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as colors
win = np.hanning(niter)
for i in range(sim.nx):
Ey_t[:,i] *= win
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
sp = np.abs( np.fft.fft2(Ey_t))**2
sp = np.fft.fftshift( sp )
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-4, vmax = 0.1),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
plt.colorbar().set_label('$|FFT(E_y)|^2$')
k = np.linspace(-k_max, k_max, num = 512)
wa=np.sqrt((k**2+Bz0**2+2-np.sqrt(k**4-2*k**2*Bz0**2+Bz0**4+4*Bz0**2))/2)
wb=np.sqrt((k**2+Bz0**2+2+np.sqrt(k**4-2*k**2*Bz0**2+Bz0**4+4*Bz0**2))/2)
plt.plot( k,wb, label = 'theoretical $\omega_+$', color = 'r', ls = "--" )
plt.plot( k,wa, label = 'theoretical $\omega_-$', color = 'b', ls = "--" )
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("X-wave dispersion relation")
plt.legend()
plt.ylim(0,12)
plt.xlim(0,12)
plt.show()
Explanation: X-wave
To analyze the dispersion relation of the X-waves we use a 2D (Fast) Fourier transform of the $E_y(x,t)$ field values that we stored during the simulation. The theoretical prediction has 2 branches (in simulation units, written in terms of the applied field $B_{z0}$ and matching the wa/wb expressions in the code above):
$\omega_\pm = \sqrt{\left(k^2 + B_{z0}^2 + 2 \pm \sqrt{k^4 - 2 k^2 B_{z0}^2 + B_{z0}^4 + 4 B_{z0}^2}\right)/2}$
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
End of explanation |
10,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5T_Pandas Practice - Data analysis with the World database ( filtering, merge, text mining )
Exercises
A DataFrame of countries in Asia with a population of 100 million or more
A DataFrame containing country names and city names (currently split across country_df and city_df)
A DataFrame of all countries whose Government Form contains the text "Republic"
Step1: 1. A DataFrame of countries in Asia with a population of 100 million or more
Step2: 2. A DataFrame containing country names and city names (currently split across country_df and city_df)
Step3: 3. A DataFrame of all countries whose Government Form contains the text "Republic" | Python Code:
city_df = pd.read_csv("world_City.csv")
country_df = pd.read_csv("world_Country.csv")
city_df.head()
country_df.head(3)
country_df.columns
country_df.count()
Explanation: 5T_Pandas Practice - Data analysis with the World database ( filtering, merge, text mining )
Exercises
A DataFrame of countries in Asia with a population of 100 million or more
A DataFrame containing country names and city names (currently split across country_df and city_df)
A DataFrame of all countries whose Government Form contains the text "Republic"
End of explanation
is_population = country_df["Population"] > 100000000
is_asia = country_df["Continent"] == "Asia"
df = country_df[is_population & is_asia]
# As an aside, let's also try sorting and some numeric operations.
df
# Sorting is done with sort_values()
df.sort_values("Population", ascending=False)
# Numeric operations are quite easy.
df["Population"].min()
#min, max, sum, mean ...
df["Population"].mean()
# Average population per country in Asia
country_df[is_asia]["Population"].mean()
Explanation: 1. A DataFrame of countries in Asia with a population of 100 million or more
End of explanation
city_df.merge(country_df, left_on="CountryCode", right_on="Code")
city_df.merge(country_df, left_on="CountryCode", right_on="Code").rename(columns={
"Name_x": "City",
"Name_y": "Country",
})[["City", "Country"]]
Explanation: 2. A DataFrame containing country names and city names (currently split across country_df and city_df)
End of explanation
country_df["GovernmentForm"].unique()
"Republic" in country_df["GovernmentForm"] #이건 안 됩니다.
# for index, row in country_df.iterrows():
# if "Republic" in row["GovermentForm"]:
# pass
country_df["GovernmentForm"].str
country_df.dropna(inplace=True) # Preprocess if the data contains NaN values
country_df.fillna("") # This approach may be more appropriate.
is_republic = country_df["GovernmentForm"].str.contains("Republic")
country_df[is_republic]
Explanation: 3. A DataFrame of all countries whose Government Form contains the text "Republic"
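As a side note (an illustrative addition, not in the original notebook), str.contains can also deal with missing values directly, which avoids the dropna/fillna step:
country_df[country_df["GovernmentForm"].str.contains("Republic", na=False)] # NaN rows are simply treated as non-matches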
End of explanation |
10,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$ \LaTeX \text{ command declarations here.}
\newcommand{\N}{\mathcal{N}}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\norm}[1]{\|#1\|_2}
\newcommand{\d}{\mathop{}!\mathrm{d}}
\newcommand{\qed}{\qquad \mathbf{Q.E.D.}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
$$
EECS 445
Step1: Implement Newton’s method to find a minimizer of the regularized negative log likelihood. Try setting $\lambda$ = 10. Use the first 2000 examples as training data, and the last 1000 as test data.
Exercise
* Implement the loss function with the following formula
Step2: Recall
Step3: Exercise
* Implement training process(When to stop iterating?)
* Implement test function
* Compute the test error
* Compute the value of the objective function at the optimum | Python Code:
# all the packages you need
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from numpy.linalg import inv
# load data from .mat
mat = scipy.io.loadmat('mnist_49_3000.mat')
print (mat.keys())
x = mat['x'].T
y = mat['y'].T
print (x.shape, y.shape)
# show example image
plt.imshow (x[4, :].reshape(28, 28))
# add bias term
x = np.hstack([np.ones((3000, 1)), x])
# convert label -1 to 0
y[y == -1] = 0
print(y[y == 0].size, y[y == 1].size)
# split into train set and test set
x_train = x[: 2000, :]
y_train = y[: 2000, :]
x_test = x[2000 : , :]
y_test = y[2000 : , :]
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\N}{\mathcal{N}}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\norm}[1]{\|#1\|_2}
\newcommand{\d}{\mathop{}!\mathrm{d}}
\newcommand{\qed}{\qquad \mathbf{Q.E.D.}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
$$
EECS 445: Machine Learning
Hands On 05: Linear Regression II
Instructor: Zhao Fu, Valli, Jacob Abernethy and Jia Deng
Date: September 26, 2016
Problem 1a: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a target value $t_i$, where
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \
\epsilon \sim \mathcal{N}(0, \beta^{-1})
\end{gather}
$$
where $\epsilon$ is the "noise term".
(a) Quick quiz: what is the likelihood given $\vec{w}$? That is, what's $p(t_i | \vec{x}_i, \vec{w})$?
Answer: $p(t_i | \vec{x}_i, \vec{w}) = \mathcal{N}(t_i|\vec{w}^\top \vec{x_i}, \beta^{-1}) = \frac{1}{(2\pi \beta^{-1})^\frac{1}{2}} \exp{(-\frac{\beta}{2}(t_i - \vec{w}^\top \vec{x_i})^2)}$
Problem 1: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a target value $t_i$, sampled IID. We will also put a prior on $\vec{w}$, using PSD matrix $\Sigma$.
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \
\epsilon \sim \mathcal{N}(0, \beta^{-1}) \
\vec{w} \sim \mathcal{N}(0, \Sigma)
\end{gather}
$$
Note: the difference here is that our prior is a multivariate gaussian with non-identity covariance! Also we let $\mathcal{X} = {\vec{x}_1, \cdots, \vec{x}_n}$
(a) Compute the log posterior function, $\log p(\vec{w}|\vec{t}, \mathcal{X},\beta)$
Hint: use Bayes' Rule
(b) Compute the MAP estimate of $\vec{w}$ for this model
Hint: the solution is very similar to the MAP estimate for a gaussian prior with identity covariance
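A hedged sketch of one way to answer (a) and (b) — not part of the original handout; here $X$ denotes the matrix whose $i$-th row is $\vec{x}_i^T$:
$$
\log p(\vec{w}|\vec{t}, \mathcal{X},\beta) = -\frac{\beta}{2}\sum_{i=1}^n (t_i - \vec{w}^T\vec{x}_i)^2 - \frac{1}{2}\vec{w}^T\Sigma^{-1}\vec{w} + \text{const}
$$
Setting the gradient with respect to $\vec{w}$ to zero gives $\beta X^T(\vec{t} - X\vec{w}) - \Sigma^{-1}\vec{w} = 0$, so
$$
\hat{\vec{w}}_{MAP} = \left(X^T X + \tfrac{1}{\beta}\Sigma^{-1}\right)^{-1} X^T \vec{t},
$$
which recovers the familiar identity-covariance (ridge-style) answer when $\Sigma \propto I$.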
Problem 2: Handwritten digit classification with logistic regression
Download the file mnist_49_3000.mat from Canvas. This is a subset of the MNIST handwritten digit database, which is a well-known benchmark database for classification algorithms. This subset contains examples of the digits 4 and 9. The data file contains variables x and y, with the former containing patterns and the latter labels. The images are stored as column vectors.
Exercise:
* Load data and visualize data (Use scipy.io.loadmat to load matrix)
* Add bias to the features $\phi(\vx)^T = [1, \vx^T]$
* Split dataset into training set with the first 2000 data and test set with the last 1000 data
End of explanation
# Initialization of parameters
w = np.zeros((785, 1))
lmd = 10
def computeE(w, x, y, lmd) :
E = np.dot(y.T, np.log(1 + np.exp(-np.dot(x, w)))) + np.dot(1 - y.T, np.log(1 + \
np.exp(np.dot(x, w)))) + lmd * np.dot(w.T, w)
return E[0][0]
print (computeE(w, x, y, lmd))
def sigmoid(a) :
return np.exp(a + 1e-6) / (1 + np.exp(a + 1e-6))
def computeGradientE(w, x, y, lmd) :
return np.dot(x.T, sigmoid(np.dot(x, w)) - y) + lmd * w
print (computeGradientE(w, x, y, lmd).shape)
Explanation: Implement Newton’s method to find a minimizer of the regularized negative log likelihood. Try setting $\lambda$ = 10. Use the first 2000 examples as training data, and the last 1000 as test data.
Exercise
* Implement the loss function with the following formula:
$$
\begin{align}
E(\vw)
&= -\ln P(\vy = \vt| \mathcal{X}, \vw) \
&= \boxed{\sum \nolimits_{n=1}^N \left[ t_n \ln (1+\exp(-\vw^T\phi(\vx_n))) + (1-t_n) \ln(1+\exp(\vw^T\phi(\vx_n))) \right] + \lambda \vw^T\vw}\
\end{align}
$$
* Implement the gradient of loss $$\nabla_\vw E(\vw) = \boxed{ \Phi^T \left( \sigma(\Phi \vw) - \vt \right) + \lambda \vw}$$
where $\sigma(a) = \frac{\exp(a)}{1+\exp(a)}$
End of explanation
def computeR(w, x, y) :
return sigmoid(np.dot(x, w)) * (1 - sigmoid(np.dot(x, w)))
# print (computeR(w, x, y).T)
def computeHessian(w, x, y, lmd) :
return np.dot(x.T * computeR(w, x, y).T, x) + lmd * np.eye(w.shape[0])
# print (computeHessian(w, x, y, lmd))
def update(w, x, y, lmd) :
hessian = computeHessian(w, x, y, lmd)
gradient = computeGradientE(w, x, y, lmd)
# print (np.sum(hessian))
return w - np.dot(inv(hessian), gradient)
print (update(w, x, y, lmd).shape)
Explanation: Recall: Newton's Method
$$
\vx_{n+1}= \vx_n - \left(\nabla^2 f(\vx_n)\right)^{-1} \nabla_\vx f(\vx_n)
$$
where $\nabla^2 f(\vx_n)$ is the Hessian matrix, i.e. the matrix of second-order partial derivatives
$$
\nabla^2 f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial^2 f}{\partial x_n\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n\partial x_n}
\end{bmatrix}
$$
$$
\begin{align}
\nabla^2 E(\vw)
&= \nabla_\vw \nabla_\vw E(\vw) \
&= \sum \nolimits_{n=1}^N \phi(\vx_n) r_n(\vw) \phi(\vx_n)^T + \lambda I
\end{align}
$$
where $r_n(\vw) = \sigma(\vw^T \phi(\vx_n)) \cdot ( 1 - \sigma(\vw^T \phi(\vx_n)) )$
Exercise
* Implement $r_n(\vw)$
* Implement $\nabla^2 E(\vw)$
* Implement update function
End of explanation
def train(w, x, y, lmd) :
w_new = update(w, x, y, lmd)
diff = np.sum(np.abs(w_new - w))
while diff > 1e-6:
w = w_new
w_new = update(w, x, y, lmd)
diff = np.sum(np.abs(w_new - w))
return w
w_train = train(w, x_train, y_train, lmd)
def test(w, x, y) :
tmp = np.dot(x, w)
y_pred = np.zeros(y.shape)
y_pred[tmp > 0] = 1
error = np.mean(np.abs(y_pred - y))
return error
print (test(w, x_test, y_test))
print (test(w_train, x_test, y_test))
print (computeE(w_train, x_train, y_train, lmd))
Explanation: Exercise
* Implement the training process (when should we stop iterating?)
* Implement test function
* Compute the test error
* Compute the value of the objective function at the optimum
End of explanation |
10,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
from collections import Counter
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_dict = {'.': 'Period',
',': 'Comma',
'"': 'Quotation_Mark',
';': 'Semicolon',
'!': 'Exclamation_Mark',
'?': 'Question_Mark',
'(': 'Left_Parentheses',
')': 'Right_Parentheses',
'--': 'Dash',
'\n': 'Return'}
return {symbol: ('||'+symbol_name+'||') for symbol, symbol_name in token_dict.items()}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
int_text = int_text[:n_batches * batch_size * seq_length]
int_text.append(int_text[0])
batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=int)
for ii in range(batch_size):
batches[:, 0, ii, :] = np.reshape(int_text[ii * n_batches * seq_length : (ii+1) * n_batches * seq_length],
(n_batches, seq_length))
batches[:, 1, ii, :] = np.reshape(int_text[ii * n_batches * seq_length + 1 : (ii+1) * n_batches * seq_length + 1],
(n_batches, seq_length))
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
len(int_text)
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 10
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 100
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
top_n = 5
p = np.squeeze(probabilities)
p[p.argsort()[:-top_n]] = 0 # zero out all but the top_n most probable words (indexing fixed so p is modified in place)
p = p / np.sum(p)
# assert len(int_to_vocab) == len(p), 'Length is not equal! Length of int_to_vocab is {},\
# but length of probilities is {}'.format(len(int_to_vocab), len(p))
return int_to_vocab[np.random.choice(len(p), 1, p=p)[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
10,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling 1
Step1: 1) Fit a Linear model
Step2: This catalog has a lot of information, but for this tutorial we are going to work only with periods and magnitudes. Let's grab them using the keywords 'Period' and __Ksmag__. Note that 'e__Ksmag_' refers to the error bars in the magnitude measurements.
Step3: Let's take a look at the magnitude measurements as a function of period
Step4: One could say that there is a linear relationship between log period and magnitudes. To probe it, we want to make a fit to the data. This is where astropy.modeling is useful. We are going to understand how in three simple lines we can make any fit we want. We are going to start with the linear fit, but first, let's understand what a model and a fitter are.
Models in Astropy
Models in Astropy are known parametrized functions. With this format they are easy to define and to use, given that we do not need to write the function expression every time we want to use a model, just the name. They can be linear or non-linear in the variables. Some examples of models are
Step5: Step 2
Step6: Step 3
Step7: And that's it!
We can evaluate the fit at our particular x axis by doing best_fit(x).
Step8: Conclusion
Step9: Let's plot it to see how it looks
Step10: To fit this data let's remember the three steps
Step11: What would happen if we use a different fitter (method)? Let's use the same model but with SimplexLSQFitter as fitter.
Step12: Note that we got a warning after using SimplexLSQFitter to fit the data. The first line says
Step13: As we can see, the Reduced Chi Square for the first fit is closer to one, which means this fit is better. Note that this is what we expected after the discussion of the warnings.
We can also compare the two fits visually
Step14: Results are as expected, the fit performed with the linear fitter is better than the second, non-linear one.
Conclusion
Step15: Let's do our three steps to make the fit we want. For this fit we're going to use a non-linear fitter, LevMarLSQFitter, because the model we need (Gaussian1D) is non-linear in the parameters.
Step16: We can get the covariance matrix from LevMarLSQFitter, which provides an error for our fit parameters by doing fitter.fit_info['param_cov']. The elements in the diagonal of this matrix are the square of the errors. We can check the order of the parameters using
Step17: Then
Step18: We can apply the same method with scipy.optimize.curve_fit, and compare the results using again the Reduced Chi Square Value.
Step19: Compare results
Step20: As we can see there is a very small difference in the Reduced Chi Squared. This actually needed to happen, because the fitter in astropy.modeling uses scipy to fit. The advantage of using astropy.modeling is you only need to change the name of the fitter and the model to perform a completely different fit, while scipy requires us to remember the expression of the function we wanted to use.
Step21: Conclusion | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from astropy.modeling import models, fitting
from astroquery.vizier import Vizier
import scipy.optimize
# Make plots display in notebooks
%matplotlib inline
Explanation: Modeling 1: Make a quick fit using astropy.modeling
Authors
Rocio Kiman, Lia Corrales, Zé Vinícius, Kelle Cruz, Stephanie T. Douglas
Learning Goals
Use astroquery to download data from Vizier
Use basic models in astropy.modeling
Learn common functions to fit
Generate a quick fit to data
Plot the model with the data
Compare different models and fitters
Keywords
modeling, model fitting, astrostatistics, astroquery, Vizier, scipy, matplotlib, error bars, scatter plots
Summary
In this tutorial, we will become familiar with the models available in astropy.modeling and learn how to make a quick fit to our data.
Imports
End of explanation
catalog = Vizier.get_catalogs('J/A+A/605/A100')
Explanation: 1) Fit a Linear model: Three steps to fit data using astropy.modeling
We are going to start with a linear fit to real data. The data comes from the paper Bhardwaj et al. 2017. This is a catalog of Type II Cepheids, which is a type of variable star that pulsates with a period between 1 and 50 days. In this part of the tutorial, we are going to measure the Cepheids Period-Luminosity relation using astropy.modeling. This relation states that if a star has a longer period, the luminosity we measure is higher.
To get it, we are going to import it from Vizier using astroquery.
End of explanation
period = np.array(catalog[0]['Period'])
log_period = np.log10(period)
k_mag = np.array(catalog[0]['__Ksmag_'])
k_mag_err = np.array(catalog[0]['e__Ksmag_'])
Explanation: This catalog has a lot of information, but for this tutorial we are going to work only with periods and magnitudes. Let's grab them using the keywords 'Period' and __Ksmag__. Note that 'e__Ksmag_' refers to the error bars in the magnitude measurements.
End of explanation
plt.errorbar(log_period, k_mag, k_mag_err, fmt='k.')
plt.xlabel(r'$\log_{10}$(Period [days])')
plt.ylabel('Ks')
Explanation: Let's take a look at the magnitude measurements as a function of period:
End of explanation
model = models.Linear1D()
Explanation: One could say that there is a linear relationship between log period and magnitudes. To probe it, we want to make a fit to the data. This is where astropy.modeling is useful. We are going to understand how in three simple lines we can make any fit we want. We are going to start with the linear fit, but first, let's understand what a model and a fitter are.
Models in Astropy
Models in Astropy are known parametrized functions. With this format they are easy to define and to use, given that we do not need to write the function expression every time we want to use a model, just the name. They can be linear or non-linear in the variables. Some examples of models are:
Gaussian1D
Trapezoid1D
Polynomial1D
Sine1D
Linear1D
The list continues.
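For instance (a quick illustrative check that is not part of the original tutorial; the parameter values below are made up), a model instance can be evaluated like a plain function:
quick_gauss = models.Gaussian1D(amplitude=10.0, mean=0.0, stddev=2.0)
print(quick_gauss(np.array([-2.0, 0.0, 2.0]))) # evaluates the Gaussian at three sample points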
Fitters in Astropy
Fitters in Astropy are the classes responsible for making the fit. They can be linear or non-linear in the parameters (not in the variables, like models). Some examples are:
LevMarLSQFitter() Levenberg-Marquardt algorithm and least squares statistic.
LinearLSQFitter() A class performing a linear least square fitting.
SLSQPLSQFitter() SLSQP optimization algorithm and least squares statistic.
SimplexLSQFitter() Simplex algorithm and least squares statistic.
More details here
Now we continue with our fitting.
Step 1: Model
First we need to choose which model we are going to use to fit to our data. As we said before, our data looks like a linear relation, so we are going to use a linear model.
End of explanation
fitter = fitting.LinearLSQFitter()
Explanation: Step 2: Fitter
Second we are going to choose the fitter we want to use. This choice is basically which method we want to use to fit the model to the data. In this case we are going to use the Linear Least Square Fitting. In the next exercise (Modeling 2: Create a User Defined Model) we are going to analyze how to choose the fitter.
End of explanation
best_fit = fitter(model, log_period, k_mag, weights=1.0/k_mag_err**2)
print(best_fit)
Explanation: Step 3: Fit Data
Finally, we give the fitter (the method used to fit the data) the model and the data to perform the fit. Note that we are including weights: this means that values with higher error will have a smaller weight (less importance) in the fit, and the contrary for data with smaller errors. This way of fitting is called Weighted Linear Least Squares and you can find more information about it here or here.
End of explanation
plt.errorbar(log_period,k_mag,k_mag_err,fmt='k.')
plt.plot(log_period, best_fit(log_period), color='g', linewidth=3)
plt.xlabel(r'$\log_{10}$(Period [days])')
plt.ylabel('Ks')
Explanation: And that's it!
We can evaluate the fit at our particular x axis by doing best_fit(x).
End of explanation
N = 100
x1 = np.linspace(0, 4, N) # Makes an array from 0 to 4 of N elements
y1 = x1**3 - 6*x1**2 + 12*x1 - 9
# Now we add some noise to the data
y1 += np.random.normal(0, 2, size=len(y1)) #One way to add random gaussian noise
sigma = 1.5
y1_err = np.ones(N)*sigma
Explanation: Conclusion: Remember, you can fit data with three lines of code:
1) Choose a model.
2) Choose a fitter.
3) Pass to the fitter the model and the data to perform fit.
Exercise
Use the model Polynomial1D(degree=1) to fit the same data and compare the results.
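A possible sketch of this exercise (the variable names below are ours, reusing the fitter and the Cepheid data defined above):
model_poly1 = models.Polynomial1D(degree=1)
best_fit_poly1 = fitter(model_poly1, log_period, k_mag, weights=1.0/k_mag_err**2)
print(best_fit_poly1) # c0 and c1 should closely match the intercept and slope found with Linear1D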
2) Fit a Polynomial model: Choose fitter wisely
For our second example, let's fit a polynomial of degree more than 1. In this case, we are going to create fake data to make the fit. Note that we're adding gaussian noise to the data with the function np.random.normal(0,2) which gives a random number from a gaussian distribution with mean 0 and standard deviation 2.
End of explanation
plt.errorbar(x1, y1, yerr=y1_err,fmt='k.')
plt.xlabel('$x_1$')
plt.ylabel('$y_1$')
Explanation: Let's plot it to see how it looks:
End of explanation
model_poly = models.Polynomial1D(degree=3)
fitter_poly = fitting.LinearLSQFitter()
best_fit_poly = fitter_poly(model_poly, x1, y1, weights = 1.0/y1_err**2)
print(best_fit_poly)
Explanation: To fit this data let's remember the three steps: model, fitter and perform fit.
End of explanation
fitter_poly_2 = fitting.SimplexLSQFitter()
best_fit_poly_2 = fitter_poly_2(model_poly, x1, y1, weights = 1.0/y1_err**2)
print(best_fit_poly_2)
Explanation: What would happen if we use a different fitter (method)? Let's use the same model but with SimplexLSQFitter as fitter.
End of explanation
def calc_reduced_chi_square(fit, x, y, yerr, N, n_free):
'''
fit (array) values for the fit
x,y,yerr (arrays) data
N total number of points
n_free number of parameters we are fitting
'''
return 1.0/(N-n_free)*sum(((fit - y)/yerr)**2)
reduced_chi_squared = calc_reduced_chi_square(best_fit_poly(x1), x1, y1, y1_err, N, 4)
print('Reduced Chi Squared with LinearLSQFitter: {}'.format(reduced_chi_squared))
reduced_chi_squared = calc_reduced_chi_square(best_fit_poly_2(x1), x1, y1, y1_err, N, 4)
print('Reduced Chi Squared with SimplexLSQFitter: {}'.format(reduced_chi_squared))
Explanation: Note that we got a warning after using SimplexLSQFitter to fit the data. The first line says:
WARNING: Model is linear in parameters; consider using linear fitting methods. [astropy.modeling.fitting]
If we look at the model we chose: $y = c_0 + c_1\times x + c_2\times x^2 + c_3\times x^3$, it is linear in the parameters $c_i$. The warning means that SimplexLSQFitter works better with models that are not linear in the parameters, and that we should use a linear fitter like LinearLSQFitter. The second line says:
WARNING: The fit may be unsuccessful; Maximum number of iterations reached. [astropy.modeling.optimizers]
So it's not surprising that the results are different, because this means that the fitter is not working properly. Let's discuss a method of choosing between fits and remember to pay attention when you choose the fitter.
Compare results
One way to check which model parameters are a better fit is calculating the Reduced Chi Square Value. Let's define a function to do that because we're going to use it several times.
End of explanation
plt.errorbar(x1, y1, yerr=y1_err,fmt='k.')
plt.plot(x1, best_fit_poly(x1), color='r', linewidth=3, label='LinearLSQFitter()')
plt.plot(x1, best_fit_poly_2(x1), color='g', linewidth=3, label='SimplexLSQFitter()')
plt.xlabel('$x_1$')
plt.ylabel('$y_1$')
plt.legend()
Explanation: As we can see, the Reduced Chi Square for the first fit is closer to one, which means this fit is better. Note that this is what we expected after the discussion of the warnings.
We can also compare the two fits visually:
End of explanation
mu, sigma, amplitude = 0.0, 10.0, 10.0
N2 = 100
x2 = np.linspace(-30, 30, N)
y2 = amplitude * np.exp(-(x2-mu)**2 / (2*sigma**2))
y2 = np.array([y_point + np.random.normal(0, 1) for y_point in y2]) #Another way to add random gaussian noise
sigma = 1
y2_err = np.ones(N)*sigma
plt.errorbar(x2, y2, yerr=y2_err, fmt='k.')
plt.xlabel('$x_2$')
plt.ylabel('$y_2$')
Explanation: Results are as expected, the fit performed with the linear fitter is better than the second, non-linear one.
Conclusion: Pay attention when you choose the fitter.
3) Fit a Gaussian: Let's compare to scipy
Scipy has the function scipy.optimize.curve_fit to fit in a similar way that we are doing. Let's compare the two methods with fake data in the shape of a Gaussian.
End of explanation
model_gauss = models.Gaussian1D()
fitter_gauss = fitting.LevMarLSQFitter()
best_fit_gauss = fitter_gauss(model_gauss, x2, y2, weights=1/y2_err**2)
print(best_fit_gauss)
Explanation: Let's do our three steps to make the fit we want. For this fit we're going to use a non-linear fitter, LevMarLSQFitter, because the model we need (Gaussian1D) is non-linear in the parameters.
End of explanation
model_gauss.param_names
cov_diag = np.diag(fitter_gauss.fit_info['param_cov'])
print(cov_diag)
Explanation: We can get the covariance matrix from LevMarLSQFitter, which provides an error for our fit parameters by doing fitter.fit_info['param_cov']. The elements in the diagonal of this matrix are the square of the errors. We can check the order of the parameters using:
End of explanation
print('Amplitude: {} +\- {}'.format(best_fit_gauss.amplitude.value, np.sqrt(cov_diag[0])))
print('Mean: {} +\- {}'.format(best_fit_gauss.mean.value, np.sqrt(cov_diag[1])))
print('Standard Deviation: {} +\- {}'.format(best_fit_gauss.stddev.value, np.sqrt(cov_diag[2])))
Explanation: Then:
End of explanation
def f(x,a,b,c):
return a * np.exp(-(x-b)**2/(2.0*c**2))
p_opt, p_cov = scipy.optimize.curve_fit(f,x2, y2, sigma=y1_err)
a,b,c = p_opt
best_fit_gauss_2 = f(x2,a,b,c)
print(p_opt)
print('Amplitude: {} +\- {}'.format(p_opt[0], np.sqrt(p_cov[0,0])))
print('Mean: {} +\- {}'.format(p_opt[1], np.sqrt(p_cov[1,1])))
print('Standard Deviation: {} +\- {}'.format(p_opt[2], np.sqrt(p_cov[2,2])))
Explanation: We can apply the same method with scipy.optimize.curve_fit, and compare the results using again the Reduced Chi Square Value.
End of explanation
reduced_chi_squared = calc_reduced_chi_square(best_fit_gauss(x2), x2, y2, y2_err, N2, 3)
print('Reduced Chi Squared using astropy.modeling: {}'.format(reduced_chi_squared))
reduced_chi_squared = calc_reduced_chi_square(best_fit_gauss_2, x2, y2, y2_err, N2, 3)
print('Reduced Chi Squared using scipy: {}'.format(reduced_chi_squared))
Explanation: Compare results
End of explanation
plt.errorbar(x2, y2, yerr=y2_err, fmt='k.')
plt.plot(x2, best_fit_gauss(x2), 'g-', linewidth=6, label='astropy.modeling')
plt.plot(x2, best_fit_gauss_2, 'r-', linewidth=2, label='scipy')
plt.xlabel('$x_2$')
plt.ylabel('$y_2$')
plt.legend()
Explanation: As we can see there is a very small difference in the Reduced Chi Squared. This actually needed to happen, because the fitter in astropy.modeling uses scipy to fit. The advantage of using astropy.modeling is you only need to change the name of the fitter and the model to perform a completely different fit, while scipy requires us to remember the expression of the function we wanted to use.
End of explanation
N3 = 100
x3 = np.linspace(0, 3, N3)
y3 = 5.0 * np.sin(2 * np.pi * x3)
y3 = np.array([y_point + np.random.normal(0, 1) for y_point in y3])
sigma = 1.5
y3_err = np.ones(N)*sigma
plt.errorbar(x3, y3, yerr=y3_err, fmt='k.')
plt.xlabel('$x_3$')
plt.ylabel('$y_3$')
Explanation: Conclusion: Choose the method most convenient for every case you need to fit. We recommend astropy.modeling because it is easier to write the name of the function you want to fit than to remember its expression every time we want to use it. Also, astropy.modeling becomes useful with more complicated models like two gaussians plus a black body, but that is another tutorial.
Summary:
Let's review the conclusion we got in this tutorial:
You can fit data with three lines of code:
model
fitter
perform fit to data
Pay attention when you choose the fitter.
Choose the method most convenient for every case you need to fit. We recommend astropy.modeling to make quick fits of known functions.
4) Exercise: Your turn to choose
For the next data:
* Choose model and fitter to fit this data
* Compare different options
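One possible approach, sketched here only as a suggestion (a Sine1D model with a non-linear fitter; the initial guesses are our assumptions, not the only valid choice):
model_sine = models.Sine1D(amplitude=4.0, frequency=1.0)
fitter_sine = fitting.LevMarLSQFitter()
best_fit_sine = fitter_sine(model_sine, x3, y3, weights=1.0/y3_err**2)
print(best_fit_sine)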
End of explanation |
10,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear systems
<img src="https
Step1: Diagonally dominance
We say that matrix $A_{nxn}$ is diagonally dominant iff
$$ |a_{ii}| \geq \sum_{j\neq i} |a_{ij}|, \quad \forall i=1, n \, $$
equivalent
$$ 2 \cdot |a_{ii}| \geq \sum_{j = 1, n} |a_{ij}|, \quad \forall i=1, n \, $$
Also we say that matrix $A_{nxn}$ is strictly diagonally dominant iff it's diagonally dominant and
$$ \exists i
Step2: NOTE! All generate functions will return already strictly diagonally dominant matrices
Step3: Gauss method
To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations
Step4: Gauss-Seidel method
The Gauss–Seidel method is an iterative technique for solving a square system of $n$ linear equations with unknown $x$
Step5: Jacobi method
The Jacobi method is an iterative technique for solving a square system of $n$ linear equations with unknown $x$
Step6: Evaluation
To evaluate algorithm we are going to calculate
$$ score = || A \cdot \mathbf{x^*} - \mathbf{b} || $$
where $\mathbf{x}^*$ is a result of algorithm.
Step7: Back to examples
Example 1
Do you remember the example from the problem statement? Now we are going to see how the iterative algorithms converge for that example.
Step8: Seidel vs Jacobi
Here is a tricky example of a linear system where the Jacobi method converges faster than Gauss-Seidel.
As you will see, Jacobi needs only 1 iteration to find the answer, while Gauss-Seidel needs over 10 iterations to find an approximate solution.
Step9: Hilbert matrices
Now, let's have a look at the Hilbert matrix. It's said that non-iterative methods should perform badly here. Proof?
Step10: As we can see, it's true for huge $n>1000$ that direct methods work badly on Hilbert matrices. But how about the size of the matrix? Is $n=500$ enough?
Step11: Random matrices
It's time to test our methods on randomly generated matrices.
To be deterministic, we are going to set the seed to $17$ every time we call generate_random(...)
Step12: Runtime
Now, let's compare our methods by actual running time.
To have more accurate results, we need to run them on a large matrix (e.g. a random $200\times200$).
Step13: Convergence speed
We have already compared the methods by accuracy and time, but how about convergence speed?
Now we are going to show the error of each method after a given number of iterations. To be clearer, we use a logarithmic x-scale. | Python Code:
import numpy as np
import scipy as sp, scipy.linalg
import matplotlib.pyplot as plt
ATOL = RTOL = 1e-8  # assumed convergence tolerances for the iterative solvers below (not shown in the source)
def is_square(a):
    return a.shape[0] == a.shape[1]
def has_solutions(a, b):
    return np.linalg.matrix_rank(a) == np.linalg.matrix_rank(np.append(a, b[np.newaxis].T, axis=1))
Explanation: Linear systems
<img src="https://i.ytimg.com/vi/7ujEpq7MWfE/maxresdefault.jpg" width="400" />
Given square matrix $A_{nxn}$, and vector $\mathbf{b}$, find such vector $\mathbf{x}$ that $A \cdot \mathbf{x} = \mathbf{b}$.
Example 1
$$
\begin{bmatrix} 10 & -1 & 2 & 0 \ -1 & 11 & -1 & 3 \ 2 & -1 & 10 & -1 \ 0 & 3 & -1 & 8 \end{bmatrix} \cdot
\begin{bmatrix} x_1 \ x_2 \ x_3 \ x_4 \end{bmatrix} =
\begin{bmatrix} 6 \ 25 \ -11 \ 15 \end{bmatrix}
$$
Equation above can be rewritten as
$$
\left\{
\begin{aligned}
10x_1 - x_2 + 2x_3 + 0x_4 &= 6 \
-x_1 + 11x_2 + -x_3 + 3x_4 &= 25 \
2x_1 - x_2 + 10x_3 - x_4 &= -11 \
0x_1 + 3x_2 - 2x_3 + 8x_4 &= 15
\end{aligned}
\right.
$$
We can easily check that $\mathbf{x} = [1, 2, -1, 1]$ is a solution.
This particular example has only one solution, but generally speaking a matrix equation can have no solution, one solution, or an infinite number of solutions.
Some methods can find them all (e.g. Gauss), but some can only converge to one solution (e.g. Seidel, Jacobi).
To start, let's define some helper functions
End of explanation
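(Added sanity check, not in the original notebook.) For Example 1 above we can verify numerically that $\mathbf{x} = [1, 2, -1, 1]$ solves the system:
A_ex = np.array([[10., -1., 2., 0.],
                 [-1., 11., -1., 3.],
                 [2., -1., 10., -1.],
                 [0., 3., -1., 8.]])
b_ex = np.array([6., 25., -11., 15.])
print(np.allclose(A_ex.dot([1., 2., -1., 1.]), b_ex))  # True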
def is_dominant(a):
return np.all(np.abs(a).sum(axis=1) <= 2 * np.abs(a).diagonal()) and \
np.any(np.abs(a).sum(axis=1) < 2 * np.abs(a).diagonal())
def make_dominant(a):
for i in range(a.shape[0]):
a[i][i] = max(abs(a[i][i]), np.abs(a[i]).sum() - abs(a[i][i]) + 1)
return a
Explanation: Diagonally dominance
We say that matrix $A_{nxn}$ is diagonally dominant iff
$$ |a_{ii}| \geq \sum_{j\neq i} |a_{ij}|, \quad \forall i=1, n \, $$
equivalent
$$ 2 \cdot |a_{ii}| \geq \sum_{j = 1, n} |a_{ij}|, \quad \forall i=1, n \, $$
Also we say that matrix $A_{nxn}$ is strictly diagonally dominant iff it's diagonally dominant and
$$ \exists i : |a_{ii}| > \sum_{j\neq i} |a_{ij}|$$
End of explanation
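(Added usage example.) make_dominant turns an arbitrary matrix into a strictly diagonally dominant one, which is_dominant then accepts:
m = np.array([[1., 5.], [2., 1.]])
print(is_dominant(m))                 # False
print(is_dominant(make_dominant(m)))  # True (note: make_dominant modifies m in place)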
def generate_random(n):
return make_dominant(np.random.rand(n, n) * n), np.random.rand(n) * n
def generate_hilbert(n):
return make_dominant(sp.linalg.hilbert(n)), np.arange(1, n + 1, dtype=np.float)
def linalg(a, b, debug=False):
return np.linalg.solve(a, b),
Explanation: NOTE! All generate functions will return already strictly diagonally dominant matrices
End of explanation
def gauss(a, b, debug=False):
assert is_square(a) and has_solutions(a, b)
a = np.append(a.copy(), b[np.newaxis].T, axis=1)
i = 0
k = 0
while i < a.shape[0]:
        r = np.argmax(np.abs(a[i:, i])) + i  # partial pivoting: pick the largest pivot by absolute value
a[[i, r]] = a[[r, i]]
if a[i][i] == 0:
break
for j in range(a.shape[0]):
if j == i:
continue
a[j] -= (a[j][i] / a[i][i]) * a[i]
a[i] = a[i] / a[i][i]
i += 1
assert np.count_nonzero(a[i:]) == 0
return a[:, -1],
Explanation: Gauss method
To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
Swapping two rows
Multiplying a row by a non-zero number
Adding a multiple of one row to another row.
Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the left-most non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where multiple elementary operations might be done at each step), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.
$$\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \
1 & 1 & -1 & 1 \
3 & 11 & 5 & 35
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \
0 & -2 & -2 & -8 \
0 & 2 & 2 & 8
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \
0 & -2 & -2 & -8 \
0 & 0 & 0 & 0
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 0 & -2 & -3 \
0 & 1 & 1 & 4 \
0 & 0 & 0 & 0
\end{array}\right] $$
Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. Some authors use the term Gaussian elimination to refer to the process until it has reached its upper triangular, or (non-reduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
End of explanation
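(Added check, not in the original.) On a tiny well-posed system, gauss should agree with NumPy's direct solver:
A_small = np.array([[2., 1.], [1., 3.]])
b_small = np.array([3., 5.])
print(gauss(A_small, b_small)[0])         # [0.8 1.4]
print(np.linalg.solve(A_small, b_small))  # [0.8 1.4]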
def seidel(a, b, x0 = None, limit=20000, debug=False):
assert is_square(a) and is_dominant(a) and has_solutions(a, b)
if x0 is None:
x0 = np.zeros_like(b, dtype=np.float)
x = x0.copy()
while limit > 0:
tx = x.copy()
for i in range(a.shape[0]):
x[i] = (b[i] - a[i, :].dot(x)) / a[i][i] + x[i]
if debug:
print(x)
if np.allclose(x, tx, atol=ATOL, rtol=RTOL):
return x, limit
limit -= 1
return x, limit
Explanation: Gauss-Seidel method
The Gauss–Seidel method is an iterative technique for solving a square system of $n$ linear equations with unknown $x$:
$$ A \mathbf{x} = \mathbf{b} $$
It is defined by the iteration
$$ L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}, $$
where $\mathbf{x}^{(k)}$ is the $k$-th approximation or iteration of $\mathbf{x}$, $\mathbf{x}^{(k+1)}$ is the next or $k+1$ iteration of $\mathbf{x}$, and the matrix $A$ is decomposed into a lower triangular component $L_*$ and a strictly upper triangular component $U$: $A = L_* + U$.
In more detail, write out $A$, $\mathbf{x}$ and $\mathbf{b}$ in their components:
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \ a_{21} & a_{22} & \cdots & a_{2n} \ \vdots & \vdots & \ddots & \vdots \a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \ x_2 \ \vdots \ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \ b_2 \ \vdots \ b_n \end{bmatrix}.$$
Then the decomposition of $A$ into its lower triangular component and its strictly upper triangular component is given by:
$$A=L_*+U \qquad \text{where} \qquad L_* = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \ a_{21} & a_{22} & \cdots & 0 \ \vdots & \vdots & \ddots & \vdots \a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \quad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \ 0 & 0 & \cdots & a_{2n} \ \vdots & \vdots & \ddots & \vdots \0 & 0 & \cdots & 0 \end{bmatrix}.$$
The system of linear equations may be rewritten as:
$$L_* \mathbf{x} = \mathbf{b} - U \mathbf{x} $$
The Gauss–Seidel method now solves the left hand side of this expression for $\mathbf{x}$, using previous value for $\mathbf{x}$ on the right hand side. Analytically, this may be written as:
$$ \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}). $$
However, by taking advantage of the triangular form of $L_*$, the elements of $\mathbf{x}^{(k+1)}$ can be computed sequentially using forward substitution:
$$ x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j=1}^{i-1}a_{ij}x^{(k+1)}_j - \sum_{j=i+1}^{n}a_{ij}x^{(k)}_j \right),\quad i=1,2,\dots,n. $$
The procedure is generally continued until the changes made by an iteration are below some tolerance.
End of explanation
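(Added worked example.) One Gauss-Seidel sweep on the 2x2 system $4x_1 + x_2 = 9$, $x_1 + 3x_2 = 7$, starting from $x = [0, 0]$; note that the update of $x_2$ already uses the freshly computed $x_1$:
x1 = (9 - 1*0) / 4   # 2.25, uses the old x2
x2 = (7 - 1*x1) / 3  # ~1.583, already uses the new x1
print(x1, x2)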
def jacobi(a, b, x0 = None, limit=20000, debug=False):
assert is_square(a) and is_dominant(a) and has_solutions(a, b)
if x0 is None:
x0 = np.zeros_like(b, dtype=np.float)
x = x0.copy()
while limit > 0:
tx = x.copy()
for i in range(a.shape[0]):
x[i] = (b[i] - a[i, :].dot(tx)) / a[i][i] + tx[i]
if debug:
print(x)
if np.allclose(x, tx, atol=ATOL, rtol=RTOL):
return x, limit
limit -= 1
return x, limit
Explanation: Jacobi method
The Jacobi method is an iterative technique for solving a square system of $n$ linear equations with unknown $x$:
$$ A \mathbf{x} = \mathbf{b} $$
where
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \ a_{21} & a_{22} & \cdots & a_{2n} \ \vdots & \vdots & \ddots & \vdots \a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \ x_2 \ \vdots \ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \ b_2 \ \vdots \ b_n \end{bmatrix}.$$
Then $A$ can be decomposed into a diagonal component $D$, and the remainder $R$:
$$A=D+R \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \ 0 & a_{22} & \cdots & 0 \ \vdots & \vdots & \ddots & \vdots \0 & 0 & \cdots & a_{nn} \end{bmatrix} \text{ and } R = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \ a_{21} & 0 & \cdots & a_{2n} \ \vdots & \vdots & \ddots & \vdots \ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}. $$
The solution is then obtained iteratively via
$$ \mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}), $$
where $\mathbf{x}^{(k)}$ is the $k$-th approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next or $k + 1$ iteration of $\mathbf{x}$. The element-based formula is thus:
$$ x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i -\sum_{j\ne i}a_{ij}x^{(k)}_j\right),\quad i=1,2,\ldots,n. $$
The computation of $x_{i}^{(k+1)}$ requires each element in $\mathbf{x}^k$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_i^k$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size $n$.
End of explanation
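(Added contrast to the previous example.) The same sweep with Jacobi uses only the old values for every component:
x1_new = (9 - 1*0) / 4  # 2.25
x2_new = (7 - 1*0) / 3  # ~2.333, still uses the old x1 = 0
print(x1_new, x2_new)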
def norm(a, b, res):
return np.linalg.norm(a.dot(res) - b)
def run(method, a, b, verbose=False, **kwargs):
if not verbose:
print("-" * 100)
print(method.__name__.upper())
res = method(a, b, **kwargs)
score = norm(a, b, res[0])
if not verbose:
print("res =", res)
print("score =", score)
return score
Explanation: Evaluation
To evaluate an algorithm we are going to calculate
$$ score = || A \cdot \mathbf{x^*} - \mathbf{b} || $$
where $\mathbf{x}^*$ is the result of the algorithm.
End of explanation
a4 = np.array([[10., -1., 2., 0.],
[-1., 11., -1., 3.],
[2., -1., 10., -1.],
[0., 3., -1., 8.]])
print(a4)
b = np.array([6., 25., -11., 15.])
print("b =", b)
_ = run(linalg, a4, b)
_ = run(gauss, a4, b)
_ = run(seidel, a4, b)
_ = run(jacobi, a4, b)
Explanation: Back to examples
Example 1
Do you remember the example from the problem statement? Now we are going to see how the iterative algorithms converge for that example.
End of explanation
a4 = np.array([[1., -1/8, 1/32, 1/64],
[-1/2, 2., 1/16, 1/32],
[-1., 1/4, 4., 1/16],
[-1., 1/4, 1/8, 8.]])
print(a4)
b = np.array([1., 4., 16., 64.])
print("b =", b)
_ = run(linalg, a4, b)
_ = run(gauss, a4, b)
_ = run(seidel, a4, b)
_ = run(jacobi, a4, b)
Explanation: Seidel vs Jacobi
Here is a tricky example of a linear system where the Jacobi method converges faster than Gauss-Seidel.
As you will see, Jacobi needs only 1 iteration to find the answer, while Gauss-Seidel needs over 10 iterations to find an approximate solution.
End of explanation
a, b = generate_hilbert(1000)
print("LINALG =", run(linalg, a, b, verbose=True))
print("GAUSS =", run(gauss, a, b, verbose=True))
print("SEIDEL =", run(seidel, a, b, x0=np.zeros_like(b, dtype=np.float), verbose=True))
print("JACOBI =", run(jacobi, a, b, x0=np.zeros_like(b, dtype=np.float), verbose=True))
Explanation: Hilbert matrices
Now, let's have a look at the Hilbert matrix. It's said that non-iterative methods should perform badly here. Proof?
End of explanation
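(Added illustration.) The trouble direct methods have with Hilbert matrices comes from their condition number, which explodes with $n$ (shown here for the raw, non-dominant Hilbert matrix):
for n in (5, 10, 15):
    print(n, np.linalg.cond(sp.linalg.hilbert(n)))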
def plot_hilbert_score_by_matrix_size(method, sizes):
scores = np.zeros_like(sizes, dtype=np.float)
for i in range(len(sizes)):
a, b = generate_hilbert(sizes[i])
scores[i] = run(method, a, b, verbose=True)
plt.plot(sizes, scores, label=method.__name__)
sizes = np.linspace(1, 600, num=50, dtype=np.int)
plt.figure(figsize=(15, 10))
plot_hilbert_score_by_matrix_size(linalg, sizes)
plot_hilbert_score_by_matrix_size(gauss, sizes)
plot_hilbert_score_by_matrix_size(seidel, sizes)
plot_hilbert_score_by_matrix_size(jacobi, sizes)
plt.title("Scores of different methods for Hilbert matrices") \
.set_fontsize("xx-large")
plt.xlabel("n").set_fontsize("xx-large")
plt.ylabel("score").set_fontsize("xx-large")
legend = plt.legend(loc="upper right")
for label in legend.get_texts():
label.set_fontsize("xx-large")
plt.show()
Explanation: As we can see, it's true for huge $n>1000$ that direct methods work badly on Hilbert matrices. But how about the size of the matrix? Is $n=500$ enough?
End of explanation
a, b = generate_random(20)
_ = run(linalg, a, b)
_ = run(gauss, a, b)
_ = run(seidel, a, b)
_ = run(jacobi, a, b)
Explanation: Random matrices
It's time to test our methods on randomly generated matrices.
To be deterministic, we are going to set the seed to $17$ every time we call generate_random(...)
End of explanation
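Note (added): generate_random itself does not fix the seed, so to actually reproduce a run the seed has to be set right before the call, for example:
np.random.seed(17)
a_demo, b_demo = generate_random(20)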
a, b = generate_random(200)
%timeit run(linalg, a, b, verbose=True)
%timeit run(gauss, a, b, verbose=True)
%timeit run(seidel, a, b, verbose=True)
%timeit run(jacobi, a, b, verbose=True)
Explanation: Runtime
Now, let's compare our methods by actual running time.
To have more accurate results, we need to run them on a large matrix (e.g. a random $200\times200$).
End of explanation
def plot_convergence(method, a, b, limits):
scores = np.zeros_like(limits, dtype=np.float)
for i in range(len(limits)):
scores[i] = run(method, a, b, x0 = np.zeros_like(b, dtype=np.float), limit=limits[i], verbose=True)
plt.plot(limits, scores, label=method.__name__)
a, b = generate_random(15)
limits = np.arange(0, 350)
plt.figure(figsize=(15, 10))
plot_convergence(seidel, a, b, limits)
plot_convergence(jacobi, a, b, limits)
plt.title("Convergence of Seidel/Jacobi methods for random matrix").set_fontsize("xx-large")
plt.xlabel("n_iters").set_fontsize("xx-large")
plt.ylabel("score").set_fontsize("xx-large")
plt.xscale("log")
legend = plt.legend(loc="upper right")
for label in legend.get_texts():
label.set_fontsize("xx-large")
plt.show()
Explanation: Convergence speed
We have already compared the methods by accuracy and time, but how about convergence speed?
Now we are going to show the error of each method after a given number of iterations. To be clearer, we use a logarithmic x-scale.
End of explanation |
10,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test isotherm fitting
Our strategy here is to generate data points that follow a given isotherm model, then fit an isotherm model to the data using pyIAST, and check that pyIAST identifies the parameters correctly.
Step1: We test all analytical models implemented in pyIAST.
Step2: This dictionary gives the model parameters for which we generate synthetic data to test pyIAST fitting. Note that, because the DSLF model has so many parameters, it is highly likely that such a model will overfit the data. Thus, we expect pyIAST to reach a local minimum for DSLF yet still obtain a reasonable fit with the default starting guess.
Step4: The loading function generates synthetic data for a given model. We pass it an array of pressures and it returns loading using the given model. Note that the parameters for each model are taken from the above dictionary.
Step5: Test model fits
Loop through all models, generate synthetic data using parameters in model_params and the loading function here, then fit model using pyIAST. Plot data and fits, check that pyIAST identified parameters match the model.
Step6: Quick visual test on the Interpolator isotherm | Python Code:
import pyiast
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
Explanation: Test isotherm fitting
Our strategy here is to generate data points that follow a given isotherm model, then fit an isotherm model to the data using pyIAST, and check that pyIAST identifies the parameters correctly.
End of explanation
models = pyiast._MODELS
models
Explanation: We test all analytical models implemented in pyIAST.
End of explanation
model_params = {
"Langmuir": {"M": 10.0, "K": 10.0},
"Quadratic": {"M": 10.0, "Ka": 10.0, "Kb": 10.0 ** 2 * 3},
"BET": {"M": 10.0, "Ka": 10.0, "Kb": .2},
"DSLangmuir": {"M1": 10.0, "K1": 1.0,
"M2": 30.0, "K2": 30.0}, # warning: 1/2 is arbitrary
"Henry": {"KH": 10.0},
"TemkinApprox": {"M": 10.0, "K": 10.0, "theta": -0.1}
}
Explanation: This dictionary gives the model parameters for which we generate synthetic data to test pyIAST fitting. Note that, because the DSLF model has so many parameters, it is highly likely that such a model will overfit the data. Thus, we expect pyIAST to reach a local minimum for DSLF yet still obtain a reasonable fit with the default starting guess.
End of explanation
def loading(P, model):
    """Return loading at pressure P using a given model.

    :param P: np.array of pressures
    :param model: string specifying the model
    """
if model not in models:
raise Exception("This model is not implemented in the test suite.")
if model == "Langmuir":
M = model_params[model]["M"]
K = model_params[model]["K"]
return M * K * P / (1.0 + K * P)
if model == "Quadratic":
M = model_params[model]["M"]
Ka = model_params[model]["Ka"]
Kb = model_params[model]["Kb"]
return M * P * (Ka + 2.0 * Kb * P) / (1.0 + Ka * P + Kb * P ** 2)
if model == "BET":
M = model_params[model]["M"]
Ka = model_params[model]["Ka"]
Kb = model_params[model]["Kb"]
return M * Ka * P / (1.0 - Kb * P) / (1.0 - Kb * P + Ka * P)
if model == "DSLangmuir":
M1 = model_params[model]["M1"]
K1 = model_params[model]["K1"]
M2 = model_params[model]["M2"]
K2 = model_params[model]["K2"]
return M1 * K1 * P / (1.0 + K1 * P) +\
M2 * K2 * P / (1.0 + K2 * P)
if model == "TemkinApprox":
M = model_params[model]["M"]
K = model_params[model]["K"]
theta = model_params[model]["theta"]
fractional_langmuir_loading = K * P / (1.0 + K * P)
return M * (fractional_langmuir_loading + theta *
fractional_langmuir_loading ** 2 *
(fractional_langmuir_loading - 1.0))
if model == "Henry":
return model_params[model]["KH"] * P
Explanation: The loading function generates synthetic data for a given model. We pass it an array of pressures and it returns loading using the given model. Note that the parameters for each model are taken from the above dictionary.
End of explanation
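(Added quick check.) For instance, the synthetic Langmuir loading at a few pressures, using the parameters from model_params:
P_test = np.array([0.0, 0.1, 1.0])
print(loading(P_test, "Langmuir"))  # [0., 5., ~9.09] since M = K = 10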
for model in models:
print("Testing model:", model)
# Generate synthetic data
df = pd.DataFrame()
df['P'] = np.linspace(0, 1, 40)
df['L'] = loading(df['P'], model)
# use pyIAST to fit model to data
isotherm = pyiast.ModelIsotherm(df, pressure_key='P', loading_key='L',
model=model)
isotherm.print_params()
# plot fit
P_plot = np.linspace(0, 1, 100)
fig = plt.figure()
plt.scatter(df['P'], df['L'], label='Synthetic data', clip_on=False)
plt.plot(P_plot, isotherm.loading(P_plot), label='pyIAST fit')
plt.xlim([0, 1])
plt.ylim(ymin=0)
plt.xlabel('Pressure')
plt.ylabel('Uptake')
plt.title(model)
plt.legend(loc='lower right')
plt.show()
# assert parameters are equal
for param in isotherm.params.keys():
np.testing.assert_almost_equal(isotherm.params[param],
model_params[model][param],
decimal=3)
Explanation: Test model fits
Loop through all models, generate synthetic data using parameters in model_params and the loading function here, then fit model using pyIAST. Plot data and fits, check that pyIAST identified parameters match the model.
End of explanation
isotherm = pyiast.InterpolatorIsotherm(df, pressure_key='P', loading_key='L')
pyiast.plot_isotherm(isotherm)
Explanation: Quick visual test on the Interpolator isotherm
End of explanation |
10,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The objective of this notebook, and more broadly this project, is to see whether we can discern a linear relationship between metrics found on Rotten Tomatoes and Box Office performance.
Box office performance is measured in millions as is budget.
Because we have used scaling, interpretation of the raw coefficients will be difficult. Luckily, sklearn's standard scaler has an inverse_transform method, so, if we had to, we could reverse-transform the coefficients (sc_X_train for the holdout group and sc_x for the non-holdout group) to get some interpretation. The same logic follows for interpreting target variables should we use the model for prediction.
The year, country, language and month will all be made into dummy variables. I will do this with the built-in pd.get_dummies() function! This will turn all columns of type object into dummies! I will also use the optional parameter drop_first to avoid the dummy variable trap!
I will use sklearn's standard scaler on all of my variables, except for the dummies! This is important since we will be using regularized regression i.e. Lasso, Ridge, Elastic Net
I will shuffle my dataframe before the train/test split. I will utilise the X_train, y_train, X_test and y_test variables in GridSearchCV. This is an example of employing cross-validation with a holdout set. This will help guard against overfitting.
To be truly honest, I do not have enough data to justify using a holdout set; however, I want to implement it as an academic exercise! It also gives me more code to write!
I will then re-implement the models without using the holdout set to compare results!
Let's get to it!
CROSS-VALIDATION WITH HOLDOUT SECTION
Step1: Upon further thought, it doesn't make sense to have rank_in_genre as a predictor variable for box office performance. When the movie is released, it is not ranked immediately. The ranks assigned often occur many years after the movie is released, and so they are not related to the amount of money accrued at the box office. We will drop this variable.
Right now, our index is the name of the movie! We don't need these to be indices; it would be cleaner to have a numeric index.
The month and year columns are currently in numerical form; however, for our analysis, we require that these be of type object!
Step2: From the above plots, we see that we have heavy skewness in all of our features and our target variable.
The features will be scaled using standard scaler.
When splitting the data into training and test sets, I will fit my scaler according to the training data!
There is no sign of multi-collinearity $(>= 0.9)$ - good to go!
Step3: Baseline Model and Cross-Validation with Holdout Sets
Creation of Holdout Set
Step4: Baseline Model
As we can see - the baseline model of regular linear regression is dreadful! Let's move on to more sophisticated methods!
Step5: Ridge, Lasso and Elastic Net regression - Holdouts
Step6: Cross-Validation - No Holdout Sets
Step7: Analysis of Results!
Ridge Analysis
Step8: Lasso Analysis
Step9: Elastic Net Analysis | Python Code:
df = unpickle_object("final_dataframe_for_analysis.pkl") #dataframe we got from webscraping and cleaning!
#see other notebooks for more info.
df.dtypes # there are all our features. Our target variable is Box_office
df.shape
Explanation: The objective of this notebook, and more broadly this project, is to see whether we can discern a linear relationship between metrics found on Rotten Tomatoes and Box Office performance.
Box office performance is measured in millions as is budget.
Because we have used scaling, interpretation of the raw coefficients will be difficult. Luckily, sklearn's standard scaler has an inverse_transform method, so, if we had to, we could reverse-transform the coefficients (sc_X_train for the holdout group and sc_x for the non-holdout group) to get some interpretation. The same logic follows for interpreting target variables should we use the model for prediction.
The year, country, language and month will all be made into dummy variables. I will do this with the built-in pd.get_dummies() function! This will turn all columns of type object into dummies! I will also use the optional parameter drop_first to avoid the dummy variable trap!
I will use sklearn's standard scaler on all of my variables, except for the dummies! This is important since we will be using regularized regression i.e. Lasso, Ridge, Elastic Net
I will shuffle my dataframe before the train/test split. I will utilise the X_train, y_train, X_test and y_test variables in GridSearchCV. This is an example of employing cross-validation with a holdout set. This will help guard against overfitting.
To be truly honest, I do not have enough data to justify using a holdout set; however, I want to implement it as an academic exercise! It also gives me more code to write!
I will then re-implement the models without using the holdout set to compare results!
Let's get to it!
CROSS-VALIDATION WITH HOLDOUT SECTION
End of explanation
df['Month'] = df['Month'].astype(object)
df['Year'] = df['Year'].astype(object)
del df['Rank_in_genre']
df.reset_index(inplace=True)
del df['index']
percentage_missing(df)
df.hist(layout=(4,2), figsize=(50,50))
Explanation: Upon further thought, it doesn't make sense to have rank_in_genre as a predictor variable for box office performance. When the movie is released, it is not ranked immediately. The ranks assigned often occur many years after the movie is released, and so they are not related to the amount of money accrued at the box office. We will drop this variable.
Right now, our index is the name of the movie! We don't need these to be indices; it would be cleaner to have a numeric index.
The month and year columns are currently in numerical form; however, for our analysis, we require that these be of type object!
End of explanation
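(Added sketch.) The dummy-variable encoding described above is not shown in this notebook (it was done in the notebook that produced the pickled feature matrix); assuming pandas is imported as pd, the step would look roughly like:
df = pd.get_dummies(df, drop_first=True)  # drop_first avoids the dummy variable trap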
plot_corr_matrix(df)
X = unpickle_object("X_features_selection.pkl") #all features from the shuffled dataframe. Numpy array
y = unpickle_object("y_variable_selection.pkl") #target variable from shuffled dataframe. Numpy array
final_df = unpickle_object("analysis_dataframe.pkl") #this is the shuffled dataframe!
Explanation: From the above plots, we see that we have heavy skewness in all of our features and our target variable.
The features will be scaled using standard scaler.
When splitting the data into training and test sets, I will fit my scaler according to the training data!
There is no sign of multi-collinearity $(>= 0.9)$ - good to go!
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 0) #train on 75% of data
sc_X_train = StandardScaler()
sc_y_train = StandardScaler()
sc_X_train.fit(X_train[:,:6])#only need to learn fit of first 6 - rest are dummies
sc_y_train.fit(y_train)
X_train[:,:6] = sc_X_train.transform(X_train[:,:6]) #only need to transform first 6 columns - rest are dummies
X_test[:,:6] = sc_X_train.transform(X_test[:,:6]) #same as above
y_train = sc_y_train.transform(y_train)
y_test = sc_y_train.transform(y_test)
Explanation: Baseline Model and Cross-Validation with Holdout Sets
Creation of Holdout Set
End of explanation
baseline_model(X_train, X_test, y_train, y_test)
Explanation: Baseline Model
As we can see - the baseline model of regular linear regression is dreadful! Let's move on to more sophisticated methods!
End of explanation
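(Added sketch.) baseline_model is a helper defined outside this notebook and is not shown here; a minimal baseline of this kind might look like the following (assumed, not the author's implementation):
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(X_train, y_train)
print("train R^2:", lr.score(X_train, y_train))
print("test R^2:", lr.score(X_test, y_test))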
holdout_results = holdout_grid(["Ridge", "Lasso", "Elastic Net"], X_train, X_test, y_train, y_test)
pickle_object(holdout_results, "holdout_model_results")
Explanation: Ridge, Lasso and Elastic Net regression - Holdouts
End of explanation
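(Added sketch.) holdout_grid is likewise a project helper whose code is not shown; the idea for a single model, written as an assumption about what it does, is a grid search fit on the training split and scored on the holdout split:
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))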
sc_X = StandardScaler()
sc_y = StandardScaler()
sc_X.fit(X[:,:6])#only need to learn fit of first 6 - rest are dummies
sc_y.fit(y)
X[:,:6] = sc_X.transform(X[:,:6]) #only need to transform first 6 columns - rest are dummies
y = sc_y.transform(y)
no_holdout_results = regular_grid(["Ridge", "Lasso", "Elastic Net"], X, y)
pickle_object(no_holdout_results, "no_holdout_model_results")
Explanation: Cross-Validation - No Holdout Sets
End of explanation
extract_model_comparisons(holdout_results, no_holdout_results, "Ridge")
Explanation: Analysis of Results!
Ridge Analysis
End of explanation
extract_model_comparisons(holdout_results, no_holdout_results, "Lasso")
Explanation: Lasso Analysis
End of explanation
extract_model_comparisons(holdout_results, no_holdout_results, "Elastic Net")
Explanation: Elastic Net Analysis
End of explanation |
10,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: To verify this actually works as a RAM, I'll run a little simulation
Step2: The simulation results show the values [1, 4, 7, 10, 13, 16, 19, 22, 25, 28] entering on the data input bus
and being stored in the first ten RAM locations
during the interval $t =$ [0, 19] when the write control input is high.
Then the same set of data appears on the data output bus during $t =$ [20, 39] when the
first ten locations of the RAM are read back.
So the RAM passes this simple simulation.
Now let's see how Yosys interprets this RAM description.
First, I'll generate the Verilog code for a RAM with an eight-bit address bus ($2^8$ = 256 locations) with
each word able to store an eight-bit byte.
This works out to a total storage of 256 $\times$ 8 bits $=$ 2048 bits.
That should fit comfortably in a single, 4096-bit BRAM.
Step3: Next, I'll pass the Verilog code to Yosys and see what FPGA resources it uses
Step4: Yowsa! It looks like Yosys is building the RAM from individual flip-flops -- 2056 of 'em.
That's definitely not what we want.
The reason for the lousy implementation is that I haven't given Yosys a description that will
trigger its RAM inference procedure.
I searched and
found the following Verilog code
that does trigger Yosys
Step5: Now the statistics are much more reasonable
Step6: This reduces the resource usage by a single LUT
Step7: I'll run a simulation of the dual-port RAM similar to the one I did above,
but here I'll start reading from the RAM before I finish writing data to it
Step8: In the simulation output, you can see the value that entered the RAM through the data_i bus
on the rising clock edge at time $t$ then exited on the data_o bus three clock cycles
later at time $t + 6$.
In this example, the dual-port RAM is acting as a simple delay line.
Skinny RAM, Fat RAM
In the previous section, I built a 256-byte RAM. Is that it? Is that all you can do with the block RAM?
Obviously not or else why would I even bring this up?
Since block RAMs first appeared in FPGAs, the designers have allowed you to select the width of
the data locations.
While the number of data bits in the RAM is constant, you can clump them into different
word sizes.
Naturally, you'll get more addressable RAM locations if you use a narrow word width ("skinny" RAM)
as compared to using wider words ("fat" RAM).
Here are the allowable widths and the corresponding number of addressable locations
for the 4K BRAMs in the iCE40 FPGA
Step9: Two BRAMs are used because a data width of sixteen bits is needed to hold the ten-bit words and a single 4K BRAM
can only hold 256 of those.
What about a wide RAM with 128 ($2^7$) 24-bit words? That requires 3072 total bits so you would think that should fit
into a single 4K BRAM, right?
Step10: Since the maximum width of a single BRAM is sixteen bits,
Yosys employs two BRAMs in parallel to get the entire 24-bit word width and then only uses half the
addressable locations in each BRAM.
Thus, a total of 8192 bits of BRAM is used to store the 3072 bits that were requested.
How about a 24-bit wide RAM with 512 words?
That's 4$\times$ bigger than the previous one, so will it take 4$\times$ the number of BRAMs?
Step11: Actually, it only takes three BRAMs instead of eight.
Why is that?
Because Yosys was able to stack two 256$\times$16 BRAMs to create a 512$\times$16 RAM, and then
put this in parallel with a single 512$\times$8 BRAM to create a total 512$\times$24 RAM.
From these examples, you can see the efficiency with which you use BRAM resources
is very dependent upon the RAM aspect-ratio (#locations $\times$ data width) that you specify,
sometimes in unexpected ways.
Demo Time!
As I've said before, it would be a shame to do all this work and then not do something fun with it.
So I'll show how to use a BRAM to record an on-off sequence of button presses and then play
it back by lighting the iCEstick LEDs.
(OK, maybe it's not that fun.)
Here's the basics of how the circuit operates
Step12: After converting the MyHDL to Verilog, I wrote the pin assignments for the LEDs and buttons to a file.
I'm using all five LEDs on the iCEstick along with the
two pushbuttons I connected to pins 114 and 118 of the FPGA through the iCEstick I/O header.
Step13: Finally, I synthesized, compiled, and downloaded the FPGA bitstream using the
(by now) familiar sequence of commands
Step14: Once the bitstream was downloaded, I could store and playback a sequence of button pushes.
It's difficult to describe using text or images, so here's a nice video of how it works | Python Code:
from pygmyhdl import *
@chunk
def ram(clk_i, en_i, wr_i, addr_i, data_i, data_o):
'''
Inputs:
clk_i: Data is read/written on the rising edge of this clock input.
en_i: When high, the RAM is enabled for read/write operations.
wr_i: When high, data is written to the RAM; when low, data is read from the RAM.
addr_i: Address bus for selecting which RAM location is being read/written.
data_i: Data bus for writing data into the RAM.
Outputs:
data_o: Data bus for reading data from the RAM.
'''
# Create an array of words to act as RAM locations for storing data.
# The number of bits in each word is set by the width of the data input bus.
# The number of words is determined by the width of the address bus so,
# for example, a 4-bit address would create 2**4 = 16 RAM locations.
mem = [Bus(len(data_i)) for _ in range(2**len(addr_i))]
# Perform read/write operations on the rising edge of the clock.
@seq_logic(clk_i.posedge)
def logic():
if en_i:
# The read/write operations only get executed if the enable input is high.
if wr_i:
# If the write-control is high, write the value on the input data bus
# into the array of words at the given address value.
mem[addr_i.val].next = data_i
else:
# If the write-control is low, read data from the word at the
# given address value and send it to the output data bus.
data_o.next = mem[addr_i.val]
Explanation: Note: If you're reading this as a static HTML page, you can also get it as an
executable Jupyter notebook here.
Block (RAM) Party!
I've already presented some simple combinational and sequential circuits using the FPGA's LUTs and D flip-flops.
But the iCE40 has even more to offer: block RAMs!
These are specialized blocks (hence the name) of high density RAM
embedded within the FPGA fabric.
These BRAMs provide a place to store lots of bits without using up all the DFFs in the FPGA.
Now I'll show you how to use them.
Inferring Block RAMs
If you look at the
iCE40 technology library docs,
you'll see how to instantiate a BRAM using Verilog or VHDL.
But that doesn't do us much good since we're using MyHDL.
So I'm going to demonstrate how to describe a RAM using MyHDL such that Yosys will infer that a BRAM is what I want.
As a first cut, here's a description of a simple RAM:
End of explanation
initialize() # Yeah, yeah, get things ready for simulation...
# Create wires and buses to connect to the RAM.
clk = Wire(name='clk')
en = Wire(name='en')
wr = Wire(name='wr')
addr = Bus(8, name='addr')
data_i = Bus(8, name='data_i')
data_o = Bus(8, name='data_o')
# Instantiate the RAM.
ram(clk_i=clk, en_i=en, wr_i=wr, addr_i=addr, data_i=data_i, data_o=data_o)
def ram_test_bench():
'''RAM test bench: write 10 values to RAM, then read them back.'''
en.next = 1 # Enable the RAM.
# Write data to the first 10 locations in the RAM.
wr.next = 1 # Enable writes to RAM.
for i in range(10):
addr.next = i # Select RAM location to be written.
data_i.next = 3 * i + 1 # Generate a value to write to the location.
# Pulse the clock to write the data to RAM.
clk.next = 0
yield delay(1)
clk.next = 1
yield delay(1)
# Read data from the 10 locations that were written.
wr.next = 0 # Disable writes to RAM == enable reads from RAM.
for i in range(10):
addr.next = i # Select the RAM location to be read.
# Pulse the clock to read the data from RAM.
clk.next = 0
yield delay(1)
clk.next = 1
yield delay(1)
# Simulate the RAM using the test bench.
simulate(ram_test_bench())
# Look at the RAM inputs and outputs as the simulation was executed.
show_text_table('en clk wr addr data_i data_o')
Explanation: To verify this actually works as a RAM, I'll run a little simulation:
End of explanation
toVerilog(ram, clk_i=Wire(), en_i=Wire(), wr_i=Wire(), addr_i=Bus(8), data_i=Bus(8), data_o=Bus(8))
Explanation: The simulation results show the values [1, 4, 7, 10, 13, 16, 19, 22, 25, 28] entering on the data input bus
and being stored in the first ten RAM locations
during the interval $t =$ [0, 19] when the write control input is high.
Then the same set of data appears on the data output bus during $t =$ [20, 39] when the
first ten locations of the RAM are read back.
So the RAM passes this simple simulation.
Now let's see how Yosys interprets this RAM description.
First, I'll generate the Verilog code for a RAM with an eight-bit address bus ($2^8$ = 256 locations) with
each word able to store an eight-bit byte.
This works out to a total storage of 256 $\times$ 8 bits $=$ 2048 bits.
That should fit comfortably in a single, 4096-bit BRAM.
End of explanation
log = !yosys -p "synth_ice40" ram.v
print_stats(log) # Just print the FPGA resource usage stats from the log output.
Explanation: Next, I'll pass the Verilog code to Yosys and see what FPGA resources it uses:
End of explanation
@chunk
def ram(clk_i,wr_i, addr_i, data_i, data_o):
'''
Inputs:
clk_i: Data is read/written on the rising edge of this clock input.
wr_i: When high, data is written to the RAM; when low, data is read from the RAM.
addr_i: Address bus for selecting which RAM location is being read/written.
data_i: Data bus for writing data into the RAM.
Outputs:
data_o: Data bus for reading data from the RAM.
'''
mem = [Bus(len(data_i)) for _ in range(2**len(addr_i))]
@seq_logic(clk_i.posedge)
def logic():
if wr_i:
mem[addr_i.val].next = data_i
else:
data_o.next = mem[addr_i.val]
toVerilog(ram, clk_i=Wire(), wr_i=Wire(), addr_i=Bus(8), data_i=Bus(8), data_o=Bus(8))
log = !yosys -p "synth_ice40" ram.v
print_stats(log)
Explanation: Yowsa! It looks like Yosys is building the RAM from individual flip-flops -- 2056 of 'em.
That's definitely not what we want.
The reason for the lousy implementation is that I haven't given Yosys a description that will
trigger its RAM inference procedure.
I searched and
found the following Verilog code
that does trigger Yosys:
module test(input clk, wen, input [8:0] addr, input [7:0] wdata, output reg [7:0] rdata);
reg [7:0] mem [0:511];
initial mem[0] = 255;
always @(posedge clk) begin
if (wen) mem[addr] <= wdata;
rdata <= mem[addr];
end
endmodule
Then I just fiddled with my code until it produced something like that.
It turns out the culprit is the presence of the enable input (en_i).
Here's what happens if I take that out and leave the RAM enabled all the time:
End of explanation
@chunk
def simpler_ram(clk_i,wr_i, addr_i, data_i, data_o):
'''
Inputs:
clk_i: Data is read/written on the rising edge of this clock input.
wr_i: When high, data is written to the RAM; when low, data is read from the RAM.
addr_i: Address bus for selecting which RAM location is being read/written.
data_i: Data bus for writing data into the RAM.
Outputs:
data_o: Data bus for reading data from the RAM.
'''
mem = [Bus(len(data_i)) for _ in range(2**len(addr_i))]
@seq_logic(clk_i.posedge)
def logic():
if wr_i:
mem[addr_i.val].next = data_i
data_o.next = mem[addr_i.val] # RAM address is always read out!
toVerilog(simpler_ram, clk_i=Wire(), wr_i=Wire(), addr_i=Bus(8), data_i=Bus(8), data_o=Bus(8))
log = !yosys -p "synth_ice40" simpler_ram.v
print_stats(log)
Explanation: Now the statistics are much more reasonable: only a single block RAM is used.
You can even remove the else clause and continually read out the RAM location at the current address:
End of explanation
@chunk
def dualport_ram(clk_i, wr_i, wr_addr_i, rd_addr_i, data_i, data_o):
'''
Inputs:
clk_i: Data is read/written on the rising edge of this clock input.
wr_i: When high, data is written to the RAM; when low, data is read from the RAM.
wr_addr_i: Address bus for selecting which RAM location is being written.
rd_addr_i: Address bus for selecting which RAM location is being read.
data_i: Data bus for writing data into the RAM.
Outputs:
data_o: Data bus for reading data from the RAM.
'''
mem = [Bus(len(data_i)) for _ in range(2**len(wr_addr_i))]
@seq_logic(clk_i.posedge)
def logic():
if wr_i:
mem[wr_addr_i.val].next = data_i
data_o.next = mem[rd_addr_i.val] # Read from a different location than write.
Explanation: This reduces the resource usage by a single LUT:
The iCE40 BRAMs even allow a dual-port mode:
a value can be written to one address while data is read from a second, independent address.
(This is useful for building things like
FIFOs.)
End of explanation
initialize()
# Create wires and buses to connect to the dual-port RAM.
clk = Wire(name='clk')
wr = Wire(name='wr')
wr_addr = Bus(8, name='wr_addr') # Address bus for writes.
rd_addr = Bus(8, name='rd_addr') # Second address bus for reads.
data_i = Bus(8, name='data_i')
data_o = Bus(8, name='data_o')
# Instantiate the RAM.
dualport_ram(clk_i=clk, wr_i=wr, wr_addr_i=wr_addr, rd_addr_i=rd_addr, data_i=data_i, data_o=data_o)
def ram_test_bench():
for i in range(10): # Perform 10 RAM writes and reads.
# Write data to address i.
wr_addr.next = i
data_i.next = 3 * i + 1
wr.next = 1
# Read data from address i-3. After three clocks, the data that entered
# on the data_i bus will start to appear on the data_o bus.
rd_addr.next = i - 3
# Pulse the clock to trigger the write and read operations.
clk.next = 0
yield delay(1)
clk.next = 1
yield delay(1)
# Simulate the RAM using the test bench.
simulate(ram_test_bench())
# Look at the RAM inputs and outputs as the simulation was executed.
show_text_table('clk wr wr_addr data_i rd_addr data_o')
Explanation: I'll run a simulation of the dual-port RAM similar to the one I did above,
but here I'll start reading from the RAM before I finish writing data to it:
End of explanation
toVerilog(ram, clk_i=Wire(), wr_i=Wire(), addr_i=Bus(9), data_i=Bus(10), data_o=Bus(10))
log = !yosys -p "synth_ice40" ram.v
print_stats(log)
Explanation: In the simulation output, you can see the value that entered the RAM through the data_i bus
on the rising clock edge at time $t$ then exited on the data_o bus three clock cycles
later at time $t + 6$.
In this example, the dual-port RAM is acting as a simple delay line.
Skinny RAM, Fat RAM
In the previous section, I built a 256-byte RAM. Is that it? Is that all you can do with the block RAM?
Obviously not or else why would I even bring this up?
Since block RAMs first appeared in FPGAs, the designers have allowed you to select the width of
the data locations.
While the number of data bits in the RAM is constant, you can clump them into different
word sizes.
Naturally, you'll get more addressable RAM locations if you use a narrow word width ("skinny" RAM)
as compared to using wider words ("fat" RAM).
Here are the allowable widths and the corresponding number of addressable locations
for the 4K BRAMs in the iCE40 FPGA:
| Data Width | # of Locations | Address Width |
|:----------:|---------------:|:-------------:|
| 2 | 2048 | 11 |
| 4 | 1024 | 10 |
| 8 | 512 | 9 |
| 16 | 256 | 8 |
So how do you set the RAM width?
Easy, just set the number of data bus bits and Yosys will select the smallest
width that will hold it.
For example, specifying a data width of seven bits will make Yosys choose a BRAM width of eight bits.
Of course, that means you'll waste one bit of every memory location, but c'est la vie.
Specifying the number of addressable locations in the RAM is done similarly by setting the
width of the address bus.
To illustrate, an eleven-bit address bus would translate to $2^{11} =$ 2048 addressable locations.
Let's try some various word and address widths and see what Yosys does with them.
First, here's a RAM with 512 ($2^9$) ten-bit words:
End of explanation
toVerilog(ram, clk_i=Wire(), wr_i=Wire(), addr_i=Bus(7), data_i=Bus(24), data_o=Bus(24))
log = !yosys -p "synth_ice40" ram.v
print_stats(log)
Explanation: Two BRAMs are used because a data width of sixteen bits is needed to hold the ten-bit words and a single 4K BRAM
can only hold 256 of those.
What about a wide RAM with 128 ($2^7$) 24-bit words? That requires 3072 total bits so you would think that should fit
into a single 4K BRAM, right?
End of explanation
toVerilog(ram, clk_i=Wire(), wr_i=Wire(), addr_i=Bus(9), data_i=Bus(24), data_o=Bus(24))
log = !yosys -p "synth_ice40" ram.v
print_stats(log)
Explanation: Since the maximum width of a single BRAM is sixteen bits,
Yosys employs two BRAMs in parallel to get the entire 24-bit word width and then only uses half the
addressable locations in each BRAM.
Thus, a total of 8192 bits of BRAM is used to store the 3072 bits that were requested.
How about a 24-bit wide RAM with 512 words?
That's 4$\times$ bigger than the previous one, so will it take 4$\times$ the number of BRAMs?
End of explanation
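(Added aside, not part of the original post.) A rough way to reason about these counts is to try each allowed width/depth configuration from the table above and take the cheapest homogeneous tiling; Yosys can also mix configurations, so treat this only as an estimate:
from math import ceil
def bram_count(depth, width):
    configs = {2: 2048, 4: 1024, 8: 512, 16: 256}  # data width -> locations per 4K BRAM
    return min(ceil(width / w) * ceil(depth / d) for w, d in configs.items())
print(bram_count(512, 10))  # 2
print(bram_count(128, 24))  # 2
print(bram_count(512, 24))  # 3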
@chunk
def gen_reset(clk_i, reset_o):
'''
Generate a reset pulse to initialize everything.
Inputs:
clk_i: Input clock.
Outputs:
reset_o: Active-high reset pulse.
'''
cntr = Bus(1) # Reset counter.
@seq_logic(clk_i.posedge)
def logic():
if cntr < 1:
# Generate a reset while the counter is less than some threshold
# and increment the counter.
cntr.next = cntr.next + 1
reset_o.next = 1
else:
# Release the reset once the counter passes the threshold and
# stop incrementing the counter.
reset_o.next = 0
@chunk
def sample_en(clk_i, do_sample_o, frq_in=12e6, frq_sample=100):
'''
Send out a pulse every so often to trigger a sampling operation.
Inputs:
clk_i: Input clock.
frq_in: Frequency of the input clock (defaults to 12 MHz).
frq_sample: Frequency of the sample clock (defaults to 100 Hz).
Outputs:
do_sample_o: Sends out a single-cycle pulse every 1/frq_sample seconds.
'''
# Compute the width of the counter and when it should roll-over based
# on the master clock frequency and the desired sampling frequency.
from math import ceil, log2
rollover = int(ceil(frq_in / frq_sample)) - 1
cntr = Bus(int(ceil(log2(frq_in/frq_sample))))
# Sequential logic for generating the sampling pulse.
@seq_logic(clk_i.posedge)
def counter():
cntr.next = cntr + 1 # Increment the counter.
do_sample_o.next = 0 # Clear the sampling pulse output except...
if cntr == rollover:
do_sample_o.next = 1 # ...when the counter rolls over.
cntr.next = 0
@chunk
def record_play(clk_i, button_a, button_b, leds_o):
'''
Sample value on button B input, store in RAM, and playback by turning LEDs on/off.
Inputs:
clk_i: Clock input.
button_a: Button A input. High when pressed. Controls record/play operation.
button_b: Button B input. High when pressed. Used to input samples for controlling LEDs.
Outputs:
leds_o: LED outputs.
'''
# Instantiate the reset generator.
reset = Wire()
gen_reset(clk_i, reset)
# Instantiate the sampling pulse generator.
do_sample = Wire()
sample_en(clk_i, do_sample)
# Instantiate a RAM for holding the samples.
wr = Wire()
addr = Bus(11)
end_addr = Bus(len(addr)) # Holds the last address of the recorded samples.
data_i = Bus(1)
data_o = Bus(1)
ram(clk_i, wr, addr, data_i, data_o)
# States of the record/playback controller.
state = Bus(3) # Holds the current state of the controller.
INIT = 0 # Initialize. The reset pulse sends us here.
WAITING_TO_RECORD = 1 # Getting read to record samples.
RECORDING = 2 # Actually storing samples in RAM.
WAITING_TO_PLAY = 3 # Getting ready to play back samples.
PLAYING = 4 # Actually playing back samples.
# Sequential logic for the record/playback controller.
@seq_logic(clk_i.posedge)
def fsm():
wr.next = 0 # Keep the RAM write-control off by default.
if reset: # Initialize the controller using the pulse from the reset generator.
state.next = INIT # Go to the INIT state after the reset is released.
elif do_sample: # Process a sample whenever the sampling pulse arrives.
if state == INIT: # Initialize the controller.
leds_o.next = 0b10101 # Light LEDs to indicate the INIT state.
if button_a == 1:
# Get ready to start recording when button A is pressed.
state.next = WAITING_TO_RECORD # Go to record setup state.
elif state == WAITING_TO_RECORD: # Setup for recording.
leds_o.next = 0b11010 # Light LEDs to indicate this state.
if button_a == 0:
# Start recording once button A is released.
addr.next = 0 # Start recording from beginning of RAM.
data_i.next = button_b # Record the state of button B.
wr.next = 1 # Write button B state to RAM.
state.next = RECORDING # Go to recording state.
elif state == RECORDING: # Record samples of button B to RAM.
addr.next = addr + 1 # Next location for storing sample.
data_i.next = button_b # Sample state of button B.
wr.next = 1 # Write button B state to RAM.
# For feedback to the user, display the state of button B on the LEDs.
leds_o.next = concat(1,button_b, button_b, button_b, button_b)
if button_a == 1:
# If button A pressed, then get ready to play back the stored samples.
end_addr.next = addr+1 # Store the last sample address.
state.next = WAITING_TO_PLAY # Go to playback setup state.
elif state == WAITING_TO_PLAY: # Setup for playback.
leds_o.next = 0b10000 # Light LEDs to indicate this state.
if button_a == 0:
# Start playback once button A is released.
addr.next = 0 # Start playback from beginning of RAM.
state.next = PLAYING # Go to playback state.
elif state == PLAYING: # Show recorded state of button B on the LEDs.
leds_o.next = concat(1,data_o[0],data_o[0],data_o[0],data_o[0])
addr.next = addr + 1 # Advance to the next sample.
if addr == end_addr:
# Loop back to the start of RAM if this is the last sample.
addr.next = 0
if button_a == 1:
# Record a new sample if button A is pressed.
state.next = WAITING_TO_RECORD
Explanation: Actually, it only takes three BRAMs instead of eight.
Why is that?
Because Yosys was able to stack two 256$\times$16 BRAMs to create a 512$\times$16 RAM, and then
put this in parallel with a single 512$\times$8 BRAM to create a total 512$\times$24 RAM.
From these examples, you can see the efficiency with which you use BRAM resources
is very dependent upon the RAM aspect-ratio (#locations $\times$ data width) that you specify,
sometimes in unexpected ways.
Demo Time!
As I've said before, it would be a shame to do all this work and then not do something fun with it.
So I'll show how to use a BRAM to record an on-off sequence of button presses and then play
it back by lighting the iCEstick LEDs.
(OK, maybe it's not that fun.)
Here's the basics of how the circuit operates:
When button A is pressed and released, set the RAM address to 0 and start recording.
Every 0.01 seconds, sample the on-off value of button B, store it in RAM at the current address,
and increment the address.
If button A is not pressed, return to step 2 and take another sample.
Otherwise, store the current address to mark the end of the recording,
and halt here until button A is released.
When button A is released, reset the address to 0 (the start of the recording).
Every 0.01 seconds, read a button sample from the RAM and turn an LED on or off depending upon its value.
If the current address equals the end-of-recording address, reset the address to the beginning of the recording (address 0).
Otherwise, increment the current address.
If button A is not pressed, return to step 5 and display another sample.
Otherwise, loop back to step 1 and to start a new recording.
The iCEstick board already had the LEDs I needed, but no buttons.
To fix that, I wired some external buttons to the board as shown in this schematic:
<img src="record_play_circuit.png" alt="Record/playback schematic." width="600px" />
Next, I broke the record/playback logic into four pieces:
A RAM for storing the button samples (already coded above).
A counter that generates a sampling pulse every 0.01 seconds.
A controller that manages the recording/playback process.
A reset circuit that generates a single pulse to initialize the controller's state.
The MyHDL code for the circuit is shown below:
End of explanation
toVerilog(record_play, clk_i=Wire(), button_a=Wire(), button_b=Wire(), leds_o=Bus(5))
with open('record_play.pcf', 'w') as pcf:
pcf.write(
'''
set_io clk_i 21
set_io leds_o[0] 99
set_io leds_o[1] 98
set_io leds_o[2] 97
set_io leds_o[3] 96
set_io leds_o[4] 95
set_io button_a 118
set_io button_b 114
'''
)
Explanation: After converting the MyHDL to Verilog, I wrote the pin assignments for the LEDs and buttons to a file.
I'm using all five LEDs on the iCEstick along with the
two pushbuttons I connected to pins 114 and 118 of the FPGA through the iCEstick I/O header.
End of explanation
!yosys -q -p "synth_ice40 -blif record_play.blif" record_play.v
!arachne-pnr -q -d 1k -p record_play.pcf record_play.blif -o record_play.asc
!icepack record_play.asc record_play.bin
!iceprog record_play.bin
Explanation: Finally, I synthesized, compiled, and downloaded the FPGA bitstream using the
(by now) familiar sequence of commands:
End of explanation
HTML('<div style="padding-bottom:50.000%;"><iframe src="https://streamable.com/s/ihqg4/eqmlzq" frameborder="0" width="100%" height="100%" allowfullscreen style="width:640px;position:absolute;"></iframe></div>')
Explanation: Once the bitstream was downloaded, I could store and playback a sequence of button pushes.
It's difficult to describe using text or images, so here's a nice video of how it works:
End of explanation |
10,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenCV template recognition from http
Step1: Crop the image to make an initial test case
Step2: next grab a gold coin as template
Step3: next process template | Python Code:
! wget http://docs.opencv.org/master/res_mario.jpg
import cv2
import numpy as np
from matplotlib import pyplot as plt
from PIL import Image as PIL_Image
from IPython.display import Image as IpyImage
IpyImage(filename='res_mario.jpg')
Explanation: OpenCV template recognition from http://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html#gsc.tab=0
End of explanation
img_full = PIL_Image.open('res_mario.jpg')
img_half = img_full.crop((0,0,img_full.size[0]/2,img_full.size[1]))
img_half.save('mario_test1.jpg')
IpyImage(filename='mario_test1.jpg')
Explanation: Crop the image to make an initial test case
End of explanation
source = PIL_Image.open('mario_test1.jpg')
coin = source.crop((100,113,110,129))
coin.save('coin.jpg')
IpyImage(filename = 'coin.jpg')
Explanation: next grab a gold coin as template
End of explanation
img_rgb = cv2.imread('mario_test1.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('coin.jpg',0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where( res >= threshold)
for pt in zip(*loc[::-1]):
cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)
cv2.imwrite('res.jpg',img_rgb)
IpyImage(filename = 'res.jpg')
Explanation: next process template
End of explanation |
10,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelling On the Job Search
The implementation draws heavily from the material provided on the Quantitative Economics website.
Model Features
Step1: Parameterization
Step3: Bellman Operator
Step5: Value Function Iterations
Step6: Solving the Model
Step7: Plotting
Step8: Formatting | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.optimize import minimize
from scipy.integrate import fixed_quad as integrate
import time
from scipy import interp
Explanation: Modelling On the Job Search
The implementation draws heavily from the material provided on the Quantitative Economics website.
Model Features:
Job-specific human capital accumulation combined with on-the-job search
Infinite horizon dynamic programming with one state variable and two controls
Model Setup:
Let $x_{t}$ denote the time-t-job-specific human capital of a worker employed at a given firm
Let $w_{t}$ denote current wages
Let $w_{t}=x_{t}(1-s_{t}-\phi_{t})$ where
$\phi_{t}$ is investment in job-specific human capital for the current role
$s_{t}$ is search effort, devoted to obtaining new offers from other firms
If the worker remains in the current job, evolution of ${x_{t}}$ is given by $x_{t+1}=G(x_{t},\phi_{t})$
When search effort at t is $s_{t}$, the worker receives a new job offer with probability $\pi(s_{t})\in[0,1]$
Value of offer is $U_{t+1}$, where ${U_{t}}$ is iid with common distribution F
Worker has the right to reject the current offer and continue with the existing job
In particular, $x_{t+1}=U_{t+1}$ if the worker accepts, and $x_{t+1}=G(x_{t},\phi_{t})$ if the worker rejects.
The Bellman Equation:
$$V(x) = \underset{s+\phi \leq 1}{\max}\left\{x(1-s-\phi)+\beta(1-\pi(s))V(G(x,\phi))+\beta\pi(s)\int V(\max\{G(x,\phi),u\})F(du)\right\}$$
Parameterizations:
$$G(x,\phi) = A(x\phi)^{\alpha} \\
\pi(s) = \sqrt{s} \\
F = Beta(2,2)$$
where:
$$A = 1.4 \\
\alpha = 0.6 \\
\beta = 0.96 $$
Roadmap:
Construct the Bellman operator
Do value function iterations
Load Resources
End of explanation
# production function
A = 1.4
alpha = 0.6
G = lambda x, phi: A*(x*phi)**alpha
# discount factor
beta = 0.96
# tolerance
epsilon = 1e-4
# minimization method
method = "COBYLA"
# probability of having a new job offer (a function of search effort)
pi = np.sqrt
# distribution of the new job offer
F = stats.beta(2,2)
# x_grid
grid_size = 25
grid_max = max(A**(1/(1-alpha)), F.ppf(1-epsilon))
x_grid = np.linspace(epsilon, grid_max, grid_size)
Explanation: Parameterization
End of explanation
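# Illustrative check (added, not in the original notebook): grid_max is chosen so the
# state space maps into itself.  With phi = 1, x* = A**(1/(1-alpha)) solves x = G(x, 1),
# so human capital can never grow above max(x*, upper bound of the offer distribution).
x_star = A**(1 / (1 - alpha))
print("fixed point of G(., 1):", x_star)
print("G(x_star, 1) equals x_star:", np.isclose(G(x_star, 1.0), x_star))
print("grid_max:", grid_max, "  offer bound F.ppf(1-epsilon):", F.ppf(1 - epsilon))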
def bellman_operator(V, brute_force=False, return_policies=False):
"""
Parameters
----------
V: array_like(float)
Array representing an approximate value function
brute_force: bool, optional(default=False)
Default is False. If the brute_force flag is True, then grid
search is performed at each maximization step.
return_policies: bool, optional(default=False)
Indicates whether to return just the updated value function TV or
both the greedy policy computed from V and TV
Returns
-------
new_V: array_like(float)
The updated value function Tv, as an array representing
the values TV(x) over x in x_grid.
s_policy: array_like(float)
The greedy policy computed from V. Only returned if return_policies == True
"""
# set up
Vf = lambda x: interp(x, x_grid, V)
N = len(x_grid)
new_V, s_policy, phi_policy = np.empty(N), np.empty(N), np.empty(N)
a, b = F.ppf(0.005), F.ppf(0.995)
c1 = lambda z: 1 - sum(z) # used to enforce s+phi <= 1
c2 = lambda z: z[0] - epsilon # used to enforce s >= epsilon
c3 = lambda z: z[1] - epsilon # used to enforce phi >= epsilon
constraints = [{"type":"ineq","fun":i} for i in [c1, c2, c3]]
guess = (0.2, 0.2)
# solve r.h.s. of Bellman equation
for i, x in enumerate(x_grid):
# set up objective function
def w(z):
s, phi = z
h = lambda u: Vf(np.maximum(G(x,phi),u))*F.pdf(u)
integral, err = integrate(h,a,b)
q = pi(s)*integral + (1-pi(s))*Vf(G(x,phi))
# minus because we minimize
return -x*(1-s-phi) - beta*q
# either use SciPy solver
if not brute_force:
max_s, max_phi = minimize(w, guess, constraints=constraints, method=method)["x"]
max_val = -w((max_s,max_phi))
# or search on a grid
else:
search_grid = np.linspace(epsilon, 1.0, 15)
max_val = -1.0
for s in search_grid:
for phi in search_grid:
current_val = -w((s,phi)) if s + phi <= 1.0 else -1.0
if current_val > max_val:
max_val, max_s, max_phi = current_val, s, phi
# store results
new_V[i] = max_val
s_policy[i], phi_policy[i] = max_s, max_phi
if return_policies:
return s_policy, phi_policy
else:
return new_V
Explanation: Bellman Operator
End of explanation
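# Quick sanity check (added for illustration only): apply the operator once to a simple
# guess and confirm the result lives on the same grid before running the full iteration.
v_guess = x_grid * 0.5
Tv = bellman_operator(v_guess)
print(Tv.shape == x_grid.shape, Tv[:3])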
def compute_fixed_point(T, v, error_tol=1e-4, max_iter=50, verbose=1, print_skip=5, *args, **kwargs):
"""
Computes and returns T^k v, an approximate fixed point.
Here T is an operator, v is an initial condition and k is the number of iterates.
Provided that T is a contraction mapping or similar, T^k v will be an approximation to the fixed point.
Parameters
----------
T: callable
function that acts on v
v: object
An object such that T(v) is defined
error_tol: scalar(float), optional(default=1e-4)
Error tolerance
max_iter: scalar(int), optional(default=50)
Maximum number of iterations
verbose: bool, optional(default=True)
If True, then print current error at each iterate.
args, kwargs:
Other arguments and keyword arguments that are passed directly to the
function T each time it is called.
Returns
-------
v: object
The approximate fixed point
"""
iterate = 0
error = error_tol + 1
if verbose:
start_time = time.time()
msg = "{i:<11}{d:<10}{t:<10}".format(i="Iteration",
d="Distance",
t="Elapsed (seconds)") # < means left aligned
print(msg)
print("-"*len(msg))
while iterate < max_iter and error > error_tol:
new_v = T(v, *args, **kwargs)
iterate += 1
error = np.max(np.abs(new_v - v))
if verbose and (iterate % print_skip == 0):
etime = time.time() - start_time
msg = "{i:<11}{d:<10.3e}{t:<10.3e}".format(i=iterate,d=error,t=etime)
print(msg)
v = new_v
return v
Explanation: Value Function Iterations
End of explanation
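# Small illustration (added): run compute_fixed_point on a toy contraction
# T(v) = 0.5*v + 1, whose exact fixed point is 2, to see the stopping rule in isolation.
toy_T = lambda v: 0.5 * v + 1.0
v_toy = compute_fixed_point(toy_T, np.zeros(3), verbose=0)
print(v_toy)  # each entry should be close to 2.0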
# starting value
v_init = x_grid * 0.5
# determine fixed point using minimize
V = compute_fixed_point(bellman_operator, v_init)
print(V[0:5])
# starting value
v_init = x_grid * 0.5
# determine fixed point using grid search
V = compute_fixed_point(bellman_operator, v_init, brute_force=True)
print(V[0:5])
Explanation: Solving the Model
End of explanation
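# Added consistency check (not in the original): one sweep of the operator under both
# maximization strategies; a small sup-norm gap suggests the 15-point search grid is
# fine enough relative to the COBYLA-based solver.
Tv_solver = bellman_operator(V)
Tv_search = bellman_operator(V, brute_force=True)
print("max abs difference after one sweep:", np.max(np.abs(Tv_solver - Tv_search)))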
# determine optimal policy
s_policy, phi_policy = bellman_operator(V, return_policies=True)
# === plot policies === #
fig, ax = plt.subplots()
ax.set_xlim(0, max(x_grid))
ax.set_ylim(-0.1, 1.1)
ax.plot(x_grid, phi_policy, 'b-', label='phi')
ax.plot(x_grid, s_policy, 'g-', label='s')
ax.set_xlabel("x")
ax.legend()
plt.show()
Explanation: Plotting
End of explanation
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1K5apRH').read())
Explanation: Formatting
End of explanation |
10,371 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm trying to reduce noise in a python image array by removing all completely isolated single cells, i.e. setting nonzero value cells to 0 if they are completely surrounded by other "0"s like this: | Problem:
import numpy as np
import scipy.ndimage
square = np.zeros((32, 32))
square[10:-10, 10:-10] = np.random.randint(1, 255, size = (12, 12))
np.random.seed(12)
x, y = (32*np.random.random((2, 20))).astype(int)
square[x, y] = np.random.randint(1, 255, size = (20,))
def filter_isolated_cells(array, struct):
filtered_array = np.copy(array)
id_regions, num_ids = scipy.ndimage.label(filtered_array, structure=struct)
id_sizes = np.array(scipy.ndimage.sum(array, id_regions, range(num_ids + 1)))
area_mask = (id_sizes == 1)
filtered_array[area_mask[id_regions]] = 0
return filtered_array
arr = np.sign(square)
filtered_array = filter_isolated_cells(arr, struct=np.ones((3,3)))
square = np.where(filtered_array==1, square, 0) |
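# Optional verification (added): count the isolated pixels that were removed and check
# that a second pass of the filter changes nothing, i.e. no isolated cell survives.
removed = np.count_nonzero(arr) - np.count_nonzero(filtered_array)
print("isolated pixels removed:", removed)
second_pass = filter_isolated_cells(filtered_array, struct=np.ones((3, 3)))
print("second pass is a no-op:", np.array_equal(second_pass, filtered_array))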
10,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
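# Illustrative only (added; the names, e-mail addresses and status below are
# placeholders, not real document metadata) -- the setup calls above would be
# completed along these lines:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")
# DOC.set_publication_status(1)   # 1 = publish once the document is complete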
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
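# Illustrative placeholder (added, not real model documentation): a free-text STRING
# property such as 1.1 Model Overview is filled with a single DOC.set_value call, e.g.
# DOC.set_value("One-paragraph overview of the atmosphere component goes here.")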
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
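# Illustrative only (added): for a list-valued (Cardinality 1.N) ENUM such as 1.4 one
# records each applicable choice from the list above; repeating DOC.set_value per choice
# is an assumption here -- follow the "Set as follows" comment in the cell itself.
# DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")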
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
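# Illustrative placeholder (added): INTEGER properties take a bare Python number rather
# than a string; 47 below is an arbitrary example, not the real number of levels.
# DOC.set_value(47)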
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
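For BOOLEAN properties such as the counter-gradient flag above, the value is passed unquoted. A hypothetical completion (the choice of True is illustrative only) might look like:
# Hypothetical illustration only
DOC.set_value(True)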
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
10,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Statistics
Summarizing data.
Plotting data.
Confidence intervals.
Statistical tests.
About this Notebook
In this notebook, we download a dataset with data about customers. Then, we calculate statistical measures and plot distributions. Finally, we perform statistical tests.
Step1: Importing Needed packages
Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.
Step2: Print the current version of Python
Step3: Downloading Data
Run system commands using ! (platform dependent)
Step4: To download the data, we will use !wget (on DataScientistWorkbench)
Step5: Understanding the Data
customer_dbase_sel.csv
Step6: Data Exploration
Step7: Labeling Data
income > 30000 --> High-income --> 1
income < 30000 --> Low-income --> 0
Step8: Data Exploration
Select 4 data columns for visualizing
Step9: Compute descriptive statistics for the data
Step10: Drop NaN (Not-a-Number) observations
Step11: Print observations with NaN commutetime
Step12: Visualize data
Step13: Confidence Intervals
For computing confidence intervals and performing simple statistical tests, we will use the stats sub-module of scipy
Step14: Confidence intervals tell us how close we think the mean is to the true value, with a certain level of confidence.
We compute mean mu, standard deviation sigma and the number of observations N in our sample of the debt-to-income ratio
Step15: The 95% confidence interval for the mean of N draws from a Normal distribution with mean mu and standard deviation sigma is
Step16: Statistical Tests
Select columns by name
Step17: Compute means for cardspent and debtinc for the male and female populations
Step18: Compute mean for cardspent for female population only
Step19: We have seen above that the mean cardspent and debtinc in the male and female populations were different. To test if this is significant, we do a 2-sample t-test with scipy.stats.ttest_ind()
Step20: In the case of amount spent on primary credit card, we conclude that men tend to charge more on their primary card (p-value = 2e-6 < 0.05, statistically significant).
Step21: In the case of debt-to-income ratio, we conclude that there is no significant difference between men and women (p-value = 0.758 > 0.05, not statistically significant).
Plot Data
Plot statistical measures for amounts spent on primary credit card
Use boxplot to compare medians, 25% and 75% percentiles, 12.5% and 87.5% percentiles
Step22: Plot observations with boxplot
Step23: Plot age vs. income data to find some interesting relationships. | Python Code:
# Run this cell :)
1+2
Explanation: Introduction to Statistics
Summarizing data.
Plotting data.
Confidence intervals.
Statistical tests.
About this Notebook
In this notebook, we download a dataset with data about customers. Then, we calculate statistical measures and plot distributions. Finally, we perform statistical tests.
End of explanation
# Uncomment next command if you need to install a missing module
#!pip install statsmodels
import matplotlib.pyplot as plt
import pandas as pd
try:
import statsmodels.api as sm
except:
!pip install statsmodels
import numpy as np
%matplotlib inline
Explanation: Importing Needed packages
Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.
End of explanation
import sys
print(sys.version)
Explanation: Print the current version of Python:
End of explanation
import sys
if sys.platform.startswith('linux'):
!ls
elif sys.platform.startswith('freebsd'):
!ls
elif sys.platform.startswith('darwin'):
!ls
elif sys.platform.startswith('win'):
!dir
Explanation: Downloading Data
Run system commands using ! (platform dependent)
End of explanation
if sys.platform.startswith('linux'):
!wget -O /resources/customer_dbase_sel.csv http://analytics.romanko.ca/data/customer_dbase_sel.csv
Explanation: To download the data, we will use !wget (on DataScientistWorkbench)
End of explanation
url = "http://analytics.romanko.ca/data/customer_dbase_sel.csv"
df = pd.read_csv(url)
## On DataScientistWorkbench you can read from /resources directory
#df = pd.read_csv("/resources/customer_dbase_sel.csv")
# display first 5 rows of the dataset
df.head()
Explanation: Understanding the Data
customer_dbase_sel.csv:
We have downloaded an extract from IBM SPSS sample dataset with customer data, customer_dbase_sel.csv, which contains customer-specific data such as age, income, credit card spendings, commute type and time, etc. Dataset source
custid e.g. 0648-AIPJSP-UVM (customer id)
gender e.g. Female or Male
age e.g. 26
debtinc e.g. 11.1 (debt to income ratio in %)
card e.g. Visa, Mastercard (type of primary credit card)
carditems e.g. 1, 2, 3 ... (# of primary credit card purchases in the last month)
cardspent e.g 228.27 (amount in \$ spent on the primary credit card last month)
commute e.g. Walk, Car, Bus (commute type)
commutetime e.g. 22 (time in minutes to commute to work)
income e.g. 16.00 (income in thousands \$ per year)
edcat e.g. College degree, Post-undergraduate degree (education level)
Reading the data in
End of explanation
# Summarize the data
df.describe()
# Number of rows and columns in the data
df.shape
# Display column names
df.columns
Explanation: Data Exploration
End of explanation
# To label data into high-income and low-income
df['income_category'] = df['annual_income'].map(lambda x: 1 if x>30000 else 0)
df[['annual_income','income_category']].head()
Explanation: Labeling Data
income > 30000 --> High-income --> 1
income < 30000 --> Low-income --> 0
End of explanation
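A quick follow-up check of how many customers fall into each group (a small sketch re-using the income_category column created above):
# Count customers per income category (0 = low income, 1 = high income)
df['income_category'].value_counts()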
viz = df[['cardspent','debtinc','carditems','commutetime']]
viz.head()
Explanation: Data Exploration
Select 4 data columns for visualizing:
End of explanation
viz.describe()
Explanation: Compute descriptive statistics for the data:
End of explanation
df[['commutetime']].dropna().count()
Explanation: Drop NaN (Not-a-Number) observations:
End of explanation
print( df[np.isnan(df["commutetime"])] )
Explanation: Print observations with NaN commutetime:
End of explanation
viz.hist()
plt.show()
df[['cardspent']].hist()
plt.show()
df[['commutetime']].hist()
plt.show()
Explanation: Visualize data:
End of explanation
from scipy import stats
Explanation: Confidence Intervals
For computing confidence intervals and performing simple statistical tests, we will use the stats sub-module of scipy:
End of explanation
# Select the column with single brackets so that mu and sigma are plain scalars
mu, sigma = np.mean(df['debtinc']), np.std(df['debtinc'])
print ("mean = %G, st. dev = %g" % (mu, sigma))
N = len(df[['debtinc']])
N
Explanation: Confidence intervals tell us how close we think the mean is to the true value, with a certain level of confidence.
We compute mean mu, standard deviation sigma and the number of observations N in our sample of the debt-to-income ratio:
End of explanation
conf_int = stats.norm.interval( 0.95, loc = mu, scale = sigma/np.sqrt(N) )
conf_int
print ("95%% confidence interval for the mean of debt to income ratio = [%g %g]" % (conf_int[0], conf_int[1]))
Explanation: The 95% confidence interval for the mean of N draws from a Normal distribution with mean mu and standard deviation sigma is
End of explanation
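As a quick cross-check of the same formula (a sketch that assumes the normal approximation and re-uses mu, sigma and N defined above):
z = stats.norm.ppf(0.975)              # two-sided 95% critical value
half_width = z * sigma / np.sqrt(N)
print ("manual 95%% confidence interval = [%g %g]" % (mu - half_width, mu + half_width))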
adf=df[['gender','cardspent','debtinc']]
print(adf['gender'])
Explanation: Statistical Tests
Select columns by name:
End of explanation
gender_data = adf.groupby('gender')
print (gender_data.mean())
Explanation: Compute means for cardspent and debtinc for the male and female populations:
End of explanation
adf[adf['gender'] == 'Female']['cardspent'].mean()
Explanation: Compute mean for cardspent for female population only:
End of explanation
female_card = adf[adf['gender'] == 'Female']['cardspent']
male_card = adf[adf['gender'] == 'Male']['cardspent']
tc, pc = stats.ttest_ind(female_card, male_card)
print ("t-test: t = %g p = %g" % (tc, pc))
Explanation: We have seen above that the mean cardspent and debtinc in the male and female populations were different. To test if this is significant, we do a 2-sample t-test with scipy.stats.ttest_ind():
End of explanation
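scipy's ttest_ind assumes equal variances in the two groups by default. As a robustness check (a sketch, not part of the original analysis), Welch's unequal-variance version can be run by passing equal_var=False:
tw, pw = stats.ttest_ind(female_card, male_card, equal_var=False)
print ("Welch t-test: t = %g p = %g" % (tw, pw))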
female_debt = adf[adf['gender'] == 'Female']['debtinc']
male_debt = adf[adf['gender'] == 'Male']['debtinc']
# Use a name other than "pd" for the p-value so the pandas alias is not shadowed
td, pdebt = stats.ttest_ind(female_debt, male_debt)
print ("t-test: t = %g p = %g" % (td, pdebt))
Explanation: In the case of amount spent on primary credit card, we conclude that men tend to charge more on their primary card (p-value = 2e-6 < 0.05, statistically significant).
End of explanation
adf.boxplot(column='cardspent', by='gender', grid=False, showfliers=False)
plt.show()
Explanation: In the case of debt-to-income ratio, we conclude that there is no significant difference between men and women (p-value = 0.758 > 0.05, not statistically significant).
Plot Data
Plot statistical measures for amounts spent on primary credit card
Use boxplot to compare medians, 25% and 75% percentiles, 12.5% and 87.5% percentiles:
End of explanation
gend = list(['Female', 'Male'])
for i in [1,2]:
y = adf.cardspent[adf.gender==gend[i-1]].dropna()
# Add some random "jitter" to the x-axis
x = np.random.normal(i, 0.04, size=len(y))
plt.plot(x, y, 'r.', alpha=0.2)
plt.boxplot([female_card,male_card],labels=gend)
plt.ylabel("cardspent")
plt.ylim((-50,850))
plt.show()
Explanation: Plot observations with boxplot:
End of explanation
plt.scatter(df.age, df.annual_income)
plt.xlabel("Age")
plt.ylabel("Income")
plt.show()
Explanation: Plot age vs. income data to find some interesting relationships.
End of explanation |
10,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Joint, Marginal, and Conditional Probability Density Functions
The joint cumulative probability distribution is the multivariate version of the cumulative distribution, which arises now that there are two random variables.
Consider the case where two or more random variables exist and are related to each other. For example, when measuring the height and weight of the students in a school, each student $\omega$ yields a pair of values ($x$, $y$). Viewing these measurements as random variables $X$ and $Y$, the distribution that describes their probabilities together is called the joint probability distribution.
As with a single continuous random variable, a joint probability distribution is described by a cumulative probability distribution function and a probability density function.
Joint cumulative probability distribution function of continuous random variables
For two random variables $X$ and $Y$, the cumulative probability distribution function $F_{XY}(x, y)$ is defined as follows.
$$ F_{XY}(x, y) = P({ X < x } \cap { Y < y }) = P(X < x, Y < y) $$
If one of the two arguments $x$, $y$ marking the end of the interval is infinite, that variable may take any value, so the expression reduces to the cumulative distribution function of the remaining variable alone. This is called the marginal probability distribution.
$$ F_X(x)=F_{XY}(x, \infty) $$
$$ F_Y(y)=F_{XY}(\infty, y) $$
The cumulative probability distribution function $F_{XY}(x, y)$ has the following properties.
$$ F_{XY}(\infty, \infty)=1 $$
$$ F_{XY}(-\infty, y)=F_{XY}(x,-\infty)=0 $$
Joint probability density function of continuous random variables
As in the single-variable case, the joint probability density function can be defined by differentiating the joint cumulative distribution function. Since there are now two independent variables, we take the partial derivative (partial differentiation) with respect to each of them.
$$ f_{XY} = \dfrac{\partial^2 F_{XY}(x, y)}{\partial x \partial y} $$
Integrating the joint probability density function over a particular region gives the probability of that region.
$$ \int_{x_1}^{x_2} \int_{y_1}^{y_2} f_{XY}(x,y)dxdy = P\big({ x_1 \leq X \leq x_2, \; y_1 \leq Y \leq y_2 }\big) $$
Therefore, integrating the joint probability density function over all variables from $-\infty$ to $\infty$ gives 1.
$$ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x,y)dxdy=1 $$
The joint probability density function of continuous random variables is a two-dimensional function. Below is an example of the joint probability density of a multivariate normal distribution.
Step1: The same joint probability density function drawn in three dimensions looks like the following.
Step2: Joint probability mass function of discrete random variables
Now the arguments are numbers rather than events; $x$ and $y$ enter as numeric values.
Here the function value itself can be stated as a probability; the notion of accumulation disappears, and the value itself is the probability.
For multivariate discrete random variables we can define a joint probability mass function, which gives the probability that all of the random variables take particular values.
$$ f_{XY}(x,y) = P(X=x,Y=y) $$
The joint probability mass function takes the form of a two-dimensional table, as shown below.
Step3: Marginal probability density function
The marginal probability density function is, for a multivariate random variable, the expectation with respect to one particular variable; it is obtained by integrating the joint probability density function over the other variable only.
Because the expectation (integration) removes one dimension, the marginal probability density function of a two-dimensional random variable is a one-dimensional function.
$$
\begin{align}%\label{}
\nonumber f_X(x) = \text{E}_{Y}[f_{XY}(x,y)] = \int_{-\infty}^{\infty} f_{XY}(x,y)dy \\
\nonumber f_Y(y) = \text{E}_{X}[f_{XY}(x,y)] = \int_{-\infty}^{\infty} f_{XY}(x,y)dx
\end{align}
$$
For discrete random variables, the marginal probability mass function is defined analogously:
$$
\begin{align}%\label{}
\nonumber f_X(x) = \text{E}_{Y}[f_{XY}(x,y)] = \sum_{y_j} f_{XY}(x,y_j) \\
\nonumber f_Y(y) = \text{E}_{X}[f_{XY}(x,y)] = \sum_{x_i} f_{XY}(x_i,y)
\end{align}
$$
For the discrete random variable in the example above, the marginal probability mass functions are computed as follows.
Step4: For the continuous random variable in the example above, the marginal probability density functions are computed as follows.
Step5: Why is the first peak of the multivariate Gaussian (normal) density higher? Because the derivative of the curve rises steeply at the start.
The earlier contour (snail-shaped) plot showed only the density, i.e. the derivative; only its integral gives a probability, a value between 0 and 1.
Joint and marginal are relative notions, that is, they are defined in relation to each other.
Independence depends only on the shape of the pdf.
Conditional probability density function
If we are interested only in the value of $x$ and not of $y$, we can remove $y$ entirely.
The conditional density likewise reduces the dimension, from two dimensions to one.
The conditional density pins one variable at a specific value and looks at a single cross-section, whereas the marginal is obtained by collapsing (integrating) over that variable.
Here, however, the marginals all come out the same, because the total (integral) of each curve is made equal everywhere.
This is the conditional density before normalization.
Formally, the conditional probability density function is the case where one of the variables of a multivariate random variable is fixed at a specific value and therefore becomes a constant.
$$ f_{X \mid Y}(x \mid y_0) = \dfrac{f_{XY}(x, y=y_0)}{f_{Y}(y_0)} $$
$$ f_{Y \mid X}(y \mid x_0) = \dfrac{f_{XY}(x=x_0, y)}{f_{X}(x_0)} $$
Like the marginal probability density function it has reduced dimension, but it is a different quantity from the marginal density, in which the variable was removed by the expectation (integration) operation.
The conditional probability density functions of the continuous random variable in the example above can be drawn as follows. (Strictly speaking, the curves in this figure have not been normalized.)
Step6: For the discrete random variable in the example above, the conditional probability mass function is computed as follows. | Python Code:
# Imports assumed by this and the following cells (they are not shown in this excerpt)
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
import seaborn as sns

mu = [2, 3]
cov = [[2, -1],[2, 4]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(-1, 5, 150)
yy = np.linspace(0, 6, 120)
XX, YY = np.meshgrid(xx, yy)
ZZ = rv.pdf(np.dstack([XX, YY]))
plt.contour(XX, YY, ZZ)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Joint Probability Density")
plt.axis("equal")
plt.show()
Explanation: Joint, Marginal, and Conditional Probability Density Functions
The joint cumulative probability distribution is the multivariate version of the cumulative distribution, which arises now that there are two random variables.
Consider the case where two or more random variables exist and are related to each other. For example, when measuring the height and weight of the students in a school, each student $\omega$ yields a pair of values ($x$, $y$). Viewing these measurements as random variables $X$ and $Y$, the distribution that describes their probabilities together is called the joint probability distribution.
As with a single continuous random variable, a joint probability distribution is described by a cumulative probability distribution function and a probability density function.
Joint cumulative probability distribution function of continuous random variables
For two random variables $X$ and $Y$, the cumulative probability distribution function $F_{XY}(x, y)$ is defined as follows.
$$ F_{XY}(x, y) = P({ X < x } \cap { Y < y }) = P(X < x, Y < y) $$
If one of the two arguments $x$, $y$ marking the end of the interval is infinite, that variable may take any value, so the expression reduces to the cumulative distribution function of the remaining variable alone. This is called the marginal probability distribution.
$$ F_X(x)=F_{XY}(x, \infty) $$
$$ F_Y(y)=F_{XY}(\infty, y) $$
The cumulative probability distribution function $F_{XY}(x, y)$ has the following properties.
$$ F_{XY}(\infty, \infty)=1 $$
$$ F_{XY}(-\infty, y)=F_{XY}(x,-\infty)=0 $$
Joint probability density function of continuous random variables
As in the single-variable case, the joint probability density function can be defined by differentiating the joint cumulative distribution function. Since there are now two independent variables, we take the partial derivative (partial differentiation) with respect to each of them.
$$ f_{XY} = \dfrac{\partial^2 F_{XY}(x, y)}{\partial x \partial y} $$
Integrating the joint probability density function over a particular region gives the probability of that region.
$$ \int_{x_1}^{x_2} \int_{y_1}^{y_2} f_{XY}(x,y)dxdy = P\big({ x_1 \leq X \leq x_2, \; y_1 \leq Y \leq y_2 }\big) $$
Therefore, integrating the joint probability density function over all variables from $-\infty$ to $\infty$ gives 1.
$$ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x,y)dxdy=1 $$
The joint probability density function of continuous random variables is a two-dimensional function. Below is an example of the joint probability density of a multivariate normal distribution.
End of explanation
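As a rough numerical sanity check (a sketch re-using xx, yy and ZZ from the cell above), the joint density can be summed over the plotted grid; because the grid covers only part of the support, the Riemann sum comes out below 1.
dx, dy = xx[1] - xx[0], yy[1] - yy[0]
print(ZZ.sum() * dx * dy)   # approximate integral of the joint pdf over the plotted window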
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.contour(XX, YY, ZZ, levels=np.linspace(0, 0.1, 20))
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Joint Probability Density")
plt.show()
Explanation: The same joint probability density function drawn in three dimensions looks like the following.
End of explanation
pmf = np.array([[0, 0, 0, 0, 1, 1],
[0, 0, 1, 2, 1, 0],
[0, 1, 3, 3, 1, 0],
[0, 1, 2, 1, 0, 0],
[1, 1, 0, 0, 0, 0]])
pmf = pmf / pmf.sum()
pmf
sns.heatmap(pmf)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Joint Probability Mass Function")
plt.show()
Explanation: Joint probability mass function of discrete random variables
Now the arguments are numbers rather than events; $x$ and $y$ enter as numeric values.
Here the function value itself can be stated as a probability; the notion of accumulation disappears, and the value itself is the probability.
For multivariate discrete random variables we can define a joint probability mass function, which gives the probability that all of the random variables take particular values.
$$ f_{XY}(x,y) = P(X=x,Y=y) $$
The joint probability mass function takes the form of a two-dimensional table, as shown below.
End of explanation
pmf
pmf_marginal_x = pmf.sum(axis=0)
pmf_marginal_x
pmf_marginal_y = pmf.sum(axis=1)
pmf_marginal_y[:, np.newaxis]
Explanation: Marginal probability density function
The marginal probability density function is, for a multivariate random variable, the expectation with respect to one particular variable; it is obtained by integrating the joint probability density function over the other variable only.
Because the expectation (integration) removes one dimension, the marginal probability density function of a two-dimensional random variable is a one-dimensional function.
$$
\begin{align}%\label{}
\nonumber f_X(x) = \text{E}_{Y}[f_{XY}(x,y)] = \int_{-\infty}^{\infty} f_{XY}(x,y)dy \\
\nonumber f_Y(y) = \text{E}_{X}[f_{XY}(x,y)] = \int_{-\infty}^{\infty} f_{XY}(x,y)dx
\end{align}
$$
For discrete random variables, the marginal probability mass function is defined analogously:
$$
\begin{align}%\label{}
\nonumber f_X(x) = \text{E}_{Y}[f_{XY}(x,y)] = \sum_{y_j} f_{XY}(x,y_j) \\
\nonumber f_Y(y) = \text{E}_{X}[f_{XY}(x,y)] = \sum_{x_i} f_{XY}(x_i,y)
\end{align}
$$
For the discrete random variable in the example above, the marginal probability mass functions are computed as follows.
End of explanation
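A quick check (a small sketch using the arrays computed above) that each marginal distribution sums to one:
print(pmf_marginal_x.sum(), pmf_marginal_y.sum())   # both totals should equal 1.0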
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.contour(XX, YY, 0.4*ZZ, levels=np.linspace(0, 0.04, 30), alpha=0.3)
ax.plot(yy, ZZ.mean(axis=1), zdir='x', lw=3)
ax.plot(xx, ZZ.mean(axis=0), zdir='y', lw=3)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Marginal Probability Density")
ax.view_init(55, -40)
plt.show()
Explanation: For the continuous random variables used as an example above, the marginal probability density functions are computed as follows.
End of explanation
fig, [ax1, ax2] = plt.subplots(2, 1, figsize=(8, 12), subplot_kw={'projection': '3d'})
ax1.plot_wireframe(XX, YY, ZZ, rstride=30, cstride=0, lw=3)
ax1.set_xlabel("x")
ax1.set_ylabel("y")
ax1.set_title("Conditional Probability Density $f(x \mid y)$")
ax2.plot_wireframe(XX, YY, ZZ, rstride=0, cstride=30, lw=3)
ax2.set_xlabel("x")
ax2.set_ylabel("y")
ax2.set_title("Conditional Probability Density $f(y \mid x)$")
plt.tight_layout()
plt.show()
Explanation: Why is the first peak of the multivariate Gaussian plot so high? Because the slope of the density rises steeply right at the start.
In the earlier contour plot only the density itself was shown; only its integral can be read as a probability, a value between 0 and 1.
Joint and marginal are relative concepts, i.e. they are defined in relation to each other.
Independence depends only on the shape of the pdf.
Conditional probability density function
If we are interested only in the value of x and not of y, we can remove y altogether.
The conditional distribution also drops a dimension, from two dimensions down to one.
A conditional distribution pins one variable at a specific value and looks at a single slice, whereas a marginal distribution flattens (integrates) over that variable.
Note that in this example the marginals come out the same everywhere, because the slices sum to the same total wherever they are taken.
The plots below show the conditionals before normalization.
The conditional probability density function is the case where the value of one of the variables of a multivariate random variable is fixed at a specific value and thus becomes a constant.
$$ f_{X \mid Y}(x \mid y_0) = \dfrac{f_{XY}(x, y=y_0)}{f_{Y}(y_0)} $$
$$ f_{Y \mid X}(y \mid x_0) = \dfrac{f_{XY}(x=x_0, y)}{f_{X}(x_0)} $$
Like the marginal probability density function this reduces the dimension, but it is a different quantity from the marginal density, in which the variable was removed by integration.
The conditional probability density functions for the continuous random variables used as an example above are plotted below. (Note that these plots are in fact not normalized.)
End of explanation
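# Added sketch of the normalization mentioned above; xx, yy and ZZ are assumed to be the grid
# axes and joint-density values from the earlier cell, and the grid orientation is assumed.
iy = len(yy) // 2                                   # fix y at roughly the middle grid point
slice_y = ZZ[iy, :]                                 # un-normalized f(x, y=y0)
cond_x_given_y = slice_y / np.trapz(slice_y, xx)    # divide by its integral so it integrates to 1
print(np.trapz(cond_x_given_y, xx))                 # should be ~1.0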
pmf
cond_y0 = pmf[0, :] / pmf_marginal_y[0]
cond_y0
cond_y1 = pmf[1, :] / pmf_marginal_y[1]
cond_y1
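# Added sketch: the same conditioning can be done for every value of y at once;
# each row of the result is a conditional pmf and should sum to 1.
cond_all = pmf / pmf_marginal_y[:, np.newaxis]
cond_all.sum(axis=1)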
Explanation: For the discrete random variables used as an example above, the conditional probability mass functions are computed as follows.
End of explanation |
10,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Problem Statement
Whether one trades in Stocks, Indices, Currencies, or Commodities, one would like answers to questions like
Step7: 2.
sentiment scoring
Step9: 3.
merging data sets
APPLE stock data was obtained using Quandl API at "https
Step13: Method Selection and Model Evaluation
This problem lends itself to learning algorithms such as SVMs, logistic regression, and random forest classification trees. However, the difficulty with stock price prediction is that the observations are highly dependent on each other. Therefore, even a little knowledge about the stock's future behaviour might give an edge in investment.
Due to the high correlation between stock prices on consecutive days, generic learning algorithms usually perform very poorly on this type of problem, and therefore we sought to use deep learning methods too. In particular, we used a basic 2-layer feedforward neural network and recurrent neural networks with TensorFlow.
The reason behind using recurrent neural networks (RNNs) is that they have loops. These loops allow the network to use information from previous passes, which acts as memory and can store the dependency relationship between past and present time.
<img src = "https
Step14: There is extreme fluctuation between the opening and closing prices of Apple, Inc. (as expected).
Let's choose the features and label (bin_diff) and make the dataframe ready for machine learning and deep learning.
Step15: Let's drop the observation with "0" and make it binary classification.
Step16: Also, to make the models work properly, from now on, we re-code loss category from -1 to 0.
Step17: let's look at the features and standardize them.
Step18: Results
As expected, SVMs, logistic regression, and random forest classifiers perform no better than a coin flip based on the ROC area calculated for the hold-out test set. The feed forward neural network has a slightly better performance with a 0.53 ROC area, while the RNN has the highest area of 0.56.
| Method | ROC area |
| ------ | ----------- |
| Logistic Regression | 0.4373|
| SVM | 0.4430|
| Random Forest Classifier | 0.4735 |
| Feed forward neural network | 0.5323|
| Recurrent neural network (RNN) | 0.5557 |
The PR curves also look reasonable only for neural nets.
Logistic regression
Step19: Support Vector Machines
Step20: Random Forest Tree Classifiers
Step21: Fit random forest with 500 trees
Step22: Feed Forward Neural Network
Step23: Recurrent Neural Nets
Step24: Training the RNN | Python Code:
#data munging and feature extraction packages
import requests
import requests_ftp
import requests_cache
import lxml
import itertools
import pandas as pd
import re
import numpy as np
import seaborn as sns
import string
from bs4 import BeautifulSoup
from collections import Counter
from matplotlib import pyplot as plt
from wordcloud import WordCloud
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [10, 8]
#machine learning from scikit-learn
from sklearn.metrics import classification_report,confusion_matrix, precision_recall_curve, roc_curve, auc, accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
#Deep learning from Tensor Flow
#feed forward neural network
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier
from tensorflow.contrib.layers import real_valued_column
#recurrent neural nets
from tensorflow.contrib.layers.python.layers.initializers import xavier_initializer
from tensorflow.contrib import rnn
def motley_page_links(page):
Given a page number, it returns all article links.
Input: a page number (default = 1)
Output: a list with links on the given page
response = requests.get(
'https://www.fool.com/search/solr.aspx?page={}&q=apple&sort=date&source=isesitbut0000001'.format(page))
response.raise_for_status()
html = response.text
parsed_html = BeautifulSoup(html, 'lxml')
div_with_links = parsed_html.find_all(name = 'dl',
attrs = {'class' : 'results'})
links = []
for link in div_with_links[0].find_all('a', href = True):
links.append(link['href'])
return links
def motley_all_links(no_pages = 1):
Given number of pages, it returns all the links
from "no_pages"
Input: number of pages (default = 1)
Output: a list with links from the pages
all_links = []
for page in range(1, (no_pages + 1)):
all_links.extend(motley_page_links(page))
return all_links
def motley_article_info(url):
Given an article url, it returns title, date, content
and url of that article.
Input: article url
Ouput: a dictionary with 'title', 'date',
'article', and 'url' as keys.
response = requests.get(url)
response.raise_for_status()
html = response.text
parsed_html = BeautifulSoup(html, 'lxml')
content = parsed_html.find_all(name = 'div',
attrs = {'class' : 'full_article'})
date = parsed_html.find_all(name = 'div', attrs = {'class' : 'publication-date'})[0].text.strip()
title = parsed_html.find_all('h1')[0].text
article = ' '.join([t.text for t in content[0].find_all('p')])
return {'title' : title,
'date' : date,
'article' : article,
'url' : url}
def motley_df(no_pages):
Creates DataFrame for the articles in url
with author, text, title, and url as column
names.
Input: A url, number of pages
Output: DataFrame with 4 columns: author,
text, title, and url.
#get all links in the specified number of pages
#from url
links = motley_all_links(no_pages)
#create dataframe for each link and
#combine them into one dataframe
article_df = pd.DataFrame(index = [999999], columns=['article', 'date', 'title', 'url'])
for i, link in enumerate(links):
try:
append_to = pd.DataFrame(motley_article_info(link), index = [i])
article_df = article_df.append(append_to)
except:
pass
article_df = article_df.drop(999999)
return article_df
#df = motley_df(1000)
#convert_to_csv(df, "mfool.csv")
Explanation: Problem Statement
Whether one trades in Stocks, Indices, Currencies, or Commodities, one would like answers to questions like:
What is the market trend? Will it continue?
Whether market will close higher or lower compared to its opening levels?
What could be expected high, low, close levels?
There could be many more such questions. The first challenge is to identify the trend and direction of the market. If there were a crystal ball that could provide a meaningful prediction of the market's trend and direction in advance, it would help traders take the correct positions and make profitable trades.
Predictive analytics based on historical price data, using a supervised machine learning approach, can predict in advance whether the next day's market will close higher or lower compared to its opening level.
We chose to investigate whether there is a connection between the sentiment in the news for a given day and the resulting market value changes for Apple, Inc on the same day. Particularly, we treated this as a supervised machine learning approach and discrete classification problem where stock price either went down or up.
Data Munging and Feature Extraction
As stated, we limited our scope to news related to the tech giant Apple. We scraped the Motley Fool site for three years' worth of relevant news articles based on searches by the company name.
Afterwards, we used the Sentiment Lexicon, a dictionary of positive/negative words released by the University of Pittsburgh, described in this paper and available for download here, to compute a sentiment score for each article. A negative word received a score of -1, whereas a positive word scored +1. If a word is not found in the dictionary, its score is zero.
For the historical Apple stock prices, we used the go-to Quandl website and its API. We calculated the difference between opening and closing prices to get the returns. If a return is less than -0.001, we classified it as -1. If a return is between -0.001 and 0.001, we classified it as 0. Finally, if a return is greater than 0.001, we classified it as 1. This "diff" column constituted the label.
In the next step, we merged two data sets based on the date. For the final version of the data set, we chose only the sentiment score as a feature and diff as a label to train all our models.
Note: we commented out the lines of code where the scraping is done. The scraped data is available in the GitHub repo under the name "mfool.csv".
End of explanation
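# Added illustration (a sketch, not part of the original notebook) of the return binning
# described above; it mirrors the pd.cut call used later in format_df.
example_returns = pd.Series([-0.5, -0.0005, 0.0, 0.0005, 0.8])
pd.cut(example_returns, [-np.inf, -0.001, 0.001, np.inf], labels=[-1, 0, 1])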
motley = pd.read_csv('mfool.csv')
negative = pd.read_csv('negative-words.txt', sep = ' ', header = None)
positive = pd.read_csv('positive-words.txt', sep=' ', header=None)
def score_word(word):
returns -1 if negative meaning, +1 if positive meaning,
else 0
input: a word
ouput: -1, 0, or + 1
if word.lower() in negative.values:
return -1
elif word.lower() in positive.values:
return +1
return 0
def get_scores(article):
returns sentiment scores for a given article
input: an article
output: sentiment score
wordsArticle = article.split(' ')
scores = [score_word(word) for word in wordsArticle]
return sum(scores)
motley['sentiment'] = motley['article'].apply(get_scores)
plt.hist(motley.sentiment, bins=50)
plt.xlabel('sentiment scores')
plt.ylabel('frequency')
plt.title('Distribution of sentiment scores of articles');
# motley.to_csv('motley_with_s_scores.csv', encoding='utf-8')
most_positive_article = motley['article'][motley['sentiment'] == np.max(motley['sentiment'])].values[0]
wc = WordCloud().generate(most_positive_article)
plt.imshow(wc)
plt.axis('off');
most_negative_article = motley['article'][motley['sentiment'] == np.min(motley['sentiment'])].values[0]
wc = WordCloud().generate(most_negative_article)
plt.imshow(wc)
plt.axis('off');
Explanation: 2.
sentiment scoring
End of explanation
path = "../datasets/"
aapl = pd.read_csv(path+'WIKI_PRICES_AAPL.csv')
fool = pd.read_csv(path+'motley_with_s_scores.csv')
def format_df(stock_df, news_df, word):
merges stock_df and news_df on "date"
column
input: stock df, news df, word
output: merged df
stock_df['diff'] = stock_df['close']-stock_df['open']
news_df['Count'] = news_df['article'].apply(lambda x: x.count(word))
news_df.loc[news_df['Count'] <= 5, 'sentiment'] = 0
news_df['date'] = pd.to_datetime(news_df['date'])
news_df['date'] = news_df['date'].dt.strftime('%Y-%m-%d')
news_df = news_df.groupby(['date'], as_index = False).sum()
news_df = news_df[['date', 'sentiment', 'Count']]
merged_df = pd.merge(news_df, stock_df)
merged_df['bin_sentiment'] = pd.cut(merged_df['sentiment'], [-np.inf, -0.001, 0.001, np.inf], labels = [-1, 0, 1])
merged_df['bin_diff'] = pd.cut(merged_df['diff'], [-np.inf, -0.001, 0.001, np.inf], labels = [-1, 0, 1])
return merged_df
merged_df = format_df(aapl, fool, 'Apple')
merged_df.head()
#merged_df.to_csv('merged_df.csv', encoding='utf-8')
Explanation: 3.
merging data sets
APPLE stock data was obtained using Quandl API at "https://www.quandl.com/api/v3/datasets/WIKI/AAPL.csv"
End of explanation
def plot_ROC(y_test, scores, label, color):
plots ROC curve
input: y_test, scores, and title
output: ROC curve
false_pr, true_pr, _ = roc_curve(y_test, scores[:, 1])
roc_auc = auc(false_pr, true_pr)
plt.plot(false_pr, true_pr, lw = 3,
label='{}: area={:10.4f})'.format(label, roc_auc), color = color)
plt.plot([0, 1], [0, 1], color='black', lw=1, linestyle='--')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc="best")
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('ROC')
def plot_PR(y_test, scores, label, color):
plots PR curve
input: y_test, scores, title
output: Precision-Recall curve
precision, recall, _ = precision_recall_curve(y_test, scores[:, 1])
plt.plot(recall, precision,lw = 2,
label='{}'.format(label), color = color)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc="best")
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('PR')
def plot_confusionmatrix(ytrue, ypred):
plots confusion matrix heatmap and prints out
classification report
input: ytrue (actual value), ypred(predicted value)
output: confusion matrix heatmap and classification report
print (classification_report(ytrue, ypred))
print ('##################################################################')
cnf_matrix = confusion_matrix(ytrue, ypred)
sns.heatmap(cnf_matrix, cmap='coolwarm_r', annot = True, linewidths=.5, fmt = '.4g')
plt.title('Confusion matrix')
plt.xlabel('Prediction')
plt.ylabel('Actual');
apple = pd.read_csv(path + 'merged_df.csv')
apple.head()
print (apple.shape)
apple.plot('date', 'diff');
Explanation: Method Selection and Model Evaluation
This problem lends itself to learning algorithms such as SVMs, logistic regression, and random forest classification trees. However, the difficulty with stock price prediction is that the observations are highly dependent on each other. Therefore, even a little knowledge about the stock's future behaviour might give an edge in investment.
Due to the high correlation between stock prices on consecutive days, generic learning algorithms usually perform very poorly on this type of problem, and therefore we sought to use deep learning methods too. In particular, we used a basic 2-layer feedforward neural network and recurrent neural networks with TensorFlow.
The reason behind using recurrent neural networks (RNNs) is that they have loops. These loops allow the network to use information from previous passes, which acts as memory and can store the dependency relationship between past and present time.
<img src = "https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/image_folder_6/recurrent.jpg">
Note: SVMs and logistic regression were tuned via the penalty parameter, the random forest via the out-of-bag error, and the neural nets with cross-validation.
End of explanation
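# Toy illustration of the "loops as memory" idea above (an added sketch, not the model used
# later): a recurrent update reuses the previous hidden state, so the current state depends
# on the whole history of inputs. Weights and shapes here are made up for illustration.
rng = np.random.RandomState(0)
Wx, Wh = rng.randn(4, 1), rng.randn(4, 4)
h = np.zeros(4)
for x_t in [0.2, -0.1, 0.4]:                          # a short toy input sequence
    h = np.tanh(Wx.dot([x_t]).ravel() + Wh.dot(h))    # h carries information forward in time
print(h)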
aapl = apple.copy()[['date', 'sentiment', 'bin_diff']]
aapl.head()
plt.hist(aapl['bin_diff']);
Explanation: There is extreme fluctuation between the opening and closing prices of Apple, Inc. (as expected).
Let's choose the features and label (bin_diff) and make the dataframe ready for machine learning and deep learning.
End of explanation
aapl = aapl[aapl['bin_diff'] != 0]
Explanation: Let's drop the observation with "0" and make it binary classification.
End of explanation
label = aapl['bin_diff'] == 1
label = label.astype(int)
Explanation: Also, to make the models work properly, from now on, we re-code loss category from -1 to 0.
End of explanation
InputDF = aapl.copy().drop('bin_diff', axis = 1)
InputDF = InputDF.set_index('date')
InputDF.head()
InputDF = InputDF.apply(lambda x:(x -x.mean())/x.std())
InputDF.head()
test_size = 600
xtrain, xtest = InputDF.iloc[:test_size, :], InputDF.iloc[test_size:, :]
ytrain, ytest = label[:test_size], label[test_size:]
Explanation: let's look at the features and standardize them.
End of explanation
for pen in [1e-3, 1, 100]:
logreg = LogisticRegression()
logreg_model = logreg.fit(xtrain, ytrain)
logpred = logreg_model.predict(xtest)
logscores = logreg_model.predict_proba(xtest)
print ("Accuracy score is {} for penalty = {}".format(accuracy_score(ytest, logpred), pen))
print ()
print ("confusion matrix for penalty = {}".format(pen))
plot_confusionmatrix(ytest, logpred)
plot_ROC(ytest, logscores, 'Logistic regression', 'r')
plot_PR(ytest, logscores, 'Logistic regression', 'b')
Explanation: Results
As expected, SVMs, logistic regression, and random forest classifiers perform no better than a coin flip based on the ROC area calculated for the hold-out test set. The feed forward neural network has a slightly better performance with a 0.53 ROC area, while the RNN has the highest area of 0.56.
| Method | ROC area |
| ------ | ----------- |
| Logistic Regression | 0.4373|
| SVM | 0.4430|
| Random Forest Classifier | 0.4735 |
| Feed forward neural network | 0.5323|
| Recurrent neural network (RNN) | 0.5557 |
The PR curves also look reasonable only for neural nets.
Logistic regression
End of explanation
for pen in [1e-3, 1, 100]:
svm = SVC(probability=True, C = pen)
svm_model = svm.fit(xtrain, ytrain)
svmpred = svm_model.predict(xtest)
svmscores = svm_model.predict_proba(xtest)
print ("Accuracy score is {} for penalty = {}".format(accuracy_score(ytest, svmpred), pen))
print ()
pen = 1
svm = SVC(probability=True, C = pen)
svm_model = svm.fit(xtrain, ytrain)
svmpred = svm_model.predict(xtest)
svmscores = svm_model.predict_proba(xtest)
print ("confusion matrix for penalty = {}".format(pen))
plot_confusionmatrix(ytest, svmpred)
print ("ROC curve for penalty = {}".format(pen))
plot_ROC(ytest, svmscores, 'SVM', 'r')
plot_PR(ytest, svmscores, 'SVM', 'b')
Explanation: Support Vector Machines
End of explanation
oobscores = []
notrees = [10, 50, 100, 200, 500, 1000]
for n in notrees:
rf = RandomForestClassifier(n_estimators=n, oob_score=True, random_state=42)
rf_model = rf.fit(xtrain, ytrain)
oobscores.append(rf_model.oob_score_)
plt.scatter(notrees, oobscores)
plt.xlabel('no of trees')
plt.ylabel('oob scores')
plt.title('random forest classifiers');
Explanation: Random Forest Tree Classifiers
End of explanation
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=42)
rf_model = rf.fit(xtrain, ytrain)
rfpred = rf.predict(xtest)
rfscores = rf.predict_proba(xtest)
plot_confusionmatrix(ytest, rfpred)
plot_ROC(ytest, rfscores, 'Random Forest', 'r')
plot_PR(ytest, rfscores, 'Random Forest', 'b')
Explanation: Fit random forest with 500 trees
End of explanation
num_features = len(InputDF.columns)
dropout=0.2
hidden_1_size = 25
hidden_2_size = 5
num_classes = label.nunique()
NUM_EPOCHS=20
BATCH_SIZE=1
lr=0.0001
np.random.RandomState(52);
val = (InputDF[:-test_size].values, label[:-test_size].values)
train = (InputDF[-test_size:].values, label[-test_size:].values)
NUM_TRAIN_BATCHES = int(len(train[0])/BATCH_SIZE)
NUM_VAL_BATCHES = int(len(val[1])/BATCH_SIZE)
class Model():
def __init__(self):
global_step = tf.contrib.framework.get_or_create_global_step()
self.input_data = tf.placeholder(dtype=tf.float32,shape=[None,num_features])
self.target_data = tf.placeholder(dtype=tf.int32,shape=[None])
self.dropout_prob = tf.placeholder(dtype=tf.float32,shape=[])
with tf.variable_scope("ff"):
droped_input = tf.nn.dropout(self.input_data,keep_prob=self.dropout_prob)
layer_1 = tf.contrib.layers.fully_connected(
num_outputs=hidden_1_size,
inputs=droped_input,
)
layer_2 = tf.contrib.layers.fully_connected(
num_outputs=hidden_2_size,
inputs=layer_1,
)
self.logits = tf.contrib.layers.fully_connected(
num_outputs=num_classes,
activation_fn =None,
inputs=layer_2,
)
with tf.variable_scope("loss"):
self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = self.logits,
labels = self.target_data)
mask = (1-tf.sign(1-self.target_data)) #Don't give credit for flat days
mask = tf.cast(mask,tf.float32)
self.loss = tf.reduce_sum(self.losses)
with tf.name_scope("train"):
opt = tf.train.AdamOptimizer(lr)
gvs = opt.compute_gradients(self.loss)
self.train_op = opt.apply_gradients(gvs, global_step=global_step)
with tf.name_scope("predictions"):
self.probs = tf.nn.softmax(self.logits)
self.predictions = tf.argmax(self.probs, 1)
correct_pred = tf.cast(tf.equal(self.predictions, tf.cast(self.target_data,tf.int64)),tf.float64)
self.accuracy = tf.reduce_mean(correct_pred)
with tf.Graph().as_default():
model = Model()
input_ = train[0]
target = train[1]
losses = []
with tf.Session() as sess:
init = tf.initialize_all_variables()
sess.run([init])
epoch_loss =0
for e in range(NUM_EPOCHS):
if epoch_loss >0 and epoch_loss <1:
break
epoch_loss =0
for batch in range(0,NUM_TRAIN_BATCHES):
start = batch*BATCH_SIZE
end = start + BATCH_SIZE
feed = {
model.input_data:input_[start:end],
model.target_data:target[start:end],
model.dropout_prob:0.9
}
_,loss,acc = sess.run(
[
model.train_op,
model.loss,
model.accuracy,
]
,feed_dict=feed
)
epoch_loss+=loss
losses.append(epoch_loss)
#print('step - {0} loss - {1} acc - {2}'.format((1+batch+NUM_TRAIN_BATCHES*e),epoch_loss,acc))
print('################ done training ################')
final_preds =np.array([])
final_scores =None
for batch in range(0,NUM_VAL_BATCHES):
start = batch*BATCH_SIZE
end = start + BATCH_SIZE
feed = {
model.input_data:val[0][start:end],
model.target_data:val[1][start:end],
model.dropout_prob:1
}
acc,preds,probs = sess.run(
[
model.accuracy,
model.predictions,
model.probs
]
,feed_dict=feed
)
#print(acc)
final_preds = np.concatenate((final_preds,preds),axis=0)
if final_scores is None:
final_scores = probs
else:
final_scores = np.concatenate((final_scores,probs),axis=0)
print ('################ done testing ################')
prediction_conf = final_scores[np.argmax(final_scores, 1)]
plt.scatter(np.linspace(0, 1, len(losses)), losses);
plt.title('Validation loss with epoch')
plt.ylabel('Validation Loss')
plt.xlabel('epoch progression');
plot_confusionmatrix(ytest, final_preds)
plot_ROC(ytest, final_scores, 'Feed forward neural net', 'r')
plot_PR(ytest, final_scores, 'Feed forward neural net', 'b')
Explanation: Feed Forward Neural Network
End of explanation
RNN_HIDDEN_SIZE=4
FIRST_LAYER_SIZE=50
SECOND_LAYER_SIZE=10
NUM_LAYERS=2
BATCH_SIZE=1
NUM_EPOCHS=25
lr=0.0003
NUM_TRAIN_BATCHES = int(len(train[0])/BATCH_SIZE)
NUM_VAL_BATCHES = int(len(val[1])/BATCH_SIZE)
ATTN_LENGTH=30
beta=0
np.random.RandomState(52);
class RNNModel():
def __init__(self):
global_step = tf.contrib.framework.get_or_create_global_step()
self.input_data = tf.placeholder(dtype=tf.float32,shape=[BATCH_SIZE,num_features])
self.target_data = tf.placeholder(dtype=tf.int32,shape=[BATCH_SIZE])
self.dropout_prob = tf.placeholder(dtype=tf.float32,shape=[])
def makeGRUCells():
base_cell = rnn.GRUCell(num_units=RNN_HIDDEN_SIZE,)
layered_cell = rnn.MultiRNNCell([base_cell] * NUM_LAYERS,state_is_tuple=False)
attn_cell =tf.contrib.rnn.AttentionCellWrapper(cell=layered_cell,attn_length=ATTN_LENGTH,state_is_tuple=False)
return attn_cell
self.gru_cell = makeGRUCells()
self.zero_state = self.gru_cell.zero_state(1, tf.float32)
self.start_state = tf.placeholder(dtype=tf.float32,shape=[1,self.gru_cell.state_size])
with tf.variable_scope("ff",initializer=xavier_initializer(uniform=False)):
droped_input = tf.nn.dropout(self.input_data,keep_prob=self.dropout_prob)
layer_1 = tf.contrib.layers.fully_connected(
num_outputs=FIRST_LAYER_SIZE,
inputs=droped_input,
)
layer_2 = tf.contrib.layers.fully_connected(
num_outputs=RNN_HIDDEN_SIZE,
inputs=layer_1,
)
split_inputs = tf.reshape(droped_input,shape=[1,BATCH_SIZE,num_features],name="reshape_l1") # Each item in the batch is a time step, iterate through them
split_inputs = tf.unstack(split_inputs,axis=1,name="unpack_l1")
states =[]
outputs =[]
with tf.variable_scope("rnn",initializer=xavier_initializer(uniform=False)) as scope:
state = self.start_state
for i, inp in enumerate(split_inputs):
if i >0:
scope.reuse_variables()
output, state = self.gru_cell(inp, state)
states.append(state)
outputs.append(output)
self.end_state = states[-1]
outputs = tf.stack(outputs,axis=1) # Pack them back into a single tensor
outputs = tf.reshape(outputs,shape=[BATCH_SIZE,RNN_HIDDEN_SIZE])
self.logits = tf.contrib.layers.fully_connected(
num_outputs=num_classes,
inputs=outputs,
activation_fn=None
)
with tf.variable_scope("loss"):
self.penalties = tf.reduce_sum([beta*tf.nn.l2_loss(var) for var in tf.trainable_variables()])
self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = self.logits,
labels = self.target_data)
self.loss = tf.reduce_sum(self.losses + beta*self.penalties)
with tf.name_scope("train_step"):
opt = tf.train.AdamOptimizer(lr)
gvs = opt.compute_gradients(self.loss)
self.train_op = opt.apply_gradients(gvs, global_step=global_step)
with tf.name_scope("predictions"):
self.probs = tf.nn.softmax(self.logits)
self.predictions = tf.argmax(self.probs, 1)
correct_pred = tf.cast(tf.equal(self.predictions, tf.cast(self.target_data,tf.int64)),tf.float64)
self.accuracy = tf.reduce_mean(correct_pred)
Explanation: Recurrent Neural Nets
End of explanation
with tf.Graph().as_default():
model = RNNModel()
input_ = train[0]
target = train[1]
losses = []
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run([init])
loss = 2000
for e in range(NUM_EPOCHS):
state = sess.run(model.zero_state)
epoch_loss =0
for batch in range(0,NUM_TRAIN_BATCHES):
start = batch*BATCH_SIZE
end = start + BATCH_SIZE
feed = {
model.input_data:input_[start:end],
model.target_data:target[start:end],
model.dropout_prob:0.5,
model.start_state:state
}
_,loss,acc,state = sess.run(
[
model.train_op,
model.loss,
model.accuracy,
model.end_state
]
,feed_dict=feed
)
epoch_loss+=loss
losses.append(epoch_loss)
#print('step - {0} loss - {1} acc - {2}'.format((e),epoch_loss,acc))
print('################ done training ################')
final_preds =np.array([])
final_scores = None
for batch in range(0,NUM_VAL_BATCHES):
start = batch*BATCH_SIZE
end = start + BATCH_SIZE
feed = {
model.input_data:val[0][start:end],
model.target_data:val[1][start:end],
model.dropout_prob:1,
model.start_state:state
}
acc,preds,state, probs = sess.run(
[
model.accuracy,
model.predictions,
model.end_state,
model.probs
]
,feed_dict=feed
)
#print(acc)
assert len(preds) == BATCH_SIZE
final_preds = np.concatenate((final_preds,preds),axis=0)
if final_scores is None:
final_scores = probs
else:
final_scores = np.concatenate((final_scores,probs),axis=0)
print('################ done testing ################')
plt.scatter(np.linspace(0, 1, len(losses)), losses);
plt.title('Validation loss with epoch')
plt.ylabel('Validation Loss')
plt.xlabel('epoch progression');
plot_confusionmatrix(ytest, final_preds)
plot_ROC(ytest, final_scores, 'Recurrent neural net', 'r')
plot_PR(ytest, final_scores, 'Recurrent neural net', 'b')
Explanation: Training the RNN
End of explanation |
10,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Linear Regression
Learning Objectives
Analyze a Pandas Dataframe.
Create Seaborn plots for Exploratory Data Analysis.
Train a Linear Regression Model using Scikit-Learn.
Introduction
This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Import Libraries
Step1: Load the Dataset
We will use the USA housing prices dataset found on Kaggle. The data contains the following columns
Step2: Let's check for any null values.
Step3: Let's take a peek at the first and last five rows of the data for all columns.
Step4: Exploratory Data Analysis (EDA)
Let's create some simple plots to check out the data!
Step5: Training a Linear Regression Model
Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, we try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (x). Simply stated, If you need to predict a number, then use regression.
Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use.
X and y arrays
Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
Step6: Train - Test - Split
Now let's split the data into a training set and a testing set. We will train out model on the training set and then use the test set to evaluate the model. Note that we are using 40% of the data for testing.
What is Random State?
If an integer for random state is not specified in the code, then every time the code is executed, a new random value is generated and the train and test datasets will have different values each time. However, if a fixed value is assigned -- like random_state = 0 or 1 or 101 or any other integer, then no matter how many times you execute your code the result would be the same, e.g. the same values will be in the train and test datasets. Thus, the random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.
Step7: Creating and Training the Model
Step8: Model Evaluation
Let's evaluate the model by checking out it's coefficients and how we can interpret them.
Step9: Interpreting the coefficients
Step10: Residual Histogram
Step11: Regression Evaluation Metrics
Here are three common evaluation metrics for regression problems | Python Code:
# Importing Pandas, a data processing and CSV file I/O libraries
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib.
%matplotlib inline
Explanation: Introduction to Linear Regression
Learning Objectives
Analyze a Pandas Dataframe.
Create Seaborn plots for Exploratory Data Analysis.
Train a Linear Regression Model using Scikit-Learn.
Introduction
This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Import Libraries
End of explanation
# Next, we read the dataset into a Pandas dataframe.
df_USAhousing = pd.read_csv('../USA_Housing_toy.csv')
# Show the first five row.
df_USAhousing.head()
Explanation: Load the Dataset
We will use the USA housing prices dataset found on Kaggle. The data contains the following columns:
'Avg. Area Income': Avg. Income of residents of the city house is located in.
'Avg. Area House Age': Avg Age of Houses in same city
'Avg. Area Number of Rooms': Avg Number of Rooms for Houses in same city
'Avg. Area Number of Bedrooms': Avg Number of Bedrooms for Houses in same city
'Area Population': Population of city house is located in
'Price': Price that the house sold at
'Address': Address for the house
End of explanation
# The isnull() method is used to check and manage NULL values in a data frame.
df_USAhousing.isnull().sum()
# Pandas describe() is used to view some basic statistical details of a data frame or a series of numeric values.
df_USAhousing.describe()
# Pandas info() function is used to get a concise summary of the dataframe.
df_USAhousing.info()
Explanation: Let's check for any null values.
End of explanation
print(pd.concat([df_USAhousing.head(5), df_USAhousing.tail(5)])) # TODO 1 - first and last five rows
Explanation: Let's take a peek at the first and last five rows of the data for all columns.
End of explanation
# Plot pairwise relationships in a dataset. By default, this function will create a grid of Axes such that each numeric variable in data will be
# shared across the y-axes across a single row and the x-axes across a single column.
sns.pairplot(df_USAhousing)
# It is used basically for a univariate set of observations and visualizes it through a histogram, i.e. only one observation,
# and hence we choose one particular column of the dataset.
sns.displot(df_USAhousing['Price'])
# The heatmap is a way of representing the data in a 2-dimensional form. The data values are represented as colors in the graph.
# The goal of the heatmap is to provide a colored visual summary of information.
sns.heatmap(df_USAhousing.corr()) # TODO 2
Explanation: Exploratory Data Analysis (EDA)
Let's create some simple plots to check out the data!
End of explanation
X = df_USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = df_USAhousing['Price']
Explanation: Training a Linear Regression Model
Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, we try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (x). Simply stated, If you need to predict a number, then use regression.
Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use.
X and y arrays
Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
End of explanation
# Import train_test_split function from sklearn.model_selection
from sklearn.model_selection import train_test_split
# Split up the data into a training set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
Explanation: Train - Test - Split
Now let's split the data into a training set and a testing set. We will train out model on the training set and then use the test set to evaluate the model. Note that we are using 40% of the data for testing.
What is Random State?
If an integer for random state is not specified in the code, then every time the code is executed, a new random value is generated and the train and test datasets will have different values each time. However, if a fixed value is assigned -- like random_state = 0 or 1 or 101 or any other integer, then no matter how many times you execute your code the result would be the same, e.g. the same values will be in the train and test datasets. Thus, the random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.
End of explanation
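# Small demonstration of the point above (an added sketch, not part of the original lab):
# two splits with the same random_state return identical row indices, so results are reproducible.
a_tr, a_te, _, _ = train_test_split(X, y, test_size=0.4, random_state=101)
b_tr, b_te, _, _ = train_test_split(X, y, test_size=0.4, random_state=101)
print(a_tr.index.equals(b_tr.index), a_te.index.equals(b_te.index))  # True True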
# Import LinearRegression function from sklearn.model_selection
from sklearn.linear_model import LinearRegression
# LinearRegression fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets
# in the dataset, and the targets predicted by the linear approximation.
lm = LinearRegression()
# Train the Linear Regression Classifer
lm.fit(X_train,y_train) # TODO 3
Explanation: Creating and Training the Model
End of explanation
# print the intercept
print(lm.intercept_)
# Pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns).
coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
coeff_df
Explanation: Model Evaluation
Let's evaluate the model by checking out its coefficients and how we can interpret them.
End of explanation
# Predict values based on linear model object.
predictions = lm.predict(X_test)
# Scatter plots are widely used to represent relation among variables and how change in one affects the other.
plt.scatter(y_test,predictions)
Explanation: Interpreting the coefficients:
Holding all other features fixed, a 1 unit increase in Avg. Area Income is associated with an increase of \$21.52 .
Holding all other features fixed, a 1 unit increase in Avg. Area House Age is associated with an increase of \$164883.28 .
Holding all other features fixed, a 1 unit increase in Avg. Area Number of Rooms is associated with an increase of \$122368.67 .
Holding all other features fixed, a 1 unit increase in Avg. Area Number of Bedrooms is associated with an increase of \$2233.80 .
Holding all other features fixed, a 1 unit increase in Area Population is associated with an increase of \$15.15 .
Predictions from our Model
Let's grab predictions off our test set and see how well it did!
End of explanation
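# Quick sanity check (an added sketch, not in the original lab): rebuild the first prediction
# by hand from the fitted intercept and coefficients; it should match lm.predict().
manual_pred = lm.intercept_ + np.dot(X_test.iloc[0].values, lm.coef_)
print(manual_pred, predictions[0])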
# It is used basically for a univariate set of observations and visualizes it through a histogram, i.e. only one observation,
# and hence we choose one particular column of the dataset.
sns.displot((y_test-predictions),bins=50);
Explanation: Residual Histogram
End of explanation
# Importing metrics from sklearn
from sklearn import metrics
# Show the values of MAE, MSE, RMSE
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
Explanation: Regression Evaluation Metrics
Here are three common evaluation metrics for regression problems:
Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$
Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
All of these are loss functions, because we want to minimize them.
End of explanation |
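# Cross-check (an added sketch, not part of the original lab): the same three metrics computed
# directly from the formulas above with NumPy; they should match the sklearn values printed earlier.
errors = y_test - predictions
print('MAE :', np.mean(np.abs(errors)))
print('MSE :', np.mean(errors ** 2))
print('RMSE:', np.sqrt(np.mean(errors ** 2)))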
10,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TMY tutorial
This tutorial shows how to use the pvlib.tmy module to read data from TMY2 and TMY3 files.
This tutorial has been tested against the following package versions
Step1: pvlib comes packaged with a TMY2 and a TMY3 data file.
Step2: Import the TMY data using the functions in the pvlib.tmy module.
Step3: Print the TMY3 metadata and the first 5 lines of the data.
Step4: The TMY readers have an optional argument to coerce the year to a single value.
Step5: Here's the TMY2 data.
Step6: Finally, the TMY readers can access TMY files directly from the NREL website. | Python Code:
# built in python modules
import datetime
import os
import inspect
# python add-ons
import numpy as np
import pandas as pd
# plotting libraries
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
except ImportError:
pass
import pvlib
Explanation: TMY tutorial
This tutorial shows how to use the pvlib.tmy module to read data from TMY2 and TMY3 files.
This tutorial has been tested against the following package versions:
* pvlib 0.3.0
* Python 3.5.1
* IPython 4.1
* pandas 0.18.0
Authors:
* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015, March 2016.
Import modules
End of explanation
# Find the absolute file path to your pvlib installation
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
Explanation: pvlib comes packaged with a TMY2 and a TMY3 data file.
End of explanation
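# Optional check (an added sketch; the 'data' folder name is assumed from the read calls below):
# list a few of the files bundled with pvlib to confirm the sample TMY files are present.
print(sorted(os.listdir(os.path.join(pvlib_abspath, 'data')))[:10])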
tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'))
tmy2_data, tmy2_metadata = pvlib.tmy.readtmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2'))
Explanation: Import the TMY data using the functions in the pvlib.tmy module.
End of explanation
print(tmy3_metadata)
tmy3_data.head(5)
tmy3_data['GHI'].plot()
Explanation: Print the TMY3 metadata and the first 5 lines of the data.
End of explanation
tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'), coerce_year=1987)
tmy3_data['GHI'].plot()
Explanation: The TMY readers have an optional argument to coerce the year to a single value.
End of explanation
print(tmy2_metadata)
print(tmy2_data.head())
Explanation: Here's the TMY2 data.
End of explanation
tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3('http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/data/tmy3/722740TYA.CSV', coerce_year=2015)
tmy3_data['GHI'].plot(figsize=(12,6))
plt.title('Tucson TMY GHI')
Explanation: Finally, the TMY readers can access TMY files directly from the NREL website.
End of explanation |
10,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Retrieve your DKRZ data form
Via this form you can retrieve previously generated data forms and make them accessible via the Web again for completion.
Additionally you can get information on the data ingest process status related to your form based request.
Step1: Please provide your last name
please set your last name in the cell below
---- e.g. MY_LAST_NAME = "mueller"
and evaluate the cell (Press "Shift"-Return in the cell)
you will then be asked for the key associated to your form
(the key was provided to you as part of your previous form generation step)
Step2: Get status information related to your form based request
Step3: Contact the DKRZ data managers for form related issues | Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-retrieval')
Explanation: Retrieve your DKRZ data form
Via this form you can retrieve previously generated data forms and make them accessible via the Web again for completion.
Additionally you can get information on the data ingest process status related to your form based request.
End of explanation
from dkrz_forms import form_handler, form_widgets
#please provide your last name - replacing ... below
MY_LAST_NAME = "ki"
form_info = form_widgets.check_and_retrieve(MY_LAST_NAME)
Explanation: Please provide your last name
please set your last name in the cell below
---- e.g. MY_LAST_NAME = "mueller"
and evaluate the cell (Press "Shift"-Return in the cell)
you will then be asked for the key associated to your form
(the key was provided to you as part of your previous form generation step)
End of explanation
# To be completed
Explanation: Get status information related to your form based request
End of explanation
# to be completed
Explanation: Contact the DKRZ data managers for form related issues
End of explanation |
10,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>Create a cross-section from AWOT radar instance</h2>
<p> This example uses a gridded NetCDF windsyn file and produces a 2-panel plot
of horizontal CAPPI of reflectivity at 2 km. The flight track is overlaid along
with wind vectors derived from the radar.
Step1: <b>Supply input data and set some plotting parameters.</b>
Step2: <b>Set up some characteristics for plotting.</b>
<ul>
<li>Use Cylindrical Equidistant Area map projection.</li>
<li>Set the spacing of the barbs and X-axis time step for labels.</li>
<li>Set the start and end times for subsetting.</li>
<li>Add landmarks.</li>
</ul>
Step3: <b>Read in the flight and radar data</b>
Step4: <b>Make a cross-section following the flight track displayed in the top panel and use the vertical wind velocity field.</b>
Step5: <b>Now let's make a vertical cross-section along lon/lat pairs of reflectivity</b>
Step6: <b>Here's an alternative method to produce the same plot above. And notice the second plot has discrete levels by setting the <i>discrete_cmap_levels</i> keyword.</b> | Python Code:
# Load the needed packages
from glob import glob
import matplotlib.pyplot as plt
import numpy as np
import awot
from awot.graph.common import create_basemap
from awot.graph import RadarHorizontalPlot, RadarVerticalPlot, FlightLevel
%matplotlib inline
Explanation: <h2>Create a cross-section from AWOT radar instance</h2>
<p> This example uses a gridded NetCDF windsyn file and produces a 2-panel plot
of horizontal CAPPI of reflectivity at 2 km. The flight track is overlaid along
with wind vectors derived from the radar.
End of explanation
# Set the project name
Project="DYNAMO"
# Choose what file to process
yymmdd, modn = '111124', '0351'
# Set the data directory
fDir = "/Users/guy/data/dynamo/"+yymmdd+"I/"
# Construct the full path name for windsyn NetCDF file
P3Radf = str(glob(fDir+"/*"+modn+"*windsyn*.nc")).strip('[]')
# Construct the full path name for Flight level NetCDF file
FltLevf = str(glob(fDir+"20*"+yymmdd+"*_DJ*.nc")).strip('[]')
corners = [77.8, -2.0, 79.6, -0.2]
figtitle = '24 Nov RCE'
Explanation: <b>Supply input data and set some plotting parameters.</b>
End of explanation
# Set up some characteristics for plotting
# Set map projection to use
proj = 'cea'
Wbarb_Spacing = 300 # Spacing of wind barbs along flight path (sec)
# Choose the X-axis time step (in seconds) where major labels will be
XlabStride = 60
# Optional settings
start_time = "2011-11-24 03:51:00"
end_time = "2011-11-24 04:57:00"
# Map spacing
dLon = 0.5
dLat = 0.5
# Should landmarks be plotted? [If yes, then modify the section below
Lmarks=True
if Lmarks:
# Create a list of Landmark data
LocMark = []
# Add locations as [ StringName, Longitude, Latitude ,XlabelOffset, YlabelOffset]
LocMark.append(['Diego Garcia', 72.4160, -7.3117, 0.1, -0.6])
LocMark.append(['R/V Revelle', 80.5010, 0.12167, -0.4, -0.6])
LocMark.append(['Gan', 73.1017, -0.6308, -0.9, 0.0])
LocMark.append(['R/V Marai', 80.50, -7.98, -0.1, -0.6])
# Build a few variables for plotting the labels
# Build arrays for plotting
Labels = []
LabLons = []
LabLats = []
XOffset = []
YOffset = []
for L1, L2, L3, L4, L5 in LocMark:
Labels.append(L1)
LabLons.append(L2)
LabLats.append(L3)
XOffset.append(L4)
YOffset.append(L5)
# Add PPI plot at 2 km level
cappi_ht = 2000.
Explanation: <b>Set up some characteristics for plotting.</b>
<ul>
<li>Use Cylindrical Equidistant Area map projection.</li>
<li>Set the spacing of the barbs and X-axis time step for labels.</li>
<li>Set the start and end times for subsetting.</li>
<li>Add landmarks.</li>
</ul>
End of explanation
fl1 = awot.io.read_netcdf(fname=FltLevf[1:-1], platform='p-3')
r1 = awot.io.read_windsyn_tdr_netcdf(fname=P3Radf[1:-1], field_mapping=None)
Explanation: <b>Read in the flight and radar data</b>
End of explanation
fig, (axPPI, axXS) = plt.subplots(2, 1, figsize=(8, 8))
# Set the map for plotting
bm1 = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,
lat_spacing=dLat, lon_spacing=dLon, ax=axPPI)
# Create a Flightlevel instance for the track
flp1 = FlightLevel(fl1, basemap=bm1)
flp1.plot_trackmap(start_time=start_time, end_time=end_time,
min_altitude=50., max_altitude= 8000.,
addlegend=False, addtitle=False, ax=axPPI)
# Create a RadarGrid
rgp1 = RadarHorizontalPlot(r1, basemap=bm1)
rgp1.plot_cappi('reflectivity', cappi_ht, vmin=15., vmax=60., title=' ',
#rgp1.plot_cappi('Uwind', 2., vmin=-20., vmax=20., title=' ',
# cmap='RdBu_r',
color_bar=True, cb_pad="10%", cb_loc='right', cb_tick_int=4,
ax=axPPI)
rgp1.overlay_wind_vector(height_level=cappi_ht, vscale=200, vtrim=6, qcolor='0.50',
refUposX=.75, refUposY=.97, plot_km=True)
flp1.plot_radar_cross_section(r1, 'Wwind', plot_km=True,
start_time=start_time, end_time=end_time,
vmin=-3., vmax=3., title=' ',
cmap='RdBu_r',
color_bar=True, cb_orient='vertical', cb_tick_int=4,
x_axis_array='time',
ax=axXS)
Explanation: <b>Make a cross-section following the flight track displayed in the top panel and use the vertical wind velocity field.</b>
End of explanation
fig, (axPPI2, axXS2) = plt.subplots(2, 1, figsize=(7, 7))
# Set the map for plotting
bm2 = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,
lat_spacing=dLat, lon_spacing=dLon, ax=axPPI2)
# Create a Flightlevel instance for the track
flp2 = FlightLevel(fl1, basemap=bm2)
flp2.plot_trackmap(start_time=start_time, end_time=end_time,
min_altitude=50., max_altitude= 8000.,
addlegend=False, addtitle=False, ax=axPPI2)
# Create a RadarGrid
rgph = RadarHorizontalPlot(r1, basemap=bm2)
# Add PPI plot at 2 km
rgph.plot_cappi('reflectivity', cappi_ht, vmin=15., vmax=60., title=' ',
color_bar=True, cb_pad="10%", cb_loc='right', cb_tick_int=4,
ax=axPPI2)
rgph.overlay_wind_vector(height_level=2., vscale=200, vtrim=6, qcolor='0.50')
# Add Cross-sectional line to horizontal plot
rgph.plot_line_geo([78.3, 79.0], [-1.1, -1.5], lw=4, alpha=.8, line_style='w-',
label0=True, label_offset=(0.05,-0.05))
rgph.plot_cross_section('reflectivity', (78.3, -1.1), (79.0, -1.5),
vmin=15., vmax=60., title=' ',
color_bar=True, cb_orient='vertical', cb_tick_int=4,
plot_km=True, ax=axXS2)
# Alternatively the commented out code below will also display the plot
#rgpv = RadarVerticalPlot(fl1, instrument='tdr_grid')
# Add the cross-section along those coordinates
#rgpv.plot_cross_section('dBZ', (78.3, -1.1), (79.0, -1.5),
# vmin=15., vmax=60., title=' ',
# color_bar=False, cb_orient='vertical', cb_tick_int=4,
# ax=axXS)
Explanation: <b>Now let's make a vertical cross-section along lon/lat pairs of reflectivity</b>
End of explanation
fig, (axPPI3, axXS3) = plt.subplots(2, 1, figsize=(7, 7))
# Set the map for plotting
bm3 = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,
lat_spacing=dLat, lon_spacing=dLon, ax=axPPI3)
# Create a Flightlevel instance for the track
flp2 = FlightLevel(fl1, basemap=bm3)
flp2.plot_trackmap(start_time=start_time, end_time=end_time,
min_altitude=50., max_altitude= 8000.,
addlegend=False, addtitle=False, ax=axPPI3)
# Create a RadarGrid
rgph = RadarHorizontalPlot(r1, basemap=bm3)
# Add PPI plot at 2 km level
rgph.plot_cappi('reflectivity', cappi_ht, vmin=15., vmax=60., title=' ',
color_bar=True, cb_pad="10%", cb_loc='right', cb_tick_int=4,
ax=axPPI3)
rgpv = RadarVerticalPlot(r1, basemap=bm3)
# Add the cross-section along those coordinates
rgpv.plot_cross_section('reflectivity', (78.3, -1.1), (79.0, -1.5),
vmin=15., vmax=60., title=' ',
color_bar=True, cb_orient='vertical', cb_tick_int=4,
discrete_cmap_levels=[10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60.], ax=axXS3)
Explanation: <b>Here's an alternative method to produce the same plot above. And notice the second plot has discrete levels by setting the <i>discrete_cmap_levels</i> keyword.</b>
End of explanation |
10,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
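# Illustrative example only (hypothetical name and address, not an actual author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")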
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
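# Illustrative example only (placeholder text for this free-text STRING property):
# DOC.set_value("One-paragraph overview of the ocean component goes here.")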
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
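# Illustrative example only, reusing one of the example names given in the guidance below:
# DOC.set_value("NEMO 3.6")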
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
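# Illustrative example only (any single entry from the Valid Choices list above):
# DOC.set_value("OGCM")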
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
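# Illustrative example only; for a 1.N property, each selected choice is assumed
# to be recorded with its own DOC.set_value call:
# DOC.set_value("Primitive equations")
# DOC.set_value("Boussinesq")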
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
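# Illustrative placeholder only (seawater specific heat is typically close to 4000 J/(kg K);
# enter the value actually used by the model):
# DOC.set_value(3992.0)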
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
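# Illustrative placeholder only (a commonly used Boussinesq reference density;
# enter the model's actual rhozero):
# DOC.set_value(1035.0)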
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
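# Illustrative example only: True if the bathymetry is fixed in time, else False:
# DOC.set_value(True)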
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
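# Illustrative example only, using one of the example resolution names quoted in the guidance below:
# DOC.set_value("ORCA025")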
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
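# Illustrative placeholder only (total number of XY points, e.g. for a hypothetical 360 x 180 grid):
# DOC.set_value(64800)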
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
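# Illustrative example only (pick one entry from the Valid Choices listed above):
# DOC.set_value("Z*-coordinate")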
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
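# Illustrative placeholder only (tracer time step in seconds):
# DOC.set_value(3600)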
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
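A sketch of how a BOOLEAN property differs from the string-valued ones: the value is passed as a bare Python bool rather than a quoted string. The True below is purely illustrative.
# Illustrative example only -- the real answer depends on the model being documented.
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
DOC.set_value(True)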
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
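A minimal sketch of filling an optional INTEGER property -- because the cardinality is 0.1 it may be skipped entirely, and the coefficient shown is a made-up placeholder.
# Illustrative example only -- set this only if the bottom boundary layer is diffusive.
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
DOC.set_value(200)  # hypothetical lateral mixing coefficient in m2/s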
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
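For a free-text STRING property, the value is simply a descriptive sentence; the text below is a placeholder, not a description of any real configuration.
# Illustrative example only -- replace with the documented model's actual treatment.
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
DOC.set_value("Wave effects are not explicitly represented at the ocean surface.")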
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
10,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Task
Say you think you have pairs of numbers serialized as comma separated values in a file. You want to extract the pair from each line, then sum over the result (per line).
Sample Data
Step1: Initial Implementation
Step2: Iteration 1
First, instantiate a vaquero instance. Here, I've set the maximum number of failures allowed to 5. After that many failures, the Vaquero object raises a VaqueroException. Generally, you want it to be large enough to collect a lot of unexpected failures. But, you don't want it to be so large you exhaust memory. This is an iterative process.
Also, as a tip, always instantiate the Vaquero object in its own cell. This way, you get to inspect it in your notebook even if it raises a VaqueroException.
I also registered all functions (well, callables) in this notebook with vaquero. The error capturing machinery only operates on the registered functions. And, it always ignores a KeyboardInterrupt.
Step3: Just to be sure, I'll check the registered functions. It does matching by name, which is a bit naive. But, it's also surprisingly robust given vaquero usage patterns. Looking, you can see some things that don't belong. But, again, it mostly works well.
Step4: Now, run my trivial examples over the initial implementation.
Step5: It was not successful.
Step6: So, look at the failures. There were two functions, and both had failures.
Step7: To get a sense of what happened, examine the failing functions.
You can do this by calling examine with the name of the function (or the function object). It returns the captured invocations and errors.
Here you can see that the to_int function from cell In [3] failed with a ValueError exception.
Step8: Often though, we want to query only parts of the capture for a specific function. To do so, you can use JMESPath, specifying the selector as an argument to exam. Also, you can say, show me only the set applied to the selected result (assuming it's hashable), to simplify things.
Step9: And, for sum_pair.
Step10: Iteration 2
We now know that there are some ints encoded as doubles. But we know from our data source that each value can only be an int. So, in to_int, let's parse each string first as a float, then create an int from it. It's robust.
Also, we know that some lines don't have two components. Those are just bad lines. Let's assert there are two parts as a post-condition of extract_pairs.
Finally, after a bit of digging, we found that $ means NA. After cursing for a minute because that's crazy -- although crazy is common in dirty data -- you decide to ignore those entries. Instead of adding this to an existing function, you write a no_missing_data function.
Step11: Now, we have one more success, but still two failures.
Step12: Let's quickly examine.
Step13: Both these exceptions are bad data. We want to ignore them.
Step14: Looking at the results accumulated,
Step15: Things look good.
Now that we have something that works, we can use Vaquero in a more production-oriented mode. That is, we allow for unlimited errors, but we don't capture anything. That is, we note the failure, but otherwise ignore it since we won't be post-processing.
Step16: They still show up as failures, but it doesn't waste memory storing the captures. | Python Code:
lines = ["1, 1.0", # An errant float
"1, $", # A bad number
"1,-1", # A good line
"10"] # Missing the second value
Explanation: Task
Say you think you have pairs of numbers serialized as comma separated values in a file. You want to extract the pair from each line, then sum over the result (per line).
Sample Data
End of explanation
def extract_pairs(s):
return s.split(",")
def to_int(items):
return [int(item) for item in items]
def sum_pair(items):
return items[0], items[1]
Explanation: Initial Implementation
End of explanation
vaquero = Vaquero(max_failures=5)
vaquero.register_targets(callables_from(globals()))
Explanation: Iteration 1
First, instantiate a vaquero instance. Here, I've set the maximum number of failures allowed to 5. After that many failures, the Vaquero object raises a VaqueroException. Generally, you want it to be large enough to collect a lot of unexpected failures. But, you don't want it to be so large you exhaust memory. This is an iterative process.
Also, as a tip, always instantiate the Vaquero object in its own cell. This way, you get to inspect it in your notebook even if it raises a VaqueroException.
I also registered all functions (well, callables) in this notebook with vaquero. The error capturing machinery only operates on the registered functions. And, it always ignores a KeyboardInterrupt.
End of explanation
vaquero.target_funcs
Explanation: Just to be sure, I'll check the registered functions. It does matching by name, which is a bit naive. But, it's also surprisingly robust given vaquero usage patterns. Looking, you can see some things that don't belong. But, again, it mostly works well.
End of explanation
results = []
for s in lines:
with vaquero.on_input(s):
results.append(sum_pair(to_int(extract_pairs(s))))
Explanation: Now, run my trivial examples over the initial implementation.
End of explanation
vaquero.was_successful
Explanation: It was not successful.
End of explanation
vaquero.stats()
Explanation: So, look at the failures. There were two functions, and both had failures.
End of explanation
vaquero.examine('to_int')
Explanation: To get a sense of what happened, examine the failing functions.
You can do this by calling examine with the name of the function (or the function object). It returns the captured invocations and errors.
Here you can see that the to_int function from cell In [3] failed with a ValueError exception.
End of explanation
vaquero.examine('to_int', '[*].exc_value', as_set=True)
Explanation: Often though, we want to query only parts of the capture for a specific function. To do so, you can use JMESPath, specifying the selector as an argument to examine. Also, you can ask for just the set of the selected results (assuming they're hashable), to simplify things.
End of explanation
vaquero.examine('sum_pair')
Explanation: And, for sum_pair.
End of explanation
def no_missing_data(s):
assert '$' not in s, "'{}' has missing data".format(s)
def extract_pairs(s):
parts = s.split(",")
assert len(parts) == 2, "'{}' not in 2 parts".format(s)
return tuple(parts)
def to_int(items):
return [int(float(item)) for item in items]
def sum_pair(items):
assert len(items) == 2, "Line is improperly formatted"
return items[0] + items[1]
vaquero.reset()
vaquero.register_targets(globals())
results = []
for s in lines:
with vaquero.on_input(s):
no_missing_data(s)
results.append(sum_pair(to_int(extract_pairs(s))))
Explanation: Iteration 2
We now know that there are some ints encoded as doubles. But we know from our data source that each value can only be an int. So, in to_int, let's parse each string first as a float, then create an int from it. It's robust.
Also, we know that some lines don't have two components. Those are just bad lines. Let's assert there are two parts as a post-condition of extract_pairs.
Finally, after a bit of digging, we found that $ means NA. After cursing for a minute because that's crazy -- although crazy is common in dirty data -- you decide to ignore those entries. Instead of adding this to an existing function, you write a no_missing_data function.
End of explanation
vaquero.stats()
Explanation: Now, we have one more success, but still two failures.
End of explanation
vaquero.examine('extract_pairs')
vaquero.examine('no_missing_data')
Explanation: Let's quickly examine.
End of explanation
vaquero.stats_ignoring('AssertionError')
Explanation: Both these exceptions are bad data. We want to ignore them.
End of explanation
results
Explanation: Looking at the results accumulated,
End of explanation
vaquero.reset(turn_off_error_capturing=True)
# Or, Vaquero(capture_error_invocations=False)
results = []
for s in lines:
with vaquero.on_input(s):
no_missing_data(s)
results.append(sum_pair(to_int(extract_pairs(s))))
results
Explanation: Things look good.
Now that we have something that works, we can use Vaquero in a more production-oriented mode. That is, we allow for unlimited errors, but we don't capture anything. That is, we note the failure, but otherwise ignore it since we won't be post-processing.
End of explanation
vaquero.stats()
Explanation: They still show up as failures, but it doesn't waste memory storing the captures.
End of explanation |
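One possible way to package the finished pipeline for reuse -- the helper below just wraps the loop used throughout this notebook, and the function name is my own, not part of the vaquero API.
def process_lines(raw_lines, vaq):
    # Run the cleaning pipeline over an iterable of raw lines,
    # collecting sums only for the lines that pass every check.
    collected = []
    for s in raw_lines:
        with vaq.on_input(s):
            no_missing_data(s)
            collected.append(sum_pair(to_int(extract_pairs(s))))
    return collected

# Example use with the production-oriented instance from above.
process_lines(lines, vaquero)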
10,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Note
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. The following cell will restart the kernel.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Tutorial
Now you are ready to start creating your own AutoML text sentiment analysis model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step13: Quick peek at your data
This tutorial uses a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step14: Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters
Step15: Create and run training pipeline
To train an AutoML model, you perform two steps
Step16: Run the training pipeline
Next, you run the training job by invoking the method run, with the following parameters
Step17: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
Step18: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
Step19: Send a online prediction request
Send a online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step20: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step21: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.
Step22: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: AutoML training text sentiment analysis model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_sentiment_analysis_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_sentiment_analysis_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_text_sentiment_analysis_online.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create text sentiment analysis models and do online prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Crowdflower Claritin-Twitter dataset that consists of tweets tagged with sentiment, the author's gender, and whether or not they mention any of the top 10 adverse events reported to the FDA. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. In this tutorial, you will use the tweets' data to build an AutoML-text-sentiment-analysis model on Google Cloud platform.
Objective
In this tutorial, you create an AutoML text sentiment analysis model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Create a training job for the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
Note: You may encounter a PIP dependency error during the installation of the Google Cloud Storage package. This can be ignored as it will not affect the proper running of this script.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. The following cell will restart the kernel.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aiplatform
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv"
SENTIMENT_MAX = 4
Explanation: Tutorial
Now you are ready to start creating your own AutoML text sentiment analysis model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
This tutorial uses a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
dataset = aiplatform.TextDataset.create(
display_name="Crowdflower Claritin-Twitter" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.text.sentiment,
)
print(dataset.resource_name)
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
job = aiplatform.AutoMLTextTrainingJob(
display_name="claritin_" + TIMESTAMP,
prediction_type="sentiment",
sentiment_max=SENTIMENT_MAX,
)
print(job)
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
End of explanation
model = job.run(
dataset=dataset,
model_display_name="claritin_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
Explanation: Run the training pipeline
Next, you run the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 180 minutes.
End of explanation
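As an optional sanity check once run() returns (nothing here is required by the tutorial), the trained model's identifiers can be printed directly:
# Optional: confirm the trained Model resource identifiers.
print("Model resource name:", model.resource_name)
print("Model display name:", model.display_name)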
# Get model resource ID
models = aiplatform.Model.list(filter="display_name=claritin_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
endpoint = model.deploy()
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
test_item = ! gsutil cat $IMPORT_FILE | head -n1
if len(test_item[0]) == 3:
_, test_item, test_label, max = str(test_item[0]).split(",")
else:
test_item, test_label, max = str(test_item[0]).split(",")
print(test_item, test_label)
Explanation: Send an online prediction request
Send an online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
instances_list = [{"content": test_item}]
prediction = endpoint.predict(instances_list)
print(prediction)
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
{ 'content': text_string }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
sentiment: The sentiment value.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
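A rough sketch of pulling the sentiment value out of the response -- the field name follows the response description above, but the exact structure of each prediction entry can vary by SDK version, so treat this as an assumption to verify.
# Hedged example: prediction.predictions holds one entry per instance sent.
for pred in prediction.predictions:
    sentiment = pred.get("sentiment") if isinstance(pred, dict) else pred
    print("Predicted sentiment:", sentiment)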
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
# Delete the dataset using the Vertex dataset object
dataset.delete()
# Delete the model using the Vertex model object
model.delete()
# Delete the endpoint using the Vertex endpoint object
endpoint.delete()
# Delete the AutoML or Pipeline training job
job.delete()
# Delete the Cloud storage bucket
if os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Model
Endpoint
AutoML Training Job
Cloud Storage Bucket
End of explanation |
10,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ugly To Pretty for CSVS
Run on linux. Set an import path and an export path to folders.
Will take every file in import directory that is a mathematica generated CSV and turn it into a nicely fomatted CSV in Output directory.
Paths
Step1: Function
Step2: Run | Python Code:
importpath = "/home/jwb/repos/github-research/csvs/Companies/Ugly/Stack/"
exportpath = "/home/jwb/repos/github-research/csvs/Companies/Pretty/Stack/"
Explanation: Ugly To Pretty for CSVS
Run on Linux. Set an import path and an export path to folders.
This will take every file in the import directory that is a Mathematica-generated CSV and turn it into a nicely formatted CSV in the output directory.
Paths
End of explanation
import csv
import pandas as pd
import os
def arrayer(path):
with open(path, "rt") as f:
reader = csv.reader(f)
names = set()
times = {}
windows = []
rownum = 0
for row in reader:
newrow = [(i[1:-1],j[:-2]) for i,j in zip(row[1::2], row[2::2])] #Drops the timewindow, and groups the rest of the row into [name, tally]
rowdict = dict(newrow)
names.update([x[0] for x in newrow]) #adds each name to a name set
l=row[0].replace("DateObject[{","").strip("{}]}").replace(",","").replace("}]","").split() #Strips DateObject string
timestamp=':'.join(l[:3])+'-'+':'.join(l[3:]) #Formats date string
windows.append(timestamp) #add timestamp to list
times[timestamp] = rowdict #link results as value in timestamp dict
rownum += 1
cols = [[times[k][name] if name in times[k] else ' 0' for name in names ] for k in windows] #put the tally for each name across each timestamp in a nested list of Columns
data = pd.DataFrame(cols,columns=list(names),index=windows) #Put into dataframe with labels
return data.transpose()
Explanation: Function
End of explanation
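Before converting the whole directory, it can help to sanity-check the converter on a single file; the snippet below just grabs the first file in the import folder (assuming the folder is non-empty).
# Quick single-file check -- purely optional.
sample = os.listdir(importpath)[0]
print(arrayer(importpath + sample).head())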
for filename in os.listdir(importpath):
arrayer(importpath+filename).to_csv(exportpath+filename, encoding='utf-8')
Explanation: Run
End of explanation |
10,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
yt to use its advanced visualization and analysis facilities. The only
requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful.
Generic Unigrid Data
The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples
Step1: In this example, we'll just create a 3-D array of random floating-point data using NumPy
Step2: To load this data into yt, we need associate it with a field. The data dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call load_uniform_grid
Step3: load_uniform_grid takes the following arguments and optional keywords
Step4: Particle fields are detected as one-dimensional fields. The number of
particles is set by the number_of_particles key in
data. Particle fields are then added as one-dimensional arrays in
a similar manner as the three-dimensional grid fields
Step5: In this example only the particle position fields have been assigned. number_of_particles must be the same size as the particle
arrays. If no particle arrays are supplied then number_of_particles is assumed to be zero. Take a slice, and overlay particle positions
Step6: Generic AMR Data
In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids
Step7: We'll just fill each grid with random density data, with a scaling with the grid refinement level.
Step8: Particle fields are supported by adding 1-dimensional arrays to each grid and
setting the number_of_particles key in each grid's dict. If a grid has no particles, set number_of_particles = 0, but the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays
Step9: Then, call load_amr_grids
Step10: load_amr_grids also takes the same keywords bbox and sim_time as load_uniform_grid. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice | Python Code:
import yt
import numpy as np
Explanation: Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
yt to use its advanced visualization and analysis facilities. The only
requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful.
Generic Unigrid Data
The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:
Data generated "on-the-fly"
The most common example is that of data that is generated in memory from the currently running script or notebook.
End of explanation
arr = np.random.random(size=(64,64,64))
Explanation: In this example, we'll just create a 3-D array of random floating-point data using NumPy:
End of explanation
data = dict(density = (arr, "g/cm**3"))
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox, nprocs=64)
Explanation: To load this data into yt, we need to associate it with a field. The data dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call load_uniform_grid:
End of explanation
slc = yt.SlicePlot(ds, "z", ["density"])
slc.set_cmap("density", "Blues")
slc.annotate_grids(cmap=None)
slc.show()
Explanation: load_uniform_grid takes the following arguments and optional keywords:
data : This is a dict of numpy arrays, where the keys are the field names
domain_dimensions : The domain dimensions of the unigrid
length_unit : The unit that corresponds to code_length, can be a string, tuple, or floating-point number
bbox : Size of computational domain in units of code_length
nprocs : If greater than 1, will create this number of subarrays out of data
sim_time : The simulation time in seconds
mass_unit : The unit that corresponds to code_mass, can be a string, tuple, or floating-point number
time_unit : The unit that corresponds to code_time, can be a string, tuple, or floating-point number
velocity_unit : The unit that corresponds to code_velocity
magnetic_unit : The unit that corresponds to code_magnetic, i.e. the internal units used to represent magnetic field strengths.
periodicity : A tuple of booleans that determines whether the data will be treated as periodic along each axis
This example creates a yt-native dataset ds that will treat your array as a
density field in a cubic domain of 3 Mpc edge size and simultaneously divide the
domain into nprocs = 64 chunks, so that you can take advantage
of the underlying parallelism.
The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:
* A string, e.g. length_unit="Mpc"
* A tuple, e.g. mass_unit=(1.0e14, "Msun")
* A floating-point value, e.g. time_unit=3.1557e13
In the latter case, the unit is assumed to be cgs.
The resulting ds functions exactly like any other dataset yt can handle -- it can be sliced, and we can show the grid boundaries:
End of explanation
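For instance, the three accepted unit forms listed above can be mixed in a single call; the values here are arbitrary and only illustrate the types.
# Same random array as before, with units given as a string, a (value, unit) tuple, and a bare float (interpreted as cgs).
ds_units = yt.load_uniform_grid(
    data, arr.shape,
    length_unit="Mpc",
    mass_unit=(1.0e14, "Msun"),
    time_unit=3.1557e13,
    bbox=bbox, nprocs=64,
)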
posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
data = dict(density = (np.random.random(size=(64,64,64)), "Msun/kpc**3"),
number_of_particles = 10000,
particle_position_x = (posx_arr, 'code_length'),
particle_position_y = (posy_arr, 'code_length'),
particle_position_z = (posz_arr, 'code_length'))
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, data["density"][0].shape, length_unit=(1.0, "Mpc"), mass_unit=(1.0,"Msun"),
bbox=bbox, nprocs=4)
Explanation: Particle fields are detected as one-dimensional fields. The number of
particles is set by the number_of_particles key in
data. Particle fields are then added as one-dimensional arrays in
a similar manner as the three-dimensional grid fields:
End of explanation
slc = yt.SlicePlot(ds, "z", ["density"])
slc.set_cmap("density", "Blues")
slc.annotate_particles(0.25, p_size=12.0, col="Red")
slc.show()
Explanation: In this example only the particle position fields have been assigned. number_of_particles must be the same size as the particle
arrays. If no particle arrays are supplied then number_of_particles is assumed to be zero. Take a slice, and overlay particle positions:
End of explanation
grid_data = [
dict(left_edge = [0.0, 0.0, 0.0],
right_edge = [1.0, 1.0, 1.0],
level = 0,
dimensions = [32, 32, 32]),
dict(left_edge = [0.25, 0.25, 0.25],
right_edge = [0.75, 0.75, 0.75],
level = 1,
dimensions = [32, 32, 32])
]
Explanation: Generic AMR Data
In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (level == 0) covering the entire domain and a subgrid at level == 1.
End of explanation
for g in grid_data:
g["density"] = (np.random.random(g["dimensions"]) * 2**g["level"], "g/cm**3")
Explanation: We'll just fill each grid with random density data, with a scaling with the grid refinement level.
End of explanation
grid_data[0]["number_of_particles"] = 0 # Set no particles in the top-level grid
grid_data[0]["particle_position_x"] = (np.array([]), "code_length") # No particles, so set empty arrays
grid_data[0]["particle_position_y"] = (np.array([]), "code_length")
grid_data[0]["particle_position_z"] = (np.array([]), "code_length")
grid_data[1]["number_of_particles"] = 1000
grid_data[1]["particle_position_x"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
grid_data[1]["particle_position_y"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
grid_data[1]["particle_position_z"] = (np.random.uniform(low=0.25, high=0.75, size=1000), "code_length")
Explanation: Particle fields are supported by adding 1-dimensional arrays to each grid and
setting the number_of_particles key in each grid's dict. If a grid has no particles, set number_of_particles = 0, but the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays:
End of explanation
ds = yt.load_amr_grids(grid_data, [32, 32, 32])
Explanation: Then, call load_amr_grids:
End of explanation
slc = yt.SlicePlot(ds, "z", ["density"])
slc.annotate_particles(0.25, p_size=15.0, col="Pink")
slc.show()
Explanation: load_amr_grids also takes the same keywords bbox and sim_time as load_uniform_grid. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:
End of explanation |
10,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding Filing Periods
The raw data tables mix together filings from different reporting periods (e.g. quarterlys vs. semi-annual vs. pre-elections). But we need these filings to be sorted (or at least sortable) so that or users, for example, can compare the performance of two candidates in the same reporting period.
There are two vectors at play here
Step1: Will also need to execute some raw SQL, so I'll import a helper function in order to make the results more readable
Step3: Let's start by examining the distinct values of the statement type on CVR_CAMPAIGN_DISCLOSURE_CD. And let's narrow the scope to only the Form 460 filings.
Step5: Not all of these values are defined, as previously noted in our docs
Step7: One of the tables that caught my eye is FILING_PERIOD_CD, which appears to have a row for each quarterly filing period
Step9: Every period is described as a quarter, and the records are equally divided among them
Step11: The difference between every START_DATE and END_DATE is actually a three-month interval
Step13: And they have covered every year between 1973 and 2334 (how optimistic!)
Step15: Filings are linked to filing periods via FILER_FILINGS_CD.PERIOD_ID. While that column is not always populated, it is if you limit your results to just the Form 460 filings
Step17: Also, is Schwarzenegger running this cycle? Who else could be filing from so far into the future?
AAANNNNYYYway...Also need to check to make sure the join between FILER_FILINGS_CD and CVR_CAMPAIGN_DISCLOSURE_CD isn't filtering out too many filings
Step19: So only a handful, mostly local campaigns or just nonsense test data.
So another important thing to check is how well these the dates from the filing period look-up records line up with the dates on the Form 460 filing records. It would be bad if the CVR_CAMPAIGN_DISCLOSURE_CD.FROM_DATE were before FILING_PERIOD_CD.START_DATE or if CVR_CAMPAIGN_DISCLOSURE_CD.THRU_DATE were after FILING_PERIOD_CD.END_DATE.
Step21: So half of the time, the THRU_DATE on the filing is later than the FROM_DATE on the filing period. How big of a difference can exist between these two dates?
Step23: Ugh. Looks like, in most of the problem cases, the from date can be a whole quarter later than the end date of the filing period. Let's take a closer look at these...
Step25: So, actually, this sort of makes sense | Python Code:
from calaccess_processed.models.tracking import ProcessedDataVersion
ProcessedDataVersion.objects.latest()
Explanation: Decoding Filing Periods
The raw data tables mix together filings from different reporting periods (e.g., quarterlies vs. semi-annual vs. pre-election filings). But we need these filings to be sorted (or at least sortable) so that our users, for example, can compare the performance of two candidates in the same reporting period.
There are two vectors at play here:
1. The "Statement Type", as described in CAL-ACCESS parlance, which indicates the length of time covered by the filing and how close it was filed to the election.
2. The actual time interval the filing covers, denoted by a start date and an end date.
This notebook is pulling data from the downloads-website's dev database, which was last updated on...
End of explanation
from project import sql_to_agate
Explanation: Will also need to execute some raw SQL, so I'll import a helper function in order to make the results more readable:
End of explanation
sql_to_agate(
SELECT UPPER("STMT_TYPE"), COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
WHERE "FORM_TYPE" = 'F460'
GROUP BY 1
ORDER BY COUNT(*) DESC;
).print_table()
Explanation: Let's start by examining the distinct values of the statement type on CVR_CAMPAIGN_DISCLOSURE_CD. And let's narrow the scope to only the Form 460 filings.
End of explanation
sql_to_agate(
SELECT FF."STMNT_TYPE", LU."CODE_DESC", COUNT(*)
FROM "FILER_FILINGS_CD" FF
JOIN "LOOKUP_CODES_CD" LU
ON FF."STMNT_TYPE" = LU."CODE_ID"
AND LU."CODE_TYPE" = 10000
GROUP BY 1, 2;
).print_table()
Explanation: Not all of these values are defined, as previously noted in our docs:
* PR might be pre-election
* QS is probably quarterly statement
* YE might be...I don't know "Year-end"?
* S is probably semi-annual
Maybe come back later and look at the actual filings. There aren't that many.
There's another similar-named column on FILER_FILINGS_CD, but this seems to be a completely different thing:
End of explanation
sql_to_agate(
SELECT *
FROM "FILING_PERIOD_CD"
).print_table()
Explanation: One of the tables that caught my eye is FILING_PERIOD_CD, which appears to have a row for each quarterly filing period:
End of explanation
sql_to_agate(
SELECT "PERIOD_DESC", COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1;
).print_table()
Explanation: Every period is described as a quarter, and the records are equally divided among them:
End of explanation
sql_to_agate(
SELECT "END_DATE" - "START_DATE" AS duration, COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1;
).print_table()
Explanation: The difference between every START_DATE and END_DATE is actually a three-month interval:
End of explanation
sql_to_agate(
SELECT DATE_PART('year', "START_DATE")::int as year, COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1
ORDER BY 1 DESC;
).print_table()
Explanation: And they have covered every year between 1973 and 2334 (how optimistic!):
End of explanation
sql_to_agate(
SELECT ff."PERIOD_ID", fp."START_DATE", fp."END_DATE", fp."PERIOD_DESC", COUNT(*)
FROM "FILER_FILINGS_CD" ff
JOIN "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
ON ff."FILING_ID" = cvr."FILING_ID"
AND ff."FILING_SEQUENCE" = cvr."AMEND_ID"
AND cvr."FORM_TYPE" = 'F460'
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
GROUP BY 1, 2, 3, 4
ORDER BY fp."START_DATE" DESC;
).print_table()
Explanation: Filings are linked to filing periods via FILER_FILINGS_CD.PERIOD_ID. While that column is not always populated, it is if you limit your results to just the Form 460 filings:
End of explanation
sql_to_agate(
SELECT cvr."FILING_ID", cvr."FORM_TYPE", cvr."FILER_NAML"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
LEFT JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
WHERE cvr."FORM_TYPE" = 'F460'
AND (ff."FILING_ID" IS NULL OR ff."FILING_SEQUENCE" IS NULL)
ORDER BY cvr."FILING_ID";
).print_table(max_column_width=60)
Explanation: Also, is Schwarzenegger running this cycle? Who else could be filing from so far into the future?
AAANNNNYYYway...Also need to check to make sure the join between FILER_FILINGS_CD and CVR_CAMPAIGN_DISCLOSURE_CD isn't filtering out too many filings:
End of explanation
sql_to_agate(
SELECT
CASE
WHEN cvr."FROM_DATE" < fp."START_DATE" THEN 'filing from_date before period start_date'
WHEN cvr."THRU_DATE" > fp."END_DATE" THEN 'filing thru_date after period end_date'
ELSE 'okay'
END as test,
COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
GROUP BY 1;
).print_table(max_column_width=60)
Explanation: So only a handful, mostly local campaigns or just nonsense test data.
So another important thing to check is how well the dates from the filing period look-up records line up with the dates on the Form 460 filing records. It would be bad if the CVR_CAMPAIGN_DISCLOSURE_CD.FROM_DATE were before FILING_PERIOD_CD.START_DATE or if CVR_CAMPAIGN_DISCLOSURE_CD.THRU_DATE were after FILING_PERIOD_CD.END_DATE.
End of explanation
sql_to_agate(
SELECT
cvr."THRU_DATE" - fp."END_DATE" as date_diff,
COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND cvr."THRU_DATE" > fp."END_DATE"
GROUP BY 1
ORDER BY COUNT(*) DESC;
).print_table(max_column_width=60)
Explanation: So half of the time, the THRU_DATE on the filing is later than the END_DATE on the filing period. How big of a difference can exist between these two dates?
End of explanation
sql_to_agate(
SELECT
cvr."FILING_ID",
cvr."AMEND_ID",
cvr."FROM_DATE",
cvr."THRU_DATE",
fp."START_DATE",
fp."END_DATE"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND 90 < cvr."THRU_DATE" - fp."END_DATE"
AND cvr."THRU_DATE" - fp."END_DATE" < 93
ORDER BY cvr."THRU_DATE" DESC;
).print_table(max_column_width=60)
Explanation: Ugh. Looks like, in most of the problem cases, the thru date can be a whole quarter later than the end date of the filing period. Let's take a closer look at these...
End of explanation
sql_to_agate(
SELECT UPPER(cvr."STMT_TYPE"), COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND 90 < cvr."THRU_DATE" - fp."END_DATE"
AND cvr."THRU_DATE" - fp."END_DATE" < 93
GROUP BY 1
ORDER BY COUNT(*) DESC;
).print_table(max_column_width=60)
Explanation: So, actually, this sort of makes sense: Quarterly filings are for three month intervals, while the semi-annual filings are for six month intervals. And FILING_PERIOD_CD only has records for three month intervals. Let's test this theory by getting the distinct CVR_CAMPAIGN_DISCLOSURE_CD.STMT_TYPE values from these records:
End of explanation |
10,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The point of this notebook is to do a quick prediction on some sample images with a pretrained network.
Important Imports
Step1: We're going to test on some train images, so loading the training set labels.
need to repopulate with test
Step2: Using the DataLoader to set up the parameters, you could replace it with something much simpler.
Step3: The next function is going to iterate over a test generator to get the outputs.
Step4: We get the default "no transformation" parameters for the model.
Step5: And set up the test generator on the first 256 patients of the training set (512 images).
Step6: Then we can get some predictions.
Step7: Legend
0 - No DR
1 - Mild DR
2 - Moderate DR
3 - Severe DR
4 - PDR
X axis for labels
Y axis for probability
Results are for left and right eyes (A and C respectively) | Python Code:
import sys
sys.path.append('../')
import cPickle as pickle
import re
import glob
import os
from generators import DataLoader
import time
import holoviews as hv
import theano
import theano.tensor as T
import numpy as np
import pandas as p
import lasagne as nn
from utils import hms, architecture_string, get_img_ids_from_iter
%pylab inline
rcParams['figure.figsize'] = 16, 6
# rcParams['text.color'] = 'red'
# rcParams['xtick.color'] = 'red'
# rcParams['ytick.color'] = 'red'
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
dump_path = '../dumps/2015_07_17_123003.pkl'
model_data = pickle.load(open(dump_path, 'rb'))
# Let's set the in and output layers to some local vars.
l_out = model_data['l_out']
l_ins = model_data['l_ins']
chunk_size = model_data['chunk_size'] * 2
batch_size = model_data['batch_size']
#print "Batch size: %i." % batch_size
#print "Chunk size: %i." % chunk_size
output = nn.layers.get_output(l_out, deterministic=True)
input_ndims = [len(nn.layers.get_output_shape(l_in))
for l_in in l_ins]
xs_shared = [nn.utils.shared_empty(dim=ndim)
for ndim in input_ndims]
idx = T.lscalar('idx')
givens = {}
for l_in, x_shared in zip(l_ins, xs_shared):
givens[l_in.input_var] = x_shared[idx * batch_size:(idx + 1) * batch_size]
compute_output = theano.function(
[idx],
output,
givens=givens,
on_unused_input='ignore'
)
# Do transformations per patient instead?
if 'paired_transfos' in model_data:
paired_transfos = model_data['paired_transfos']
else:
paired_transfos = False
#print paired_transfos
Explanation: The point of this notebook is to do a quick prediction on some sample images with a pretrained network.
Important Imports
End of explanation
train_labels = p.read_csv('../data/new_trainLabels.csv')
print train_labels.head(20)
# Get all patient ids.
patient_ids = sorted(set(get_img_ids_from_iter(train_labels.image)))
num_chunks = int(np.ceil((2 * len(patient_ids)) / float(chunk_size)))
# Where all the images are located:
# it looks for [img_dir]/[patient_id]_[left or right].jpeg
img_dir = '../test_resized/'
Explanation: We're going to test on some train images, so we load the training set labels.
(TODO: repopulate this with the actual test set.)
End of explanation
data_loader = DataLoader()
new_dataloader_params = model_data['data_loader_params']
new_dataloader_params.update({'images_test': patient_ids})
new_dataloader_params.update({'labels_test': train_labels.level.values})
new_dataloader_params.update({'prefix_train': img_dir})
data_loader.set_params(new_dataloader_params)
Explanation: Using the DataLoader to set up the parameters, you could replace it with something much simpler.
End of explanation
def do_pred(test_gen):
outputs = []
for e, (xs_chunk, chunk_shape, chunk_length) in enumerate(test_gen()):
num_batches_chunk = int(np.ceil(chunk_length / float(batch_size)))
print "Chunk %i/%i" % (e + 1, num_chunks)
print " load data onto GPU"
for x_shared, x_chunk in zip(xs_shared, xs_chunk):
x_shared.set_value(x_chunk)
print " compute output in batches"
outputs_chunk = []
for b in xrange(num_batches_chunk):
out = compute_output(b)
outputs_chunk.append(out)
outputs_chunk = np.vstack(outputs_chunk)
outputs_chunk = outputs_chunk[:chunk_length]
outputs.append(outputs_chunk)
return np.vstack(outputs), xs_chunk
Explanation: The next function is going to iterate over a test generator to get the outputs.
End of explanation
no_transfo_params = model_data['data_loader_params']['no_transfo_params']
#print no_transfo_params
Explanation: We get the default "no transformation" parameters for the model.
End of explanation
# The default gen with "no transfos".
test_gen = lambda: data_loader.create_fixed_gen(
data_loader.images_test[:128*2],
chunk_size=chunk_size,
prefix_train=img_dir,
prefix_test=img_dir,
transfo_params=no_transfo_params,
paired_transfos=paired_transfos,
)
Explanation: And set up the test generator on the first 256 patients of the training set (512 images).
End of explanation
%%time
outputs_orig, chunk_orig = do_pred(test_gen)
d={}
for i,patient in zip(range(0,outputs_orig.shape[0],2),patient_ids):
a=hv.RGB.load_image('../test_resized//'+str(patient)+'_left.jpeg')
b=hv.RGB.load_image('../test_resized//'+str(patient)+'_right.jpeg')
a=a + hv.Bars(outputs_orig[i])
b=b+hv.Bars(outputs_orig[i+1])
d[patient] = (a+b).cols(2)
hv.notebook_extension()
result=hv.HoloMap(d)
Explanation: Then we can get some predictions.
End of explanation
result
Explanation: Legend
0 - No DR
1 - Mild DR
2 - Moderate DR
3 - Severe DR
4 - PDR
X axis for labels
Y axis for probability
Results are for left and right eyes (A and C respectively)
End of explanation |
10,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: TensorFlow Programming Concepts
Learning Objectives
Step2: Don't forget to execute the preceding code block (the import statements).
Other common import statements include the following
Step3: Exercise | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
import tensorflow as tf
Explanation: TensorFlow Programming Concepts
Learning Objectives:
* Learn the basics of the TensorFlow programming model, focusing on the following concepts:
* tensors
* operations
* graphs
* sessions
* Build a simple TensorFlow program that creates a default graph, and a session that runs the graph
Note: Please read through this tutorial carefully. The TensorFlow programming model is probably different from others that you have encountered, and thus may not be as intuitive as you'd expect.
Overview of Concepts
TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. Using TensorFlow, you can manipulate tensors with a very high number of dimensions. That said, most of the time you will work with one or more of the following low-dimensional tensors:
A scalar is a 0-d array (a 0th-order tensor). For example, "Howdy" or 5
A vector is a 1-d array (a 1st-order tensor). For example, [2, 3, 5, 7, 11] or [5]
A matrix is a 2-d array (a 2nd-order tensor). For example, [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]
TensorFlow operations create, destroy, and manipulate tensors. Most of the lines of code in a typical TensorFlow program are operations.
A TensorFlow graph (also known as a computational graph or a dataflow graph) is, yes, a graph data structure. A graph's nodes are operations (in TensorFlow, every operation is associated with a graph). Many TensorFlow programs consist of a single graph, but TensorFlow programs may optionally create multiple graphs. A graph's nodes are operations; a graph's edges are tensors. Tensors flow through the graph, manipulated at each node by an operation. The output tensor of one operation often becomes the input tensor to a subsequent operation. TensorFlow implements a lazy execution model, meaning that nodes are only computed when needed, based on the needs of associated nodes.
Tensors can be stored in the graph as constants or variables. As you might guess, constants hold tensors whose values can't change, while variables hold tensors whose values can change. However, what you may not have guessed is that constants and variables are just more operations in the graph. A constant is an operation that always returns the same tensor value. A variable is an operation that will return whichever tensor has been assigned to it.
To define a constant, use the tf.constant operator and pass in its value. For example:
x = tf.constant(5.2)
Similarly, you can create a variable like this:
y = tf.Variable([5])
Or you can create the variable first and then subsequently assign a value like this (note that you always have to specify a default value):
y = tf.Variable([0])
y = y.assign([5])
Once you've defined some constants or variables, you can combine them with other operations like tf.add. When you evaluate the tf.add operation, it will call your tf.constant or tf.Variable operations to get their values and then return a new tensor with the sum of those values.
Graphs must run within a TensorFlow session, which holds the state for the graph(s) it runs:
with tf.Session() as sess:
  initialization = tf.global_variables_initializer()
  sess.run(initialization)
  print(y.eval())
When working with tf.Variables, you must explicitly initialize them by calling tf.global_variables_initializer at the start of your session, as shown above.
Note: A session can distribute graph execution across multiple machines (assuming the program is run on some distributed computation framework). For more information, see Distributed TensorFlow.
Summary
TensorFlow programming is essentially a two-step process:
Assemble constants, variables, and operations into a graph.
Evaluate those constants, variables and operations within a session.
Creating a Simple TensorFlow Program
Let's look at how to code a simple TensorFlow program that adds two constants.
Provide import statements
As with nearly all Python programs, you'll begin by specifying some import statements.
The set of import statements required to run a TensorFlow program depends, of course, on the features your program will access. At a minimum, you must provide the import tensorflow statement in all TensorFlow programs:
End of explanation
from __future__ import print_function
import tensorflow as tf
# Create a graph.
g = tf.Graph()
# Establish the graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of the following three operations:
# * Two tf.constant operations to create the operands.
# * One tf.add operation to add the two operands.
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
my_sum = tf.add(x, y, name="x_y_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
print(my_sum.eval())
Explanation: Don't forget to execute the preceding code block (the import statements).
Other common import statements include the following:
import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np # Low-level numerical Python library.
import pandas as pd # Higher-level numerical Python library.
TensorFlow provides a default graph. However, we recommend explicitly creating your own Graph instead to facilitate tracking state (e.g., you may wish to work with a different Graph in each cell).
End of explanation
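For contrast, a minimal sketch (assuming TensorFlow 1.x, as used in this notebook) that relies on the implicit default graph instead of creating an explicit tf.Graph():
import tensorflow as tf

# Operations created without an explicit Graph are added to the default graph.
a = tf.constant(3, name="a_const")
b = tf.constant(4, name="b_const")
total = tf.add(a, b, name="total")

with tf.Session() as sess:
    # Running the session evaluates the requested tensor on the default graph.
    print(sess.run(total))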
# Create a graph.
g = tf.Graph()
# Establish our graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of three operations.
# (Creating a tensor is an operation.)
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
my_sum = tf.add(x, y, name="x_y_sum")
# Task 1: Define a third scalar integer constant z.
z = tf.constant(4, name="z_const")
# Task 2: Add z to `my_sum` to yield a new sum.
new_sum = tf.add(my_sum, z, name="x_y_z_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
# Task 3: Ensure the program yields the correct grand total.
print(new_sum.eval())
Explanation: Exercise: Introduce a Third Operand
Revise the above code listing to add three integers, instead of two:
Define a third scalar integer constant, z, and assign it a value of 4.
Add z to my_sum to yield a new sum.
Hint: See the API docs for tf.add() for more details on its function signature.
Re-run the modified code block. Did the program generate the correct grand total?
Solution
Click below for the solution.
End of explanation |
10,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 8 - pre-class assignment
Goals for today's pre-class assignment
Use complex if statements and loops to make decisions in a computer program
Assignment instructions
Watch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step6: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("8_wSb927nH0",width=640,height=360) # Complex 'if' statements
Explanation: Day 8 - pre-class assignment
Goals for today's pre-class assignment
Use complex if statements and loops to make decisions in a computer program
Assignment instructions
Watch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 8. Submission instructions can be found at the end of the notebook.
End of explanation
# put your code here.
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("MzZCeHB0CbE",width=640,height=360) # Complex loops
Explanation: Question 1: In the cell below, use numpy's 'arange' method to create an array filled with all of the integers between 1 and 10 (inclusive). Loop through the array, and use if/elif/else to:
Print out if the number is even or odd.
Print out if the number is divisible by 3.
Print out if the number is divisible by 5.
If the number is not divisible by either 3 or 5, print out "wow, that's disappointing."
Note 1: You may need more than one if/elif/else statement to do this!
Note 2: If you have a numpy array named my_numpy_array, you don't necessarily have to use the numpy nditer method. You can loop using the standard python syntax as well. In other words:
for val in my_numpy_array:
print(val)
will work just fine.
End of explanation
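One possible structure for this exercise (a sketch, not the only valid answer):
import numpy as np

numbers = np.arange(1, 11)  # integers 1 through 10, inclusive
for num in numbers:
    if num % 2 == 0:
        print(num, "is even")
    else:
        print(num, "is odd")
    if num % 3 == 0:
        print(num, "is divisible by 3")
    if num % 5 == 0:
        print(num, "is divisible by 5")
    if num % 3 != 0 and num % 5 != 0:
        print("wow, that's disappointing.")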
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
Explanation: Question 2: In the space below, loop through the given array, breaking when you get to the first negative number. Print out the value you're examining after you check for negative numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for negative numbers. What is that variable equal to after the loop?
End of explanation
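A sketch of the break pattern described above, using the my_list defined in the cell shown earlier:
running_total = 0
for value in my_list:
    if value < 0:
        break  # stop at the first negative number
    print(value)
    running_total += value
print("running total:", running_total)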
# put your code here
Explanation: Question 3: In the space below, loop through the array given above, skipping every even number with the continue statement. Print out the value you're examining after you check for even numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for even numbers. What is that variable equal to after the loop?
End of explanation
# put your code here!
Explanation: Question 4: Copy and paste your code from question #2 above and change it in two ways:
Modify the numbers in the array so the if/break statement is never called.
There is an else clause after the end of the loop (not the end of the if statement!) that prints out "yay, success!" if the loop completes successfully, but not if it breaks.
Verify that if you use the original array, the print statement in the else clause doesn't work!
End of explanation
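A sketch of the for/else pattern being described, using a hypothetical all-positive list so the break is never triggered:
all_positive = [1, 3, 17, 23, 9, 4, 2, 2, 11, 4, 7]
total = 0
for value in all_positive:
    if value < 0:
        break
    print(value)
    total += value
else:
    print("yay, success!")  # only runs if the loop never hit the break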
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/l7LqskZxIADofpZy2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
10,389 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
display a few of images from dataset using imshow()
| Python Code::
import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(5,5))
for images, labels in ds.take(1):
for i in range(3):
for j in range(3):
ax[i][j].imshow(images[i*3+j].numpy().astype("uint8"))
ax[i][j].set_title(ds.class_names[labels[i*3+j]])
plt.show()
|
10,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Recreate this scatter plot of b vs a. Note the color and size of the points. Also note the figure size. See if you can figure out how to stretch it in a similar fashion. Remeber back to your matplotlib lecture...
Step2: Create a histogram of the 'a' column.
Step3: These plots are okay, but they don't look very polished. Use style sheets to set the style to 'ggplot' and redo the histogram from above. Also figure out how to add more bins to it.*
Step4: Create a boxplot comparing the a and b columns.
Step5: Create a kde plot of the 'd' column
Step6: Figure out how to increase the linewidth and make the linestyle dashed. (Note | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('df3')
%matplotlib inline
df3.info()
df3.head()
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Pandas Data Visualization Exercise
This is just a quick exercise for you to review the various plots we showed earlier. Use df3 to replicate the following plots.
End of explanation
df3.plot.scatter(x='a',y='b',s=50,c='red',figsize=(12,3))
Explanation: Recreate this scatter plot of b vs a. Note the color and size of the points. Also note the figure size. See if you can figure out how to stretch it in a similar fashion. Remember back to your matplotlib lecture...
End of explanation
df3['a'].plot.hist()
Explanation: Create a histogram of the 'a' column.
End of explanation
plt.style.use('ggplot')
df3['a'].plot.hist(bins=20,alpha=0.5)
Explanation: These plots are okay, but they don't look very polished. Use style sheets to set the style to 'ggplot' and redo the histogram from above. Also figure out how to add more bins to it.
End of explanation
df3[['a','b']].plot.box()
Explanation: Create a boxplot comparing the a and b columns.
End of explanation
df3['d'].plot.kde()
df3['d'].plot.kde(lw='4',ls='--')
Explanation: Create a kde plot of the 'd' column
End of explanation
df3.ix[0:30].plot.area(alpha=0.5)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
Explanation: Figure out how to increase the linewidth and make the linestyle dashed. (Note: You would usually not dash a kde plot line)
Create an area plot of all the columns for just the rows up to 30. (hint: use .ix).
End of explanation |
10,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AST 337 In-Class Lab #2
Wednesday, September 13, 2017
Names
Step1: The first step is to download the datasets we need from VizieR for the following clusters
Step2: As before, we would like to check the table and its contents
Step3: And we would like to check the column headers of the dataset and the data types (dtypes). Do so in the following two cells for the M22 dataset.
Step4: Calculating values for color-magnitude diagrams (CMDs)
We are most interested in the measured B and V magnitude columns for the stars in these clusters. However, these are both apparent magnitudes, and we want to use absolute magnitude for the V vs. B-V color-magnitude diagram.
Therefore, for each dataset we downloaded, we will need to do four things
Step5: In pandas, dataframes have both "heads" and "tails". What do you expect the "tail" method to do? Try in the cell below.
Now add a new column for B-V color to the existing M22 dataframe. We can label this column whatever we like, as long as the label isn't already used for a different column -- if the column label already exists, it will overwrite it!
Let's label the new column "BVcolor", the values of which are the differences between values in the existing B and V columns
Step6: Did that work? Check by viewing the table in the cell below.
Step7: What if we wanted to calculate the V-I color instead of B-V? Add a new V-I column to the dataframe, and check to ensure the dataframe has updated
Step8: A brief overview of functions in python
Programs often involve tasks that must be done repetitively, or there are tasks that we want to perform that are common to many programs. We will write a function that uses the distance modulus equation and calculates absolute magnitudes.
An example of a standard python function is the logarithm function, which is built into python within the numpy package (short for "numerical python"). We saw last week that np.log10 takes the base 10 logarithm of the input, and returns the corresponding power
Step10: There are many, many reasons why one might want to take the log of something, so it is useful to have the log function defined once and for all in a standard python package. This way, any program that needs to take the log can do so, rather than having the user come up with it again and again. But what if the function we want to use does not exist in python?
The capability that we are seeking is provided by defining new functions. This allows us to make our own functions that are just like the log function, and can be called in a similar way. Functions are defined by the following syntax
Step11: This defines a very simple function. Let's walk through this declaration step by step
Step12: Note that myfunc includes something called a "docstring" (denoted with the triple quotations at the start and end). This is a description of what the function does and is visible when call that function with a question mark (as below). Many companies (e.g. Google) have extensive rules about what should be included in a docstring. For example, <a href="http
Step13: Try writing a simple function below called test_function that takes two numbers a and b as input, multiplies them togther, then divides the product by 2.0, and returns the answer.
Step14: If all went according to plan, the following should return the value 42
Step15: Using a function to calculate absolute magnitude
Recall that the distance modulus equation is as follows
Step16: Let's test your new function with the Sun, which has an apparent magnitude of -26.74. The Sun is, on average, located at a distance of 4.848e-6 pc from Earth.
Question
Step17: Now that we have a handy function to calculate absolute magnitudes from apparent ones, we can add a new column for absolute magnitude to our existing dataframe. First, we'll need the approximate distances to each of the clusters, provided here.
Step18: Now we will add a new column for absolute magnitude, Mv to our existing M22 dataframe. Use your new absmagcalc function to calculate the absolute magnitudes from the distance and existing apparent V magnitude column, and provide the output for this new column below
Step19: In the cell below, check your dataframe to see if it has been updated with the new column
Step20: We are now ready to plot!
Plotting from a pandas dataframe
Using the matplotlib.pyplot skills we learned last week, we can now plot our first color magnitude diagram. One convenient aspect of pandas is that we can plot columns taken directly from the dataframe itself. For example, for M22
Step21: Exercise 2 (Additional Clusters)
Now you will read in the data for the other clusters you downloaded from VizieR at the beginning of the lab and plot these data. For each cluster
Step22: Exercise 3 (Comprehension Questions)
<p>1) How do the color-magnitude diagrams (CMDs) you have plotted compare to the H-R diagrams from last week? What features are similar? What features are different?</p>
-
<p>2) Why do you think there is so much scatter in the observational data? Do you think this is an astrophysical effect or an instrumental one? What are some potential sources of error? </p>
-
<p>3) Which clusters do you think are older or younger? Rank NGC 188, M4, M22, and M67 by relative age. How can you tell?</p>
-
<p>4) Why might the main sequences be offset for each of the clusters? (Hint | Python Code:
# Load packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: AST 337 In-Class Lab #2
Wednesday, September 13, 2017
Names: [insert your names here]
In this lab, you will (1) work directly with published astronomical data from the VizieR database, (2) continue using pandas and matplotlib to read, manipulate, and plot datasets, and (3) gain experience writing functions.
On the science end, you will create color-magnitude diagrams -- the observer's H-R diagram -- for various stellar populations, compare the clusters, and relate these to the results from last week.
End of explanation
m22 = pd.read_csv('M22.tsv')
Explanation: The first step is to download the datasets we need from VizieR for the following clusters:
* M22 (globular cluster) -- BVI photometry of M22 (Monaco+, 2004) -- we'll download this one together as a class
* M4 (globular cluster) -- M4 UBV color-magnitude diagrams (Mochejska+, 2002)
* M67 (open cluster) -- BVI photometry in M67 (Stassun+, 2002)
* NGC 188 (open cluster) -- A star catalog for the open cluster NGC 188 (Stetson+, 2004)
Save each of these as semicolon delimited .tsv files in the directory with this notebook.
First, we'll need to read the datasets into pandas, as we did in last week's lab. However, VizieR gives us more information than just the raw data to look at, so we'll need to give pd.read_csv additional information so it can parse the datafile.
Let's first look at the actual M22 datafile, by opening the file in Jupyter's text editor mode. Go back to the browser tab with the directory containing this notebook, and double-click on the file you just saved, M22.tsv. Take a look at the contents of the file, then come back to this notebook.
Questions:
<p>1) What information is contained in the header?</p>
-
<p>2) How many commented "#" lines are there before the dataset begins? (see the line numbers)</p>
-
As useful as the header information is for us, pandas only needs to know where the data values start and what the column headers are.
To help pandas parse the data easily, edit the text file in the other Jupyter tab to add the '#' symbol at the beginnings of the two rows describing the column units and dashed lines. Be sure to save the text file!
We will now tell pandas to skip any commented rows and that the file is semicolon delimited by adding parameters to the read_csv function, separated by commas:
* comment = '#'
* delimiter = ';'
EDIT the cell below to add these parameters to the regular pd.read_csv command, then run the cell to read in the file.
End of explanation
m22
Explanation: As before, we would like to check the table and its contents:
End of explanation
# Check the columns here
# Check the datatypes here. Do any columns need to be converted from object to float?
Explanation: And we would like to check the column headers of the dataset and the data types (dtypes). Do so in the following two cells for the M22 dataset.
End of explanation
m22.head()
Explanation: Calculating values for color-magnitude diagrams (CMDs)
We are most interested in the measured B and V magnitude columns for the stars in these clusters. However, these are both apparent magnitudes, and we want to use absolute magnitude for the V vs. B-V color-magnitude diagram.
Therefore, for each dataset we downloaded, we will need to do four things:
Read in the datafiles.
Ensure the data columns we want to manipulate have the appropriate data type. (Depending on the dataset, we might need to use the pd.to_numeric function, as in Lab1.)
Use the apparent magnitude and distance to calculate the absolute V magnitude (Y-axis proxy for luminosity).
Use the B and V magnitudes to calculate the color (X-axis proxy for temperature).
In the next steps, we are going to calculate new pandas series for absolute V magnitude and B-V color and add them to our existing M22 dataframe.
Questions:
<p>3) What quantities are needed to calculate an absolute magnitude?</p>
-
<p>4) Do we need to use apparent magnitudes or absolute magnitudes to calculate the B-V color? Why?</p>
-
Adding a new column to an existing pandas dataframe
For the X-axis of our CMD, we want to add a column to our existing data frame with the new calculated B-V value. With pandas, this is very simple -- all we have to do is define a new column label, and subtract the existing columns!
Let's first remind ourselves what the M22 dataframe/table looks like right now. We can view a snippet of the top of the table by using the following method, which shows the first five rows only:
End of explanation
m22['BVcolor'] = m22['Bmag'] - m22['Vmag']
Explanation: In pandas, dataframes have both "heads" and "tails". What do you expect the "tail" method to do? Try in the cell below.
Now add a new column for B-V color to the existing M22 dataframe. We can label this column whatever we like, as long as the label isn't already used for a different column -- if the column label already exists, it will overwrite it!
Let's label the new column "BVcolor", the values of which are the differences between values in the existing B and V columns:
End of explanation
m22.head() # Could also do m22.columns
Explanation: Did that work? Check by viewing the table in the cell below.
End of explanation
# Calculate a new column labeled "VIcolor"
# Check for the updated dataframe column
Explanation: What if we wanted to calculate the V-I color instead of B-V? Add a new V-I column to the dataframe, and check to ensure the dataframe has updated:
End of explanation
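A possible one-liner, assuming the I-band apparent magnitudes are stored in a column named 'Imag' (check m22.columns for the actual name):
m22['VIcolor'] = m22['Vmag'] - m22['Imag']
m22.head()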
input_value = 1000.0 # define a variable
return_value = np.log10(input_value) # use that variable within a function
print(return_value) # print the output of the function, which has been saved to the new variable "return_value"
Explanation: A brief overview of functions in python
Programs often involve tasks that must be done repetitively, or there are tasks that we want to perform that are common to many programs. We will write a function that uses the distance modulus equation and calculates absolute magnitudes.
An example of a standard python function is the logarithm function, which is built into python within the numpy package (short for "numerical python"). We saw last week that np.log10 takes the base 10 logarithm of the input, and returns the corresponding power:
End of explanation
def myfunc(arg1, arg2):
"""This is a function that does nothing in particular"""
print("I am a function! Here are my arguments:")
print(arg1)
print(arg2)
print("I am returning my first argument now!")
return(arg1)
Explanation: There are many, many reasons why one might want to take the log of something, so it is useful to have the log function defined once and for all in a standard python package. This way, any program that needs to take the log can do so, rather than having the user come up with it again and again. But what if the function we want to use does not exist in python?
The capability that we are seeking is provided by defining new functions. This allows us to make our own functions that are just like the log function, and can be called in a similar way. Functions are defined by the following syntax:
End of explanation
myfunc('star', 3.14159265)
Explanation: This defines a very simple function. Let's walk through this declaration step by step:
The first line begins with def, then the name of the function, and then in parentheses a list of arguments for the function, then a colon. Arguments are inputs to the function. For example, the np.sin function takes an angle as an argument, and calculates the sine of that angle. In this example our function has two arguments. The number of arguments is arbitrary, and can be zero, in which case the parentheses are just left empty. It is also possible to write functions where the number of arguments is variable, and need not be the same every time the function is called.
After the define line, we begin the body of the function. Note that all the lines in the function body are indented. This indentation is IMPORTANT. In python, indentation is used to indicate that a particular line belongs to a particular function, loop, or other block of code. All the lines of the function are indented four spaces. If you're entering this manually in ipython, either at the command line or in the notebook, you don't need to type in those four spaces by hand; the ipython shell will automatically enter them for you after seeing the def line. If you're using emacs or another text editor, you can just hit the tab key and the correct number of spaces will be entered for you.
Within the body of the function, we can enter whatever commands we like. We can print things, for example. Or do a calculation. The arguments that appeared in parentheses in the definition are accessible within the function, and can be manipulated however we like.
At the end of the function, we have a statement that begins with return. A return statement causes the function to give back a value, which the calling program can print, assign to a variable, or do something else with. For example, the np.log10 function returns the base 10 log for any positive number (or array of numbers!). Return values are optional: functions don't have to return anything, and can just end.
OK, with that in mind, let's run the myfunc function above, with a string and a float number as input arguments:
End of explanation
myfunc?
Explanation: Note that myfunc includes something called a "docstring" (denoted with the triple quotations at the start and end). This is a description of what the function does and is visible when you call that function with a question mark (as below). Many companies (e.g. Google) have extensive rules about what should be included in a docstring. For example, <a href="http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html">here</a> is a sample Google docstring.
Generally speaking, docstrings should include a description of what the function does, some notes about the input and output, and specifics about any optional inputs ("keywords") and what they do. Keep your eye out for these as we proceed as we'll be asking you to include docstrings with all of the functions that you write this semester.
You can view the docstring for any function in python (built-in ones, and ones you write yourself!) using the question mark. Try it below:
End of explanation
def test_function(a,b):
# Your docstring goes here, in triple quotations
# Your code goes here
# Your return statement goes here
Explanation: Try writing a simple function below called test_function that takes two numbers a and b as input, multiplies them together, then divides the product by 2.0, and returns the answer.
End of explanation
answer_to_life_the_universe_and_everything = test_function(28,3)
print(answer_to_life_the_universe_and_everything)
Explanation: If all went according to plan, the following should return the value 42:
End of explanation
# Write your function here (don't forget a docstring!):
Explanation: Using a function to calculate absolute magnitude
Recall that the distance modulus equation is as follows:
$M - m = -5 \log_{10}(d) + 5$
In the cell below, write a function called absmagcalc that takes in two variables (distance in parsecs and apparent magnitude), calculates the absolute magnitude, then returns the value of the absolute magnitude.
End of explanation
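One possible sketch of such a function (argument order assumed here to be distance first, then apparent magnitude):
import numpy as np

def absmagcalc(distance, app_mag):
    """Return the absolute magnitude for a distance in parsecs and an apparent magnitude."""
    return app_mag - 5.0 * np.log10(distance) + 5.0

print(absmagcalc(10.0, 5.0))  # an object at 10 pc has M equal to its apparent magnitude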
# Use your new function here:
Explanation: Let's test your new function with the Sun, which has an apparent magnitude of -26.74. The Sun is, on average, located at a distance of 4.848e-6 pc from Earth.
Question:
5) What is the absolute magnitude of the Sun?
End of explanation
# all values in parsecs
dist_m22 = 3000.0
dist_ngc188 = 1770.0
dist_m67 = 850.0
dist_m4 = 1904.5
Explanation: Now that we have a handy function to calculate absolute magnitudes from apparent ones, we can add a new column for absolute magnitude to our existing dataframe. First, we'll need the approximate distances to each of the clusters, provided here.
End of explanation
# Edit to continue this calculation using the absmagcalc function you defined
m22['Mv'] =
Explanation: Now we will add a new column for absolute magnitude, Mv to our existing M22 dataframe. Use your new absmagcalc function to calculate the absolute magnitudes from the distance and existing apparent V magnitude column, and provide the output for this new column below:
End of explanation
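One way to complete the cell above, assuming the absmagcalc signature sketched earlier (distance first, then apparent magnitude):
m22['Mv'] = absmagcalc(dist_m22, m22['Vmag'])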
# Check the dataframe again -- do you have all the columns you need?
Explanation: In the cell below, check your dataframe to see if it has been updated with the new column:
End of explanation
# Plot your data directly from the m22 dataframe below:
Explanation: We are now ready to plot!
Plotting from a pandas dataframe
Using the matplotlib.pyplot skills we learned last week, we can now plot our first color magnitude diagram. One convenient aspect of pandas is that we can plot columns taken directly from the dataframe itself. For example, for M22:
* the X-axis is the series: m22['BVcolor']
* the Y-axis is the series: m22['Mv']
In the following exercises, you will plot a color-magnitude diagram for M22, and then load and manipulate new pandas dataframes for two open clusters and a globular cluster.
Exercise 1
Plot the V vs. B-V color-magnitude diagram for M22. Scale the plot as necessary to show all of the data clearly (there are a lot of data points, so you may want to make the symbols small using markersize = 3 or another size). Don't forget to add axes labels+units and a title.
Hint #1: When scaling the axes, think about how this plot is analogous to the H-R diagrams from last week. Which way should the axes go?
Hint #2: Using a subscript for the Y-axis, you can use M$_{V}$ to indicate absolute magnitude (double-click the cell to see).
(For plotting methods, you may find it useful to refer back to Homework0 and Lab1.)
End of explanation
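A minimal plotting sketch (the axis handling is the key point: brighter stars have smaller magnitudes, so the y-axis is inverted):
plt.figure(figsize=(8, 6))
plt.plot(m22['BVcolor'], m22['Mv'], 'k.', markersize=3)
plt.gca().invert_yaxis()  # bright (small M_V) at the top
plt.xlabel('B - V (mag)')
plt.ylabel('M$_{V}$ (mag)')
plt.title('M22 Color-Magnitude Diagram')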
# Load datafiles into new pandas dataframes. Be sure to check the datatypes -- if necessary, use pd.to_numeric.
# Calculate B-V colors for NGC 188 and M67.
# Freebie! For M4, the B-V values are already provided in the table -- no need to calculate this one.
# Calculate absolute V magnitudes (recall that the distances to each cluster are given earlier in the lab)
# Make a multipanel plot showing each of the four clusters.
# In each of the panel titles, put the name of the cluster and its type (e.g., "M22 - globular"):
fig,((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10,10))
fig.suptitle('Globular and Open Cluster Comparison')
# Continue the rest of the plotting below
# Make an overlay plot showing all four clusters on the same axes.
# Hint: You may want to try plotting the datasets in various orders to make sure all datasets are visible.
Explanation: Exercise 2 (Additional Clusters)
Now you will read in the data for the other clusters you downloaded from VizieR at the beginning of the lab and plot these data. For each cluster:
1. Comment the datafile as needed
2. Load the datafile using pandas (pd.read_csv)
3. Calculate the B-V color
4. Use your absolute magnitude function to calculate M$_{V}$ from the V mag and the distance to each cluster
5. Make plots! (multipanel and overlay)
End of explanation
# Bonus Plot
Explanation: Exercise 3 (Comprehension Questions)
<p>1) How do the color-magnitude diagrams (CMDs) you have plotted compare to the H-R diagrams from last week? What features are similar? What features are different?</p>
-
<p>2) Why do you think there is so much scatter in the observational data? Do you think this is an astrophysical effect or an instrumental one? What are some potential sources of error? </p>
-
<p>3) Which clusters do you think are older or younger? Rank NGC 188, M4, M22, and M67 by relative age. How can you tell?</p>
-
<p>4) Why might the main sequences be offset for each of the clusters? (Hint: How would uncertainty in a measured/estimated value shift values up/down on the Y-axis?)</p>
-
<p>5) Bonus question, if there's time: Earlier, we also calculated a V-I color column for M22. Plot the V vs. V-I CMD for M22. Why do you think the plot looks different from the V vs. B-V one?</p>
-
End of explanation |
10,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical Moments - Skewness and Kurtosis
Bonus
Step1: Sometimes mean and variance are not enough to describe a distribution. When we calculate variance, we square the deviations around the mean. In the case of large deviations, we do not know whether they are likely to be positive or negative. This is where the skewness and symmetry of a distribution come in. A distribution is <i>symmetric</i> if the parts on either side of the mean are mirror images of each other. For example, the normal distribution is symmetric. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined as
$$ f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
We can plot it to confirm that it is symmetric
Step2: A distribution which is not symmetric is called <i>skewed</i>. For instance, a distribution can have many small positive and a few large negative values (negatively skewed) or vice versa (positively skewed), and still have a mean of 0. A symmetric distribution has skewness 0. Positively skewed unimodal (one mode) distributions have the property that mean > median > mode. Negatively skewed unimodal distributions are the reverse, with mean < median < mode. All three are equal for a symmetric unimodal distribution.
The explicit formula for skewness is
Step3: Although skew is less obvious when graphing discrete data sets, we can still compute it. For example, below are the skew, mean, and median for S&P 500 returns 2012-2014. Note that the skew is negative, and so the mean is less than the median.
Step4: Kurtosis
Kurtosis attempts to measure the shape of the deviation from the mean. Generally, it describes how peaked a distribution is compared the the normal distribution, called mesokurtic. All normal distributions, regardless of mean and variance, have a kurtosis of 3. A leptokurtic distribution (kurtosis > 3) is highly peaked and has fat tails, while a platykurtic distribution (kurtosis < 3) is broad. Sometimes, however, kurtosis in excess of the normal distribution (kurtosis - 3) is used, and this is the default in scipy. A leptokurtic distribution has more frequent large jumps away from the mean than a normal distribution does while a platykurtic distribution has fewer.
Step5: The formula for kurtosis is
$$ K = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) $$
while excess kurtosis is given by
$$ K_E = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) - \frac{3(n-1)^2}{(n-2)(n-3)} $$
For a large number of samples, the excess kurtosis becomes approximately
$$ K_E \approx \frac{1}{n} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} - 3 $$
Since above we were considering perfect, continuous distributions, this was the form that kurtosis took. However, for a set of samples drawn from the normal distribution, we would use the first definition, and (excess) kurtosis would only be approximately 0.
We can use scipy to find the excess kurtosis of the S&P 500 returns from before.
Step6: The histogram of the returns shows significant observations beyond 3 standard deviations away from the mean, multiple large spikes, so we shouldn't be surprised that the kurtosis is indicating a leptokurtic distribution.
Other standardized moments
It's no coincidence that the variance, skewness, and kurtosis take similar forms. They are the first and most important standardized moments, of which the $k$th has the form
$$ \frac{E[(X - E[X])^k]}{\sigma^k} $$
The first standardized moment is always 0 $(E[X - E[X]] = E[X] - E[E[X]] = 0)$, so we only care about the second through fourth. All of the standardized moments are dimensionless numbers which describe the distribution, and in particular can be used to quantify how close to normal (having standardized moments $0, \sigma, 0, \sigma^2$) a distribution is.
Normality Testing Using Jarque-Bera
The Jarque-Bera test is a common statistical test that compares whether sample data has skewness and kurtosis similar to a normal distribution. We can run it here on the S&P 500 returns to find the p-value for them coming from a normal distribution.
The Jarque Bera test's null hypothesis is that the data came from a normal distribution. Because of this it can err on the side of not catching a non-normal process if you have a low p-value. To be safe it can be good to increase your cutoff when using the test.
Remember to treat p-values as binary and not try to read into them or compare them. We'll use a cutoff of 0.05 for our p-value.
Test Calibration
Remember that each test is written a little differently across different programming languages. You might not know if it's the null or alternative hypothesis that the tested data come from a normal distribution. It is recommended that you use the ? notation plus online searching to find documentation on the test; plus it is often a good idea to calibrate a test by checking it on simulated data and making sure it gives the right answer. Let's do that now.
Step7: Great, if properly calibrated we should expect to be wrong $5\%$ of the time at a 0.05 significance level, and this is pretty close. This means that the test is working as we expect. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
Explanation: Statistical Moments - Skewness and Kurtosis
Bonus: Jarque-Bera Normality Test
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
End of explanation
# Plot a standard normal distribution with mean = 0 and standard deviation = 1
xs = np.linspace(-6,6, 300)
normal = stats.norm.pdf(xs)
plt.plot(xs, normal);
Explanation: Sometimes mean and variance are not enough to describe a distribution. When we calculate variance, we square the deviations around the mean. In the case of large deviations, we do not know whether they are likely to be positive or negative. This is where the skewness and symmetry of a distribution come in. A distribution is <i>symmetric</i> if the parts on either side of the mean are mirror images of each other. For example, the normal distribution is symmetric. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined as
$$ f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
We can plot it to confirm that it is symmetric:
End of explanation
# Generate x-values for which we will plot the distribution
xs2 = np.linspace(stats.lognorm.ppf(0.01, .7, loc=-.1), stats.lognorm.ppf(0.99, .7, loc=-.1), 150)
# Positively skewed distribution
lognormal = stats.lognorm.pdf(xs2, .7)
plt.plot(xs2, lognormal, label='Skew > 0')
# Negatively skewed distribution
plt.plot(xs2, lognormal[::-1], label='Skew < 0')
plt.legend();
Explanation: A distribution which is not symmetric is called <i>skewed</i>. For instance, a distribution can have many small positive and a few large negative values (negatively skewed) or vice versa (positively skewed), and still have a mean of 0. A symmetric distribution has skewness 0. Positively skewed unimodal (one mode) distributions have the property that mean > median > mode. Negatively skewed unimodal distributions are the reverse, with mean < median < mode. All three are equal for a symmetric unimodal distribution.
The explicit formula for skewness is:
$$ S_K = \frac{n}{(n-1)(n-2)} \frac{\sum_{i=1}^n (X_i - \mu)^3}{\sigma^3} $$
Where $n$ is the number of observations, $\mu$ is the arithmetic mean, and $\sigma$ is the standard deviation. The sign of this quantity describes the direction of the skew as described above. We can plot a positively skewed and a negatively skewed distribution to see what they look like. For unimodal distributions, a negative skew typically indicates that the tail is fatter on the left, while a positive skew indicates that the tail is fatter on the right.
End of explanation
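As a quick self-contained check (synthetic data, so the exact number will vary), scipy's stats.skew implements this sample skewness:
samples = np.random.lognormal(mean=0.0, sigma=0.7, size=10000)
print 'Skew of a lognormal sample:', stats.skew(samples)  # positive, as expected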
start = '2012-01-01'
end = '2015-01-01'
pricing = get_pricing('SPY', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:]
print 'Skew:', stats.skew(returns)
print 'Mean:', np.mean(returns)
print 'Median:', np.median(returns)
plt.hist(returns, 30);
Explanation: Although skew is less obvious when graphing discrete data sets, we can still compute it. For example, below are the skew, mean, and median for S&P 500 returns 2012-2014. Note that the skew is negative, and so the mean is less than the median.
End of explanation
# Plot some example distributions
plt.plot(xs,stats.laplace.pdf(xs), label='Leptokurtic')
print 'Excess kurtosis of leptokurtic distribution:', (stats.laplace.stats(moments='k'))
plt.plot(xs, normal, label='Mesokurtic (normal)')
print 'Excess kurtosis of mesokurtic distribution:', (stats.norm.stats(moments='k'))
plt.plot(xs,stats.cosine.pdf(xs), label='Platykurtic')
print 'Excess kurtosis of platykurtic distribution:', (stats.cosine.stats(moments='k'))
plt.legend();
Explanation: Kurtosis
Kurtosis attempts to measure the shape of the deviation from the mean. Generally, it describes how peaked a distribution is compared the the normal distribution, called mesokurtic. All normal distributions, regardless of mean and variance, have a kurtosis of 3. A leptokurtic distribution (kurtosis > 3) is highly peaked and has fat tails, while a platykurtic distribution (kurtosis < 3) is broad. Sometimes, however, kurtosis in excess of the normal distribution (kurtosis - 3) is used, and this is the default in scipy. A leptokurtic distribution has more frequent large jumps away from the mean than a normal distribution does while a platykurtic distribution has fewer.
End of explanation
print "Excess kurtosis of returns: ", stats.kurtosis(returns)
Explanation: The formula for kurtosis is
$$ K = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) $$
while excess kurtosis is given by
$$ K_E = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) - \frac{3(n-1)^2}{(n-2)(n-3)} $$
For a large number of samples, the excess kurtosis becomes approximately
$$ K_E \approx \frac{1}{n} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} - 3 $$
Since above we were considering perfect, continuous distributions, this was the form that kurtosis took. However, for a set of samples drawn from the normal distribution, we would use the first definition, and (excess) kurtosis would only be approximately 0.
We can use scipy to find the excess kurtosis of the S&P 500 returns from before.
End of explanation
from statsmodels.stats.stattools import jarque_bera
N = 1000
M = 1000
pvalues = np.ndarray((N))
for i in range(N):
# Draw M samples from a normal distribution
X = np.random.normal(0, 1, M);
_, pvalue, _, _ = jarque_bera(X)
pvalues[i] = pvalue
# count number of pvalues below our default 0.05 cutoff
num_significant = len(pvalues[pvalues < 0.05])
print float(num_significant) / N
Explanation: The histogram of the returns shows significant observations beyond 3 standard deviations away from the mean, multiple large spikes, so we shouldn't be surprised that the kurtosis is indicating a leptokurtic distribution.
Other standardized moments
It's no coincidence that the variance, skewness, and kurtosis take similar forms. They are the first and most important standardized moments, of which the $k$th has the form
$$ \frac{E[(X - E[X])^k]}{\sigma^k} $$
The first standardized moment is always 0 $(E[X - E[X]] = E[X] - E[E[X]] = 0)$, so we only care about the second through fourth. All of the standardized moments are dimensionless numbers which describe the distribution, and in particular can be used to quantify how close to normal (having standardized moments $0, \sigma, 0, \sigma^2$) a distribution is.
Normality Testing Using Jarque-Bera
The Jarque-Bera test is a common statistical test that compares whether sample data has skewness and kurtosis similar to a normal distribution. We can run it here on the S&P 500 returns to find the p-value for them coming from a normal distribution.
The Jarque Bera test's null hypothesis is that the data came from a normal distribution. Because of this it can err on the side of not catching a non-normal process if you have a low p-value. To be safe it can be good to increase your cutoff when using the test.
Remember to treat p-values as binary and not try to read into them or compare them. We'll use a cutoff of 0.05 for our p-value.
Test Calibration
Remember that each test is written a little differently across different programming languages. You might not know if it's the null or alternative hypothesis that the tested data come from a normal distribution. It is recommended that you use the ? notation plus online searching to find documentation on the test; plus it is often a good idea to calibrate a test by checking it on simulated data and making sure it gives the right answer. Let's do that now.
End of explanation
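The calibration above checks the false-positive rate; the mirror-image check (an addition, not in the original lecture) is to confirm the test does reject a clearly non-normal, fat-tailed process, simulated here with a Student's t distribution.
import numpy as np
from statsmodels.stats.stattools import jarque_bera
rejections = 0
for i in range(1000):
    X = np.random.standard_t(df=3, size=1000)  # heavy-tailed, so not normal
    _, pvalue, _, _ = jarque_bera(X)
    if pvalue < 0.05:
        rejections += 1
print('Fraction of t-samples rejected as non-normal:', rejections / 1000.0)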
_, pvalue, _, _ = jarque_bera(returns)
if pvalue > 0.05:
    print('The returns are likely normal.')
else:
    print('The returns are likely not normal.')
Explanation: Great, if properly calibrated we should expect to be wrong $5\%$ of the time at a 0.05 significance level, and this is pretty close. This means that the test is working as we expect.
End of explanation |
10,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1
Assignment
Train a sklearn.ensemble.RandomForestClassifier that given a soccer player description outputs his skin color.
- Show how different parameters passed to the Classifier affect the overfitting issue.
- Perform cross-validation to mitigate the overfitting of your model.
Once you assessed your model,
- inspect the feature_importances_ attribute and discuss the obtained results.
- With different assumptions on the data (e.g., dropping certain features even before feeding them to the classifier), can you obtain a substantially different feature_importances_ attribute?
Plan
First we will just look at the Random Forest classifier without any parameters (just use the defaults) -> gives very good scores.
Look a bit at the feature_importances
Then we see that it is better to aggregate the data by player (We can't show overfitting with 'flawed' data and very good scores, so we first aggregate)
Load the data aggregated by player
Look again at the classifier with default parameters
Show the effect of some parameters to overfitting and use that to...
...find acceptable parameters
Inspect the feature_importances and discuss the results
At the end we look very briefly at other classifiers.
Note that we use the values 1, 2, 3, 4, 5 or WW, W, N, B, BB interchangeably for the skin color categories of the players
Step1: Load the preprocessed data and look at it. We preprocess the data in the HW01-1-Preprocessing notebook.
The data is already encoded to be used for the RandomForestClassifier.
Step2: First we just train and test the preprocessed data with the default values of the Random Forest to see what happens.
For this first model, we will use all the features (color_rating) and then we will observe which are the most important.
Step3: Quite good results...
Observe the important features
Step4: We can see that the most important features are
Step5: Indeed, some players appear around 200 times, so it is easy to determine the skin color of the player djibril cisse if he appears both in the training set and in the test set. In reality, however, the probability of meeting a second djibril cisse with the same birthday and the same skin color is essentially zero.
The reason these attributes look so important is that rows of the same player appear in both the train and the test set, so the classifier can use them to look the skin color up rather than predict it.
So we drop those attributes and see what happens.
Step6: The accuracy of the classifier dropped a bit, which is no surprise.
Step7: That makes more sense: it is plausible that height differs statistically between skin-color groups, but the club and position should not be that important.
So we decided to aggregate on the player's name, keeping only one row with the personal information of each player.
We do the aggregation in the HW04-1-Preprocessing notebook.
Aggregated data
Load the aggregated data.
Step8: Drop the player-unique features: being unique to each player, they cannot be useful for classification.
Step9: Train the default classifier on the new data and look at the important features
Step10: The results are not very impressive...
Step12: That makes a lot more sense. The features are much more equal and several IAT and EXP are on top.
But before going into more detail, we address the overfitting issue mentioned in the assignment.
Show overfitting issue
The classifier overfits when the training accuracy is much higher than the testing accuracy (the classifier fits the training data too closely and thus generalizes badly).
So we look at the different parameters and discuss how they contribute to the overfitting issue.
To show the impact of each parameter we try different values and plot the train vs test accuracy.
Luckily there is a function for this
Step13: n_estimators
How many trees are used. As expected, more trees improve both the train and test accuracy; however, the test accuracy levels off, so it does not really make sense to use more than about 500 trees (adding trees also means more computation time).
More trees also mean more overfitting. The train accuracy goes almost to 1 while the test stays around 0.42.
min_samples_leaf
The minimum number of samples required to be at a leaf node. The higher this value, the less overfitting. It effectively limits how closely a tree can fit a given training set.
criterion
The function to measure the quality of a split. You can see that 'entropy' scores higher in the test, so we take it, even though gini has a much lower variance.
max_depth
The maximal depth of the tree. The deeper it is allowed to grow, the more the tree overfits. It seems that no tree grows deeper than about 10 anyway, so we won't limit it.
max_leaf_nodes
An upper limit on how many leaf nodes the tree can have. The train accuracy grows until about 400, after which there is no more gain from additional leaf nodes, probably because the trees do not grow that many leaves anyway.
min_samples_split
The minimum number of samples required to split an internal node. Has a similar effect and behaviour as min_samples_leaf.
class_weight
Weights associated with classes. Gives more weight to classes with fewer members. It does not seem to have a big influence.
Note that the third option is None which sets all classes weight to 1.
Find a good classifier
The default classifier achieves about 40% accuracy. This is not much, considering that about 40% of players are in category 2: the classifier is barely better than putting every player into category 2.
So we are going to find better parameters for the classifier.
Based on the plots above and trial and error, we find good parameters for the RandomForestClassifier and look if feature importance changed.
Step15: We can see that the accuracy is only a bit better. But the most important features are even more balanced. The confidence intervals are huge and almost all features could be on top. More importantly, the IAT and EXP features seem to play some role in gaining those 4% of accuracy. But clearly we can't say that there is a big difference between players of different skin colors.
Observe the confusion matrix
Now we observe the confusion matrix to see what the classifier actually does. We split the data into training and testing sets (test set = 25%) and then we train our random forest using the best parameters selected above
Step16: Our model predicts almost only 2 categories instead of 5. It predicts mostly WW or W. This is because we have imbalanced data, and the balancing apparently did not really help. Looking at the true labels in the matrix above, there is clearly a majority of white players. Let's have a look at the exact distribution.
Step17: These 2 histograms show the imbalanced data. Indeed, the first 2 categories represent more than 50% of the data. Let's look at the numbers
Step18: WW and W represent 75% of the data.
Now assume a new classifier that always predicts the W category. This classifier has an accuracy of 40%. It means that our classifier is not much better than always classifying a player as W...
What happens when we do a ternary and binary classification?
Binary Classification
For ternary we put WW and W in one class, N in the second, and B and BB in the last (the classes are then WWW, N and BBB).
For binary we merge the N with the BBB class. -> WWW vs NBBB
Step19: We see that our classifier is only a little bit better than the 'stupid' one. The difference between the ternary and binary classification is also small.
Confusion Matrix of the binary classifier
Step20: Even for the 2 class problem it is hard to predict the colors and the classifier still mostly predicts WWW.
From these results we might conclude that there is just not enough difference between the 'black' and 'white' players to classify them.
Try other classifiers
A quick and short exploration of other classifiers to show that the RandomForest is not the 'wrong' classifier for that problem.
TLDR; They don't do better than the RandomForest.
Step21: Only the AdaBoostClassifier is slightly better than our random forest, probably because it uses our rf_good random forest and combines the results smartly; that might explain the extra 1%.
For the MLP classifier we just tried a few architectures, there might be better ones...
Note that the accuracy score is the result of 5 way cross validation. | Python Code:
# imports
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.pyplot import show
import itertools
# sklearn
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn import preprocessing as pp
from sklearn.model_selection import KFold , cross_val_score, train_test_split, validation_curve
from sklearn.metrics import make_scorer, roc_curve, roc_auc_score, accuracy_score, confusion_matrix
from sklearn.model_selection import learning_curve
import sklearn.preprocessing as preprocessing
%matplotlib inline
sns.set_context('notebook')
pd.options.mode.chained_assignment = None # default='warn'
pd.set_option('display.max_columns', 500) # to see all columns
Explanation: Part 1
Assignment
Train a sklearn.ensemble.RandomForestClassifier that given a soccer player description outputs his skin color.
- Show how different parameters passed to the Classifier affect the overfitting issue.
- Perform cross-validation to mitigate the overfitting of your model.
Once you assessed your model,
- inspect the feature_importances_ attribute and discuss the obtained results.
- With different assumptions on the data (e.g., dropping certain features even before feeding them to the classifier), can you obtain a substantially different feature_importances_ attribute?
Plan
First we will just look at the Random Forest classifier without any parameters (just use the defaults) -> gives very good scores.
Look a bit at the feature_importances
Then we see that it is better to aggregate the data by player (We can't show overfitting with 'flawed' data and very good scores, so we first aggregate)
Load the data aggregated by player
Look again at the classifier with default parameters
Show the effect of some parameters to overfitting and use that to...
...find acceptable parameters
Inspect the feature_importances and discuss the results
At the end we look very briefly at other classifiers.
Note that we use the values 1, 2, 3, 4, 5 or WW, W, N, B, BB interchangeably for the skin color categories of the players
End of explanation
data = pd.read_csv('CrowdstormingDataJuly1st_preprocessed_encoded.csv', index_col=0)
data_total = data.copy()
print('Number of dayads', data.shape)
data.head()
print('Number of diads: ', len(data))
print('Number of players: ', len(data.playerShort.unique()))
print('Number of referees: ', len(data.refNum.unique()))
Explanation: Load the preprocessed data and look at it. We preprocess the data in the HW01-1-Preprocessing notebook.
The data is already encoded to be used for the RandomForestClassifier.
End of explanation
player_colors = data['color_rating']
rf_input_data = data.drop(['color_rating'], axis=1)
player_colors.head() # values 1 to 5
rf = RandomForestClassifier()
cross_val_score(rf, rf_input_data, player_colors, cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
Explanation: First we just train and test the preprocessed data with the default values of the Random Forest to see what happens.
For this first model, we will use all the features (color_rating) and then we will observe which are the most important.
End of explanation
def show_important_features_random_forest(X, y, rf=None):
if rf is None:
rf = RandomForestClassifier()
# train the forest
rf.fit(X, y)
# find the feature importances
importances = rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# plot the feature importances
cols = X.columns
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature n° %d %s (%f)" % (f + 1, indices[f], cols[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
show_important_features_random_forest(rf_input_data, player_colors)
Explanation: Quite good results...
Observe the important features
End of explanation
data.playerShort.value_counts()[:10]
Explanation: We can see that the most important features are:
- photoID
- player
- the birthday
- playerShort
The obtained result is weird. From personal experience, those 4 features should be independent of the skin color, and they should also be unique to one player. PhotoID is the id of the photo and thus unique to one player and independent of the skin_color. The same holds for 'player' and 'playerShort' (both represent the player's name). Birthday is not necessarily unique, but it should not be that important for the skin color since people all over the world are born all the time.
We have to remember that our data contains dyads between player and referee, so a player can appear several times in our data. That could be the reason why the player-unique features are important. Let's look at the data:
End of explanation
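As an aside (not in the original notebook): impurity-based feature_importances_ are known to favour high-cardinality columns such as photoID or the player name, which is part of what we see here. Permutation importance on a held-out split is a common alternative; a minimal sketch, assuming scikit-learn >= 0.22 for sklearn.inspection.permutation_importance:
from sklearn.inspection import permutation_importance
x_tr, x_te, y_tr, y_te = train_test_split(rf_input_data, player_colors, test_size=0.25, random_state=0)
rf_perm = RandomForestClassifier().fit(x_tr, y_tr)
perm = permutation_importance(rf_perm, x_te, y_te, n_repeats=5, random_state=0)
for idx in perm.importances_mean.argsort()[::-1][:10]:
    print(x_te.columns[idx], perm.importances_mean[idx])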
rf_input_data_drop = rf_input_data.drop(['birthday', 'player','playerShort', 'photoID'], axis=1)
rf = RandomForestClassifier()
result = cross_val_score(rf, rf_input_data_drop, player_colors, cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
result
Explanation: Indeed, some players appear around 200 times, so it is easy to determine the skin color of the player djibril cisse if he appears both in the training set and in the test set. In reality, however, the probability of meeting a second djibril cisse with the same birthday and the same skin color is essentially zero.
The reason these attributes look so important is that rows of the same player appear in both the train and the test set, so the classifier can use them to look the skin color up rather than predict it.
So we drop those attributes and see what happens.
End of explanation
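As an alternative to dropping the identifying columns (an aside, not part of the original analysis), the leakage can also be avoided by splitting on the player identity, so that all dyads of one player fall entirely into either the training or the test set:
from sklearn.model_selection import GroupShuffleSplit
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(gss.split(rf_input_data_drop, player_colors, groups=data['playerShort']))
rf_group = RandomForestClassifier().fit(rf_input_data_drop.iloc[train_idx], player_colors.iloc[train_idx])
print('Accuracy with a player-grouped split:', rf_group.score(rf_input_data_drop.iloc[test_idx], player_colors.iloc[test_idx]))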
show_important_features_random_forest(rf_input_data_drop, player_colors)
Explanation: The accuracy of the classifier dropped a bit, which is no surprise.
End of explanation
data_aggregated = pd.read_csv('CrowdstormingDataJuly1st_aggregated_encoded.csv')
data_aggregated.head()
Explanation: That makes more sense: it is plausible that height differs statistically between skin-color groups, but the club and position should not be that important.
So we decided to aggregate on the player's name, keeping only one row with the personal information of each player.
We do the aggregation in the HW04-1-Preprocessing notebook.
Aggregated data
Load the aggregated data.
End of explanation
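The aggregation itself is done in the separate HW04-1-Preprocessing notebook and is not shown here; a rough, purely illustrative sketch of the idea (the aggregation function is a placeholder, not the authors' actual choice) is:
# group by the player identifier and keep one row per player
aggregated_sketch = data.groupby('playerShort', as_index=False).agg('first')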
data_aggregated = data_aggregated.drop(['playerShort', 'player', 'birthday'], axis=1)
Explanation: Drop the player-unique features: being unique to each player, they cannot be useful for classification.
End of explanation
rf = RandomForestClassifier()
aggr_rf_input_data = data_aggregated.drop(['color_rating'], axis=1)
aggr_player_colors = data_aggregated['color_rating']
result = cross_val_score(rf, aggr_rf_input_data, aggr_player_colors,
cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
print("mean result: ", np.mean(result))
result
Explanation: Train the default classifier on the new data and look at the important features
End of explanation
show_important_features_random_forest(aggr_rf_input_data, aggr_player_colors)
Explanation: The results are not very impressive...
End of explanation
# does the validation with cross validation
def val_curve_rf(input_data, y, param_name, param_range, cv=5, rf=RandomForestClassifier()):
return validation_curve(rf, input_data, y, param_name, param_range, n_jobs=10,verbose=0, cv=cv)
# defines the parameters and the ranges to try
def val_curve_all_params(input_data, y, rf=RandomForestClassifier()):
params = {
'class_weight': ['balanced', 'balanced_subsample', None],
'criterion': ['gini', 'entropy'],
'n_estimators': [1, 10, 100, 500, 1000, 2000],
'max_depth': list(range(1, 100, 5)),
'min_samples_split': [0.001,0.002,0.004,0.005, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.8, 0.9],
'min_samples_leaf': list(range(1, 200, 5)),
'max_leaf_nodes': [2, 50, 100, 200, 300, 400, 500, 1000]
}
# does the validation for all parameters from above
for p, r in params.items():
train_scores, valid_scores = val_curve_rf(input_data, y, p, r, rf=rf)
plot_te_tr_curve(train_scores, valid_scores, p, r)
def plot_te_tr_curve(train_scores, valid_scores, param_name, param_range, ylim=None):
Generate the plot of the test and training(validation) accuracy curve.
plt.figure()
if ylim is not None:
plt.ylim(*ylim)
plt.grid()
# if the parameter values are strings
if isinstance(param_range[0], str):
plt.subplot(1, 2, 1)
plt.title(param_name+" train")
plt.boxplot(train_scores.T, labels=param_range)
plt.subplot(1, 2, 2)
plt.title(param_name+" test")
plt.boxplot(valid_scores.T, labels=param_range)
# parameter names are not strings (are numeric)
else:
plt.title(param_name)
plt.ylabel("accuracy")
plt.xlabel("value")
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(valid_scores, axis=1)
test_scores_std = np.std(valid_scores, axis=1)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(param_range, train_scores_mean, '-', color="r",
label="Training score")
plt.plot(param_range, test_scores_mean, '-', color="g",
label="Testing score")
plt.legend(loc="best")
return plt
val_curve_all_params(aggr_rf_input_data, aggr_player_colors, rf)
Explanation: That makes a lot more sense. The features are much more equal and several IAT and EXP are on top.
But before going more into detail, we adress the overfitting issue mentioned in the assignment.
Show overfitting issue
The classifier overfits when the training accuracy is much higher than the testing accuracy (the classifier fits the training data too closely and thus generalizes badly).
So we look at the different parameters and discuss how they contribute to the overfitting issue.
To show the impact of each parameter we try different values and plot the train vs test accuracy.
Luckily there is a function for this :D
End of explanation
rf_good = RandomForestClassifier(n_estimators=500,
max_depth=None,
criterion='entropy',
min_samples_leaf=2,
min_samples_split=5,
class_weight='balanced_subsample')
aggr_rf_input_data = data_aggregated.drop(['color_rating'], axis=1)
aggr_player_colors = data_aggregated['color_rating']
result = cross_val_score(rf_good, aggr_rf_input_data, aggr_player_colors,
cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
print("mean result: ", np.mean(result))
result
show_important_features_random_forest(aggr_rf_input_data, aggr_player_colors, rf=rf_good)
Explanation: n_estimators
How many trees to be used. As expected, we see that more trees improve the train and test accuracy, however the test accuracy is bounded and it does not really make sense to use more than 500 trees. (Adding trees also means more computation time).
More trees also mean more overfitting. The train accuracy goes almost to 1 while the test stays around 0.42.
min_samples_leaf
The minimum number of samples required to be at a leaf node. The higher this value, the less overfitting. It effectively limits how good a tree can fit to a given train set.
criterion
The function to measure the quality of a split. You can see that 'entropy' scores higher in the test, so we take it, even though gini has a much lower variance.
max_depth
The maximal depth of the tree. The deeper it is allowed to grow, the more the tree overfits. It seems that no tree grows deeper than about 10 anyway, so we won't limit it.
max_leaf_nodes
An upper limit on how many leaf nodes the tree can have. The train accuracy grows until about 400, after which there is no more gain from additional leaf nodes, probably because the trees do not grow that many leaves anyway.
min_samples_split
The minimum number of samples required to split an internal node. Has a similar effect and behaviour as min_samples_leaf.
class_weight
Weights associated with classes. Gives more weight to classes with fewer members. It does not seem to have a big influence.
Note that the third option is None which sets all classes weight to 1.
Find a good classifier
The default classifier achieves about 40% accuracy. This is not much considering that about 40% of players are in category 2. This classifier is not better than classifying all players into category 2.
So we are going to find better parameters for the classifier.
Based on the plots above and trial and error, we find good parameters for the RandomForestClassifier and look if feature importance changed.
End of explanation
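Instead of manual trial and error, a randomized search over the same ranges is another option (an aside, not used in the original notebook):
from sklearn.model_selection import RandomizedSearchCV
param_dist = {'n_estimators': [100, 500, 1000],
              'criterion': ['gini', 'entropy'],
              'min_samples_leaf': [1, 2, 5, 10],
              'min_samples_split': [2, 5, 10],
              'class_weight': ['balanced', 'balanced_subsample', None]}
search = RandomizedSearchCV(RandomForestClassifier(), param_dist, n_iter=20, cv=5, random_state=0)
search.fit(aggr_rf_input_data, aggr_player_colors)
print(search.best_params_, search.best_score_)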
x_train, x_test, y_train, y_test = train_test_split(aggr_rf_input_data, aggr_player_colors, test_size=0.25)
rf_good.fit(x_train, y_train)
prediction = rf_good.predict(x_test)
accuracy = accuracy_score(y_test, prediction)
print('Accuracy: ',accuracy)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        # row-normalise so that each true class sums to 1
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_test, prediction)
class_names = ['WW', 'W', 'N', 'B', 'BB']
plot_confusion_matrix(cm, classes=class_names, title='Confusion matrix')
Explanation: We can see that the accuracy is only a bit better. But the most important features are even more balanced. The confidence intervals are huge and almost all features could be on top. More importantly, the IAT and EXP features seem to play some role in gaining those 4% of accuracy. But clearly we can't say that there is a big difference between players of different skin colors.
Observe the confusion matrix
Now we observe the confusion matrix to see what the classifier actually does. We split the data into training and testing sets (test set = 25%) and then we train our random forest using the best parameters selected above:
End of explanation
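A per-class summary (an addition, not in the original notebook) complements the confusion matrix and makes the imbalance problem explicit:
from sklearn.metrics import classification_report
print(classification_report(y_test, prediction))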
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].hist(aggr_player_colors)
ax[1].hist(aggr_player_colors, bins=3)
Explanation: Our model predicts almost only 2 categories instead of 5. It predicts mostly WW or W. This is because we have imbalanced data, and the balancing apparently did not really help. Looking at the true labels in the matrix above, there is clearly a majority of white players. Let's have a look at the exact distribution.
End of explanation
print('Proportion of WW: {:.2f}%'.format(
100*aggr_player_colors[aggr_player_colors == 1].count()/aggr_player_colors.count()))
print('Proportion of W: {:.2f}%'.format(
100*aggr_player_colors[aggr_player_colors == 2].count()/aggr_player_colors.count()))
print('Proportion of N: {:.2f}%'.format(
100*aggr_player_colors[aggr_player_colors == 3].count()/aggr_player_colors.count()))
print('Proportion of B: {:.2f}%'.format(
100*aggr_player_colors[aggr_player_colors == 4].count()/aggr_player_colors.count()))
print('Proportion of BB: {:.2f}%'.format(
100*aggr_player_colors[aggr_player_colors == 5].count()/aggr_player_colors.count()))
Explanation: These 2 histograms show the imbalanced data. Indeed, the first 2 categories represent more than 50% of the data. Let's look at the numbers
End of explanation
player_colors_3 = aggr_player_colors.map(lambda x: 1 if(x == 1 or x == 2) else max(x, 2) )
player_colors_2 = player_colors_3.map(lambda x: min(x, 2) )
result3 = cross_val_score(rf_good, aggr_rf_input_data, player_colors_3,
cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
result2 = cross_val_score(rf_good, aggr_rf_input_data, player_colors_2,
cv=10, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1)
print('Proportion of WWW: {:.2f}%'.format(
100*player_colors_2[player_colors_2 == 1].count()/player_colors_2.count()))
print('Proportion of NBBB: {:.2f}%'.format(
100*player_colors_2[player_colors_2 == 2].count()/player_colors_2.count()))
print("mean res3: ", np.mean(result3))
print("mean res2: ", np.mean(result2))
Explanation: WW and W represent 75% of the data.
Now assume a new classifier that always predicts the W category. This classifier has an accuracy of 40%. It means that our classifier is not much better than always classifying a player as W...
What happens when we do a ternary and binary classification?
Binary Classification
For ternary we put WW and W in one class, N in the second, and B and BB in the last (the classes are then WWW, N and BBB).
For binary we merge the N with the BBB class. -> WWW vs NBBB
End of explanation
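A quick baseline check (an addition, not in the original notebook): scikit-learn's DummyClassifier gives the accuracy of always predicting the most frequent class, which is the number our forest has to beat.
from sklearn.dummy import DummyClassifier
baseline = DummyClassifier(strategy='most_frequent')
print('5-class baseline:', np.mean(cross_val_score(baseline, aggr_rf_input_data, aggr_player_colors, cv=10)))
print('2-class baseline:', np.mean(cross_val_score(baseline, aggr_rf_input_data, player_colors_2, cv=10)))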
x_train, x_test, y_train, y_test = train_test_split(aggr_rf_input_data, player_colors_2, test_size=0.25)
rf_good.fit(x_train, y_train)
prediction = rf_good.predict(x_test)
accuracy = accuracy_score(y_test, prediction)
cm = confusion_matrix(y_test, prediction)
class_names = ['WWW', 'BBB']
plot_confusion_matrix(cm, classes=class_names, title='Confusion matrix')
Explanation: We see that our classifier is only a little bit better than the 'stupid' one. The difference between the ternary and binary classification is also small.
Confusion Matrix of the binary classifier:
End of explanation
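Because the binary problem is imbalanced, plain accuracy is flattering; balanced accuracy and ROC AUC are more informative (a sketch added here, assuming a scikit-learn version that provides balanced_accuracy_score):
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
print('Balanced accuracy:', balanced_accuracy_score(y_test, prediction))
# column 1 of predict_proba corresponds to rf_good.classes_[1], i.e. the NBBB class (label 2)
print('ROC AUC:', roc_auc_score(y_test == 2, rf_good.predict_proba(x_test)[:, 1]))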
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn import svm
from sklearn.ensemble import AdaBoostClassifier
def make_print_confusion_matrix(clf, clf_name):
x_train, x_test, y_train, y_test = train_test_split(aggr_rf_input_data, player_colors_2, test_size=0.25)
clf.fit(x_train, y_train)
prediction = clf.predict(x_test)
accuracy = np.mean(cross_val_score(clf, aggr_rf_input_data, player_colors_2, cv=5, n_jobs=3, pre_dispatch='n_jobs+1', verbose=1))
print(clf_name + ' Accuracy: ',accuracy)
cm = confusion_matrix(y_test, prediction)
class_names = ['WWW', 'BBB']
plot_confusion_matrix(cm, classes=class_names, title='Confusion matrix of '+clf_name)
plt.show()
Explanation: Even for the 2 class problem it is hard to predict the colors and the classifier still mostly predicts WWW.
From that results we might conclude that there is just not enough difference between the 'black' and 'white' players to classify them.
Try other classifiers
A quick and short exploration of other classifiers to show that the RandomForest is not the 'wrong' classifier for that problem.
TLDR; They don't do better than the RandomForest.
End of explanation
make_print_confusion_matrix(svm.SVC(kernel='rbf', degree=3, class_weight='balanced'), "SVC")
make_print_confusion_matrix(AdaBoostClassifier(n_estimators=500, base_estimator=rf_good), "AdaBoostClassifier")
make_print_confusion_matrix(MLPClassifier(activation='tanh', learning_rate='adaptive',
solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(100, 100, 50, 50, 2), random_state=1),
"MLPclassifier")
make_print_confusion_matrix(GaussianNB(), "GaussianNB")
Explanation: Only the AdaBoostClassifier is slightly better than our random forest. Probably because it uses our rf_good random forest and combines the results smartly. That might explain the extra 1%
For the MLP classifier we just tried a few architectures, there might be better ones...
Note that the accuracy score is the result of 5 way cross validation.
End of explanation |
10,394 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following torch tensor: | Problem:
import numpy as np
import pandas as pd
import torch
t, idx = load_data()
assert type(t) == torch.Tensor
assert type(idx) == np.ndarray
idxs = torch.from_numpy(idx).long().unsqueeze(1)
# or torch.from_numpy(idxs).long().view(-1,1)
result = t.gather(1, idxs).squeeze(1) |
10,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Querying a Population and Plotting the Results
Before we can query a population, we must have one. We will use a population of satellites as an example.
In this portion of the tutorial we will query and plot the undigested population data, and not their implications. All of the queries here are analogous to similar or identical queries in Structured Query Language (SQL).
For simplicity we still use the same pre-analyzed population database (.bdb file) as in other portions of the tutorial, even though it is not important that any probabilistic analysis has been done. If we have not yet downloaded that pre-analyzed database, do so now
Step1: We construct a Population instance that helps us read, query, and visualize a particular population.
Step4: Querying the data using SQL
Before querying the implications of a population, it can be useful to look at a sample of the raw data and metadata. This can be done using a combination of ordinary SQL and convenience functions built into bayeslite. We start by finding one of the most well-known satellites, the International Space Station
Step6: We can select multiple items using a SQL wildcard, in this case the match-anything '%' on either side of "GPS".
We ask for variables as rows and observations as columns by using .transpose() as we did for the ISS above. By default, observations map to rows, and variables map to columns.
Step8: Select just a few variables in the data, ordering by the number of minutes it takes for the satellite to complete one orbit, measured in minutes, and sorted ascending (as opposed to DESC), again as in SQL
Step9: Note that NaN is ordered before 0 in this sort.
Plots and Graphs
Bayeslite includes statistical graphics procedures designed for easy use with data extracted from an SQL database.
Before we introduce those, let the notebook know that we would like to use and display matplotlib figures within the notebook
Step10: Let's see a menu of the easily available plotting utilities
Step11: We will get more detailed help on each plotting utility as we introduce it.
Pairplots — Exploring two variables at a time
The methods pairplot and pairplot_vars are intended to plot all pairs within a group of variables. The plots are arranged as a lower-triangular matrix of plots.
Along the diagonal, there are histograms with the values of the given variable along the x axis, and the counts of occurrences of those values (or bucketed ranges of those values) on the y axis.
The rest of the lower triangle plots the row variable on the y axis against the column variable on the x axis.
Different kinds of plots are used for categorical vs. numeric values.
The fuller documentation
Step12: Pairplots
Step14: Pairplots
Step16: We might learn that meteorology satellites in geosynchronous orbit use about as much or more power than meteorology satellites in low-earth orbit (see power_watts row of plots), but that they use a little less power at a given mass (see scatter of launch mass vs. power_watts), and that there are no meteorology satellites in medium earth orbit or in elliptical orbits (class_of_orbit color legend box).
An expert might be able to help us interpret these observations, e.g. why certain orbits are preferred for meteorology, what the driving considerations are for power consumption and launch mass, etc., but pairplots are a powerful tool for visually finding questions to ask.
Pairplots
Step18: So when we show them, the way the underlying plotting utility works, we see suggestions of negative wattages and masses!
The contours in the power vs. mass plot also obscure the small number of data points, lending a false sense of meaning.
When there are enough data points, it can be useful to plot kernel density estimators (contours) on each plot, to see tendencies overlaid above the data points, so long as one keeps the above shortcomings in mind
Step19: Pairplots
Step21: Pairplots
Step22: Pairplots
Step23: When we pairplot these, normally that data point would simply be missing, but with show_missing, there is a line indicating that period_minutes could be anything at an apogee around 35k.
Step25: Pairplots
Step27: Other Plot Types
Barplot
Step29: Let's add the type of orbit too
Step31: One can even do a bit of computation here, in this case computing and plotting the average power_watts, rather than the merely the count
Step33: Histogram
Step35: We can break down that silhouette according to a categorical column that comes second.
We can also show percentages rather than absolute counts using normed.
Step37: Heatmap (a.k.a. 2d histogram)
Step39: Figsize
But that's a bit too small to read. For most of these plot functions, you can specify a figure size as a tuple (width-in-inches, height-in-inches) | Python Code:
import os
import subprocess
if not os.path.exists('satellites.bdb'):
subprocess.check_call(['curl', '-O', 'http://probcomp.csail.mit.edu/bayesdb/downloads/satellites.bdb'])
Explanation: Querying a Population and Plotting the Results
Before we can query a population, we must have one. We will use a population of satellites as an example.
In this portion of the tutorial we will query and plot the undigested population data, and not their implications. All of the queries here are analogous to similar or identical queries in Structured Query Language (SQL).
For simplicity we still use the same pre-analyzed population database (.bdb file) as in other portions of the tutorial, even though it is not important that any probabilistic analysis has been done. If we have not yet downloaded that pre-analyzed database, do so now:
End of explanation
from bdbcontrib import Population
satellites = Population(name='satellites', bdb_path='satellites.bdb')
Explanation: We construct a Population instance that helps us read, query, and visualize a particular population.
End of explanation
satellites.q(
SELECT * FROM satellites
WHERE Name LIKE 'International Space Station%'
).transpose()
satellites.q(SELECT COUNT(*) FROM satellites;)
Explanation: Querying the data using SQL
Before querying the implications of a population, it can be useful to look at a sample of the raw data and metadata. This can be done using a combination of ordinary SQL and convenience functions built into bayeslite. We start by finding one of the most well-known satellites, the International Space Station:
End of explanation
satellites.q(SELECT * FROM satellites WHERE Name LIKE '%GPS%').transpose()
Explanation: We can select multiple items using a SQL wildcard, in this case the match-anything '%' on either side of "GPS".
We ask for variables as rows and observations as columns by using .transpose() as we did for the ISS above. By default, observations map to rows, and variables map to columns.
End of explanation
satellites.q(
SELECT name, dry_mass_kg, period_minutes, class_of_orbit FROM satellites
ORDER BY period_minutes ASC LIMIT 10;
)
Explanation: Select just a few variables in the data, ordering by the number of minutes it takes for the satellite to complete one orbit, measured in minutes, and sorted ascending (as opposed to DESC), again as in SQL:
End of explanation
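As an aside (not in the original tutorial), the rows with a missing period can be filtered out with an ordinary SQL predicate before ordering:
satellites.q('''
    SELECT name, period_minutes FROM satellites
    WHERE period_minutes IS NOT NULL
    ORDER BY period_minutes ASC LIMIT 10;
''')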
%matplotlib inline
Explanation: Note that NaN is ordered before 0 in this sort.
Plots and Graphs
Bayeslite includes statistical graphics procedures designed for easy use with data extracted from an SQL database.
Before we introduce those, let the notebook know that we would like to use and display matplotlib figures within the notebook:
End of explanation
satellites.help("plot")
Explanation: Let's see a menu of the easily available plotting utilities:
End of explanation
help(satellites.pairplot)
Explanation: We will get more detailed help on each plotting utility as we introduce it.
Pairplots — Exploring two variables at a time
The methods pairplot and pairplot_vars are intended to plot all pairs within a group of variables. The plots are arranged as a lower-triangular matrix of plots.
Along the diagonal, there are histograms with the values of the given variable along the x axis, and the counts of occurrences of those values (or bucketed ranges of those values) on the y axis.
The rest of the lower triangle plots the row variable on the y axis against the column variable on the x axis.
Different kinds of plots are used for categorical vs. numeric values.
The fuller documentation:
End of explanation
satellites.pairplot_vars(['purpose', 'power_watts', 'launch_mass_kg'],
colorby='class_of_orbit', show_contour=False);
Explanation: Pairplots: pairplot_vars
pairplot_vars is a shortcut to help you just name the variables you want to see, rather than writing the BQL to select those variables. As we will see, you may often start with pairplot_vars, and decide to refine your query in BQL to focus on particular areas of interest:
End of explanation
satellites.pairplot(SELECT purpose, power_watts, launch_mass_kg, class_of_orbit
FROM satellites
WHERE purpose LIKE '%Meteorology%';,
colorby='class_of_orbit', show_contour=False);
Explanation: Pairplots: with SQL WHERE
The purposes are hard to read, and we may not be interested in all of them. Say we're interested only in meteorology satellites of one variety or another. It's easy to restrict to just those if you use pairplot instead of pairplot_vars, and use a bit of extra BQL:
End of explanation
satellites.pairplot(SELECT purpose, power_watts, launch_mass_kg, class_of_orbit
FROM satellites
WHERE purpose LIKE '%Meteorology%';,
colorby='class_of_orbit', show_contour=True);
Explanation: We might learn that meteorology satellites in geosynchronous orbit use about as much or more power than meteorology satellites in low-earth orbit (see power_watts row of plots), but that they use a little less power at a given mass (see scatter of launch mass vs. power_watts), and that there are no meteorology satellites in medium earth orbit or in elliptical orbits (class_of_orbit color legend box).
An expert might be able to help us interpret these observations, e.g. why certain orbits are preferred for meteorology, what the driving considerations are for power consumption and launch mass, etc., but pairplots are a powerful tool for visually finding questions to ask.
Pairplots: show_contour
Why did we choose not to show contours? Let's try:
End of explanation
satellites.pairplot(SELECT power_watts, launch_mass_kg
FROM satellites,
show_contour=True);
Explanation: So when we show them, the way the underlying plotting utility works, we see suggestions of negative wattages and masses!
The contours in the power vs. mass plot also obscure the small number of data points, lending a false sense of meaning.
When there are enough data points, it can be useful to plot kernel density estimators (contours) on each plot, to see tendencies overlaid above the data points, so long as one keeps the above shortcomings in mind:
End of explanation
satellites.pairplot_vars(['purpose', 'class_of_orbit']);
Explanation: Pairplots: two categoricals
Where two variables are both categorical, we show a 2d histogram (a heatmap).
Also, we can turn off the one-variable histograms along the diagonal:
End of explanation
satellites.pairplot(SELECT purpose, class_of_orbit FROM %t
GROUP BY purpose
HAVING COUNT(purpose) >= 5;);
Explanation: Pairplots: with SQL HAVING
We can use the usual SQL constructs to restrict our plot. For example, in this plot of users vs. countries, restrict to those purposes that have at least five satellites:
End of explanation
satellites.q('''SELECT apogee_km FROM %t WHERE period_minutes is NULL;''')
Explanation: Pairplots: with show_missing and NULL values.
End of explanation
satellites.pairplot_vars(['period_minutes', 'apogee_km'], show_missing=True);
Explanation: When we pairplot these, normally that data point would simply be missing, but with show_missing, there is a line indicating that period_minutes could be anything at an apogee around 35k.
End of explanation
satellites.pairplot(
SELECT period_minutes / 60.0 as period_hours,
apogee_km / 1000.0 as apogee_x1000km FROM %t,
show_missing=True, show_contour=False);
Explanation: Pairplots: with SQL arithmetic
The values are large enough to be hard to read, but of course we can resolve that in the query:
End of explanation
help(satellites.barplot)
satellites.barplot(
SELECT class_of_orbit, count(*) AS class_count FROM satellites
GROUP BY class_of_orbit
ORDER BY class_count DESC
);
Explanation: Other Plot Types
Barplot
End of explanation
satellites.barplot(
SELECT class_of_orbit || "--" || type_of_orbit as class_type,
count(*) AS class_type_count
FROM satellites
GROUP BY class_type
ORDER BY class_type_count DESC
);
Explanation: Let's add the type of orbit too:
End of explanation
satellites.barplot(
SELECT class_of_orbit || "--" || type_of_orbit as class_type,
sum(power_watts)/count(*) AS average_power
FROM satellites
GROUP BY class_type
ORDER BY average_power DESC
);
Explanation: One can even do a bit of computation here, in this case computing and plotting the average power_watts rather than merely the count:
End of explanation
help(satellites.histogram)
satellites.histogram(SELECT dry_mass_kg FROM %t, nbins=35);
Explanation: Histogram
End of explanation
satellites.histogram(
SELECT dry_mass_kg, class_of_orbit FROM satellites
WHERE dry_mass_kg < 5000
, nbins=15, normed=True);
Explanation: We can break down that silhouette according to a categorical column that comes second.
We can also show percentages rather than absolute counts using normed.
End of explanation
help(satellites.heatmap)
satellites.heatmap(
SELECT users, country_of_operator, COUNT(country_of_operator) as country_count FROM %t
GROUP BY country_of_operator
HAVING COUNT(country_of_operator) >= 5;
)
Explanation: Heatmap (a.k.a. 2d histogram)
End of explanation
satellites.heatmap(
SELECT users, country_of_operator, COUNT(country_of_operator) as country_count FROM %t
GROUP BY country_of_operator
HAVING COUNT(country_of_operator) >= 5;,
figsize=(12, 10))
Explanation: Figsize
But that's a bit too small to read. For most of these plot functions, you can specify a figure size as a tuple (width-in-inches, height-in-inches):
End of explanation |
10,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Полносвязная нейронная сеть
В данном домашнем задании вы подготовите свою реализацию полносвязной нейронной сети и обучите классификатор на датасете CIFAR-10
Step6: Прямой проход, скор
Реализуйте прямой проход в TwoLayerNet.loss
Step7: Прямой проход, функция потерь
В том же методе реализуйте вычисление функции потерь
Step8: Обратный проход
Закончите реализацию метода, вычислением градиентов для W1, b1, W2, и b2
Step9: Обучение сети
Реализуйте метод TwoLayerNet.train и метод TwoLayerNet.predict
После того как закончите, обучите сеть на игрушечных данных, которые мы сгенерировали выше, лосс дольжен быть менее или около 0.2.
Step10: CIFAR-10
Step11: Обучение сети
Обучите сеть на данных CIFAR-10
Step12: Дебаггинг процесса обучния
Step13: Настройка гиперпараметров
Step14: Проверка качества
С оптимальными гиперпараметрами сеть должна выдавать точнов около 48%. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
class TwoLayerNet(object):
def __init__(self, input_size, hidden_size, output_size, std=1e-4):
W1: Первый слой с размерами (D, H)
b1: Вектор байесов, размер (H,)
W2: Второй слой с размерами (H, C)
b2: Вектор байесов, размер (C,)
Входные парамерты:
- input_size: Размерность входных данных
- hidden_size: Размер скрытого слоя
- output_size: Количество классов
self.params = {}
self.params['W1'] = std * np.random.randn(input_size, hidden_size)
self.params['b1'] = np.zeros(hidden_size)
self.params['W2'] = std * np.random.randn(hidden_size, output_size)
self.params['b2'] = np.zeros(output_size)
def loss(self, X, y=None, reg=0.0):
Вычисление функции потерь
Входные парамерты:
- X: Таблица данных (N, D). X[i] - один пример
- y: Вектор лейблов. Если отсутсвует, то возвращается предсказание лейблов
- reg: Коэффициент регуляризации
Возвращает:
Если y == None, то возвращаются скор для классов
Если y != None, то возвращаются:
- Лосс для данного семпла данных
- grads: Словарь градиентов, ключи соответствуют ключам словаря self.params.
W1, b1 = self.params['W1'], self.params['b1']
W2, b2 = self.params['W2'], self.params['b2']
N, D = X.shape
scores = None
#############################################################################
# TODO: Расчет forward pass или прямой проход, для данных находятся скоры, #
# на выходе размер (N, C) #
#############################################################################
pass
#############################################################################
# END OF YOUR CODE #
#############################################################################
# Если y == None, то завершаем вызов
if y is None:
return scores
loss = None
#############################################################################
# TODO: Расчет Softmax loss для полученных скоров обьектов, на выходе скаляр #
#############################################################################
pass
#############################################################################
# END OF YOUR CODE #
#############################################################################
grads = {}
#############################################################################
# TODO: Расчет обратнохо прохода или backward pass, находятся градиенты для всех #
# параметров, результаты сохраняются в grads, например grads['W1'] #
#############################################################################
pass
#############################################################################
# END OF YOUR CODE #
#############################################################################
return loss, grads
def train(self, X, y, X_val, y_val,
learning_rate=1e-3, learning_rate_decay=0.95,
reg=5e-6, num_iters=100,
batch_size=200, verbose=False):
Обучение нейронной сети с помощью SGD
Входные парамерты:
- X: Матрица данных (N, D)
- y: Вектор лейблов (N, )
- X_val: Данные для валидации (N_val, D)
- y_val: Вектор лейблов валидации (N_val, )
- reg: Коэффициент регуляризации
- num_iters: Количнство итераций
- batch_size: Размер семпла данных, на 1 шаг алгоритма
- verbose: Вывод прогресса
num_train = X.shape[0]
iterations_per_epoch = max(num_train / batch_size, 1)
loss_history = []
train_acc_history = []
val_acc_history = []
for it in range(num_iters):
X_batch = None
y_batch = None
#########################################################################
# TODO: Семпл данных их X-> X_batch, y_batch #
#########################################################################
pass
#########################################################################
# END OF YOUR CODE #
#########################################################################
loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
loss_history.append(loss)
#########################################################################
# TODO: Используя градиенты из grads обновите параметры сети #
#########################################################################
pass
#########################################################################
# END OF YOUR CODE #
#########################################################################
if verbose and it % 100 == 0:
print('iteration %d / %d: loss %f' % (it, num_iters, loss))
if it % iterations_per_epoch == 0:
train_acc = (self.predict(X_batch) == y_batch).mean()
val_acc = (self.predict(X_val) == y_val).mean()
train_acc_history.append(train_acc)
val_acc_history.append(val_acc)
# Decay learning rate
learning_rate *= learning_rate_decay
return {
'loss_history': loss_history,
'train_acc_history': train_acc_history,
'val_acc_history': val_acc_history,
}
def predict(self, X):
Входные параметры:
- X: Матрица данных (N, D)
Возвращает:
- y_pred: Вектор предсказаний классов для обьектов (N,)
y_pred = None
###########################################################################
# TODO: Предсказание классов для обьектов из X #
###########################################################################
pass
###########################################################################
# END OF YOUR CODE #
###########################################################################
return y_pred
# Инициализация простого примера. Данные и обьект модели
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: A fully-connected neural network
In this homework assignment you will write your own implementation of a fully-connected neural network and train a classifier on the CIFAR-10 dataset
End of explanation
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
Explanation: Forward pass, scores
Implement the forward pass in TwoLayerNet.loss
End of explanation
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# Ошибка должна быть < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Explanation: Forward pass, loss function
In the same method, implement the computation of the loss function
End of explanation
def eval_numerical_gradient(f, x, verbose=True, h=0.00001):
fx = f(x) # evaluate function value at original point
grad = np.zeros_like(x)
# iterate over all indexes in x
it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
# evaluate function at x+h
ix = it.multi_index
oldval = x[ix]
x[ix] = oldval + h # increment by h
fxph = f(x) # evalute f(x + h)
x[ix] = oldval - h
fxmh = f(x) # evaluate f(x - h)
x[ix] = oldval # restore
# compute the partial derivative with centered formula
grad[ix] = (fxph - fxmh) / (2 * h) # the slope
if verbose:
print(ix, grad[ix])
it.iternext() # step to next dimension
return grad
loss, grads = net.loss(X, y, reg=0.05)
# Ошибка должна быть меньше или около 1e-8
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Backward pass
Finish the implementation of the method by computing the gradients for W1, b1, W2, and b2
End of explanation
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Training the network
Implement the TwoLayerNet.train and TwoLayerNet.predict methods
Once you are done, train the network on the toy data generated above; the loss should be below or around 0.2.
End of explanation
from keras.datasets import cifar10
(X_train, y_train), (X_val, y_val) = cifar10.load_data()
X_test, y_test = X_val[:int(X_val.shape[0]*0.5)], y_val[:int(X_val.shape[0]*0.5)]
X_val, y_val = X_val[int(X_val.shape[0]*0.5):], y_val[int(X_val.shape[0]*0.5):]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Explanation: CIFAR-10
End of explanation
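One practical note (an addition, not part of the original assignment text): the two-layer net expects flat float inputs of shape (N, 32*32*3), while keras' cifar10.load_data() returns uint8 images of shape (N, 32, 32, 3) and labels of shape (N, 1), so a reshaping step along these lines is needed before training:
X_train = X_train.reshape(X_train.shape[0], -1).astype(np.float64)
X_val = X_val.reshape(X_val.shape[0], -1).astype(np.float64)
X_test = X_test.reshape(X_test.shape[0], -1).astype(np.float64)
y_train, y_val, y_test = y_train.flatten(), y_val.flatten(), y_test.flatten()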
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
Explanation: Training the network
Train the network on the CIFAR-10 data
End of explanation
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
Explanation: Debugging the training process
End of explanation
best_net = None
#################################################################################
# TODO: Напишите свою реализцию кросс валидации для настройки гиперпараметров сети #
#################################################################################
pass
#################################################################################
# END OF YOUR CODE #
#################################################################################
Explanation: Hyperparameter tuning
End of explanation
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
Explanation: Checking the final accuracy
With optimal hyperparameters the network should achieve an accuracy of about 48%.
End of explanation |
10,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: 使用 3D 卷积实现视频插画
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: BAIR:基于 NumPy 数组输入的演示
Step3: 加载 Hub 模块
Step4: 生成并显示视频 | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow_datasets.core import SplitGenerator
from tensorflow_datasets.video.bair_robot_pushing import BairRobotPushingSmall
import tempfile
import pathlib
TEST_DIR = pathlib.Path(tempfile.mkdtemp()) / "bair_robot_pushing_small/softmotion30_44k/test/"
# Download the test split to $TEST_DIR
!mkdir -p $TEST_DIR
!wget -nv https://storage.googleapis.com/download.tensorflow.org/data/bair_test_traj_0_to_255.tfrecords -O $TEST_DIR/traj_0_to_255.tfrecords
# Since the dataset builder expects the train and test split to be downloaded,
# patch it so it only expects the test data to be available
builder = BairRobotPushingSmall()
test_generator = SplitGenerator(name='test', gen_kwargs={"filedir": str(TEST_DIR)})
builder._split_generators = lambda _: [test_generator]
builder.download_and_prepare()
Explanation: Video Inbetweening with 3D Convolutions
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/tweening_conv3d"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tweening_conv3d.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tweening_conv3d.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/tweening_conv3d.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
</td>
<td><a href="https://tfhub.dev/google/tweening_conv3d_bair/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
Yunpeng Li, Dominik Roblek, and Marco Tagliasacchi. From Here to There: Video Inbetweening Using Direct 3D Convolutions, 2019.
https://arxiv.org/abs/1905.10240
Current Hub features:
Models are available for the BAIR robot-pushing videos and the KTH action videos datasets (this Colab uses only BAIR).
The BAIR dataset is already available in Hub, but the KTH videos must be supplied by the user.
Evaluation (video generation) only for now.
The batch size and frame size are hard-coded.
Setup
Since tfds.load('bair_robot_pushing_small', split='test') would download a 30 GB archive that also contains the training data, we download a separate archive containing only the 190 MB of test data. The dataset was published with the paper above and is licensed under Creative Commons BY 4.0.
End of explanation
# @title Load some example data (BAIR).
batch_size = 16
# If unable to download the dataset automatically due to "not enough disk space", please download manually to Google Drive and
# load using tf.data.TFRecordDataset.
ds = builder.as_dataset(split="test")
test_videos = ds.batch(batch_size)
first_batch = next(iter(test_videos))
input_frames = first_batch['image_aux1'][:, ::15]
input_frames = tf.cast(input_frames, tf.float32)
# @title Visualize loaded videos start and end frames.
print('Test videos shape [batch_size, start/end frame, height, width, num_channels]: ', input_frames.shape)
sns.set_style('white')
plt.figure(figsize=(4, 2*batch_size))
for i in range(batch_size)[:4]:
  plt.subplot(batch_size, 2, 1 + 2*i)
  plt.imshow(input_frames[i, 0] / 255.0)
  plt.title('Video {}: First frame'.format(i))
  plt.axis('off')
  plt.subplot(batch_size, 2, 2 + 2*i)
  plt.imshow(input_frames[i, 1] / 255.0)
  plt.title('Video {}: Last frame'.format(i))
  plt.axis('off')
Explanation: BAIR: a demo with NumPy array inputs
End of explanation
hub_handle = 'https://tfhub.dev/google/tweening_conv3d_bair/1'
module = hub.load(hub_handle).signatures['default']
Explanation: 加载 Hub 模块
End of explanation
filled_frames = module(input_frames)['default'] / 255.0
# Show sequences of generated video frames.
# Concatenate start/end frames and the generated filled frames for the new videos.
generated_videos = np.concatenate([input_frames[:, :1] / 255.0, filled_frames, input_frames[:, 1:] / 255.0], axis=1)
for video_id in range(4):
  fig = plt.figure(figsize=(10 * 2, 2))
  for frame_id in range(1, 16):
    ax = fig.add_axes([frame_id * 1 / 16., 0, (frame_id + 1) * 1 / 16., 1],
                      xmargin=0, ymargin=0)
    ax.imshow(generated_videos[video_id, frame_id])
    ax.axis('off')
Explanation: Generate and display videos
End of explanation |
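To keep the interpolated sequences rather than only display them, one option (not part of the original tutorial) is to write each sequence to a GIF, for example with imageio if it is installed; the filename pattern below is an arbitrary choice.
import imageio
# Write one GIF per generated sequence (assumes imageio is available).
for video_id in range(generated_videos.shape[0]):
  frames_uint8 = (generated_videos[video_id] * 255).astype(np.uint8)
  imageio.mimsave('generated_video_{}.gif'.format(video_id), list(frames_uint8), fps=8)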
10,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Display Examples
Step1: Load some image data
Step2: Matplotlib imshow()
Matplotlib is a great high-quality data display tool used by lots of people for a long time. It has long been my first choice for interactive data exploration on my PC when I want a native GUI framework. But when I use the IPython Notebook I want my interactive display tools to live entirely in the world of HTML and JavaScript. Static image display works fine enough (see the example below), but fully-interactive displays are still a work in progress. Ultimately I need compatibility with IPython widgets and Matplotlib is not there yet.
Step4: IPython's Built-in Image Widget
The IPython built-in image widget accepts as input a string of byte data representing an already-compressed image. The compressed image data is synchronized from the Python backend to the Notebook's Javascript frontend and copied into the widget's image element for display.
The upside of this display widget is simplicity of implementation. The downside is the depth of understanding and complexity of implementation required of the user. I want an easy-to-use image display widget that readily accepts Numpy arrays as input.
Step5: Canvas Element with Basic HTML and JavaScript
The HTML5 Canvas Element is a great tool for displaying images and drawing artwork onto a bitmap surface in the browser. It has built-in support for mouse events plus size and rotation transforms.
The example below uses HTML and JavaScript to display an image to a canvas element. But since the image originates from a local data file and not a remote URL, a special URL form must be used to encode the image data into something compatible. See mozilla and wikipedia for details.
The above example already compressed the original image data into a sequence of bytes. Now those bytes need to be encoded into a form that will survive delivery over the internet.
xml
data
Step8: So that takes care of getting the data ready. Next we use some HTML and JavaScript to display the image right here in the notebook.
Step9: My New Canvas Widget
My new canvas widget is simpler to use than IPython's built-in image display widget since it takes a Numpy array as input. Behind the scenes it takes care of compressing and encoding the data and then feeding it into the canvas element in a manner similar to the example just above.
Step10: Modifying the image data in place is as easy as any other notebook widget. | Python Code:
from __future__ import print_function, unicode_literals, division, absolute_import
import io
import IPython
from ipywidgets import widgets
import PIL.Image
from widget_canvas import CanvasImage
from widget_canvas.image import read
Explanation: Image Display Examples
End of explanation
data_image = read('images/Whippet.jpg')
data_image.shape
Explanation: Load some image data
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(data_image)
plt.tight_layout()
Explanation: Matplotlib imshow()
Matplotlib is a great high-quality data display tool used by lots of people for a long time. It has long been my first choice for interactive data exploration on my PC when I want a native GUI framework. But when I use the IPython Notebook I want my interactive display tools to live entirely in the world of HTML and JavaScript. Static image display works fine enough (see the example below), but fully-interactive displays are still a work in progress. Ultimately I need compatibility with IPython widgets and Matplotlib is not there yet.
End of explanation
def compress_to_bytes(data, fmt):
    """Helper function to compress image data via PIL/Pillow."""
    buff = io.BytesIO()
    img = PIL.Image.fromarray(data)
    img.save(buff, format=fmt)
    return buff.getvalue()
# Compress the image data.
fmt = 'png'
data_comp = compress_to_bytes(data_image, fmt)
# Display first 100 bytes of compressed data just for fun.
data_comp[:100]
# Built-in IPython image widget.
wid_builtin = widgets.Image(value=data_comp)
wid_builtin.border_color = 'black'
wid_builtin.border_width = 2
wid_builtin
# At one point during development the above image was stretched out to the full width of containing cell.
# Not sure why. The two lines below are meant to address that problem.
wid_builtin.width = data_image.shape[1]
wid_builtin.height = data_image.shape[0]
Explanation: IPython's Built-in Image Widget
The IPython built-in image widget accepts as input a string of byte data representing an already-compressed image. The compressed image data is synchronized from the Python backend to the Notebook's Javascript frontend and copied into the widget's image element for display.
The upside of this display widget is simplicity of implementation. The downside is the depth of understanding and complexity of implementation required of the user. I want an easy-to-use image display widget that readily accepts Numpy arrays as input.
End of explanation
import base64
data_encode = base64.b64encode(data_comp)
# Display first 100 bytes of compressed and encoded data just for fun.
data_encode[:100]
# Compare sizes.
print('Original Image: {:7d} bytes (raw)'.format(data_image.size))
print('Compressed: {:7d} bytes ({})'.format(len(data_comp), fmt))
print('Compressed & Encoded:{:7d} bytes (base64)'.format(len(data_encode)))
# The decoding step here is necesary since we need to interpret byte data as text.
# See this link for a nice explanation:
# http://stackoverflow.com/questions/14010551/how-to-convert-between-bytes-and-strings-in-python-3
enc = 'utf-8'
data_url = 'data:image/{:s};charset={};base64,{:s}'.format(fmt, enc, data_encode.decode(encoding=enc))
Explanation: Canvas Element with Basic HTML and JavaScript
The HTML5 Canvas Element is a great tool for displaying images and drawing artwork onto a bitmap surface in the browser. It has built-in support for mouse events plus size and rotation transforms.
The example below uses HTML and JavaScript to display an image to a canvas element. But since the image originates from a local data file and not a remote URL, a special URL form must be used to encode the image data into something compatible. See mozilla and wikipedia for details.
The above example already compressed the original image data into a sequence of bytes. Now those bytes need to be encoded into a form that will survive delivery over the internet.
xml
data:[<MIME-type>][;charset=<encoding>][;base64],<data>
End of explanation
doc_html = """\
<html>
<head></head>
<body>
    <canvas id='hello_example' style='border: solid black 2px'/>
</body>
</html>
"""
template_js = """\
// Embedded data URI goes right here.
var url = "{}"
// Get the canvas element plus corresponding drawing context
var canvas = document.getElementById('hello_example');
var context = canvas.getContext('2d');
// Create a hidden <img> element to manage incoming data.
var img = new Image();
// Add new-data event handler to the hidden <img> element.
img.onload = function () {{
    // This function will be called when new image data has finished loading
    // into the <img> element. This new data will be the source for drawing
    // onto the Canvas.
    // Set canvas geometry.
    canvas.width = img.width
    canvas.style.width = img.width + 'px'
    canvas.height = img.height
    canvas.style.height = img.height + 'px'
    // Draw new image data onto the Canvas.
    context.drawImage(img, 0, 0);
}}
// Assign image URL.
img.src = url
"""
doc_js = template_js.format(data_url)
# Display the HTML via IPython display hook.
IPython.display.display_html(doc_html, raw=True)
# Update HTML canvas element with some JavaScript.
IPython.display.display_javascript(doc_js, raw=True)
Explanation: So that takes care of getting the data ready. Next we use some HTML and JavaScript to display the image right here in the notebook.
End of explanation
wid_canvas = CanvasImage(data_image)
wid_canvas.border_color = 'black'
wid_canvas.border_width = 2
wid_canvas
Explanation: My New Canvas Widget
My new canvas widget is simpler to use than IPython's built-in image display widget since it takes a Numpy array as input. Behind the scenes it takes care of compressing and encoding the data and then feeding it into the canvas element in a manner similar to the example just above.
End of explanation
data_image_2 = read('images/Doberman.jpg')
wid_canvas.data = data_image_2
Explanation: Modifying the image data in place is as easy as any other notebook widget.
End of explanation |
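Any new array assigned to the widget's data attribute is redrawn immediately. As a small illustration (not part of the original notebook), flip the currently displayed image left-to-right:
# Flip the displayed image horizontally using plain NumPy slicing.
wid_canvas.data = data_image_2[:, ::-1]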
10,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr style="border
Step1: <hr style="border
Step2:
<hr style="border
Step3:
<hr style="border
Step4:
<hr style="border
Step5:
<hr style="border
Step6:
<hr style="border
Step7: <hr style="border
Step8:
<hr style="border
Step9:
<hr style="border
Step10: <hr style="border
Step11:
<hr style="border
Step12:
<hr style="border
Step13:
<hr style="border
Step14:
<hr style="border
Step15:
<hr style="border
Step16:
<hr style="border
Step17: <hr style="border
Step18: <hr style="border
Step19: <hr style="border
Step20: <hr style="border
Step21: <hr style="border
Step22: <hr style="border | Python Code:
# calculate pi
import numpy as np
# N : number of iterations
def calc_pi(N):
    x = np.random.ranf(N);
    y = np.random.ranf(N);
    r = np.sqrt(x*x + y*y);
    c = r[ r <= 1.0 ]
    return 4*float(c.size)/float(N)
# time the results
pts = 6; N = np.logspace(1,8,num=pts);
results = np.zeros(pts); count = 0;
for n in N:
    timing = %timeit -o -n1 calc_pi(int(n))
    results[count] = timing.best
    count += 1
# and save results to file
np.savetxt('calcpi_timings.txt', np.c_[N,results],
           fmt='%1.4e %1.6e');
Explanation: <hr style="border: solid 1px red; margin-bottom: -1% ">
NumPy <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; background-color:transparent;">
<hr style="border: solid 1px red; margin-top: 1.5% ">
Kevin Stratford <a style="color: blue">[email protected]</a><br>
Emmanouil Farsarakis <a style="color: blue">[email protected]</a><br><br>
Other course authors: <br>
Neelofer Banglawala <br>
Andy Turner <br>
Arno Proeme <br>
<hr style="border: solid 1px red; margin-bottom: 2% ">
<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
<img src="reusematerial.png"; style="float: center; width: 90"; >
<hr style="border: solid 1px red; margin-bottom: 2% ">
<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: 1%; ">
<div align="center" style="float: center; color: blue;">
www.archer.ac.uk <br><br>
[email protected]
</div>
<div>
<img src="epsrclogo.png"; style="float: left; width: 35%; margin-left: 20%; margin-top: 2% ">
<img src="nerclogo.png"; style="float: left; width: 25%; margin-left: 5%">
</div>
<br><br>
<br><br>
<div>
<img src="craylogo.png"; style="float: left; width: 30%; margin-left: 10%; margin-top: 6% ">
<img src="epcclogo.png"; style="float: left; width: 30%; margin-left: 5%; margin-top: 6% " >
<img src="ediunilogo.png"; style="float: left; width: 20%; margin-left: 5%; margin-top: 2% " >
</div>
<br>
<br>
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Introducing NumPy <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
Core Python provides lists
Lists are slow for many numerical algorithms
NumPy provides fast precompiled functions for numerical routines:
multidimensional arrays : faster than lists
matrices and linear algebra operations
random number generation
Fourier transforms and much more...
https://www.numpy.org/
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Calculating $\pi$ <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
<img src="montecarlo.png"; style="float: right; width: 40%; margin-right: 2%; margin-top: 0%; margin-bottom: -1%">
If we know the area $A$ of a square of side length $R$, and the area $Q$ of the quarter circle with radius $R$, we can calculate $\pi$ : $ Q/A = \frac{\pi R^2/4}{R^2} = \frac{\pi}{4} $, so
$ \pi = 4\,Q/A $
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Calculating $\pi$ : monte carlo method <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
We can use the <i>monte carlo</i> method to determine areas $A$ and $Q$ and approximate $\pi$. For $N$ iterations
randomly generate the coordinates $(x,\,y)$, where $0 \leq \,x,\, y <R$ <br>
Calculate distance $ r = x^2 + y^2 $. Check if $(x,\,y)$ lies within radius of circle<br>
Check if $ r $ lies within radius $R$ of circle i.e. if $r \leq R $ <br>
if yes, add to count for approximating area of circle<br>
The numerical approximation of $\pi$ is then : 4 * (count/$N$)
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Calculating $\pi$ : a solution <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# import numpy as alias np
import numpy as np
# create a 1d array with a list
a = np.array( [-1,0,1] ); a
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Creating arrays I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# use arrays to create arrays
b = np.array( a ); b
# use numpy functions to create arrays
# arange for arrays, range for lists!
a = np.arange( -2, 6, 2 ); a
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Creating arrays II<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# between start, stop, sample step points
a = np.linspace(-10, 10, 5);
a;
# Ex: can you guess what these functions do?
b = np.zeros(3); print b
c = np.ones(3); print c
# Ex++: what does this do? Check documentation!
h = np.hstack( (a, a, a) ); print h
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Creating arrays III <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# array characteristics such as:
print a
print a.ndim # dimensions
print a.shape # shape
print a.size # size
print a.dtype # data type
# can choose data type
a = np.array( [1,2,3], np.int16 ); a.dtype
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Array characteristics <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# multi-dimensional arrays e.g. 2d array or matrix
# e.g. list of lists
mat = np.array( [[1,2,3], [4,5,6]]);
print mat; print mat.size; mat.shape
# join arrays along first axis (0)
d = np.r_[np.array([1,2,3]), 0, 0, [4,5,6]];
print d; d.shape
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Multi-dimensional arrays I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# join arrays along second axis (1)
d = np.c_[np.array([1,2,3]), [4,5,6]];
print d; d.shape
# Ex: use r_, c_ with nd (n>1) arrays
# Ex: can you guess the shape of these arrays?
h = np.array( [1,2,3,4,5,6] );
i = np.array( [[1,1],[2,2],[3,3],[4,4],[5,5],[6,6]] );
j = np.array( [[[1],[2],[3],[4],[5],[6]]] );
k = np.array( [[[[1],[2],[3],[4],[5],[6]]]] );
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Multi-dimensional arrays II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# reshape 1d arrays into nd arrays; the original array is unaffected
mat = np.arange(6); print mat
print mat.reshape( (3, 2) )
print mat; print mat.size;
print mat.shape
# can also use the shape, this modifies the original array
a = np.zeros(10); print a
a.shape = (2,5)
print a; print a.shape;
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Reshaping arrays I<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# Ex: what do flatten() and ravel() do?
# use online documentation, or '?'
mat2 = mat.flatten()
mat2 = mat.ravel()
# Ex: split a matrix? Change the cuts and axis values
# need help?: np.split?
cuts=2;
np.split(mat, cuts, axis=0)
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Reshaping arrays II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# Ex: can you guess what these functions do?
# np.copyto(b, a);
# v = np.vstack( (arr2d, arr2d) ); print v; v.ndim;
# c0 = np.concatenate( (arr2d, arr2d), axis=0); c0;
# c1 = np.concatenate(( mat, mat ), axis=1); print "c1:", c1;
# Ex++: other functions to explore
#
# stack(arrays[, axis])
# tile(A, reps)
# repeat(a, repeats[, axis])
# unique(ar[, return_index, return_inverse, ...])
# trim_zeros(filt[, trim]), fill(scalar)
# xv, yv = meshgrid(x,y)
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Functions for you to explore <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
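A few quick, illustrative calls for some of the functions listed above (the arrays and values are made up for demonstration):
# Illustrative examples for the exercise above (made-up inputs)
arr2d = np.arange(4).reshape(2,2)
print np.tile(arr2d, (2,3))               # repeat the whole block in a 2x3 layout
print np.repeat(np.array([1,2,3]), 2)     # repeat each element twice
xv, yv = np.meshgrid(np.arange(3), np.arange(2))
print xv; print yv                        # coordinate grids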
# basic indexing and slicing we know from lists
a = np.arange(8); print a
a[3]
# a[start:stop:step] --> [start, stop every step)
print a[0:7:2]
print a[0::2]
# negative indices are valid!
# last element index is -1
print a[2:-3:2]
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Accessing arrays I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# basic indexing of a 2d array : take care of each dimension
nd = np.arange(12).reshape((4,3)); print nd;
print nd[2,2];
print nd[2][2];
# get corner elements 0,2,9,11
print nd[0:4:3, 0:3:2]
# Ex: get elements 7,8,10,11 that make up the bottom right corner
nd = np.arange(12).reshape((4,3));
print nd; nd[2:4, 1:3]
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Accessing arrays II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# slices are views (like references)
# on an array, can change elements
nd[2:4, 1:3] = -1; nd
# assign slice to a variable to prevent this
s = nd[2:4, 1:3]; print nd;
s = -1; nd
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Slices and copies I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# Care - simple assignment between arrays
# creates references!
nd = np.arange(12).reshape((4,3))
md = nd
md[3] = 1000
print nd
# avoid this by creating distinct copies
# using copy()
nd = np.arange(12).reshape((4,3))
md = nd.copy()
md[3] = 999
print nd
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Slices and copies II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# advanced or fancy indexing lets you do more
p = np.array( [[0,1,2], [3,4,5], [6,7,8], [9,10,11]] );
print p
rows = [0,0,3,3]; cols = [0,2,0,2];
print p[rows, cols]
# Ex: what will this slice look like?
m = np.array( [[0,-1,4,20,99], [-3,-5,6,7,-10]] );
print m[[0,1,1,1], [1,0,1,4]];
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Fancy indexing I<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# can use conditionals in indexing
# m = np.array([[0,-1,4,20,99],[-3,-5,6,7,-10]]);
m[ m < 0 ]
# Ex: can you guess what this does? query: np.sum?
y = np.array([[0, 1], [1, 1], [2, 2]]);
rowsum = y.sum(1);
y[rowsum <= 2, :]
# Ex: and this?
a = np.arange(10);
mask = np.ones(len(a), dtype = bool);
mask[[0,2,4]] = False; print mask
result = a[mask]; result
# Ex: r=np.array([[0,1,2],[3,4,5]]);
xp = np.array( [[[1,11],[2,22],[3,33]], [[4,44],[5,55],[6,66]]] );
xp[slice(1), slice(1,3,None), slice(1)]; xp[:1, 1:3:, :1];
print xp[[1,1,1],[1,2,1],[0,1,0]]
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Fancy indexing II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# add an element with insert
a = np.arange(6).reshape([2,3]); print a
np.append(a, np.ones([2,3]), axis=0)
# inserting an array of elements
np.insert(a, 1, -10, axis=0)
# can use delete, or a boolean mask, to delete array elements
a = np.arange(10)
np.delete(a, [0,2,4], axis=0)
Explanation:
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Manipulating arrays <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# vectorization allows element-wise operations (no for loop!)
a = np.arange(10).reshape([2,5]); b = np.arange(10).reshape([2,5]);
-0.1*a
a*b
a/(b+1) #.astype(float)
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Vectorization I<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
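To see what the vectorized notation replaces, here is the same element-wise product written with explicit loops (illustrative only; a and b are the arrays defined above):
# element-wise product with explicit loops vs. the vectorized form
c_loop = np.zeros_like(a)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        c_loop[i,j] = a[i,j] * b[i,j]
print np.all(c_loop == a*b)   # same result, far more code and much slower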
# random floats
a = np.random.ranf(10); a
# create random 2d int array
a = np.random.randint(0, high=5, size=25).reshape(5,5);
print a;
# generate sample from normal distribution
# (mean=0, standard deviation=1)
s = np.random.standard_normal((5,5)); s;
# Ex: what other ways are there to generate random numbers?
# What other distributions can you sample?
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Random number generation <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
# easy way to save data to text file
pts = 5; x = np.arange(pts); y = np.random.random(pts);
# format specifiers: d = int, f = float, e = scientific
np.savetxt('savedata.txt', np.c_[x,y], header = 'DATA', footer = 'END',
fmt = '%d %1.4f')
!cat savedata.txt
# One could do ...
# p = np.loadtxt('savedata.txt')
# ...but much more flexibility with genfromtext
p = np.genfromtxt('savedata.txt', skip_header=2, skip_footer=1); p
# Ex++: what do numpy.save, numpy.load do ?
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] File IO <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
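For the Ex++ above: np.save and np.load write and read arrays in NumPy's binary .npy format, for example:
# binary .npy file IO
np.save('savedata.npy', np.c_[x,y])   # write the array in binary form
p2 = np.load('savedata.npy')          # read it back
print p2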
# calculate pi using polynomials
# import Polynomial class
from numpy.polynomial import Polynomial as poly;
num = 100000;
denominator = np.arange(num);
denominator[3::4] *= -1 # every other odd coefficient is -ve
numerator = np.ones(denominator.size);
# avoid dividing by zero, drop first element denominator
almost = numerator[1:]/denominator[1:];
# make even coefficients zero
almost[1::2] = 0
# add back zero coefficient
coeffs = np.r_[0,almost];
p = poly(coeffs);
4*p(1) # pi approximation
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Polynomials I<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
Can represent polynomials with the numpy class Polynomial from <i>numpy.polynomial.polynomial</i>.
Polynomial([a, b, c, d, e]) is equivalent to $p(x) = a\,+\,b\,x \,+\,c\,x^2\,+\,d\,x^3\,+\,e\,x^4$. <br>
For example:
Polynomial([1,2,3]) is equivalent to $p(x) = 1\,+\,2\,x \,+\,3\,x^2$
Polynomial([0,1,0,2,0,3]) is equivalent to $p(x) = x \,+\,2\,x^3\,+\,3\,x^5 $
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Polynomials II<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
Can carry out arithmetic operations on polynomials, as well integrate and differentiate them.
Can also use the <i>polynomial</i> package to find a least-squares fit to data.
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Polynomials : calculating $\pi$ I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
The Taylor series expansion for the trigonometric function $\arctan(y)$ is :<br>
$\arctan ( y) \, = \,y - \frac{y^3}{3} + \frac{y^5}{5} - \frac{y^7}{7} + \dots $
Now, $\arctan(1) = \frac{\pi}{4} $, so ...
$ \pi = 4 \, \big( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \dots \big) $
We can represent the series expansion using a numpy Polynomial, with coefficients: <br>
$p(x)$ = [0, 1, 0, -1/3, 0, 1/5, 0, -1/7,...], and use it to approximate $\pi$.
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Polynomials : calculating $\pi$ II <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation
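The slides above also mention differentiation, integration and least-squares fitting; here is a short illustrative sketch (the fitted data values are made up):
# differentiate / integrate the arctan-series polynomial p from above
dp = p.deriv()                       # term-by-term derivative, still a Polynomial
print dp(0.5), 1.0/(1.0 + 0.5**2)    # compare with d/dx arctan(x) = 1/(1+x^2)
ip = p.integ()                       # antiderivative with integration constant 0
# least-squares fit of a quadratic to made-up noisy data
xd = np.linspace(0, 2, 20)
yd = 1.0 + 2.0*xd + 0.1*np.random.standard_normal(xd.size)
fit = poly.fit(xd, yd, 2)
print fit.convert().coef             # coefficients in the standard basis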
# accessing a 2d array
nd = np.arange(100).reshape((10,10))
# accessing element of 2d array
%timeit -n10000000 -r3 nd[5][5]
%timeit -n10000000 -r3 nd[(5,5)]
# Ex: multiplying two vectors
x=np.arange(10E7)
%timeit -n1 -r10 x*x
%timeit -n1 -r10 x**2
# Ex++: from the linear algebra package
%timeit -n1 -r10 np.dot(x,x)
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Performance I <img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
Python has a convenient timing function called <i><b>timeit</b></i>.
Can use this to measure the execution time of small code snippets.
To use timeit function
import module timeit and use <i><b>timeit.timeit</b></i> or
use magic command <b>%timeit</b> in an IPython shell
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Performance II<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
By default, <i>timeit</i>:
Takes the best time out of 3 repeat tests (-r)
takes the average time for a number of iterations (-n) per repeat
In an IPython shell:
<i>%timeit <b>-n</b><iterations> <b>-r</b><repeats> <code></i>
query %timeit? for more information
https://docs.python.org/2/library/timeit.html
<hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Performance : experiments I<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
Here are some timeit experiments for you to run.
End of explanation
import numpy as np
# Ex: range functions and iterating in for loops
size = int(1E6);
%timeit for x in range(size): x ** 2
# faster than range for very large arrays?
%timeit for x in xrange(size): x ** 2
%timeit for x in np.arange(size): x ** 2
%timeit np.arange(size) ** 2
# Ex: look at the calculating pi code
# Make sure you understand it. Time the code.
Explanation: <hr style="border: solid 1px red; margin-bottom: 2% ">
[NumPy] Performance : experiments II<img src="headerlogos.png"; style="float: right; width: 25%; margin-right: -1%; margin-top: 0%; margin-bottom: -1%">
<hr style="border: solid 1px red; margin-bottom: -1%; ">
End of explanation |